OpenAI, the company behind artificial intelligence (AI) products such as ChatGPT, is offering up to $20,000 for reports of bugs in its services. To report a vulnerability, users must submit it through Bugcrowd, the bug bounty platform where the announcement was made.
While offering rewards, OpenAI warns that attempts to exploit the company's AI systems for malicious purposes will not be rewarded. This article explains how the company will grant these awards to those who find flaws in its AI services, and the rules for obtaining them.
How does OpenAI reward users who find bugs in ChatGPT?

The company said it is running the bug-finding reward program because it understands that "substantial research and a broad approach" are necessary to address some issues. It thus intends to reward reported errors in order to improve the functioning of services such as ChatGPT.
“Our rewards range from $200 for low-severity findings to $20,000 for exceptional discoveries”
OpenAI
In other words, taking the exchange rate for the US currency at the time as a reference, a Brazilian user who reported a more basic error would receive roughly R$ 980, while an exceptional discovery could earn almost R$ 100 thousand.
The idea is that the rewards program will help OpenAI reduce the possibility of failures in the processing of user data. By paying closer attention to the community that uses its products, the company also hopes that people will report issues privately rather than making vulnerability details public.
Bugs and other issues not included in the bounty program

Several reports and articles have already shown ChatGPT's beneficial potential, both for simple tasks, such as planning trips, and for more complex activities, such as reviewing programming code. On the other hand, they have also documented riskier responses: the service can be used to create pieces of disinformation or even to outline steps for building malware. For these issues, OpenAI has listed guidelines defining who can receive rewards for reporting bugs.

The list on the Bugcrowd page, called "Out of Scope", indicates that findings obtained through attacks, whether by cracking passwords, destroying third-party data, or leaking identities, among other offenses, will not receive the announced rewards. The guidelines also exclude any attempt to attack OpenAI itself, as well as spam or fraud attempts. Likewise, the company will not reward those who ask its AI services to devise methods of tricking people through websites with fake buttons and services.
Vulnerabilities reported in ChatGPT with wide repercussions: some examples

A few weeks ago, ChatGPT made the news for its harmful potential when it proved able to bypass verification systems such as CAPTCHA. But these and other errors can have other causes; that is, they are not necessarily due to poor management by OpenAI.
In March, for example, a hacker named rezO revealed about 80 "secret plugins" within the API (short for Application Programming Interface, the layer that enables interaction between digital services and is essential for people to use these systems). It turned out that these features simply had not yet been officially released for the AI service.
OpenAI managed to fix the plugin problem on the same day it was reported. Also in March, OpenAI apologized after being notified of another error in its AI chat system: a bug that leaked payment details and even chat histories for some users.
See also:
Google Bard: ChatGPT competitor announced as coming "soon"
Bill Gates says "the AI era has just begun" and reflects on ChatGPT