- OpenAI is increasing its bug bounty payouts
- Spotting high-impact vulnerabilities could net researchers up to $100K
- The move comes as more AI agents and systems are developed
OpenAI hopes to encourage security researchers to identify security vulnerabilities by increasing its bug bounty rewards.
The AI giant has revealed that it is increasing the top reward in its security bug bounty program from $20K to $100K, expanding the scope of its cybersecurity grant program, and developing new tools to protect AI agents from malicious threats.
This follows recent warnings that AI agents can be hijacked to write and send phishing attacks, and the company is keen to outline its "commitment to rewarding meaningful, high-impact security research that helps us protect users and maintain trust in our systems."
Disrupting threats
Since the cybersecurity grant program launched in 2023, OpenAI has reviewed thousands of applications and funded 28 research initiatives, helping the company gain valuable insights into security topics such as autonomous cybersecurity defenses, prompt injection, and secure code generation.
OpenAI says it continuously monitors malicious actors seeking to exploit its systems, identifying and disrupting targeted campaigns.
"We don't just defend," the company said, "we share tradecraft with other AI labs to strengthen our collective defenses. By sharing these emerging risks and collaborating across industry and government, we help ensure that AI technologies are developed and deployed safely."
OpenAI is not the only company boosting its rewards program, with Google announcing a fivefold increase in bug bounty rewards in 2024, arguing that more secure products make bugs harder to find, which should be reflected in higher payouts.
With more advanced models and agents, and more users and developers, there are inevitably more points of vulnerability that could be exploited, so the relationship between researchers and software developers is more important than ever.
"We are engaging researchers and practitioners throughout the cybersecurity community," OpenAI confirmed.
"This allows us to leverage the latest thinking and share our findings with those working towards a safer digital world. To train our models, we partner with experts across academic, government, and commercial labs to benchmark skill gaps and obtain structured examples of advanced reasoning in cybersecurity domains."
Via Cybernews