- CodeMender automatically generates security patches for open source projects, which are reviewed by humans before being applied
- Google DeepMind says CodeMender reduces security workloads by validating its own patches
- DeepMind plans a broader rollout for developers once CodeMender's reliability is confirmed
Google DeepMind has unveiled CodeMender, an artificial intelligence agent that it says can automatically detect and repair software vulnerabilities before they are exploited by hackers.
Google’s artificial intelligence research arm says the new tool can protect open source projects by generating patches that can be applied once they have been reviewed by human researchers.
CodeMender is built on DeepMind’s Gemini Deep Think model and uses multiple analysis techniques, including fuzzing, static analysis, and differential testing, to identify the root causes of bugs and prevent regressions.
Helping, not replacing, humans
Raluca Ada Popa, senior research scientist at DeepMind, and John “Four” Flynn, its vice president of security, said the system had already produced dozens of fixes.
“In the six months we have been building CodeMender, we have already upstreamed 72 security fixes to open source projects, including some as large as 4.5 million lines of code,” Popa and Flynn wrote in a DeepMind blog post.
The company says CodeMender can act both reactively and proactively, repairing discovered flaws and rewriting code to eliminate entire classes of vulnerabilities.
Ultimately, the system should be able to reduce the security maintenance workload by validating its own patches before sending them on for human review.
Human review is something Google is keen to emphasize, noting that CodeMender is not there to replace humans, but to act as a helpful agent that scales with the growing volume of vulnerabilities automated systems can detect.
In one case, the team says CodeMender automatically applied `-fbounds-safety` annotations to parts of the libwebp image compression library, a step that, according to DeepMind, would have prevented past exploits.
The annotations force the compiler to enforce buffer bounds checks, reducing the risk of overflow-based attacks.
The developers also acknowledge the growing use of AI by malicious actors and argue that defenders need equivalent tools.
DeepMind plans to expand testing with open source maintainers and, once its reliability has been properly demonstrated, hopes to release CodeMender for wider developer use.
Google has also updated its Secure AI Framework and launched a new vulnerability reward program for AI-related flaws.