As the world continues to marvel at the creative and productive power of AI, there’s another, darker side to the story: bad actors are already exploring ways to use this same technology to create faster and more sophisticated cyberattacks. In a new announcement, Google has laid out its strategy to get ahead of this threat – fight fire with fire.
Google is flipping the script, using its own powerful AI to create a “decisive advantage for cyber defenders.” The strategy includes an updated security framework and a new bug bounty program, but the star of the show is a stunning new AI-powered agent called CodeMender.
Meet CodeMender, the AI that finds and fixes code
At its core, CodeMender is an AI agent that can autonomously find, patch, and validate fixes for critical code vulnerabilities. This isn’t just about finding bugs; it’s about fixing them, automatically.
The process is a brilliant showcase of AI collaboration. First, CodeMender uses Gemini’s advanced reasoning to perform a root cause analysis and find the fundamental source of a vulnerability. Then, it autonomously generates and applies a code patch to fix it.
Finally, that patch is routed to specialized “critique” AI agents that act as automated peer reviewers, rigorously validating the fix for correctness and security before it’s ever proposed to a human for final sign-off. It’s a massive leap forward in proactive, automated cyber defense that could dramatically accelerate the time it takes to secure vulnerable software.
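To make the shape of that multi-agent workflow a little more concrete, here's a purely illustrative Python sketch of a find → patch → critique pipeline. To be clear, every function and name here is made up for illustration — Google hasn't published CodeMender's internals, so this models only the flow described above, not the real system.

```python
# Hypothetical sketch of a find -> patch -> critique pipeline.
# None of these names reflect CodeMender's real (unpublished) internals.

def root_cause_analysis(report):
    # Stand-in for the reasoning step that locates the flaw's true source.
    return {"file": report["file"], "cause": "unchecked buffer length"}

def generate_patch(cause):
    # Stand-in for autonomous patch generation.
    return {"target": cause["file"], "diff": "+ if (len > MAX) return ERR;"}

def critique(patch):
    # Stand-in for the automated peer-review agents: every check must
    # pass before the patch is escalated to a human for sign-off.
    checks = [
        patch["diff"].startswith("+"),  # patch actually adds a guard
        "return" in patch["diff"],      # the guard takes effect
    ]
    return all(checks)

def pipeline(report):
    cause = root_cause_analysis(report)
    patch = generate_patch(cause)
    # A rejected patch is dropped (or would be retried) instead of
    # ever reaching a human reviewer.
    return patch if critique(patch) else None

proposed = pipeline({"file": "parser.c"})
print("escalate to human review" if proposed else "rejected by critics")
```

The key idea the sketch captures is that the human only ever sees patches that have already survived automated review — the critique step gates everything upstream of final sign-off.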
A new bug bounty program and a framework for the future
Alongside CodeMender, Google also announced two other key initiatives. First is a new, dedicated AI Vulnerability Reward Program (AI VRP), which will provide a clear and comprehensive set of rules and rewards to incentivize the global security research community to find and report high-impact flaws in Google’s AI systems.
Second, Google is updating its Secure AI Framework to version 2.0 (SAIF 2.0). This update specifically addresses the emerging risks of autonomous AI agents, with new guidance and a set of core principles to ensure they are built securely, including having well-defined human controllers and carefully limited powers.
This is a significant and proactive announcement from Google that shows they are thinking deeply about the security implications of the AI era. It’s a clear commitment to using the power of AI to tip the scales in favor of the defenders, not the attackers, and it’s fantastic to see them leading the charge in building a safer AI future.