Google’s latest threat intelligence report puts a sharper edge on a risk security teams have been discussing for years: AI is no longer just helping defenders write detections or summarize alerts. It is beginning to help attackers find and weaponize software flaws.
In a May 11 report, Google Threat Intelligence Group said it identified, for the first time, a threat actor using a zero-day exploit that Google believes was developed with AI assistance. Google said the actor was preparing for mass exploitation, but its proactive discovery and disclosure may have stopped the campaign before it spread.
Reuters reported that the planned attack targeted a widely used open-source system administration tool and was blocked before it became a mass exploitation event. CNBC reported that Google had high confidence the attackers used an AI model to find and exploit a zero-day, including a bypass for two-factor authentication.
The shift: AI as an exploit accelerator
The important part is not that criminals are experimenting with chatbots. That has been happening since consumer AI tools became widely available. The new signal is that AI appears to be moving closer to the core exploit chain: vulnerability discovery, exploit validation, obfuscation, target research, and operational planning.
Google’s report describes threat actors using expert personas, fabricated research narratives, specialized vulnerability datasets, and agentic tools to push models toward offensive security work. It also points to state-linked activity from China and North Korea, alongside financially motivated cybercrime.
That matters because zero-day development has historically required scarce expertise, time, and infrastructure. AI does not remove those constraints overnight. But it can compress parts of the workflow, help less experienced operators test ideas, and give capable teams more shots on goal.
Why defenders should treat this as a process problem
The practical takeaway is not panic. It is speed. If attackers can shorten the time between discovering a flaw and building a usable exploit, defenders need to shorten the time from exposure to detection, patching, and incident response.
That puts pressure on boring but critical work: asset inventory, dependency tracking, patch prioritization, logging, identity hardening, and tabletop planning. It also makes secure defaults more valuable. Two-factor authentication remains essential, but Google’s reported example shows why organizations cannot rely on a single control to absorb every failure.
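To make "patch prioritization" concrete, here is a minimal sketch of the kind of scoring logic many teams use to rank fixes by exposure rather than severity alone. The field names, weights, and CVE IDs are illustrative assumptions, not anything from Google's report:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    cvss: float             # base severity, 0-10
    internet_facing: bool   # asset reachable from the public internet
    exploit_public: bool    # working exploit code is circulating
    asset_criticality: int  # 1 (low) to 3 (business-critical)

def patch_priority(f: Finding) -> float:
    """Crude risk score: severity scaled by exposure and exploit availability."""
    score = f.cvss * f.asset_criticality
    if f.internet_facing:
        score *= 2
    if f.exploit_public:
        score *= 2
    return score

# Hypothetical findings for illustration only.
findings = [
    Finding("CVE-2025-0001", 9.8, True, True, 3),
    Finding("CVE-2025-0002", 7.5, False, False, 2),
]
for f in sorted(findings, key=patch_priority, reverse=True):
    print(f.cve_id, round(patch_priority(f), 1))
```

The point is not the exact weights. It is that a flaw with public exploit code on an internet-facing, business-critical system should jump the queue, and that decision should be automated rather than argued case by case.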
Software vendors also face a harder disclosure environment. If AI-assisted exploit development becomes normal, the window between private discovery and broad abuse may shrink. Vendors will need faster coordinated disclosure processes, clearer customer guidance, and more automation around fix validation.
The AI supply chain becomes a target
Google’s report also frames AI systems themselves as attack surfaces. Agent frameworks, model connectors, API gateways, plugins, repositories, and dependency chains all create new places for attackers to hide.
That is especially relevant as companies wire AI agents into internal tools with access to tickets, code, cloud dashboards, and customer data. A compromised connector or poisoned dependency can turn an assistant into a path through the organization.
Security teams should treat AI integrations like production software, not side projects. They need owners, threat models, audit logs, permission boundaries, and rollback plans.
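What "permission boundaries and audit logs" can mean in practice is shown in the sketch below: every tool call an agent makes passes through an explicit allowlist and leaves an audit record. The agent names, tool registry, and permissions here are hypothetical, not tied to any particular agent framework:

```python
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("agent-audit")

# Explicit allowlist: which tools each agent may call. Anything absent is denied.
PERMISSIONS = {
    "support-assistant": {"read_ticket", "search_docs"},  # no write access
    "deploy-assistant": {"read_ticket", "restart_service"},
}

# Hypothetical tool implementations, standing in for real integrations.
TOOLS = {
    "read_ticket": lambda ticket_id: f"ticket {ticket_id} contents",
    "search_docs": lambda query: f"results for {query!r}",
    "restart_service": lambda name: f"restarted {name}",
}

def call_tool(agent: str, tool: str, args: dict):
    """Gate every tool call through the allowlist and write an audit record."""
    allowed = tool in PERMISSIONS.get(agent, set())
    audit.info(json.dumps({
        "ts": time.time(), "agent": agent, "tool": tool,
        "args": args, "allowed": allowed,
    }))
    if not allowed:
        raise PermissionError(f"{agent} is not permitted to call {tool}")
    return TOOLS[tool](**args)

print(call_tool("support-assistant", "read_ticket", {"ticket_id": "T-123"}))
# call_tool("support-assistant", "restart_service", {"name": "api"})  # denied, logged
```

The design choice that matters is the deny-by-default boundary: a compromised connector can only reach what its agent was explicitly granted, and the audit log shows what it tried.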
What to watch next
The key question is whether this remains a handful of advanced incidents or becomes a repeatable playbook. Watch for more reports of AI-assisted vulnerability discovery, especially where attackers combine public exploit data, agentic tooling, and automated target scanning.
Also watch the policy response. Governments want frontier AI models to improve cyber defense, but the same capabilities can help offensive teams. That tension will shape model access rules, safety testing, and partnerships between AI labs and security vendors.
For now, Google’s report is a useful warning: AI is not creating a brand-new cyber problem. It is increasing the tempo of the existing one. Organizations that already struggle to patch, monitor, and limit access will feel that acceleration first.