Level 1 — Absolute Beginner
Google found bad people on the internet. They are called hackers. Hackers try to break into computers.
These hackers used artificial intelligence, or AI, to help them. AI is software that can think a little bit like a person.
The hackers wanted to use AI to attack many computers at the same time. Google saw this and stopped them.
Google's news is important. It is the first time we have seen hackers use AI in this way.
- Google: A big company that makes search engines and other internet tools.
- hacker: Someone who tries to break into computers without permission.
- internet: The big network that connects computers around the world.
- AI: Artificial intelligence; software that can do tasks that need thinking.
- attack: A bad action against someone or something.
- software: Programs that tell a computer what to do.
- stop: To not let something happen.
- news: Information about new events.
Level 2 — Elementary
On Monday, May 11, 2026, Google's Threat Intelligence Group announced that it had stopped a serious cyberattack. A criminal hacker group was using an artificial intelligence model to plan a 'mass exploitation event.'
The hackers used the AI to find and study a hidden flaw in a piece of software. The software is a common tool that lets people manage websites and servers. The flaw allowed them to bypass two-factor authentication, which is a security check that uses two steps.
Google described what it found as a 'zero-day' vulnerability. A zero-day is a hole in software that the makers don't even know about yet. The name comes from the fact that they have had zero days to fix it.
Google said it told the company that makes the software about the problem. They worked together to release a fix. Google also said it does not believe its own Gemini AI model was used by the criminals.
- cyberattack: An attempt to damage or break into computer systems.
- criminal: A person who breaks the law.
- model: A computer program that has learned to do a task using data.
- flaw: A mistake or weakness in something.
- server: A powerful computer that delivers data and services to other computers.
- two-factor authentication: A login system that requires two different proofs of identity.
- vulnerability: A weakness that could be used to attack a system.
- fix: A change that solves a problem in software.
Level 3 — Intermediate
Google's Threat Intelligence Group, known as GTIG, disclosed on Monday, May 11, 2026, what it described as the first publicly documented case of a criminal hacking group using a large language model to develop an end-to-end zero-day exploit. The team said it had 'high confidence' that the unnamed group had used an AI assistant to find a software flaw, write an exploit for it and weaponize the result, and that it was preparing to deploy the exploit at scale.
According to the GTIG report, the vulnerability lived in a Python script bundled with an open-source web-based system administration tool widely used by small and medium-sized businesses. The script handled the two-factor authentication step, and the AI-generated exploit allowed attackers to bypass the second factor entirely, granting full administrative access. Google said it does not believe its homegrown Gemini model was used; analysts at Bloomberg and BleepingComputer have suggested an open-weight model marketed under the name 'OpenClaw' may have been involved.
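GTIG has not published the vulnerable script, so the exact bypass remains private. As a purely hypothetical sketch, the Python below illustrates one classic way a second-factor check can fail: treating a missing code as success (fail-open) rather than rejecting it (fail-closed). The function names and the bug itself are illustrative assumptions, not the actual flaw.

```python
# Hypothetical illustration only: GTIG has not published the vulnerable
# script, so this shows a generic fail-open 2FA bug, not the real flaw.
import hmac


def verify_second_factor_buggy(expected_code: str, submitted: str | None) -> bool:
    """BUG (fail-open): a request that omits the code entirely passes."""
    if submitted is None:
        return True  # missing input is treated as success
    return hmac.compare_digest(expected_code, submitted)


def verify_second_factor_fixed(expected_code: str, submitted: str | None) -> bool:
    """Fail-closed: missing or empty input is always rejected."""
    if not submitted:
        return False
    return hmac.compare_digest(expected_code, submitted)


# An attacker who posts a login request with no TOTP field at all
# sails through the buggy check but is stopped by the fixed one.
assert verify_second_factor_buggy("492817", None) is True
assert verify_second_factor_fixed("492817", None) is False
```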
Google worked with the affected vendor to coordinate responsible disclosure and shipped an emergency patch before the threat actor could launch the mass-exploitation operation. The patch is now available, and Google has published indicators of compromise so that defenders worldwide can scan their systems for signs of prior probing.
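In practice, scanning for prior probing often starts as a simple pass over logs looking for published indicators. Below is a minimal sketch, assuming the indicators are plain strings such as IP addresses and hostnames; the values shown are placeholders, not the indicators Google released.

```python
# Minimal IoC log-scan sketch. The indicator values below are
# documentation placeholders (RFC 5737 address, .example domain),
# not the actual indicators from Google's report.
from pathlib import Path

IOCS = {
    "203.0.113.77",           # placeholder attacker IP
    "malicious-c2.example",   # placeholder malicious hostname
}


def scan_log(path: Path) -> list[tuple[int, str]]:
    """Return (line number, line) for every log line containing an IoC."""
    hits = []
    with path.open(errors="replace") as fh:
        for lineno, line in enumerate(fh, start=1):
            if any(ioc in line for ioc in IOCS):
                hits.append((lineno, line.rstrip()))
    return hits


if __name__ == "__main__":
    for lineno, line in scan_log(Path("/var/log/auth.log")):
        print(f"{lineno}: {line}")
```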
The case has rattled the cybersecurity community. For more than a year, researchers had warned that frontier-grade AI assistants could shorten the cycle from vulnerability discovery to operational malware from weeks to hours. Until now, public evidence had been limited to academic demonstrations and small-scale criminal experimentation. Google's report signals that the era of fully AI-assisted cybercrime has now arrived, and it has reignited debate over export controls, model-access licensing and the responsibilities of model providers when their tools are misused.
- large language model: An artificial intelligence system trained on massive amounts of text to understand and generate language.
- exploit: A piece of code or technique that takes advantage of a flaw in software.
- weaponize: To turn something into a usable tool for an attack.
- Python: A widely used programming language popular in scripting, web tools and AI.
- open-source: Software whose source code is freely available for anyone to inspect, modify or share.
- responsible disclosure: A process of privately telling a vendor about a security flaw so they can fix it before it becomes public.
- indicators of compromise: Technical clues that suggest a system has been hacked.
- export controls: Government rules that restrict who may receive certain technologies.
Level 4 — Advanced
Google's Threat Intelligence Group on Monday, May 11, 2026, published what it characterized as the first publicly documented operational case of a criminal threat actor using a frontier large language model to construct and weaponize a previously unknown software vulnerability, and to stage that vulnerability for what GTIG bluntly called a 'mass exploitation event.' The disclosure, reported by Bloomberg, The Washington Post and CNBC, described the attack chain as fully AI-assisted, with the model handling reconnaissance, exploit drafting and operational obfuscation in an investigation Google has internally codenamed 'Mythos.'
The vulnerability, a two-factor-authentication bypass embedded in a Python script that ships with a popular open-source web-based system-administration suite, allowed an attacker to obtain full administrative access without triggering any of the suite's auditing hooks. According to GTIG, the AI model identified an off-by-one error in the TOTP comparator that opened an exploitable race window, generated a polished proof of concept that exercised that window reliably, and even produced an obfuscation harness whose conditional decryption was keyed to the victim host's locale settings, a level of operational craft that researchers describe as approaching that of state-sponsored toolkits.
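For context on what a 'TOTP comparator' is: it is the routine that checks a submitted one-time code against the codes the server derives from the shared secret. The sketch below is a generic, standard-library RFC 6238 verifier, not the vendor's code; it shows the two properties a sound comparator needs, a constant-time comparison and a tightly bounded clock-skew window, which is precisely where an off-by-one slip can widen the race window GTIG describes.

```python
# A minimal, standard RFC 6238 TOTP verifier (stdlib only) to show
# what a "TOTP comparator" does; generic code, not the vendor's
# patched or unpatched implementation.
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32: str, at: int, step: int = 30, digits: int = 6) -> str:
    """Compute the RFC 6238 code for Unix time `at` (HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", at // step)
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation (RFC 4226)
    value = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10**digits).zfill(digits)


def verify_totp(secret_b32: str, submitted: str,
                window: int = 1, step: int = 30) -> bool:
    """Constant-time comparison over a tightly bounded clock-skew window.

    window=1 accepts the previous, current and next 30-second code;
    getting these boundaries wrong by one step is exactly the kind of
    off-by-one slip that widens the window an attacker can race.
    """
    now = int(time.time())
    return any(
        hmac.compare_digest(totp(secret_b32, now + drift * step, step), submitted)
        for drift in range(-window, window + 1)
    )


if __name__ == "__main__":
    secret = "JBSWY3DPEHPK3PXP"  # example base32 secret
    assert verify_totp(secret, totp(secret, int(time.time())))
```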
Google said it does not believe its in-house Gemini family powered the operation, and Bloomberg, citing multiple sources familiar with the investigation, reported that an open-weight model distributed under the alias 'OpenClaw' is the leading suspect. The Pentagon has reportedly designated Anthropic — another frontier-AI developer — a supply-chain risk in a separate but parallel disclosure, intensifying scrutiny over whether closed-weight, open-weight or hybrid distribution models present the greatest exploitation surface. GTIG coordinated a responsible-disclosure cycle with the affected vendor, shipped a patch and seeded indicators of compromise into the Mandiant feed before announcing the activity publicly.
The episode is widely seen as the watershed moment in a debate that had been building for two years inside both the cybersecurity and AI-policy communities. Senators Mark Warner and Marsha Blackburn, joined by Representative Bobby Scott, reintroduced a bipartisan bill on Monday afternoon that would empower the new federal AI commission to compel pre-deployment red-team disclosures from any developer offering a frontier-class model commercially. Industry response has been mixed: Microsoft and OpenAI publicly endorsed pre-deployment red-team standards; Meta and Mistral pushed back, warning that overly prescriptive controls would entrench incumbent labs and disadvantage the open-source ecosystem on which the global vulnerability-research community depends.
- threat actor: An individual or group that conducts hostile actions against computer systems.
- operational obfuscation: Techniques used to hide the workings of malicious code from defenders.
- auditing hooks: Built-in points in software that record security-relevant events, such as logins and configuration changes, for later review.