When Anthropic unveiled Claude Mythos Preview on April 7, 2026, it didn't launch to the public. Instead, the San Francisco AI company quietly handed access to a handful of Big Tech partners, called the move an "urgent" defensive measure, and warned governments around the world that a watershed moment for cybersecurity had arrived — whether they were ready or not.
Within days, financial regulators across India, Japan, Europe, Australia, and New Zealand were in emergency sessions. Central banks held calls with commercial lenders. Finance ministers sat down with tech officials. And one persistent question echoed through every briefing room: Why wasn't our country given access?
Claude Mythos Preview is Anthropic's most advanced AI model to date — described in internal documents as "by far the most powerful AI model" the company has ever built. Unlike prior AI tools that required significant human guidance, Mythos operates with striking autonomy in one particularly alarming domain: finding and exploiting software vulnerabilities.
Mythos Preview fully autonomously identified and exploited a 17-year-old remote code execution vulnerability in FreeBSD (CVE-2026-4747) — allowing complete root access to servers from an unauthenticated position anywhere on the internet. No human was involved after the initial prompt.
In testing, the model chained together multiple vulnerabilities in sequences that no single flaw would enable alone — writing complex JIT heap sprays to escape both renderer and OS-level sandboxes in web browsers. It also found a 27-year-old denial-of-service bug in OpenBSD and a 16-year-old flaw in the FFmpeg multimedia framework.
We did not explicitly train Mythos Preview to have these capabilities. Rather, they emerged as a downstream consequence of general improvements in code, reasoning, and autonomy. The same improvements that make the model substantially more effective at patching vulnerabilities also make it substantially more effective at exploiting them.
— ANTHROPIC OFFICIAL STATEMENT
The UK AI Security Institute confirmed that Mythos is the first AI model able to complete its full network takeover simulation — though it noted the test environment lacked some real-world defences. Anthropic has also privately warned top US government officials that Mythos makes large-scale AI-driven cyberattacks significantly more likely this year.
Rather than release Mythos publicly, Anthropic launched Project Glasswing — a tightly controlled consortium that grants early access to select organisations to use Mythos defensively: scanning critical software, finding vulnerabilities, and patching them before hostile actors develop comparable capabilities.
The initiative carries a commitment of $100 million in model usage credits and $4 million in direct donations to open-source security organisations. Partners include:
Conspicuously absent from Project Glasswing: OpenAI (Anthropic's main rival), and every government and private organisation outside the United States. No Indian company is among the roughly 40 access-holders — a fact that has triggered urgent diplomatic and regulatory responses from New Delhi.
India's reaction has been swift and high-level. On April 23, Finance Minister Nirmala Sitharaman convened an emergency meeting with bank chiefs, RBI officials, and representatives from the Ministry of Electronics and Information Technology (MeitY) to assess systemic risks to the Indian financial sector.
The Reserve Bank of India has since been in discussions with international regulators — including the US Federal Reserve and the Bank of England — along with domestic lenders and government officials, to evaluate Mythos's potential to accelerate the discovery and exploitation of software vulnerabilities in Indian infrastructure.
As AI systems evolve to autonomously identify and chain vulnerabilities across platforms, the potential for cascading, cross-border risk becomes significantly higher. Given India's central role in maintaining sensitive data and operations for global clients, exclusion from early access could leave critical systems exposed.
— NASSCOM FORMAL STATEMENT TO ANTHROPIC
Industry body Nasscom has formally written to Anthropic requesting inclusion in Project Glasswing, warning that Indian technology firms manage critical global infrastructure and should be part of the defensive consortium. The National Payments Corporation of India (NPCI) is separately exploring access channels to identify zero-day threats in payment systems.
MeitY is also in direct talks with Anthropic's US leadership, emphasising that India manages sensitive global systems — yet no Indian company currently holds Mythos access. The government's ask: at minimum, partial access for the banking and payments sector.
India is not alone. The Mythos announcement has triggered a coordinated regulatory response spanning multiple continents, with central banks and financial supervisors scrambling to assess exposure in their own systems.
| NATION / BODY | RESPONSE | STATUS |
|---|---|---|
| India — RBI + MeitY + FM Sitharaman | Emergency meeting with bank chiefs on Apr 23. RBI coordinating with Fed & BoE. Nasscom formally requesting Glasswing access. | HIGH ALERT |
| Japan — Financial Services Agency | High-level meeting with MUFG, SMFG, Mizuho, Bank of Japan & Tokyo Stock Exchange. | HIGH ALERT |
| Germany — Bundesbank | President Joachim Nagel called Mythos a "double-edged sword" — useful for defence but dangerous in wrong hands. | MONITORING |
| Australia — Reserve Bank of Australia | Formally stated it is "closely monitoring" and engaging with peer regulators and government entities. | COORDINATING |
| New Zealand — RBNZ | Described risks as "developing." Confirmed coordination with Australian counterparts. | COORDINATING |
| Canada — Finance Ministry | FM François-Philippe Champagne confirmed Mythos was discussed at the IMF meeting in Washington. | MONITORING |
| EU / Europe — Multiple regulators | Coordinating responses across member states. EU AI Act framework being assessed for applicability. | COORDINATING |
The urgency of regulators was validated by a real-world incident. A Discord group managed to gain unauthorised access to Mythos on the very day of its debut in February, using a combination of insider connections, web-scraping bots, and social engineering. One member of the group was linked to a third-party vendor of Anthropic.
The breach — reportedly without malicious intent — exposed a fundamental problem: if a group of AI enthusiasts could access the world's most restricted cybersecurity AI model in hours, what could a nation-state actor or professional ransomware group achieve?
Anthropic also suffered a separate incident on March 31 where nearly 2,000 source code files and over 500,000 lines of code associated with Claude Code were accidentally exposed for approximately three hours. A subsequent analysis found a security bypass in Claude Code triggered by commands with more than 50 subcommands — since patched in version 2.1.90.
Council on Foreign Relations fellow Gordon M. Goldstein called the Mythos announcement "an inflection point" — noting that global competition for scarce AI security resources will be fierce, US interests will be defended first, and most of the world will struggle to keep pace.
A Bain & Company report highlighted that sectors like energy, manufacturing, and transportation face heightened risk due to ageing infrastructure that is difficult to patch. Banks, too, are exposed given their reliance on interconnected legacy systems dating back decades.
The global Hunger Games for AI security has arrived. Project Glasswing is a responsible and necessary response to an unprecedented new risk. But it will initially touch only a tiny percentage of the world's vulnerable infrastructure.
— GORDON M. GOLDSTEIN, COUNCIL ON FOREIGN RELATIONS
One week after Anthropic's announcement, OpenAI announced a similarly limited rollout of its own cybersecurity-focused model — confirmation that the AI arms race in offensive and defensive cyber capabilities has formally begun. Anthropic has stated that Dario Amodei offered to collaborate with US authorities to "help defend against the risk of these models," and has committed to rolling out new safeguards in an upcoming Claude Opus model before any broader Mythos deployment.
For nations like India — managing critical global infrastructure yet locked out of Glasswing — the message is clear: the AI security race has started, and the starting gun was fired in Silicon Valley, not New Delhi.