Treasury and Fed Warn Major Banks of New AI-Driven Cyber Risks

U.S. financial leaders are facing a new digital frontier of risk. In a high-level meeting in Washington, D.C., top government officials warned executives from the nation’s largest banks that a breakthrough in artificial intelligence could inadvertently open the door to sophisticated cyberattacks.

A High-Stakes Briefing

On Tuesday, Treasury Secretary Scott Bessent convened an urgent meeting with a select group of chief executives, including leaders from Bank of America, Citi, and Wells Fargo. The primary concern involves the integration of advanced AI models into internal banking systems.

Joining the discussion was Federal Reserve Chair Jerome H. Powell, who has recently emphasized the growing vulnerability of the global financial infrastructure to cyber threats. The consensus among officials was clear: while AI offers immense potential, its current trajectory poses a direct threat to sensitive customer data and institutional security.

The “Claude Mythos” Dilemma

The warning centers on a new artificial-intelligence model developed by Anthropic, known as Claude Mythos Preview.

Unlike general-purpose AI systems, this model is engineered specifically to identify software vulnerabilities. According to Anthropic, its capabilities are advanced enough to detect security flaws that human developers, and even traditional automated scanning tools, frequently miss.

This creates a “double-edged sword” scenario for the banking sector:
The Benefit: Banks could use the model to find and patch their own weaknesses before criminals do.
The Risk: Once the technology is integrated into internal systems, its findings could become a roadmap for attackers. Hackers or “third-party bad actors” who gained access to those findings would hold a detailed guide to a bank’s most critical security gaps.

Controlled Release and “Project Glasswing”

The potential danger is significant enough that Anthropic has opted against a general public release of the model. Instead, the company is managing the risk through a restricted initiative called “Project Glasswing.”

Currently, access to the model is limited to a specialized coalition of 40 companies. This containment strategy reflects a growing trend in the AI industry: as models become more capable of “reasoning” through complex security problems, the industry is shifting toward highly controlled, gated environments to prevent widespread exploitation.

The Bottom Line: The emergence of specialized AI like Claude Mythos highlights a new paradox in cybersecurity: the very tools designed to find vulnerabilities may become the most powerful weapons for those seeking to exploit them.