
Anthropic mythos tests across 50 orgs as US policy clashes with Pentagon AI stance

Senior US economic officials are urging Wall Street to explore Anthropic's Mythos model even as other agencies battle the company in court over its stance on AI safety.

The Trump administration's conflicting stance on Anthropic

The Trump administration is quietly promoting Anthropic to America's largest banks even as it tries to cripple the company through the Pentagon. Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell have taken a markedly different line from the Defense Department.

According to Bloomberg, Bessent and Powell summoned executives from JPMorgan Chase, Goldman Sachs, Citigroup, Bank of America, and Morgan Stanley this week. They urged the firms to deploy Anthropic's new Mythos model to detect cybersecurity vulnerabilities inside their systems.

The recommendation is striking because Anthropic is currently battling the Department of Defense in federal court. Defense Secretary Pete Hegseth designated the company a supply chain risk, which blocks it from military contracts and instructs defense contractors to stop using its technology.

The designation followed Anthropic's refusal to lift two core safety restrictions from its AI models: no use in fully autonomous weapons and no deployment for mass surveillance of American citizens. Those conditions sit at the heart of the company's public commitment to AI safety guardrails.

Yet two of the administration's most senior economic policymakers are now telling Wall Street to adopt the very product the Pentagon has sought to blacklist.

Inside Claude Mythos Preview and Project Glasswing

Claude Mythos Preview is a frontier-scale model that Anthropic did not explicitly design for cybersecurity tasks. Instead, the vulnerability-hunting capability emerged as a downstream byproduct of broader advances in code reasoning and autonomous operation, according to the company.

During internal testing, Mythos reportedly identified thousands of previously unknown zero-day vulnerabilities across major operating systems and web browsers. These flaws were described as issues not yet recognized by the original software developers, a claim that raised alarm across security circles.

The capabilities appeared significant enough that Anthropic chose not to release the model openly. Instead, it launched Project Glasswing, a tightly controlled access programme that distributes the model to roughly 50 organisations under strict usage terms.

Named partners include Amazon Web Services, Apple, Google, Microsoft, Nvidia, Cisco, CrowdStrike, Palo Alto Networks, and JPMorgan Chase. Anthropic has also committed up to $100 million in usage credits and $4 million in direct donations to open-source security organisations under the initiative.

The "too dangerous to release" framing has attracted scepticism. Tom's Hardware, for instance, noted that claims of thousands of critical zero-day discoveries rested on only 198 manual reviews, and that many flagged issues were tied to older software or looked impractical to exploit in real-world attacks.

Critics across the security community argued the restricted roll-out could resemble a commercial tactic as much as responsible AI governance. Some see a classic enterprise play: manufacture scarcity, amplify perceived risk, and let major customers queue for access.

The Pentagon paradox and the courts

The collision between the Bessent-Powell endorsement and the Hegseth designation is not a small communication error. Instead, it illustrates how different branches of the same administration are pursuing openly contradictory AI policies toward a single company.

The Pentagon dispute dates back to February, when Hegseth gave Anthropic CEO Dario Amodei a Friday deadline to drop key safety limits or lose the firm's $200 million defense contract. Amodei refused to remove the restrictions.

Hegseth responded by labelling Anthropic a supply chain risk, and President Donald Trump then ordered federal agencies to halt use of its technology. A Pentagon official accused Amodei of having a "God complex," while Trump branded Anthropic a "radical left, woke company."

The courts have since delivered mixed rulings. A federal judge in California issued a preliminary injunction blocking the supply chain designation, sharply criticising the government's approach in a widely quoted opinion.

The judge wrote that nothing in the governing statute supports "the Orwellian notion" that an American company may be branded a potential adversary and saboteur of the US for expressing disagreement with the government. However, an appeals court in Washington, D.C. later rejected Anthropic's request to pause the blacklisting while litigation continues.

The practical outcome is messy. Anthropic remains excluded from DoD contracts but can continue working with other US government agencies, including economic and financial regulators.

It is precisely into that institutional gap, barred from the Pentagon but not from the Treasury or the Fed, that Bessent and Powell moved this week.

How Wall Street is testing Mythos

JPMorgan Chase is the only bank listed publicly as an official Project Glasswing partner. However, Bloomberg reports that Goldman Sachs, Citigroup, Bank of America, and Morgan Stanley are now running internal trials with Mythos as well.

The tests reportedly span multiple use cases: vulnerability detection across internal infrastructure, fraud-risk flagging in transaction flows, and automation of compliance workflows across complex financial systems. The banks see these pilots as early experiments ahead of a broader deployment of AI-based cyber defenses.

The rush toward Mythos cybersecurity testing reflects a clear fear scenario. If a model can uncover unknown flaws in operating systems and browsers, it can likely identify weaknesses inside banking infrastructure too.

The defensive logic is straightforward: it is safer for a bank's own tools to discover those vulnerabilities before an adversary's AI model does. That said, the same capability that strengthens defenses could also, in theory, be misused by attackers if similar systems escaped institutional control.
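The "scan your own systems before an attacker does" logic can be illustrated with a deliberately trivial sketch. This is a toy pattern-based scanner, not Anthropic's tooling and nothing like how an AI model reasons about code semantics; the pattern names and sample code are invented for illustration only.

```python
import re

# Toy sketch: flag a few well-known risky coding patterns. Real AI-driven
# vulnerability discovery reasons about program behaviour, not regexes.
RISKY_PATTERNS = {
    "hardcoded secret": re.compile(r"(password|api_key)\s*=\s*['\"]\w+['\"]", re.I),
    "SQL string concat": re.compile(r"execute\(.*\+.*\)"),
    "eval of input": re.compile(r"\beval\("),
}

def scan(source: str) -> list[tuple[int, str]]:
    """Return (line_number, issue) pairs for lines matching a risky pattern."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for issue, pattern in RISKY_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, issue))
    return findings

# Hypothetical vulnerable snippet, scanned defensively before deployment.
sample = 'api_key = "hunter2"\ncursor.execute("SELECT * FROM t WHERE id=" + uid)\n'
print(scan(sample))  # → [(1, 'hardcoded secret'), (2, 'SQL string concat')]
```

The point of the banks' pilots is the same idea at vastly greater sophistication: run the discovery tool over your own infrastructure first, so the findings land on an internal ticket queue rather than in an attacker's toolkit.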

International regulatory reactions

The policy response is no longer just American. The Financial Times reported that UK officials across several key institutions are scrambling to understand Mythos's risk profile, including potential systemic implications for finance.

Regulators at the Bank of England, the Financial Conduct Authority, and HM Treasury are reportedly in talks with the National Cyber Security Centre to examine the vulnerabilities highlighted by Mythos and assess whether similar models might expose weaknesses in British financial infrastructure.

Within the next fortnight, representatives from major UK banks, insurance companies, and exchanges are expected to receive detailed briefings. The goal is to map how large language models with advanced code reasoning could reshape cyber risk and operational resilience.

A structural problem in US AI strategy

The fight around Anthropic's Mythos reveals a deeper fault line in the administration's AI strategy. The same government that framed Anthropic as a national security threat for refusing to drop guardrails is now urging the financial system to rely on its technology for protection.

The resulting signal to Anthropic is internally inconsistent. On one side, the Pentagon calls the company too dangerous to trust with defense work. On the other, the Treasury Secretary personally phones bank CEOs to recommend Anthropic's most controversial product for core cybersecurity operations.

For Anthropic, that contradiction may prove strategically useful. Every major bank that integrates Mythos into its infrastructure strengthens the company's role in critical national systems, making the Pentagon's supply chain designation look increasingly detached from operational reality.

For the administration, the episode highlights the cost of letting personal grievance and ideological rhetoric steer national security policy. When one part of government blacklists a vendor that another part is actively deploying, coordination and credibility both suffer.

The banks, meanwhile, appear largely unfazed by the political drama. When the Treasury Secretary and the Fed Chair tell the biggest institutions in finance to test a powerful new cybersecurity tool, they test it, regardless of what the Pentagon thinks about the company that built it.

In the end, the Mythos saga underscores how powerful AI systems are forcing governments, regulators, and critical industries to confront trade-offs between security, safety, and political control far faster than existing policy frameworks can adapt.

Alessia Pannone
Graduated in communication sciences, currently a student in the master's degree course in publishing and writing. Writes articles with attention to SEO and search engine indexing.