


British financial regulators are holding urgent discussions over potential cyber risks linked to a new artificial intelligence model developed by Anthropic, according to a report by the Financial Times.
Officials from the Bank of England, the Financial Conduct Authority and the UK Treasury are in talks with the National Cyber Security Centre. The discussions focus on assessing possible vulnerabilities in critical IT systems that could be exposed by the company’s latest AI model.
The model, known as Claude Mythos Preview, is not publicly available. It is being tested under a controlled programme called “Project Glasswing”, where selected organisations use it for defensive cybersecurity purposes.
According to the report, major banks, insurers and financial institutions in the UK are expected to be briefed on the risks in the coming weeks. Reuters said it could not independently verify the details.
Anthropic has claimed that the model has already identified thousands of vulnerabilities across operating systems, web browsers and widely used software. The company also said it has chosen not to release the model publicly due to safety concerns.
In the United States, Treasury Secretary Scott Bessent has reportedly held similar discussions with major Wall Street banks on the model’s potential cyber risks.
The development has sparked debate among experts and policymakers. UK MP Danny Kruger has urged the government to engage with Anthropic, warning the model could pose serious cybersecurity threats.
However, some experts remain sceptical. AI researcher Gary Marcus questioned the company’s claims, suggesting they may be exaggerated.
Concerns have also been raised about transparency. Earlier this month, Anthropic confirmed it had accidentally released part of its internal source code, although it said no sensitive data was exposed.
Analysts say the situation highlights both the growing power of advanced AI systems and the challenges regulators face in managing potential risks to financial and digital infrastructure.