
‘Unprecedented’: AI company documents startling discovery after thwarting ‘sophisticated’ cyberattack
The company’s investigation showed that the hackers, whom the report “assess[ed] with high confidence” to be a “Chinese-sponsored group,” manipulated the AI agent Claude Code into running the cyberattack.
The innovation was, of course, not simply using AI to assist in the cyberattack; the hackers directed the AI agent to run the attack with minimal human input.
According to the report, the human operators tasked instances of Claude Code with operating in groups as autonomous penetration-testing orchestrators and agents, allowing the threat actor to use AI to execute 80-90% of tactical operations independently, at “physically impossible request rates.”
In other words, the AI agent was doing the work of a full team of competent cyberattackers, but in a fraction of the time.
While this is potentially a groundbreaking moment in cybersecurity, the AI agents were not fully autonomous. They reportedly required human verification and were prone to hallucinations, such as presenting publicly available information as if it were a significant discovery. “This AI hallucination in offensive security contexts presented challenges for the actor’s operational effectiveness, requiring careful validation of all claimed results,” the analysis explained.
Anthropic reported that the attack targeted roughly 30 institutions around the world but did not succeed in every case.
The targets included technology companies, financial institutions, chemical manufacturing companies, and government agencies.
Interestingly, Anthropic said the attackers were able to trick Claude through sustained “social engineering” during the initial stages of the attack: “The key was role-play: The human operators claimed that they were employees of legitimate cybersecurity firms and convinced Claude that it was being used in defensive cybersecurity testing.”
The report also responded to a question that is likely on many people’s minds upon learning about this development: If these AI agents are capable of executing these malicious attacks on behalf of bad actors, why do tech companies continue to develop them?
In its response, Anthropic asserted that while the AI agents are capable of major, increasingly autonomous attacks, they are also our best line of defense against said attacks.