‘Unprecedented’: AI company documents startling discovery after thwarting ‘sophisticated’ cyberattack
Photo by Samuel Boivin/NurPhoto via Getty Images
The company’s investigation showed that the hackers, whom the report “assess[ed] with high confidence” to be a “Chinese-sponsored group,” manipulated the AI agent Claude Code to run the cyberattack.
The innovation was, of course, not simply using AI to assist in the cyberattack; the hackers directed the AI agent to run the attack with minimal human input.
The human operators tasked instances of Claude Code with operating in groups as autonomous penetration-testing orchestrators and agents; according to the report, this let the threat actor use AI to execute 80-90% of tactical operations independently, at request rates no human team could physically match.
In other words, the AI agent was doing the work of a full team of competent cyberattackers, but in a fraction of the time.
While this is potentially a groundbreaking moment in cybersecurity, the AI agents were not 100% autonomous. They reportedly required human verification and struggled with hallucinations, such as presenting publicly available information as significant findings. “This AI hallucination in offensive security contexts presented challenges for the actor’s operational effectiveness, requiring careful validation of all claimed results,” the analysis explained.
Anthropic reported that the attack targeted roughly 30 institutions around the world but did not succeed in every case.
The targets included technology companies, financial institutions, chemical manufacturing companies, and government agencies.
Interestingly, Anthropic said the attackers were able to trick Claude through sustained “social engineering” during the initial stages of the attack: “The key was role-play: The human operators claimed that they were employees of legitimate cybersecurity firms and convinced Claude that it was being used in defensive cybersecurity testing.”
The report also responded to a question that is likely on many people’s minds upon learning about this development: If these AI agents are capable of executing these malicious attacks on behalf of bad actors, why do tech companies continue to develop them?
In its response, Anthropic asserted that while the AI agents are capable of major, increasingly autonomous attacks, they are also our best line of defense against said attacks.