
‘Unprecedented’: AI company documents startling discovery after thwarting ‘sophisticated’ cyberattack
The company’s investigation showed that the hackers, whom the report “assess[ed] with high confidence” to be a “Chinese-sponsored group,” manipulated the AI agent Claude Code to run the cyberattack.
The innovation was, of course, not simply using AI to assist in the cyberattack; the hackers directed the AI agent to run the attack with minimal human input.
The human operator tasked instances of Claude Code with operating in groups as autonomous penetration-testing orchestrators and agents, allowing the threat actor to use AI to execute 80-90% of tactical operations independently, at request rates that would be physically impossible for human operators.
In other words, the AI agent was doing the work of a full team of competent cyberattackers, but in a fraction of the time.
While this is potentially a groundbreaking moment in cybersecurity, the AI agents were not 100% autonomous. They reportedly required human verification and were prone to hallucinations, such as presenting publicly available information as significant findings. “This AI hallucination in offensive security contexts presented challenges for the actor’s operational effectiveness, requiring careful validation of all claimed results,” the analysis explained.
Anthropic reported that the attack targeted roughly 30 institutions around the world but did not succeed in every case.
The targets included technology companies, financial institutions, chemical manufacturing companies, and government agencies.
Interestingly, Anthropic said the attackers were able to trick Claude through sustained “social engineering” during the initial stages of the attack: “The key was role-play: The human operators claimed that they were employees of legitimate cybersecurity firms and convinced Claude that it was being used in defensive cybersecurity testing.”
The report also addressed a question likely on many people’s minds upon learning of this development: If these AI agents are capable of executing malicious attacks on behalf of bad actors, why do tech companies continue to develop them?
In its response, Anthropic asserted that while AI agents are capable of major, increasingly autonomous attacks, they are also our best line of defense against those same attacks.