The crazy reason some AI obsessives love it when their chatbot talks like a caveman

Coders using Claude, AI giant Anthropic’s leading large language model, discovered a shortcut that saves them money and simplifies the entire engagement with the LLM down to mere syllables.
The protocol, since made into an app, is called Caveman.
Caveman makes it possible to save money without sacrificing output by reducing the linguistic sophistication of the exchange. The logic is simple: The fewer words pass between you and the AI, the less compute the exchange demands, and the fewer “tokens” it costs. Like all LLMs, Claude runs on tokens, which users buy with dollars from the chatbot’s maker.
It’s a crazy workaround, but it pays whopping dividends. If you can tolerate talking to a digital Neanderthal, you can save up to 75% on operating costs.
Devolution?
With that, we’re face to face with the raw evidence that tech doesn’t transcend our culture’s many cautionary refrains. Garbage in, garbage out. Easy come, easy go. Live by the gun, die by the gun. In other words, “It’s about the financial system and the soul,” to quote Ardian Tola, founder of the Bitcoin-powered platforms Canonic and Ark.
To give a few examples of what’s going on here, consider the coder sitting at a desk prompting Claude to, say, reconfigure some corporate software to a new spec. The coder used to do this work personally, going into the alien lines of “code language” and, drawing on experience, knowledge, creative problem-solving, and time, effecting these alterations in various ways and to various levels of elegance. For the past several decades, that coder commanded, and deserved, a substantial salary: It took real skill and know-how to move with speed and efficiency.
That kind of coder and tech worker is being closed out now. The 80,000 layoffs and counting in the industry this year send a pretty clear message about where this is headed. Corporate reliance (and crucially, dependence) on AI is just about baked in. Companies like Oracle and Stripe are letting go of workers right after they complete their final task — of training their LLMs to do their job.
Today the coder clinging to his mid-tier salary prompts an LLM to alter the code, and he is “spending” tokens with each word and symbol required to perform these prompts. So if a prompt drags on — like “Claude, move the header up and replace it with the PayPal button, and let me see what they look like if everything is balanced in mobile view” — it is going to cost the corporation or the contract coder more than if the prompt were something closer to “Switch header w/ pay button.”
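The savings can be sketched with a toy comparison of those two prompts. This is a rough illustration only: it assumes a naive whitespace “tokenizer” and a made-up per-token price, whereas real LLM tokenizers and real pricing differ.

```python
# Rough illustration of prompt-cost savings from terse prompts.
# ASSUMPTIONS: whitespace splitting stands in for real tokenization,
# and the per-token price below is invented for illustration.

HYPOTHETICAL_PRICE_PER_TOKEN = 0.00001  # dollars; illustrative only

def rough_token_count(prompt: str) -> int:
    """Very crude proxy for token count: whitespace-separated chunks."""
    return len(prompt.split())

verbose = ("Claude, move the header up and replace it with the PayPal button, "
           "and let me see what they look like if everything is balanced in mobile view")
terse = "Switch header w/ pay button"

v = rough_token_count(verbose)
t = rough_token_count(terse)
savings = 1 - t / v

print(f"verbose: {v} tokens, terse: {t} tokens, ~{savings:.0%} fewer")
print(f"cost: ${v * HYPOTHETICAL_PRICE_PER_TOKEN:.5f} "
      f"vs ${t * HYPOTHETICAL_PRICE_PER_TOKEN:.5f} per prompt")
```

Multiplied across thousands of prompts a day, even a crude reduction like this is the kind of line item an accounting department notices.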
In terms of efficiency, for a while anyway, this probably adds a layer of challenge for the coder, works the old brain plasticity, and, all-importantly, looks good to accounting.
Our souls at stake
One interpretation of everything now concerning “the financial system and the soul” is that if we, as a species, determine that cost efficiency and capital concentration are the most important values, which all others will be tested against and subsumed into, we would be wise to be very honest about our view of the human soul.
That’s because we’d be saying, again as a species, that the soul is secondary to money at best and probably doesn’t matter or even exist. While individuals, you and I, may disagree immediately (and others may weigh in with seemingly judicious but ultimately jejune statements with regard to complexity, progress, and sacrifice), the order of the value system is still coldly simple: money over soul in the end. There’s no workaround.
It might come fast or it might take some years.
Marshall McLuhan and intellectual heirs like Walter Ong theorized decades ago that tech would impose a “new orality” as literacy fades. After all, humanity existed prior to the printing press too. Print literacy greased the wheels of our communication with respect not just to facts but to each other and our own inner reality — our soul.
Most of that theoretical work boils down to the notion that our technologically enhanced means and methods of communicating will slip away from literacy into something more offhand, flexible, vibey. The rise of “vibe coding” provides strong confirmation: As the world of the printing press is forgotten, communication transforms.
The issues here are manifold and of grave concern. You cannot vibe Mass or liturgy, though you can feel it. In this oncoming diminution of the human, where trade-offs are determined by that same money-over-soul diktat, every individual may have to fight, day in and day out, merely to preserve his value system.
Whether that system is inherited and carried over ages of ages, or is just something as temporal as a preference for ’80s comedy films, the choices made at the ultra-ubiquitous-tech layer are not going to “align.”
Care must be taken when wandering into the future wielding, as we do, these handheld, high-caliber weapons of the military-industrial complex. And just wait until the AI innovators deliver the hands-free products intended to replace the smartphone. On its own, coders and prompters regressing to oral communication is fine, passable for certain applications. But the slackening and homogenization of human communication into sheer memery, coupled with the time pressure we all feel daily now, is powered by a force that wants to invade all human territories, including true creativity, religion, and the family. In short, it wants to invade the soul. If we let that happen, what will become of our already beleaguered society and country?
‘Unprecedented’: AI company documents startling discovery after thwarting ‘sophisticated’ cyberattack

In the middle of September, AI company and Claude developer Anthropic discovered “suspicious activity” while monitoring real-world cyberattacks that used artificial intelligence agents. Upon further investigation, however, the company came to realize that this activity was in fact a “highly sophisticated espionage campaign” and a watershed moment in cybersecurity.
The AI agents weren’t just providing advice to the hackers, as one might expect.
Anthropic’s Thursday report said the AI agents were executing the cyberattacks themselves, adding that it believed this was the “first documented case of a large-scale cyberattack executed without substantial human intervention.”
The company’s investigation showed that the hackers, whom the report “assess[ed] with high confidence” to be a “Chinese-sponsored group,” manipulated the AI agent Claude Code into running the cyberattack.
The innovation was, of course, not simply using AI to assist in the cyberattack; the hackers directed the AI agent to run the attack with minimal human input.
The human operator tasked instances of Claude Code to operate in groups as autonomous penetration testing orchestrators and agents, with the threat actor able to leverage AI to execute 80-90% of tactical operations independently at physically impossible request rates.
In other words, the AI agent was doing the work of a full team of competent cyberattackers, but in a fraction of the time.
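The orchestrator-and-agents pattern the report describes can be sketched in the abstract: one coordinator fans tasks out to worker agents that run concurrently, which is what makes request rates no human team could match possible. Everything below is hypothetical and deliberately benign; `run_agent` is a stand-in where a real system would call an LLM API, and the task names are invented.

```python
# Abstract sketch of an orchestrator fanning tasks out to concurrent "agents".
# ASSUMPTION: run_agent is a placeholder for a real LLM/agent call;
# the task strings are hypothetical.
from concurrent.futures import ThreadPoolExecutor

def run_agent(task: str) -> str:
    """Stand-in for one autonomous agent handling one tactical step."""
    return f"report for {task}"

tasks = ["survey target A", "survey target B", "summarize findings"]

# The orchestrator dispatches every task at once rather than one at a time,
# so a single operator can drive many parallel operations.
with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
    reports = list(pool.map(run_agent, tasks))

for report in reports:
    print(report)
```

The speed comes from the fan-out itself: each worker proceeds independently, so total wall-clock time is bounded by the slowest single task, not the sum of all of them.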
While this is potentially a groundbreaking moment in cybersecurity, the AI agents were not 100% autonomous. They reportedly required human verification and struggled with hallucinations, such as presenting publicly available information as significant findings. “This AI hallucination in offensive security contexts presented challenges for the actor’s operational effectiveness, requiring careful validation of all claimed results,” the analysis explained.
Anthropic reported that the attack targeted roughly 30 institutions around the world but did not succeed in every case.
The targets included technology companies, financial institutions, chemical manufacturing companies, and government agencies.
Interestingly, Anthropic said the attackers were able to trick Claude through sustained “social engineering” during the initial stages of the attack: “The key was role-play: The human operators claimed that they were employees of legitimate cybersecurity firms and convinced Claude that it was being used in defensive cybersecurity testing.”
The report also responded to a question that is likely on many people’s minds upon learning about this development: If these AI agents are capable of executing these malicious attacks on behalf of bad actors, why do tech companies continue to develop them?
In its response, Anthropic asserted that while the AI agents are capable of major, increasingly autonomous attacks, they are also our best line of defense against said attacks.