
1980s-inspired AI companion promises to watch and interrupt you: ‘You can see me? That’s so cool’

A tech entrepreneur is hoping casual AI users and businesses alike are looking for a new pal.
In this case, “PAL” is a flexible term that can mean either a complimentary video companion or a replacement for a human customer service worker.
‘I love the print on your shirt; you’re looking sharp today.’
Tech company Tavus calls PALs “the first AI built to feel like real humans.”
Overall, Tavus’ messaging is seemingly directed toward both those seeking an artificial friend and those looking to streamline their workforce.
As a friend, the avatar will allegedly “reach out first” and contact the user by text or video call. It can allegedly anticipate “what matters” and step in “when you need them the most.”
In an X post, founder Hassaan Raza spoke about PALs being emotionally intelligent and capable of “understanding and perceiving.”
The AI bots are meant to “see, hear, reason,” and “look like us,” he wrote, further cementing the technology’s pitch as a companion.
“PALs can see us, understand our tone, emotion, and intent, and communicate in ways that feel more human,” Raza added.
In a promotional video for the product, the company showcased basic interactions between a user and the AI buddy.
A woman is shown greeting the “digital twin” of Raza, as he appears as a lifelike AI PAL on her laptop.
Raza’s AI responds, “Hey, Jessica. … I’m powered by the world’s fastest conversational AI. I can speak to you and see and hear you.”
Excited by the notion, Jessica responds, “Wait, you can see me? That’s so cool.”
The woman then immediately seeks superficial validation from the artificial person.
“What do you think of my new shirt?” she asks.
The AI lives up to the trope that chatbots are largely agreeable no matter the subject matter and says, “I love the print on your shirt; you’re looking sharp today.”
After the pleasantries are over, Raza’s AI goes into promo mode and boasts about its ability to use “rolling vision, voice detection, and interruptibility” to seem more lifelike for the user.
The video soon shifts to messaging about corporate integration meant to replace low-wage employees.
Describing the “digital twins” or AI agents, Raza explains that the AI program is an opportunity to monetize celebrity likeness or replace sales agents or customer support personnel. He claims the avatars could also be used in corporate training modules.
The interface of the future is human.
We’ve raised a $40M Series B from CRV, Scale, Sequoia, and YC to teach machines the art of being human, so that using a computer feels like talking to a friend or a coworker.
And today, I’m excited for y’all to meet the PALs: a new… pic.twitter.com/DUJkEu5X48
— Hassaan Raza (@hassaanrza) November 12, 2025
In his X post, Raza also attempted to flex his acting chops by creating a 200-second film about a man/PAL named Charlie who is trapped in a computer in the 1980s.
Raza revives the computer after it spent 40 years on the shelf, finding Charlie still trapped inside. In an attempt at comedy, Charlie asks Raza if flying cars or jetpacks exist yet. Raza responds, “We have Salesforce.”
The founder goes on to explain that PALs will “evolve” with the user, remembering preferences and needs. While these features are presented as groundbreaking, the PAL essentially amounts to an AI face attached to an ongoing chatbot conversation.
AI users know that modern chatbots like Grok or ChatGPT are fully capable of remembering previous discussions and building upon what they have already learned. What’s seemingly new here is the AI being granted app permissions to contact the user and further infiltrate personal space.
Whether that annoys the user or is exactly what the person needs or wants is up for interpretation.
Trump tech czar slams OpenAI scheme for federal ‘backstop’ on spending — forcing Sam Altman to backtrack

OpenAI is under the spotlight after seemingly asking for the federal government to provide guarantees and loans for its investments.
Now, as the company is walking back its statements, a recent OpenAI letter has resurfaced that may prove it is talking in circles.
‘We’re always being brought in by the White House …’
The artificial intelligence company is predominantly known for its free and paid versions of ChatGPT. Microsoft is its key investor, with over $13 billion sunk into the company for a 27% stake.
The recent controversy stems from an interview OpenAI chief financial officer Sarah Friar gave to the Wall Street Journal. Friar said in the interview, published Wednesday, that OpenAI had goals of buying up the latest computer chips before its competition could, which would require sizeable investment.
“This is where we’re looking for an ecosystem of banks, private equity, maybe even governmental … the way governments can come to bear,” Friar said, per Tom’s Hardware.
Reporter Sarah Krouse asked for clarification on the topic, which is when Friar expressed interest in federal guarantees.
“First of all, the backstop, the guarantee that allows the financing to happen, that can really drop the cost of the financing but also increase the loan to value, so the amount of debt you can take on top of an equity portion for —” Friar continued, before Krouse interrupted, seeking clarification.
“[A] federal backstop for chip investment?”
“Exactly,” Friar said.
Krouse bored in further, asking whether Friar had been speaking to the White House about how to “formalize” the “backstop.”
“We’re always being brought in by the White House, to give our point of view as an expert on what’s happening in the sector,” Friar replied.
After these remarks were publicized, OpenAI immediately backtracked.
On Wednesday night, Friar posted on LinkedIn that “OpenAI is not seeking a government backstop” for its investments.
“I used the word ‘backstop’ and it muddied the point,” she continued. She went on to claim that the full clip showcased her point that “American strength in technology will come from building real industrial capacity which requires the private sector and government playing their part.”
On Thursday morning, David Sacks, President Trump’s special adviser on crypto and AI, stepped in to crush any of OpenAI’s hopes of government guarantees, even if they were only alleged.
“There will be no federal bailout for AI,” Sacks wrote on X. “The U.S. has at least 5 major frontier model companies. If one fails, others will take its place.”
Sacks added that the White House does want to make power generation easier for AI companies, but without increasing residential electricity rates.
“Finally, to give benefit of the doubt, I don’t think anyone was actually asking for a bailout. (That would be ridiculous.) But company executives can clarify their own comments,” he concluded.
The saga was far from over, though, as OpenAI CEO Sam Altman seemingly dug the hole even deeper.
By Thursday afternoon, Altman had released a lengthy statement starting with his rejection of the idea of government guarantees.
“We do not have or want government guarantees for OpenAI datacenters. We believe that governments should not pick winners or losers, and that taxpayers should not bail out companies that make bad business decisions or otherwise lose in the market. If one company fails, other companies will do good work,” he wrote on X.
He went on to explain that it was an “unequivocal no” that the company should be bailed out. “If we screw up and can’t fix it, we should fail.”
It wasn’t long before the online community started claiming that OpenAI was indeed asking for government help as recently as a week prior.
As originally noted by the X account hilariously titled “@IamGingerTrash,” OpenAI has a letter posted on its own website that seems to directly ask for government guarantees. However, as Sacks noted, it does seem to relate to powering servers and providing electrical capacity.
Dated October 27, 2025, the letter was directed to the U.S. Office of Science and Technology Policy from OpenAI Chief Global Affairs Officer Christopher Lehane. It asked the OSTP to “double down” and work with Congress to “further extend eligibility to the semiconductor manufacturing supply chain; grid components like transformers and specialized steel for their production; AI server production; and AI data centers.”
The letter then said, “To provide manufacturers with the certainty and capital they need to scale production quickly, the federal government should also deploy grants, cost-sharing agreements, loans, or loan guarantees to expand industrial base capacity and resilience.”
Altman has yet to address the letter.
Stop feeding Big Tech and start feeding Americans again

America needs more farmers, ranchers, and private landholders — not more data centers and chatbots. Yet the federal government is now prioritizing artificial intelligence over agriculture, offering vast tracts of public land to Big Tech while family farms and ranches vanish and grocery bills soar.
Conservatives have long warned that excessive federal land ownership, especially in the West, threatens liberty and prosperity. The Trump administration shares that concern but has taken a wrong turn by fast-tracking AI infrastructure on government property.
If the nation needs a new Manhattan Project, it should be for food security, not AI slop.
Instead of devolving control to the states or private citizens, it’s empowering an industry that already consumes massive resources and delivers little tangible value to ordinary Americans. And this is on top of Interior Secretary Doug Burgum’s execrable plan to build 15-minute cities and “affordable housing.”
In July, President Trump signed an executive order titled “Accelerating Federal Permitting of Data Center Infrastructure” as part of his administration’s AI Action Plan. The order streamlines permits, grants financial incentives, and opens federal properties — from Superfund sites to military bases — to AI-related development. The Department of Energy quickly identified four initial sites: Oak Ridge Reservation in Tennessee, Idaho National Laboratory, the Paducah Gaseous Diffusion Plant in Kentucky, and the Savannah River Site in South Carolina.
Last month, the list expanded to include five Air Force bases — Arnold (Tennessee), Davis-Monthan (Arizona), Edwards (California), Joint Base McGuire-Dix-Lakehurst (New Jersey), and Robins (Georgia) — totaling over 3,000 acres for lease to private developers at fair market value.
Locating AI facilities on military property is preferable to disrupting residential or agricultural communities, but the favoritism shown to Big Tech raises an obvious question: Is this the best use of public land? And will anchoring these bubble companies on federal property make them “too big to fail,” just like the banks and mortgage lenders before the 2008 crash?
President Trump has acknowledged the shortage of affordable meat as a national crisis. If any industry deserves federal support, it’s America’s independent farmers and ranchers. Yet while Washington clears land for billion-dollar data centers, small producers are disappearing. In the past five years, the U.S. has lost roughly 141,000 family farms and 150,000 cattle operations. The national cattle herd is at its lowest level since 1951. Since 1982, America has lost more than half a million farms — nearly a quarter of its total.
Multiple pressures — rising input costs, droughts, and inflation — have crippled family farms that can’t compete with corporate conglomerates. But federal land policy also plays a role. The government’s stranglehold on Western lands limits grazing rights, water access, and expansion opportunities. If Washington suddenly wants to sell or lease public land, why not prioritize ranchers who need it for feed and forage?
The Conservation Reserve Program compounds the problem. The 2018 Farm Bill extension locked up as much as 30 million acres of land — five million in Wyoming and Montana alone — under the guise of conservation. Wealthy absentee owners exploit the program by briefly “farming” land to qualify it as cropland, then retiring it into CRP to collect taxpayer payments. More than half of CRP acreage is owned by non-farmers, some earning over $200 per acre while the land sits idle.
Those acres could support hundreds of cattle per section or produce millions of tons of hay. Instead, they create artificial shortages that drive up feed costs. During the post-COVID inflation surge, hay prices spiked 40%, hitting $250 per ton this year. Even now, inflated prices cost ranchers six figures a year in extra expenses in a business that operates on thin margins.
If the nation needs a new Manhattan Project, it should be for food security, not AI slop. Free up federal lands and idle CRP acreage for productive use. Help ranchers grow herds and lower food prices instead of subsidizing a speculative industry already bloated with venture capital and hype.
At present, every dollar of revenue at OpenAI costs roughly $7.77 to generate — a debt spiral that invites the next taxpayer bailout. By granting these firms privileged access to public land, the government risks creating another class of untouchable corporate wards, as it did with Fannie Mae and Freddie Mac two decades ago.
AI won’t feed Americans. It won’t fix supply chains. It won’t lower grocery bills. Until these companies can put real food on real tables, federal land should serve the purpose God intended — to sustain the people who live and work upon it.
AI can fake a face — but not a soul

The New York Times recently profiled Scott Jacqmein, an actor from Dallas who sold his likeness to TikTok for $750 and a free trip to the Bay Area. He hasn’t landed any TV shows, movies, or commercials, but his AI-generated likeness has — a virtual version of Jacqmein is now “acting” in countless ads on TikTok. As the Times put it, Jacqmein “fields one or two texts a week from acquaintances and friends who are pretty sure they have seen him pitching a peculiar range of businesses on TikTok.”
Now, Jacqmein “has regrets.” But why? He consented to sell his likeness. His image isn’t being used illegally. He wanted to act, and now — at least digitally — he’s acting. His regrets seem less about ethics and more about economics.
The more perfect the imitation, the greater the lie. What people crave isn’t flawless illusion — it’s authenticity.
Times reporter Sapna Maheshwari suggests as much. Her story centers on the lack of royalties and legal protections for people like Jacqmein.
She also raises moral concerns, citing examples where digital avatars were used to promote objectionable products or deliver offensive messages. In one case, Jacqmein’s AI double pitched a “male performance supplement.” In another, a TikTok employee allegedly unleashed AI avatars reciting passages from Hitler’s “Mein Kampf.” TikTok removed the tool that made the videos possible after CNN brought the story to light.
When faces become property
These incidents blur into a larger problem — the same one raised by deepfakes. In recent months, digital impostors have mimicked public figures from Bishop Robert Barron to the pope. The Vatican itself has had to denounce fake homilies generated in the likeness of Leo XIV. Such fabrications can mislead, defame, or humiliate.
But the deepest problem with digital avatars isn’t that they deceive. It’s that they aren’t real.
Even if Jacqmein were paid handsomely and religious figures embraced synthetic preaching as legitimate evangelism, something about the whole enterprise would remain wrong. Selling one’s likeness is a transaction of the soul. It’s unsettling because it treats what’s uniquely human — voice, gesture, and presence — as property to be cloned and sold.
When a person licenses his “digital twin,” he doesn’t just part with data. He commodifies identity itself. The actor’s expressions, tone, and mannerisms become a bundle of intellectual property. Someone else owns them now.
That’s why audiences instinctively recoil at watching AI puppets masquerade and mimic people. Even if the illusion is technically impressive, it feels hollow. A digital replica can’t evoke the same moral or emotional response as a real human being.
Selling the soul
This isn’t a new theme in art or philosophy. In a classic “Simpsons” episode, Bart sells his soul to his pal Milhouse for $5 and soon feels hollow, haunted by nightmares, convinced he’s lost something essential. The joke carries a metaphysical truth: When we surrender what defines us as human — even symbolically — we suffer a real loss.
For those who believe in an immortal soul, as Jesuit philosopher Robert Spitzer argues in “Science at the Doorstep to God,” this loss is more than psychological. To sell one’s likeness is to treat the image of the divine within as a market commodity. The transaction might seem trivial — a harmless digital contract — but the symbolism runs deep.
Oscar Wilde captured this inversion of morality in “The Picture of Dorian Gray.” His protagonist stays eternally young while his portrait, the mirror of his soul, decays. In our digital age, the roles are reversed: The AI avatar remains young and flawless while the human model ages, forgotten and spiritually diminished.
Jacqmein can’t destroy his portrait. It’s contractually owned by someone else. If he wants to stop his digital self from hawking supplements or energy drinks, he’ll need lawyers — and he’ll probably lose. He’s condemned to watch his AI double enjoy a flourishing career while he struggles to pay rent. The scenario reads like a lost episode of “Black Mirror” — a man trapped in a parody of his own success. (In fact, “The Waldo Moment” and “Hang the DJ” come close to this.)
The moral exit
The conventional answer to this dilemma is regulation: copyright reforms, consent standards, watermarking requirements. But the real solution begins with refusal. Actors shouldn’t sell their avatars. Consumers shouldn’t support platforms that replace people with synthetic ghosts.
If TikTok and other media giants populate their feeds with digital clones, users should boycott them and demand “fair-trade human content.” Just as conscientious shoppers insist on buying ethically sourced goods, viewers should insist on art and advertising made by living, breathing humans.
Tech evangelists argue that AI avatars will soon become indistinguishable from the people they’re modeled on. But that misses the point. The more perfect the imitation, the greater the lie. What people crave isn’t flawless illusion — it’s authenticity. They want to see imperfection, effort, and presence. They want to see life.
If we surrender that, we’ll lose something far more valuable than any acting career or TikTok deal. We’ll lose the very thing that makes us human.
Artificial intelligence is not your friend

Half of Americans say they are lonely and isolated — and artificial intelligence is stepping into the void.
Sam Altman recently announced that OpenAI will soon provide erotica for lonely adults. Mark Zuckerberg envisions a future in which solitary people enjoy AI friends. According to the Harvard Business Review, the top uses for large language models are therapy and companionship.
Lonely people don’t need better algorithms. We need better friends — and the courage to be one.
It’s easy to see why this is happening. AI is always available, endlessly patient, and unfailingly agreeable. Millions now pour their secrets into silicon confidants, comforted by algorithms that respond with affirmation and tact.
But what masquerades as friendship is, in fact, a dangerous substitute. AI therapy and friendship burrow us deeper into ourselves when what we most need is to reach out to others.
As Jordan Peterson once observed, “Obsessive concern with the self is indistinguishable from misery.” That is the trap of AI companionship.
Hall of mirrors
AI echoes back your concerns, frames its answers around your cues, and never asks anything of you. At times, it may surprise you with information, but the conversation still runs along tracks you have laid. In that sense, every exchange with AI is solipsistic — a hall of mirrors that flatters the self but never challenges it.
It can’t grow with you to become more generous, honorable, just, or patient. Ultimately, every interaction with AI cultivates a narrow self-centeredness that only increases loneliness and unhappiness.
Even when self-reflection is necessary, AI falls short. It cannot read your emotions, adjust its tone, or provide physical comfort. It can’t inspire courage, sit beside you in silence, or offer forgiveness. A chatbot can only mimic what it has never known.
Most damaging of all, it can’t truly empathize. No matter what words it generates, it has never suffered loss, borne responsibility, or accepted love. Deep down, you know it doesn’t really understand you.
With AI, you can talk all you want. But you will never be heard.
Humans need love, not algorithms
Humans are social animals. We long for love and recognition from other humans. The desire for friendship is natural. But people are looking where no real friend can be found.
Aristotle taught that genuine friendship is ordered toward a common good and requires presence, sacrifice, and accountability. Unlike friendships of utility or pleasure — which dissolve when benefit or amusement fades — true friendship endures, because it calls each person to become better than they are.
Today, the word “friend” is often cheapened to a mere social-media connection, making Aristotelian friendship — rooted in virtue and sacrifice — feel almost foreign. Yet it comes alive in ancient texts, which show the heights that true friendship can inspire.
Real friendships are rooted in ideals older than machines and formed through shared struggles and selfless giving.
In Homer’s “Iliad,” Achilles and Patroclus shared an unbreakable bond forged in childhood and through battle. When Patroclus was killed, Achilles’ rage and grief changed the course of the Trojan War and of history. The Bible describes the friendship of Jonathan and David, whose devotion to one another, to their people, and to God transcended ambition and even family ties: “The soul of Jonathan was knit with the soul of David.”
These friendships were not one-sided projections. They were built upon shared experiences and selflessness that artificial intelligence can never offer.
Each time we choose the easy route of AI companionship over the hard reality of human relationships, we render ourselves less available and less able to achieve the true friendship our ancestors enjoyed.
Recovering genuine friendship requires forming people who are capable of being friends. People must be taught how to speak, listen, and seek truth together — something our current educational system has largely forgotten.
Classical education offers a remedy, reviving these habits of human connection by immersing students in the great moral and philosophical conversations of the past. Unlike modern classrooms, where students passively absorb information, classical seminars require them to wrestle together over what matters most: love in Plato’s “Symposium,” restlessness in Augustine’s “Confessions,” loss in Virgil’s “Aeneid,” or reconciliation in Shakespeare’s “King Lear.”
These dialogues force students to listen carefully, speak honestly, and allow truth — not ego — to guide the exchange. They remind us that friendship is not built on convenience but on mutual searching, where each participant must give as well as receive.
Reclaiming humanity
In a world tempted by the frictionless ease of talking to machines, classical education restores human encounters. Seminars cultivate the courage to confront discomfort, admit error, and grapple with ideas that challenge our assumptions — a rehearsal for the moral and social demands of real friendship.
Is classroom practice enough for friendship? No. But it plants the seeds. Habits of conversation, humility, and shared pursuit of truth prepare students to form real friendships through self-sacrifice outside the classroom: to cook for an exhausted co-worker, to answer the late-night call for help, to lovingly tell another he or she is wrong, to simply be present while someone grieves.
It’s difficult to form friendships in the modern world, where people are isolated in their homes, occupied by screens, and vexed by distractions and schedules. Technology tempts us with the illusion of effortless companionship — someone who is always where you are, whenever you want to talk. Like all fantasies, it can be pleasant for a time. But it’s not real.
Real friendships are rooted in ideals older than machines and formed through shared struggles and selfless giving.
Lonely people don’t need better algorithms. We need better friends — and the courage to be one.
Editor’s note: This article was published originally in the American Mind.
Liberals, heavy porn users more open to having an AI friend, new study shows

A small but significant percentage of Americans say they are open to having a friendship with artificial intelligence, while some are even open to romance with AI.
The figures come from a new study by the Institute for Family Studies and YouGov, which surveyed American adults under 40. Their data revealed that while very few young Americans are already friends with some sort of AI, about 10 times as many are open to it.
‘It signals how loneliness and weakened human connection are driving some young adults.’
Just 1% of Americans under 40 who were surveyed said they were already friends with an AI. However, a staggering 10% said they are open to the idea. With 2,000 participants surveyed, that’s 200 people who said they might be friends with a computer program.
Liberals said they were more open to the idea of befriending AI (or are already in such a friendship) than conservatives were, to the tune of 14% of liberals vs. 9% of conservatives.
The idea of being in a “romantic” relationship with AI, not just a friendship, again produced some troubling — or scientifically relevant — responses.
When it comes to young adults who are not married or “cohabitating,” 7% said they are open to the idea of being in a romantic partnership with AI.
At the same time, a larger percentage of young adults think that AI has the potential to replace real-life romantic relationships; that number sits at a whopping 25%, or 500 respondents.
There is a large crossover with frequent pornography users: The more often respondents said they consume online porn, the more likely they were to be open to an AI romantic partner, or to be in such a relationship already.
Only 5% of those who said they never consume porn, or do so “a few times a year,” said they were open to an AI romantic partner.
That number rises to 9% for those who watch porn anywhere from once or twice a month to several times per week. For those who watch online porn daily, the number was 11%.
Overall, young adults who are heavy porn users were the group most open to having an AI girlfriend or boyfriend, in addition to being the most open to an AI friendship.
“Roughly one in 10 young Americans say they’re open to an AI friendship — but that should concern us,” Dr. Wendy Wang of the Institute for Family Studies told Blaze News.
“It signals how loneliness and weakened human connection are driving some young adults to seek emotional comfort from machines rather than people,” she added.
Another notable statistic from the survey: Young women were more likely than men to perceive AI as a threat in general, with 28% agreeing vs. 23% of men. Women are also less excited about AI’s effect on society; just 11% of women were excited vs. 20% of men.
A new study hints at what happens when superintelligence gets brain rot — just like us

AI and LLMs appear to be in a bit of a slump, with the latest revelation coming from a major study showing that large language models, the closest we’ve come yet to so-called artificial general intelligence, are degraded in their capacities when subjected to lo-fi, low-quality, “junk” content.
The study, from a triad of university computer science departments including the University of Texas, set out to determine the relationship between data quality and performance in LLMs. The scientists trained their LLMs on viral X.com/Twitter data, emphasizing high-engagement posts, and observed a more than 20% reduction in reasoning capacity, 30% falloffs in contextual memory tasks, and — perhaps most ominously, since the study tested for measurable personality traits like agreeableness and extraversion — a leap in output that can technically be characterized as narcissistic and psychopathic.
Sound familiar?
The paper analogizes the function of the LLM performance with human cognitive performance and refers to this degradation in both humans and LLMs as “brain rot,” a “shorthand for how endless, low-effort, engagement-bait content can dull human cognition — eroding focus, memory discipline, and social judgment through compulsive online consumption.”
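To make the study’s headline numbers concrete, here is a minimal, purely illustrative sketch of the before-and-after comparison it describes. The benchmark scores below are hypothetical stand-ins chosen to land near the reported figures; this is not the researchers’ code or data.

```python
# Illustrative sketch only (not the study's actual pipeline): it reproduces
# the arithmetic behind the reported drops, using made-up benchmark scores
# for a control model vs. the same model after training on "junk" data.

from dataclasses import dataclass


@dataclass
class Scores:
    reasoning: float        # e.g., accuracy on a reasoning benchmark
    context_memory: float   # e.g., accuracy on a long-context recall task


def degradation(control: Scores, junk_trained: Scores) -> dict[str, float]:
    """Percent drop on each benchmark relative to the control model."""
    return {
        "reasoning_drop_pct": 100 * (control.reasoning - junk_trained.reasoning) / control.reasoning,
        "memory_drop_pct": 100 * (control.context_memory - junk_trained.context_memory) / control.context_memory,
    }


# Hypothetical scores, picked to match the ~20% reasoning and ~30% memory falloffs.
print(degradation(Scores(reasoning=0.70, context_memory=0.60),
                  Scores(reasoning=0.56, context_memory=0.42)))
# -> {'reasoning_drop_pct': 20.0, 'memory_drop_pct': 30.0}
```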
The whole project reeks of hubris, reeks of avarice and power.
There is no great or agreed-upon utility in cognition-driven analogies made between human and computer performance. The temptation persists for computer scientists and builders to read in too much, making categorical errors with respect to cognitive capacities, definitions of intelligence, and so forth. The temptation is to imagine that our creative capacities ‘out there’ are somehow reliable mirrors of the totality of our beings ‘in here,’ within our experience as humans.
We’ve seen something similar this year with the prevalence of so-called LLM psychosis, which — in yet another example of confusing terminology applied to already confused problems — seeks to describe neither psychosis embedded in LLMs nor psychosis measured in their “behavior,” but rather the severe mental illness reported by many people after pouring themselves, their attention, and their belief into computer-contained AI “personages” such as Claude or Grok. Why do they need names anyway? LLM 12-V1, for example, would be fine …
The “brain rot” study rather proves, if anything, that the project of creating AI is getting a little discombobulated within the metaphysical hall of mirrors its creators, backers, and believers have, so far, barged their way into, heedless of old-school measures like maps, armor, transport, a genuine plan. The whole project reeks of hubris, reeks of avarice and power. Yet, on the other hand, the inevitability of the integration of AI into society, into the project of terraforming the living earth, isn’t really being approached by a politically, or even financially, authoritative and responsible body — one which might perform the machine-yoking, human-compassion measures required if we’re to imagine ourselves marching together into and through that hall of mirrors to a hyper-advanced, technologically stable, and human-populated civilization.
So, when it’s observed here that AI seems to be in a bit of a slump — perhaps even a feedback loop of idiocy, greed, and uncertainty coupled, literally wired-in now, with the immediate survival demands of the human species — it’s not a thing we just ignore. A signal suggesting as much erupted last week from a broad coalition of high-profile media, business, faith, and arts voices brought under the aegis of the Statement on Superintelligence, which called for “a prohibition on the development of superintelligence, not lifted before there is 1. broad scientific consensus that it will be done safely and controllably, and 2. strong public buy-in.”
There’s a balance, there are competing interests, and we’re all still living under a veil of commercial and mediated fifth-generation warfare. There’s a sort of adults-in-the-room quality we are desperately lacking at the moment. But the way the generational influences lay on the timeline isn’t helping. With boomers largely tech-illiterate but still hanging on, with Xers tech-literate but stuck in the middle (as ever), with huge populations of highly tech-saturated Millennials, Zoomers, and so-called generation Alpha waiting for their promised piece of the social contract, the friction heat is gathering. We would do well to recognize the stakes and thus honor the input of those future humans who shouldn’t have to be born into or navigate a hall of mirrors their predecessors failed to escape.