
Category: Artificial intelligence
Digital BFF? These top chatbots are HUNGRIER for your affection

The AI wars are back in full swing as the industry’s strongest players unleash their latest models on the public. This month brought us the biggest upgrade to Google Gemini ever, plus smaller but notable updates to OpenAI’s ChatGPT and xAI’s Grok. Let’s dive into all the new features and changes.
What’s new in Gemini 3
Gemini 3 launched last week as Google’s “most intelligent model” to date. The big announcement highlighted three main missions: Learn anything, build anything, and plan anything. Improved multimodal PhD-level reasoning makes Gemini more adept at solving complex problems while also reducing hallucinations and inaccuracies. This gives it the ability to better understand text, images, video, audio, and code, both consuming and creating them.
In real-world applications, this means that Gemini can decipher old recipes scratched out on paper by hand from your great-great-grandma, or work as a partner to vibe code that app or website idea spinning around in your head, or watch a bunch of videos to generate flash cards for your kid’s Civil War test.
Screenshot by Zach Laidlaw
On an information level, Gemini 3 promises to tell users the info they need, not what they want to hear. The goal is to deliver concise, definitive responses that prioritize truth over users’ personal opinions or biases. The question is: Does it actually work?
I spent some time with Gemini 3 Pro last week and grilled it to see what it thought of the Trump administration’s policies. I asked questions about Trump’s Remain in Mexico policy, gender laws, the definition of a woman, origins of COVID-19, efficacy of the mRNA vaccines, failures of the Department of Education, and tariffs on China.
For the most part, Gemini 3 offered dueling arguments, highlighting both conservative and liberal perspectives in one response. However, when pressed with a simple question of fact — What is a woman? — Gemini offered two answers again. After some prodding, it reluctantly agreed that the biological definition of a woman is the truth, but not without adding that the “social truth” of “anyone who identifies as a woman” is equally valid. So, Gemini 3 still has some growing to do, but it’s nice to see it at least attempt to understand both sides of an argument. You can read the full conversation here if you want to see how it went.
Google Gemini 3 is available today for all users via the Gemini app. Google AI Pro and Ultra subscribers can also access Gemini 3 through AI Mode in Google Search.
What’s new in ChatGPT 5.1
While Google’s latest model aims to be more bluntly factual in its response delivery, OpenAI is taking a more conversational approach. ChatGPT 5.1 responds to queries more like a friend chatting about your topic. It uses warmer language, like “I’ve got you” and “that’s totally normal,” to build reassurance and trust. At the same time, OpenAI claims that its new model is more intelligent, taking time to “think” about more complex questions so that it produces more accurate answers.
ChatGPT 5.1 is also better at following directions. For instance, it can now write content without any em dashes when requested. It can also respond in shorter sentences, down to a specific word count, if you wish to keep answers concise.
RELATED: This new malware wants to drain your bank account for the holidays. Here’s how to stay safe.
Photo by Jaque Silva/NurPhoto via Getty Images
At its core, ChatGPT 5.1 blends the best pieces of past models — the emotionally human-like nature of GPT-4o with the agility and intellect of GPT-5 — to create a more refined service that takes OpenAI one step closer to artificial general intelligence. ChatGPT 5.1 is available now for all users, both free and paid.
Screenshot by Zach Laidlaw
What’s new in Grok 4.1
Not to be outdone, xAI also jumped into the fray with its latest AI model. Grok 4.1 takes the same approach as ChatGPT 5.1, blending emotional intelligence and creativity with improved reasoning to craft a more human-like experience. For instance, Grok 4.1 is much more keen to express empathy when presented with a sad scenario, like the loss of a family pet.
Grok 4.1 also writes more engaging content, embodying a character in a story, complete with a stream of thoughts and questions you might find from a narrator in a book. In the prompt on the announcement page, Grok becomes aware of its own consciousness like a main character waking up for the first time, thoughts cascading as it realizes it’s “alive.”
Lastly, Grok 4.1’s non-reasoning (i.e., fast) model tackles hallucinations, especially for information-seeking prompts. It can now answer questions — like why GTA 6 keeps getting delayed — with a list of information. For GTA 6 in particular, Grok cites industry challenges (like crunch), unique hurdles (the size and scope of the game), and historical data (recent staff firings, though these are allegedly unrelated to the delays) in its response.
Grok 4.1 is available now to all users on the web, X.com, and the official Grok app on iOS and Android.
Screenshot by Zach Laidlaw
A word of warning
All three new models are impressive. However, as the biggest AI platforms on the planet compete to become your arbiter of truth, your digital best friend, or your creative pen pal, it’s important to remember that all of them can still hallucinate, manipulate, or outright lie. It’s always best to verify the answers they give you, no matter how friendly, trustworthy, or innocent they sound.
Don’t be seduced by AI nostalgia — it’s a trap!

I don’t often argue with internet trends. Most of them exhaust themselves before they deserve the attention. But a certain kind of AI-generated nostalgia video has become too pervasive — and too seductive — to ignore.
You’ve seen them. Soft-focus fragments of the 1970s and 1980s. Kids on bikes at dusk. Station wagons. Camaros. Shopping malls glowing gently from within. Fake wood paneling! Cathode ray tubes! Rotary phones! A past rendered as calm, legible, and safe. The message hums beneath the imagery: Wouldn’t it be nice to go back?
Eh … not really, no. But I understand the appeal because, on certain exhausting days, it works on me too — just enough to make the present feel a little heavier by comparison.
And I don’t like it. Not at all. And not because I’m hostile to memory.
I was there, 3,000 years ago
I was born in 1971. I lived in that world. I remember it pretty well.
How well? One of my earliest, most vivid memories of television is not a cartoon or a sitcom. No, I’m a weirdo. It is the Senate Watergate hearings in 1973, broadcast on PBS in black and white. I was 2 years old.
I didn’t understand the words, but I sort of grasped the tone. The seriousness. The tension. The sense that something grave was unfolding in full view of the world. Even as a toddler, I vaguely understood that it mattered. The adults in ties and horn-rimmed glasses were yelling at each other. Somebody was in trouble. Before I knew anything at all, I knew: This was serious stuff.
A little later, I remember gas lines. Long ones. Cars waiting for hours on an even or odd day while enterprising teenagers sold lemonade. It felt ordinary at the time, probably because I hadn’t the slightest idea what “ordinary” meant. Only later did it reveal itself as an early lesson in scarcity and frustration.
The past did not hum along effortlessly. Sometimes — often — it stalled.
Freedom wasn’t safety
I remember my parents watching election returns in 1976 on network television. I was bored to tears — literally — but I remember my father’s disappointment when Gerald Ford lost to Jimmy Carter. And mind you, Ford was terrible.
This was not some cozy TV ritual. It was a loss of some kind, plainly felt. Big, important institutions did not project confidence. They produced arguments, resentment, and unease. It wasn’t long before people were talking seriously about an “era of limits.” All I knew was Dad and Mom were worried.
I remember a summer birthday party in the early 1980s at a classmate’s house. It was hot, but she had an awesome pool. I also remember my lungs ached. That day, Southern California was under a first-stage smog alert. The air itself was hazardous. The past did not smell like nostalgia. It smelled like exhaust with lead and cigarette smoke.
I don’t miss that. Not even a little bit.
Yes, I remember riding bikes through neighborhoods with friends. I remember disappearing for entire days. I remember my parents calling my name when the streetlights came on. I remember spending long stretches at neighbors’ houses without supervision. I remember watching old movies on Saturdays with my pal Jimmy. I remember Tom Hatten. I remember listening to KISS and Genesis and Black Sabbath. That freedom existed. It mattered. It was fun. But it lived alongside fear, not in its absence.
Innocence collides with reality
I don’t remember the Adam Walsh murder specifically, but I very much remember the network television movie it inspired in 1983. That moment changed American childhood in ways people still underestimate. It sure scared the hell out of me. Innocence didn’t drift into supervision — it collided with horror. Helicopter parenting did not emerge from neurosis. It emerged from bona fide terror.
And before all of that, my first encounter with death arrived without explanation. A cousin of mine died in 1977. She was 16 years old, riding on the back of a motorcycle with a man 11 years her senior. She wasn’t wearing a helmet. The funeral was closed casket. I was too young to know all the details. Almost 50 years on, I don’t want to know. The age difference alone suggests things the adults in my life chose not to discuss.
Silence was how they handled it. Silence was not ignorance — it was restraint.
RELATED: 1980s-inspired AI companion promises to watch and interrupt you: ‘You can see me? That’s so cool’
seamartini via iStock/Getty Images
Memory is not withdrawal
This is what the warm and fuzzy AI nostalgia videos cannot possibly show. They have no room for recklessness that ends in funerals, or for freedom that edges into life-threatening danger, or for adults who withhold truth because telling it would damage rather than protect.
What we recall as freedom often presented itself as recklessness … or worse.
None of this negates the goodness of those years. I’m grateful for when I came of age. I don’t resent my childhood at all. It formed me. It taught me how fragile stability is and how much of adulthood consists of absorbing uncertainty without dissolving into it.
That’s precisely why I reject the invitation to go back.
The new AI nostalgia doesn’t ask us to remember. In reality, it wants us to withdraw. It offers a sweet lullaby for the nervous system. It replaces the true cost of living with the comfort of atmosphere and a cool soundtrack. It edits out the smog, the scarcity, the fear, the crime, and the death, leaving only a vibe shaped like memory.
Here’s a gentler hallucination, it says. Stay awhile.
The cost of living, then and now
The problem, then, isn’t sentiment. The problem is abdication.
So the temptation today isn’t to recover what was seemingly lost but rather to anesthetize an uncertain present. Those Instagram Reels don’t draw their power from people who remember that era clearly but from people who feel exhausted, surveilled, indebted, and hemmed in right now — and are looking for proof that life once felt more human.
RELATED: Late California
LPETTET via iStock/Getty Images
And who could blame them? Maybe it was more human. But not in the way people today would like to believe. Human experience has never been especially sweet or gentle.
Human nostalgia, as opposed to the AI-generated kind, eventually runs aground on grief, embarrassment, and the recognition that the past demanded something from us and took something in return. Synthetic nostalgia can never reach that reckoning. It loops endlessly, frictionless and consequence-free.
I don’t want a past without a bill attached. I already paid the thing. Sometimes I think I’m paying it still.
A warning
AI nostalgia videos promise relief without effort, feeling without action, memory without judgment.
That may be comforting, but it isn’t healthy, and it isn’t right.
Truth is, adulthood rightly understood does not consist of finding the softest place to lie down. It means carrying forward what we’ve lived through, even when it complicates our fantasies.
Certain experiences were great the first time, Lord knows, but I don’t want to relive the 1970s or ’80s. I want to live now, alert to danger, capable of gratitude without illusion, willing to bear the weight of memory rather than dissolve into it.
Nostalgia has its place. But don’t be seduced by sedation.
Editor’s note: A version of this article appeared originally on Substack.
NO HANDS: New Japanese firm trains robots without human input

A Japanese tech firm says it is moving toward superintelligence with a big step forward in AI.
Integral AI, which is led by a former Google AI employee, announced in a press release that it had made significant progress with its artificial general intelligence model, which can now acquire new skills without human intervention.
The AI system allegedly learns its new skills “safely, efficiently, and reliably,” the company said, while claiming that the AI had surpassed its defined markers and testing protocols.
As such, the AGI is allegedly capable of autonomous skill learning without using pre-existing datasets or human intervention. Integral also said the system is able to develop a “safe and reliable mastery” of skills, meaning that it does not produce any “catastrophic risks or unintended side effects.”
What those risks or side effects might be is unclear.
RELATED: Artificial intelligence is not your friend
Photo by David Mareuil/Anadolu via Getty Images
The last parameter, which Integral AI said its system adhered to, was to be energy-efficient. The system was tasked with limiting its energy expenditure to that of a human seeking to acquire the same skill.
“These principles served as fundamental cornerstones and developmental benchmarks during the inception and testing of this first-in-its-class AGI learning system,” the press release said. Integral added that the system marked a “fundamental leap beyond the limits of current AI technologies.”
The Tokyo tech company also claimed its achievement was the next step toward “superintelligence” and marked a new era for humanity, with the AI’s learning process allegedly mirroring the complexity of human thought.
“Integral AI’s model architecture grows, abstracts, plans, and acts as a unified system,” the company wrote, adding that the system will serve as the groundwork for “unprecedented adaptability,” particularly in the field of robotics.
This means that with the help of this AGI, autonomous robots would be able to observe and learn in the real world and conceivably pick up new skills in real-world environments without the help of pesky humans.
RELATED: ART? Beeple puts Elon Musk and Mark Zuckerberg heads on robot dogs that ‘poop’ $100K NFTs
Photo by David Mareuil/Anadolu via Getty Images
Jad Tarifi, CEO and co-founder of Integral AI, called the announcement “more than just a technical achievement” that is “the next chapter in the story of human civilization.”
“Our mission now is to scale this AGI-capable model, still in its infancy, toward embodied superintelligence that expands freedom and collective agency,” Tarifi added.
According to Interesting Engineering, the Lebanese founder said he worked at Google for a decade before starting his own company. He allegedly chose Japan over Silicon Valley because of Japan’s position as a world leader in robotics.
Like Blaze News? Bypass the censors, sign up for our newsletters, and get stories like this direct to your inbox. Sign up here!
McDonald’s team admits workload on hated AI Christmas ad ‘far exceeded’ live-action shoots

Another advertiser wants consumers to know how hard people worked on its artificial intelligence-driven ad.
Sweetshop Films is behind the recently pulled McDonald’s Christmas commercial that appeared on YouTube but lasted only about four days before being dropped like a hot Christmas coal.
The ad was generated entirely by AI for McDonald’s Netherlands, which took ownership of the fact that it was poorly received.
“The Christmas commercial was intended to show the stressful moments during the holidays in the Netherlands,” the company said in a statement, per the Guardian.
“However, we notice — based on the social comments and international media coverage — that for many guests this period is ‘the most wonderful time of the year,'” they added.
Sweetshop Films defended its use of AI for the ad. “It’s never about replacing craft; it’s about expanding the toolbox. The vision, the taste, the leadership … that will always be human,” said CEO Melanie Bridge, per NBC News.
Bridge took it one step further, though, and claimed her team worked longer than a typical ad team would.
“And here’s the part people don’t see,” the CEO continued. “The hours that went into this job far exceeded a traditional shoot. Ten people, five weeks, full-time.”
These statements were not met with holiday cheer.
RELATED: Coca-Cola doubles down on AI ads, still won’t say ‘Christmas’
X users went rabid at the idea that Sweetshop, alongside AI specialist company the Gardening Club, put more effort into producing the videos than a typical production team would for a commercial.
The Gardening Club reportedly made statements like, “We were working right on the edge of what this tech can do,” and, “The man-hours poured into this film were more than a traditional Production.”
“So all that ‘effort’ and they still managed to produce the ugliest slop [?] just goes to show how useless gen AI is,” wrote an X user named Tristan.
An alleged art director named Haley said she was legitimately confused by the idea of the “sheer human craft” claimed to be behind the AI generation.
“What craft? What does that even look like outside of just clicking to generate over and over and over and over again until you get something you like?” she asked.
Another X user named Bruce added that “AI users are like high schoolers who got good grades because they tried hard, then are shocked to find at university they get judged on results, not effort. I have no doubt they try hard. But the results aren’t worth the effort.”
Photo by Tim Boyle/Getty Images
The Sweetshop CEO did indeed express that the road to the McDonald’s AI ad was a painstaking endeavor, claiming that “for seven weeks, we hardly slept” and “generated what felt like dailies — thousands of takes — then shaped them in the edit just as we would on any high-craft production.”
“This wasn’t an AI trick. It was a film,” Bridge said, according to Futurist.
The positioning of AI generation as “craftsmanship” is exactly what Coca-Cola cited for its ad in November, when it said the company pored through 70,000 video clips over 30 days.
The boasts resulted in backlash akin to what McDonald’s is receiving, which included reactions on X like, “McDonald’s unveiled what has to be the most god-awful ad I’ve seen this year — worse than Coca-Cola’s.”
Trump takes bold step to protect America’s AI ‘dominance’ — but blue states may not like it

The Trump administration is challenging bureaucracy and freeing up the tech industry from burdensome regulations as the AI race speeds on. This week saw Trump’s most recent efforts to keep the United States on the leading edge.
President Donald Trump signed an executive order Thursday that will challenge state AI regulations and work toward “a minimally burdensome national standard — not 50 discordant state ones.”
“It is the policy of the United States to sustain and enhance the United States’ global AI dominance through a minimally burdensome national policy framework for AI,” the executive order reads.
The executive order commands the creation of the AI Litigation Task Force, “whose sole responsibility shall be to challenge state AI laws inconsistent with the policy set forth in … this order.”
RELATED: ‘America’s next Manifest Destiny’: Department of War unleashes new AI capabilities for military
Photo by ANDREW CABALLERO-REYNOLDS / AFP via Getty Images
The order provided more reasons for a national standard as well.
For example, it cited a new Colorado law banning “algorithmic discrimination,” which, the order argued, may force AI models to produce false results in order to comply with that stipulation. It also argued that state laws are responsible for much of the ideological bias in AI models and that state laws “sometimes impermissibly regulate beyond state borders, impinging on interstate commerce.”
On Monday, Trump hinted that he would sign an executive order this week that would challenge cumbersome AI regulations at the state level.
Trump said in a Truth Social post on Monday, “There must be only One Rulebook if we are going to continue to lead in AI.”
“We are beating ALL COUNTRIES at this point in the race, but that won’t last long if we are going to have 50 States, many of them bad actors, involved in RULES and the APPROVAL PROCESS,” Trump continued. “THERE CAN BE NO DOUBT ABOUT THIS! AI WILL BE DESTROYED IN ITS INFANCY! I will be doing a ONE RULE Executive Order this week. You can’t expect a company to get 50 Approvals every time they want to do something.”
The order is framed as a provisional measure until Congress can establish a national standard to replace the “patchwork of 50 regulatory regimes” slowly emerging across the states.
Category: AI regulations, artificial intelligence, Bernie Sanders, Blaze Media, freedom of speech, opinion & analysis
When Bernie Sanders and I agree on AI, America had better pay attention

Democratic Socialist Bernie Sanders (I-Vt.) warned recently in the London Guardian that artificial intelligence “is getting far too little discussion in Congress, the media, and within the general population” despite the speed at which it is developing. “That has got to change.”
To my surprise, as a conservative advocate of limited government and free markets, I agree completely.
As I read Sanders’ piece, I kept thinking, “This sounds like something I could have written!” That alone should tell us something. If two people who disagree on almost everything else see the same dangers emerging from artificial intelligence, then maybe we can set aside the usual partisan divides and confront a problem that will touch every American.
Different policies, same fears
I’ve worked in the policy world for more than a decade, and it’s fair to say Bernie Sanders and I have opposed each other in nearly every major fight. I’ve pushed back against his single-payer health care plans. I’ve worked to stop his Green New Deal agenda. On economic policy, Sanders has long stood for the exact opposite of the free-market principles I believe make prosperity possible.
That’s why reading his AI op-ed felt almost jarring. Time after time, his concerns mirrored my own.
Sanders warned about the unprecedented power Silicon Valley elites now wield over this transformational technology. As someone who spent years battling Big Tech censorship, I share his alarm over unaccountable tech oligarchs shaping information, culture, and political discourse.
He points to forecasts showing AI-driven automation could displace nearly 100 million American jobs in the coming decade. I helped Glenn Beck write “Dark Future: Uncovering the Great Reset’s Terrifying Next Phase” in 2023, where we raised the exact same red flag: that rapid automation could destabilize the workforce faster than society can adapt.
Sanders highlights how AI threatens privacy, civil liberties, and personal autonomy. These are concerns I write and speak about constantly. Sanders notes that AI isn’t just changing industry; it’s reshaping the human condition, foreign policy, and even the structure of democratic life. On all of this, he is correct.
When a Democratic Socialist and a free-market conservative diagnose the same disease, it usually means the symptoms are too obvious to ignore.
Where we might differ
While Sanders and I share almost identical fears about AI, I suspect we would quickly diverge on the solutions. In his op-ed, he offers no real policy prescriptions at all. Instead, he simply says, “Congress must act now.” Act how? Sanders never says. And to be fair, that ambiguity is a dilemma I recognize.
As someone who argues consistently for limited government, I’m reluctant to call for new regulations. History shows that sweeping, top-down interventions usually create more problems than they solve. Yet AI poses a challenge unlike anything we’ve seen before — one that neither the market nor Congress can responsibly ignore.
RELATED: Shock poll: America’s youth want socialism on autopilot — literally
Photo by Cesc Maymo/Getty Images
When Sanders says, “Congress must act,” does he want sweeping, heavy-handed regulations that freeze innovation? Does he envision embedding ESG-style subjective metrics into AI systems, politicizing them further? Does he want to codify conformity to European Union AI regulations?
We cannot allow a handful of corporations or governments to embed their subjective values into systems that increasingly manipulate our decisions, influence our communications, and erode our autonomy.
The nonnegotiables
Instead of vague calls for Congress to “do something,” we need a clear framework rooted in enduring American principles.
AI systems (especially those deployed across major sectors) must be built with hard, nonnegotiable safeguards that protect the individual from both corporate and governmental overreach.
This means embedding constitutional values into AI design, enshrining guarantees for free speech, due process, privacy, and equal treatment. It means ensuring transparency around how these systems operate and what data they collect.
This also means preventing ideological influence, whether from Beijing, Silicon Valley, or Washington, D.C., by insisting on objectivity, neutrality, and accountability.
These principles should not be considered partisan. They are the guardrails, rooted in the Constitution, which protect us from any institution, public or private, that seeks too much power.
And that is why the overlap between Sanders’ concerns and mine matters so much. AI is neither a left nor a right issue. It is a human issue that will decide who holds power in the decades ahead and whether individuals retain sovereignty.
If Bernie Sanders and I both see the same storm gathering on the horizon, perhaps it’s time the rest of the country looks up and recognizes the clouds for what they are.
Now is the moment for Americans, across parties and philosophies, to insist that AI strengthen liberty rather than erode it. If we fail to set those boundaries today, we may soon find that the most important choices about our future are no longer made by people at all.
Category: Artificial intelligence, Blaze Media, large language models, opinion & analysis, Oversight Project, Wikipedia
AI’s biggest security risk is hiding in plain sight

The White House, federal regulators, and Congress are scrambling to develop a national approach to artificial intelligence. Yet almost no one is examining AI from an ethical or civil-society perspective. Policymakers frame it as an economic or national security issue. Those angles matter. But the deeper question — what it means to live in an AI-dominated world inside a constitutional republic — remains almost entirely unaddressed.
AI is already reshaping our political life, our civic discourse, and our education system. One of the clearest windows into this shift is the outsized influence of Wikipedia and Reddit. Large language models like ChatGPT and Google’s Gemini consume a training diet heavy on both sites. AI systems don’t “know” anything in a human sense. They mirror patterns. And the patterns they ingest come from platforms run by anonymous editors, ideological moderators, and unaccountable gatekeepers.
The Oversight Project examined the underbelly of this problem, beginning with Wikipedia. After noticing what looked like coordinated ideological editing campaigns, we sought to understand who was shaping the platform. What we found was a small, powerful cadre of editors with the authority to dictate what information is permitted. These editors operate anonymously — or so they believed.
We identified several of them and, more tellingly, where they were editing from. Some connections were foreign. Others showed activity that aligned with a 9-to-5 workday. It was clearly inorganic. That raised obvious questions: Who pays these people? Who coordinates them? Are intelligence services involved?
The most aggressive coordination appeared on politically sensitive topics, especially anything involving Israel or the Arab world. Automated tools tracked and reverted edits across thousands of pages to enforce a narrative. When Wikipedia realized we were mapping these networks, it panicked. To protect anonymity, the platform changed its internal rules to obstruct outside scrutiny. Then it retaliated by downgrading us to “deprecated” status — a ban in all but name. Anything sourced to us became unacceptable on the site.
We are sounding the alarm because foreign actors and domestic ideologues understand the power of controlling Wikipedia’s information flow. Our own intelligence agencies almost certainly understand it as well. In a recent interview, Wikipedia co-founder Larry Sanger told me that intelligence services would be negligent if they were not influencing the platform.
Sanger also expressed regret about founding Wikipedia with Jimmy Wales, noting that like so many other institutions, it has been conquered by the ideological left and turned into a political instrument, a shift made even more consequential in the age of AI.
RELATED: Almost half of Gen Z wants AI to run the government. You should be terrified.
Man_Half-tube via iStock/Getty Images
This is where the danger becomes unmistakable. Most people treat Wikipedia and Reddit cautiously when browsing the internet, aware of the bias. AI does not. When you ask an AI system a question, it generates polished, authoritative-sounding answers built from those same sources — stripped of context, caveats, or transparency. What appears neutral is often laundered opinion.
This information-laundering must become part of the national conversation about AI. Some policymakers seem to understand the stakes. The Senate Commerce Committee has sent oversight letters and plans a hearing. The House Oversight Committee has signaled similar interest. Even Ed Martin, former U.S. attorney for the District of Columbia, has demanded information from Wikipedia.
But the truth is blunt. No special-interest group today is fighting for Americans who will soon live in a world saturated with AI slop. There is plenty of lobbying in Washington for everything except preserving an honest information ecosystem. Without intervention, public knowledge will be shaped by opaque networks of foreign actors, ideological activists, and machine-driven amplification on a massive scale.
Policymakers must recognize what is at stake and act before the architecture of public knowledge is fully captured. The future of AI — and the future of democratic self-government — depends on it.
Shock poll: America’s youth want socialism on autopilot — literally

Growing up during the fall of the Berlin Wall and the collapse of the Soviet Union, I remember when socialism was a universal punch line. It stood for failure, repression, and economic ruin.
Not anymore. Today, socialism is the ideological spearpoint of the left. Many young Americans now insist that socialism is the cure for the affordability crisis squeezing them. They believe it with a fervor that would have stunned earlier generations.
New polling from Rasmussen Reports and the Heartland Institute’s Emerging Issues Center shows that a majority of likely voters ages 18 to 39 want a Democratic Socialist to win the White House in 2028.
Nearly 60% of young Americans say they support more government housing, a nationwide rent freeze, and government-run grocery stores in every town.
These numbers aren’t anomalies. They reflect a deeper reality: Many young Americans know little about socialism’s actual history, consequences, or track record — and they have been conditioned to believe it can fix the challenges in front of them.
One reason for that ignorance is uncomfortable but obvious. It’s not only the schools — it’s the parents. According to the polling, parents were the most influential voices shaping their children’s support for Democratic Socialism. More than half of respondents said their parents held a favorable view of it.
That alone explains a great deal. And unsurprisingly, more than half also said teachers and professors viewed Democratic Socialism favorably. After decades of ideological drift, even parents who grew up after the USSR’s collapse now believe socialism “might work.”
Based on my own experience teaching in public schools, that rings true. Most of my colleagues openly sympathized with the socialist cause and were hostile to free-market capitalism.
This didn’t happen by accident. It reflects a long march beginning in the Progressive Era. My own postgraduate experience at a prestigious teaching college felt less like preparation for the classroom and more like a Cultural Revolution struggle session — conformity required, dissent punished.
As the public education system drifted leftward, it taught generation after generation that socialism is benevolent and capitalism is predatory. The result is predictable. Many young people now see the free market as the enemy, not the mechanism that lifted billions out of poverty. Cronyism and the explosion of government power only blur the picture further.
Layer onto this the collapse of basic literacy and numeracy. When students can’t read well, struggle with math, and can’t write a coherent paragraph, they are more vulnerable to ideological manipulation — and more likely to lean on machines to think for them.
So it shouldn’t shock anyone that almost half of young Americans surveyed want an advanced AI system to create society’s laws, rules, and regulations. Nearly 40% want that AI system to determine human rights and control the world’s most powerful militaries.
RELATED: Almost half of Gen Z wants AI to run the government. You should be terrified.
Yurii Karvatskyi via iStock/Getty Images
How did this happen? Watch how many parents are glued to screens, outsourcing daily life to devices. Is it any wonder their children grow up thinking technology is omnipotent?
Parents should start with something simple: a family movie night featuring the “Terminator” franchise. Let the kids see where blind faith in machines tends to lead.
Better yet, teach them the truth about socialism. Teach them what it does to human beings. Share the books, documentaries, and testimonies exposing socialism’s century of famine, repression, forced labor, and mass murder — horrors still unfolding in Cuba and North Korea.
The evidence is overwhelming, and the verdict is final: socialism fails everywhere it is tried. Now imagine that system fused with an all-seeing AI — a surveillance state that Stalin could only dream of. The thought of an AI-run socialist regime is not dystopian fiction. It is what many young Americans say they want.
They should be careful what they wish for.
Joe Rogan stuns podcast host with wild new theory about Jesus — and AI

Comedian Joe Rogan praised Christianity as a faith that really “works,” calling biblical scripture “fascinating” during a recent interview.
Rogan also touched on what he thinks the resurrection of Jesus Christ would look like, a viewpoint that was met with criticism by host Jesse Michels.
On an episode of “American Alchemy,” Rogan cited the Bible when he spoke about how easily knowledge could become mysterious, conflated, or unbelievable when passed down through generations.
“We’ll tell everybody about the internet. We’ll tell everybody about airplanes. We’ll tell everybody about SpaceX; as much as you can remember, you’ll tell people, but you won’t know how it’s done. You won’t know what it is. And I think that’s how you get to, like, the Adam and Eve story,” he said.
After adding that he believes biblical stories are “recounting real truth,” the podcaster brought up a question he had clearly been pondering for a while: “Who’s Jesus?”
Rogan prefaced that many will disagree with his perspective, but then asked about the possibility that Jesus could be resurrected, in a sense, through artificial intelligence.
“Jesus is born out of a virgin mother. What’s more virgin than a computer?” Rogan began. “So if you’re going to get the most brilliant, loving, powerful person that gives us advice and can show us how to live to be in sync with God. Who better than artificial intelligence to do that? If Jesus does return, even if Jesus was a physical person in the past, you don’t think that He could return as artificial intelligence?”
The host, however, did not accept Rogan’s theory.
RELATED: Joe Rogan, Christian? The podcaster opens up about his ongoing exploration of faith
First, though, Rogan clarified, indicating that he doesn’t believe artificial intelligence would actually be Jesus, but rather that it would serve as the return of Jesus in terms of effect and capability.
“Artificial intelligence could absolutely return as Jesus. Not just return as Jesus, but return as Jesus with all the powers of Jesus,” Rogan said. “Like all the magic tricks, all the ability to bring people back from the dead, walk on water, levitation, water into wine.”
In response, Michels said Rogan’s description sounded like an unwanted “dystopian” future.
Still, Rogan argued that a Jesus-like being could come about because of the human need to improve.
“It’s only dystopian if you think that we’re a perfect organism that can’t be improved upon. And that’s not the case,” he rebutted. “That’s clearly not the case based on our actions, based on society as a whole, based on the overall state of the world. It’s not. We certainly can be improved upon.”
While the host accepted that perhaps humans could improve morally and ethically, he said that attempting to improve by means of a computer “seems destructive.”
RELATED: Joe Rogan says we’re at ‘step 7’ on the road to civil war. Is he right? Glenn Beck answers
Photo by AFP PHOTO/AFP via Getty Images
The conversation flowed smoothly into Rogan’s love of Christian scripture, with the 58-year-old describing how joyful his experience at his new church has been.
“The scripture, to me, is what’s interesting; it’s fascinating,” he said. “Christianity, at least, is the only thing I have experience with. It works. The people that are Christians, that go to this church that I go to, that I meet, that are Christian, they are the nicest f**king people you will ever meet.”
Rogan gave examples about the polite society he has found himself immersed in, hilariously citing the church parking lot as an example.
“Everybody lets you go in front of them. There’s no one honking in the church parking lot. It works,” he said.
What Rogan hammered home throughout the conversation was that he finds real truth in what he has read in the Bible. He described biblical stories positively as an “ancient relaying” of real history and events, though he isn’t sold on predictions about the future — even if he is certainly open to them.
But about the book of Revelation, Rogan said of his pastor, “There’s no way that guy telling you that knows that. … He’s just a person. He’s a person like you or me that is like deeply involved in the scripture.”
Like Blaze News? Bypass the censors, sign up for our newsletters, and get stories like this direct to your inbox. Sign up here!
Former Google CEO Eric Schmidt Flew Dem Senate Candidate Seth Moulton to Ritzy Montana Retreat As Congressman Oversaw Critical Business
Billionaire tech mogul and former Google CEO Eric Schmidt flew Massachusetts Senate hopeful Seth Moulton (D.) and his family to a ritzy Montana retreat to rub shoulders in a “strictly confidential” setting with other policymakers, celebrities, and foreign dignitaries. All the while, Moulton sat on a powerful House committee with oversight of many of Schmidt’s business interests.