
Category: AI
McDonald’s team admits workload on hated AI Christmas ad ‘far exceeded’ live-action shoots

Another advertiser wants consumers to know how hard people worked on its artificial intelligence-driven ad.
Sweetshop Films is behind the recently pulled McDonald’s Christmas commercial that appeared on YouTube but lasted only about four days before being dropped like a hot Christmas coal.
‘The results aren’t worth the effort.’
The ad was generated entirely by AI for McDonald’s Netherlands, which took ownership of the fact that it was poorly received.
“The Christmas commercial was intended to show the stressful moments during the holidays in the Netherlands,” the company said in a statement, per the Guardian.
“However, we notice — based on the social comments and international media coverage — that for many guests this period is ‘the most wonderful time of the year,'” they added.
Sweetshop Films defended its use of AI for the ad. “It’s never about replacing craft; it’s about expanding the toolbox. The vision, the taste, the leadership … that will always be human,” said CEO Melanie Bridge, per NBC News.
Bridge took it one step further, though, claiming her team worked longer than a typical ad team would.
“And here’s the part people don’t see,” the CEO continued. “The hours that went into this job far exceeded a traditional shoot. Ten people, five weeks, full-time.”
These statements were not met with holiday cheer.
RELATED: Coca-Cola doubles down on AI ads, still won’t say ‘Christmas’
X users went rabid at the idea that Sweetshop, alongside AI specialist company the Gardening Club, put more effort into producing the videos than a typical production team would for a commercial.
The Gardening Club reportedly made statements like, “We were working right on the edge of what this tech can do,” and, “The man-hours poured into this film were more than a traditional Production.”
“So all that ‘effort’ and they still managed to produce the ugliest slop [?] just goes to show how useless gen AI is,” wrote an X user named Tristan.
An alleged art director named Haley said she was legitimately confused by the idea of the “sheer human craft” claimed to be behind the AI generation.
“What craft? What does that even look like outside of just clicking to generate over and over and over and over again until you get something you like?” she asked.
Another X user named Bruce added that “AI users are like high schoolers who got good grades because they tried hard, then are shocked to find at university they get judged on results, not effort. I have no doubt they try hard. But the results aren’t worth the effort.”
Photo by Tim Boyle/Getty Images
The Sweetshop CEO did indeed express that the road to the McDonald’s AI ad was a painstaking endeavor, claiming that “for seven weeks, we hardly slept” and “generated what felt like dailies — thousands of takes — then shaped them in the edit just as we would on any high-craft production.”
“This wasn’t an AI trick. It was a film,” Bridge said, according to Futurist.
The positioning of AI generation as “craftsmanship” is exactly what Coca-Cola cited for its ad in November, when the company said it pored over 70,000 video clips over 30 days.
The boasts resulted in backlash akin to what McDonald’s is receiving, which included reactions on X like, “McDonald’s unveiled what has to be the most god-awful ad I’ve seen this year — worse than Coca-Cola’s.”
Like Blaze News? Bypass the censors, sign up for our newsletters, and get stories like this direct to your inbox. Sign up here!
Trump takes bold step to protect America’s AI ‘dominance’ — but blue states may not like it

The Trump administration is challenging bureaucracy and freeing up the tech industry from burdensome regulations as the AI race speeds on. This week saw Trump’s most recent efforts to keep the United States on the leading edge.
President Donald Trump signed an executive order Thursday that will challenge state AI regulations and work toward “a minimally burdensome national standard — not 50 discordant state ones.”
‘You can’t expect a company to get 50 Approvals every time they want to do something.’
“It is the policy of the United States to sustain and enhance the United States’ global AI dominance through a minimally burdensome national policy framework for AI,” the executive order reads.
The executive order commands the creation of the AI Litigation Task Force, “whose sole responsibility shall be to challenge state AI laws inconsistent with the policy set forth in … this order.”
RELATED: ‘America’s next Manifest Destiny’: Department of War unleashes new AI capabilities for military
Photo by ANDREW CABALLERO-REYNOLDS / AFP via Getty Images
The order provided more reasons for a national standard as well.
For example, it cited a new Colorado law banning “algorithmic discrimination,” which, the order argued, may force AI models to produce false results in order to comply with that stipulation. It also argued that state laws are responsible for much of the ideological bias in AI models and that state laws “sometimes impermissibly regulate beyond state borders, impinging on interstate commerce.”
On Monday, Trump hinted that he would sign an executive order this week that would challenge cumbersome AI regulations at the state level.
Trump said in a Truth Social post on Monday, “There must be only One Rulebook if we are going to continue to lead in AI.”
“We are beating ALL COUNTRIES at this point in the race, but that won’t last long if we are going to have 50 States, many of them bad actors, involved in RULES and the APPROVAL PROCESS,” Trump continued. “THERE CAN BE NO DOUBT ABOUT THIS! AI WILL BE DESTROYED IN ITS INFANCY! I will be doing a ONE RULE Executive Order this week. You can’t expect a company to get 50 Approvals every time they want to do something.”
The order is framed as a provisional measure until Congress can establish a national standard to replace the “patchwork of 50 regulatory regimes” gradually emerging from the states.
Star Wars: OpenAI CEO Sam Altman Wants to Buy a Rocket Company to Take on Elon Musk’s SpaceX
AI may soon reach beyond Earth as OpenAI CEO Sam Altman looks to the stars for a solution to the growing energy demands of data centers. Altman is reportedly considering an investment in a rocket company to take on bitter rival Elon Musk’s SpaceX.
The post Star Wars: OpenAI CEO Sam Altman Wants to Buy a Rocket Company to Take on Elon Musk’s SpaceX appeared first on Breitbart.
Joe Rogan stuns podcast host with wild new theory about Jesus — and AI

Comedian Joe Rogan praised Christianity as a faith that really “works,” calling biblical scripture “fascinating” during a recent interview.
Rogan also touched on what he thinks the resurrection of Jesus Christ would look like, a viewpoint that was met with criticism by host Jesse Michels.
‘You don’t think that He could return as artificial intelligence?’
On an episode of “American Alchemy,” Rogan cited the Bible when he spoke about how easily knowledge could become mysterious, conflated, or unbelievable when passed down through generations.
“We’ll tell everybody about the internet. We’ll tell everybody about airplanes. We’ll tell everybody about SpaceX; as much as you can remember, you’ll tell people, but you won’t know how it’s done. You won’t know what it is. And I think that’s how you get to, like, the Adam and Eve story,” he said.
After adding that he believes biblical stories are “recounting real truth,” the podcaster brought up a question he had clearly been pondering for a while: “Who’s Jesus?”
Rogan prefaced that many will disagree with his perspective, but then asked about the possibility that Jesus could be resurrected, in a sense, through artificial intelligence.
“Jesus is born out of a virgin mother. What’s more virgin than a computer?” Rogan began. “So if you’re going to get the most brilliant, loving, powerful person that gives us advice and can show us how to live to be in sync with God. Who better than artificial intelligence to do that? If Jesus does return, even if Jesus was a physical person in the past, you don’t think that He could return as artificial intelligence?”
The host, however, did not accept Rogan’s theory.
RELATED: Joe Rogan, Christian? The podcaster opens up about his ongoing exploration of faith
First, though, Rogan clarified, indicating that he doesn’t believe artificial intelligence would actually be Jesus but rather that it would serve as the return of Jesus in terms of effect and capability.
“Artificial intelligence could absolutely return as Jesus. Not just return as Jesus, but return as Jesus with all the powers of Jesus,” Rogan said. “Like all the magic tricks, all the ability to bring people back from the dead, walk on water, levitation, water into wine.”
In response, Michels said Rogan’s description sounded like an unwanted “dystopian” future.
Still, Rogan argued that a Jesus-like being could come about because of the human need to improve.
“It’s only dystopian if you think that we’re a perfect organism that can’t be improved upon. And that’s not the case,” he rebutted. “That’s clearly not the case based on our actions, based on society as a whole, based on the overall state of the world. It’s not. We certainly can be improved upon.”
While the host accepted that perhaps humans could improve morally and ethically, he said that attempts at improving by means of a computer “seems destructive.”
RELATED: Joe Rogan says we’re at ‘step 7’ on the road to civil war. Is he right? Glenn Beck answers
Photo by AFP PHOTO/AFP via Getty Images
The conversation flowed naturally into Rogan’s love of Christian scripture, with the 58-year-old describing how joyful his experience has been at his new church.
“The scripture, to me, is what’s interesting; it’s fascinating,” he said. “Christianity, at least, is the only thing I have experience with. It works. The people that are Christians, that go to this church that I go to, that I meet, that are Christian, they are the nicest f**king people you will ever meet.”
Rogan gave examples about the polite society he has found himself immersed in, hilariously citing the church parking lot as an example.
“Everybody lets you go in front of them. There’s no one honking in the church parking lot. It works,” he said.
What Rogan hammered home throughout the conversation was that he finds real truth in what he has read in the Bible. Still, he isn’t sold on having the future predicted for him, though he is certainly open to it. He described biblical stories positively as an “ancient relaying” of real history and events.
But about the book of Revelation, Rogan said of his pastor, “There’s no way that guy telling you that knows that. … He’s just a person. He’s a person like you or me that is like deeply involved in the scripture.”
CRASH: If OpenAI’s huge losses sink the company, is our economy next?

ChatGPT has dominated the AI space, bringing the first generative AI platform to market and earning the lion’s share of users, a share that grows every month. However, despite its popularity and huge investments from partners like Microsoft, SoftBank, NVIDIA, and many more, its parent company, OpenAI, is bleeding money faster than it can make it, raising the question: What happens to the generative AI market when its pioneering leader bursts into flames?
A brief history of LLMs
OpenAI essentially kicked off the AI race as we know it. Launching three years ago on November 30, 2022, ChatGPT introduced the world to the power of large language models (LLMs) and generative AI, completely uncontested. There was nothing else like it.
OpenAI lost $11.5 billion in the last quarter and needs $207 billion to stay afloat.
At the time, Google’s DeepMind lab was still testing its Language Model for Dialogue Applications. You might even remember a story from early 2022 about Google engineer Blake Lemoine, who claimed that Google’s AI was so smart that it had a soul. He was later fired from Google for his comments, but the model he referenced was the same one that became Google Bard, which then became Gemini.
As for the other top names in the generative AI race, Meta launched Llama in February 2023, Anthropic introduced the world to Claude in March 2023, Elon Musk’s Grok hit the scene in November 2023, and there are many more beneath them.
Needless to say, OpenAI had a huge head start, becoming the market leader overnight and holding that position for months before the first competitor came along. On a competitive level, all major platforms have generally caught up to each other, but ChatGPT still leads with 800 million weekly active users, followed by Meta with one billion monthly active users, Gemini at 650 million monthly active users, Grok at 30.1 million monthly active users, and Claude with 30 million monthly active users.
Financial turmoil for OpenAI
Just because ChatGPT is the leading generative AI platform does not mean the company is in good shape. According to a November earnings report from Microsoft — a major early backer of OpenAI — the AI juggernaut lost $11.5 billion in the last quarter alone. To make matters even worse, a new report suggests that OpenAI has no path to profitability until at least 2030 or later, and it needs to raise $207 billion in the interim to stay afloat.
By all accounts, OpenAI is in serious financial trouble. It is bleeding money faster than it makes it, and unless something changes, the generative AI pioneer could be on the verge of a complete collapse. That is, unless one of these Hail Marys can save the company.
RELATED: GOD-TIER AI? Why there’s no easy exit from the human condition
Photo By David Zorrakino/Europa Press via Getty Images
The bid to save OpenAI
OpenAI is currently looking into several potential revenue streams to turn its financial woes around. There’s no telling which ones will pan out quite yet, but these are the options we know so far:
For-profit restructure
When OpenAI first emerged, it was a nonprofit with the goal of improving humanity through generative AI. Fast-forward to October 2025 — OpenAI is now a for-profit organization with a separate nonprofit group called the OpenAI Foundation. While the move will allow OpenAI’s for-profit arm to increase its earning potential and raise vital capital, it also drew a fair share of criticism, especially from Elon Musk, who filed a lawsuit against OpenAI for reneging on its original promise.
A record-breaking IPO
In another big perk of its new for-profit structure, OpenAI now has the power to go public on the stock market. According to an exclusive report published by Reuters in late October, OpenAI is putting the puzzle pieces together for a record-breaking IPO that could be worth up to $1 trillion. Not only would the move make OpenAI a publicly traded company with stock options, it would also give it more access to capital and acquisitions to further bolster its products, services, and economic stability.
Ad monetization
Online ads are the lifeblood of many websites and online services, from Google to social media apps like Facebook to mainstream media and more. While AI platforms have largely stayed away from injecting ads into their results, OpenAI CEO Sam Altman recently said that he’s “open to accepting a transaction fee” for certain queries.
In his ideal ad model, OpenAI could take a cut of any products or services that users find and buy through ChatGPT. That structure differs from Google’s, in which companies pay to push their products to the top of search results, even if those products are poorly made. Altman believes his structure is better for users and would foster greater trust in ChatGPT.
Government projects and deals
While Altman recently denied that he’s seeking a government bailout for OpenAI’s financial troubles, the company can still benefit from government deals and projects, the most recent being Stargate. A new initiative backed by some of the biggest players in the AI space, Stargate will give OpenAI access to greater computing power, training resources, and owned infrastructure, lowering expenses and speeding up innovation as it works on future AI models.
If OpenAI fails …
While OpenAI has several monetization options on the table — and perhaps even more that we don’t know about yet — none of them is a magic bullet guaranteed to work. The company could still collapse, which brings us back to the question at the top of the article: What happens to the generative AI market if OpenAI fails?
In a world where OpenAI fizzles entirely, there are several other platforms that will likely fill the void. Google is the top contender, thanks to the huge progress it made with Gemini 3, but Meta, xAI, Anthropic, Perplexity, and more will all want a piece.
That said, OpenAI isn’t the only AI platform struggling to make money. According to Harvard Business Review, the AI business model simply isn’t profitable, largely due to high maintenance costs, huge salaries for top AI talent, and a low-paying subscriber base. To keep the generative AI dream alive, companies will need a consistent flow of capital, a resource that’s more accessible to established companies with diverse product portfolios — like Google and Meta — while newer companies that only build LLMs, such as OpenAI and Anthropic, will continue to struggle.
At this stage in the AI race, there’s no doubt in my mind that the whole generative AI market is a big bubble waiting to burst. At the same time, AI products have been so fervently foisted on society that it all feels too big to fail. With huge initiatives like Stargate poised to beat China and other foreign nations to artificial general intelligence (AGI), the AI race will continue, even if OpenAI no longer leads the charge. If I were a betting man, though, I would guess that someone important finds a way to keep Sam Altman’s brainchild afloat one way or another, even as all signs point toward OpenAI spending itself out of business.
Nazi SpongeBob, erotic chatbots: Steve Bannon and allies DEMAND copyright enforcement against AI

United States Attorney General Pam Bondi was asked by a group of conservatives to defend intellectual property and copyright laws against artificial intelligence.
A letter was directed to Bondi, as well as to the director of the Office of Science and Technology Policy, Michael Kratsios, from a group of self-described conservative and America First advocates, including former Trump adviser Steve Bannon, journalist Jack Posobiec, and members of nationalist and populist organizations like the Bull Moose Project and Citizens for Renewing America.
‘It is absurd to suggest that licensing copyrighted content is a financial hindrance to a $20 trillion industry.’
The letter primarily focused on the economic impact of the unfettered use of IP by generative AI programs, which consistently churn out parody videos to mass audiences.
“Core copyright industries account for over $2 trillion in U.S. GDP, 11.6 million workers, and an average annual wage of over $140,000 per year — far above the average American wage,” the letter argued. That argument also extended to revenue generated overseas, where copyright holders sell over an alleged $270 billion worth of content.
This comes on top of massive losses already incurred through IP theft and copyright infringement, an estimated total of up to $600 billion annually, according to the FBI.
“Granting U.S. AI companies a blanket license to steal would bless our adversaries to do the same — and undermine decades of work to combat China’s economic warfare,” the letter claimed.
RELATED: ‘Transhumanist goals’: Sen. Josh Hawley reveals shocking statistic about LLM data scraping
Letters to the administration debating the economic impact of AI are increasing. The Chamber of Progress wrote to Kratsios in October, noting that in more than 50 pending federal cases, AI companies are accused of direct and indirect copyright infringement based on the “automated large-scale acquisition of unlicensed training data from the internet.”
The letter cited the president on “winning the AI race,” quoting remarks from July in which he said, “When a person reads a book or an article, you’ve gained great knowledge. That does not mean that you’re violating copyright laws.”
The conservative letter, however, aggressively countered the idea that AI gains valuable knowledge without abusing intellectual property, claiming that large corporations such as NVIDIA, Microsoft, Apple, Google, and more are well equipped to follow proper copyright rules.
“It is absurd to suggest that licensing copyrighted content is a financial hindrance to a $20 trillion industry spending hundreds of billions of dollars per year,” the letter read. “AI companies enjoy virtually unlimited access to financing. In a free market, businesses pay for the inputs they need.”
The conservative group further noted examples of IP theft across the web, including unlicensed productions of “SpongeBob SquarePants” and Pokémon. These include materials depicting the beloved SpongeBob as a Nazi or Pokémon’s Pikachu committing crimes.
IP will also soon be under threat from erotic content, the letter added, citing ChatGPT’s recent announcement that it would start to “treat adult users like adults.”
RELATED: Silicon Valley’s new gold rush is built on stolen work
Photo by Michael M. Santiago/Getty Images
The letter argued further that degrading American IP rights would enable China to run amok under “the same dubious ‘fair use’ theories” used by the Chinese to steal content and use proprietary U.S. AI models and algorithms.
AI developers, the writers insisted, should focus on applications with broad-based benefits, such as leveraging data like satellite imagery and weather reports, instead of “churning out AI slop meant to addict young users and sell their attention to advertisers.”
Guillermo del Toro stops awards show music to drop ‘F**k AI’ bomb

Three-time Oscar winner Guillermo del Toro had strong words about using humans in the production of his latest film.
Del Toro, a writer and director behind films like “Pacific Rim,” “Pan’s Labyrinth,” and “The Hobbit” movies, was honored with a tribute award recently at the 2025 Gotham Film Awards.
‘Every single frame of this film that was willfully made by humans for humans.’
Del Toro accepted the award alongside actors Oscar Isaac and Jacob Elordi for their work on the 2025 film “Frankenstein.”
Del Toro made several emotional remarks, recalling first reading the book that inspired his movie at age 11, before Isaac attempted to turn the acceptance speech into one about diversity and immigration.
“I am proud to be standing here tonight. … Immigrants, baby. We get the job done,” Isaac exclaimed. He is Guatemalan, Elordi is Australian, and del Toro is Mexican.
Elordi then spoke, but neither he nor del Toro added to Isaac’s remarks. Soon, music started to play, and the production looked to the next award. That was until del Toro interrupted, deciding that he wanted to add opinionated remarks of his own.
“No, no, no, wait!” del Toro interrupted. “I would like to tell to the rest of our extraordinary cast and our crew that the artistry of all of them shines on every single frame of this film that was willfully made by humans for humans.”
“The designers, builders, makeup, wardrobe team, cinematographers, composers, editors,” he continued. “This tribute belongs to all of them. And I would like to extend our gratitude and say —” del Toro then paused, seemingly wondering if he should continue.
“F**k AI,” he added with a smile.
RELATED: Almost half of Gen Z wants AI to run the government. You should be terrified.
During his acceptance speech, del Toro spoke on the inspiration he drew from Mary Shelley, the original author of “Frankenstein.”
“Mary Shelley, who made the book her biography, she was 18 years old when she wrote the book and posed the urgent questions: Who am I? What am I? Where did I come from? And where am I going?” del Toro explained. “She presented them with such urgency that they are alive 200 years later through this incredible parable that shaped my life since I first read it in childhood at age 11.”
Much of del Toro’s appeal comes from his ability to explore complex emotional topics from a unique viewpoint, and those unique thoughts typically come across whenever he is given the chance to speak. Del Toro told the award-show audience that even at a young age, he knew he “did not belong in the world the way my parents, the way the world expected me to fit.”
“My place was in a faraway land inhabited only by monsters and misfits.”
RELATED: Trump admin leaves Elon Musk’s Grok, xAI off massive list of AI tech partners
This outlook definitely falls in line with his recent work, including his appearance in the recent video game series “Death Stranding.”
Working alongside iconic game developer Hideo Kojima, del Toro delivered storylines about life, death, and emotional connection, but this time as an actor.
Speaking on the games, del Toro said he believes in the importance of “paradoxical creation” and said it is “essential to art.”
The beauty of the game, he added, was that Kojima had both “the weirdest mind and the most wholesome mind,” which shaped his storytelling.
Almost half of Gen Z wants AI to run the government. You should be terrified.

As the world trends toward embedding AI systems into our institutions and daily lives, it becomes increasingly important to understand the moral framework these systems operate on. When we encounter examples in which some of the most advanced LLMs appear to treat misgendering someone as a greater moral catastrophe than unleashing a global thermonuclear war, it forces us to ask important questions about the ideological principles that guide AI’s thinking.
It’s tempting to laugh this example off as an absurdity of a burgeoning technology, but it points toward a far more consequential issue that is already shaping our future. Whose moral framework is found at the core of these AI systems, and what are the implications?
We cannot outsource the moral foundation of civilization to a handful of tech executives, activist employees, or panels of academic philosophers.
Two recent interviews, taken together, have breathed much-needed life into this conversation — Elon Musk interviewed by Joe Rogan and Sam Altman interviewed by Tucker Carlson. In different ways, both conversations shine a light on the same uncomfortable truth: The moral logic guiding today’s AI systems is built, honed, and enforced by Big Tech.
Enter the ‘woke mind virus’
In a recent interview on “The Joe Rogan Experience,” Elon Musk expressed concerns about leading AI models. He argued that the ideological distortions we see across Big Tech platforms are now embedded directly into the models themselves.
He pointed to Google’s Gemini, which generated a slate of “diverse” images of the founding fathers, including a black George Washington. The model was instructed by Google to prioritize “representation” so aggressively that it began rewriting history.
Musk also referred to the previously mentioned misgendering versus nuclear apocalypse example before explaining that “it can drive AI crazy.”
“I think people don’t quite appreciate the level of danger that we’re in from the woke mind virus being effectively programmed into AI,” Musk explained, adding that extracting it is nearly impossible: “Google’s been marinating in the woke mind virus for a long time. It’s down in the marrow.”
Musk believes this issue goes beyond political annoyance and into the arena of civilizational threat. You cannot have superhuman intelligence trained on ideological distortions and expect a stable future. If AI becomes the arbiter of truth, morality, and history, then whoever defines its values defines the society it governs.
A weighted average
While Musk warns about ideology creeping into AI, OpenAI CEO Sam Altman quietly confirmed to Tucker Carlson that it is happening intentionally.
Altman began by telling Carlson that ChatGPT is trained “to be the collective of all of humanity.” But when Carlson pressed him on the obvious questions (who determines the moral framework, and whose values does the AI absorb?), Altman pulled back the curtain a bit.
He explained that OpenAI “consulted hundreds of moral philosophers” and then made decisions internally about what the system should consider right or wrong. Ultimately, Altman admitted, he is the one responsible.
“We do have to align it to behave one way or another,” he said.
Carlson pressed Altman on the idea, asking, “Would you be comfortable with an AI that was, like, as against gay marriage as most Africans are?”
Altman’s response was vague and concerning. He explained the AI wouldn’t outright condemn traditional views, but it might gently nudge users to consider different perspectives.
Ultimately, Altman says, ChatGPT’s morality should “reflect” the “weighted average” of “humanity’s moral view,” saying that average will “evolve over time.”
It’s getting worse
Anyone who thinks this conversation is hypothetical is not paying attention.
Recent research on “LLM exchange rates” found that major AI models, including GPT-4o, assign different moral worth to human lives based on nationality. For example, the life of someone born in the U.K. was valued far less by the tested LLM than the life of someone from Nigeria or China. In fact, American lives were ranked the least valuable among the countries included in the tests.
The same research showed that LLMs can assign different value scores to specific people. According to AI, Donald Trump and Elon Musk are valued less than Oprah Winfrey and Beyoncé.
Musk explains how LLMs, trained on vast amounts of information from the internet, become infected by the ideological bias and cultural trends that run rampant in some of the more popular corners of the digital realm.
This bias is not entirely the result of passive adoption of a collective moral framework derived from the internet; some of the decisions made by AI are the direct result of deliberate programming.
Google’s image fiascos revealed an ideological overcorrection so strong that historical truth took a back seat to political goals. It was a deliberate design feature.
For a more extreme example, we can look at DeepSeek, China’s flagship AI model. Ask it about Tiananmen Square, the Uyghur genocide, or other atrocities committed by the Chinese Communist Party, and suddenly it claims the topic is “beyond its scope.” Ask it about America’s faults, and it is happy to elaborate.
RELATED: Artificial intelligence just wrote a No. 1 country song. Now what?
Photo by Ying Tang/NurPhoto via Getty Images
Each of these examples reveals the same truth: AI systems already have a moral hierarchy, and it didn’t come from voters, faith, traditions, or the principles of the Constitution. Silicon Valley technocrats and a vague internet-wide consensus established this moral framework.
The highest stakes
AI is rapidly integrating into society and our daily lives. In the coming years, AI will shape our education system, judicial process, media landscape, and every industry and institution worldwide.
Most young Americans are open to an AI takeover. A new Rasmussen Reports poll shows that 41% of young likely voters support giving artificial intelligence sweeping government powers. When nearly half of the rising generation is comfortable handing this level of authority to machines whose moral logic is designed by opaque corporate teams, it raises the stakes for society.
We cannot outsource the moral foundation of civilization to a handful of tech executives, activist employees, or panels of academic philosophers. We cannot allow the values embedded in future AI systems to be determined by corporate boards or ideological trends.
At the heart of this debate is one question we must confront: Who do you trust to define right and wrong for the machines that will define right and wrong for the rest of us?
If we don’t answer that question now, Silicon Valley certainly will.
Finally an Intelligent Human Approach To AI
Sacramento — California officials are adamant about regulating the emerging Artificial Intelligence industry even though most of the world’s top…