
Category: AI
CRASH: If OpenAI’s huge losses sink the company, is our economy next?

ChatGPT has dominated the AI space, bringing the first generative AI platform to market and earning the lion’s share of users, a share that grows every month. However, despite its popularity and huge investments from partners like Microsoft, SoftBank, NVIDIA, and many more, its parent company, OpenAI, is bleeding money faster than it can make it, raising the question: What happens to the generative AI market when its pioneering leader bursts into flames?
A brief history of LLMs
OpenAI essentially kicked off the AI race as we know it. Launched three years ago, on November 30, 2022, ChatGPT introduced the world to the power of large language models (LLMs) and generative AI, completely uncontested. There was nothing else like it.
OpenAI lost $11.5 billion in the last quarter and needs $207 billion to stay afloat.
At the time, Google’s DeepMind lab was still testing its Language Model for Dialogue Applications. You might even remember a story from early 2022 about Google engineer Blake Lemoine, who claimed that Google’s AI was so smart that it had a soul. He was later fired from Google for his comments, but the model he referenced was the same one that became Google Bard, which then became Gemini.
As for the other top names in the generative AI race, Meta launched Llama in February 2023, Anthropic introduced the world to Claude in March 2023, Elon Musk’s Grok hit the scene in November 2023, and there are many more beneath them.
Needless to say, OpenAI had a huge head start, becoming the market leader overnight and holding that position for months before the first competitor came along. On a competitive level, all major platforms have generally caught up to each other, but ChatGPT still leads with 800 million weekly active users, followed by Meta with one billion monthly active users, Gemini at 650 million monthly active users, Grok at 30.1 million monthly active users, and Claude with 30 million monthly active users.
Financial turmoil for OpenAI
Just because ChatGPT is the leading generative AI platform does not mean the company is in good shape. According to a November earnings report from Microsoft — a major early backer of OpenAI — the AI juggernaut lost $11.5 billion in the last quarter alone. To make matters even worse, a new report suggests that OpenAI has no path to profitability until at least 2030 or later, and it needs to raise $207 billion in the interim to stay afloat.
By all accounts, OpenAI is in serious financial trouble. It is bleeding money faster than it makes it, and unless something changes, the generative AI pioneer could be on the verge of a complete collapse. That is, unless one of these Hail Marys can save the company.
RELATED: GOD-TIER AI? Why there’s no easy exit from the human condition
Photo By David Zorrakino/Europa Press via Getty Images
The bid to save OpenAI
OpenAI is currently looking into several potential revenue streams to turn its financial woes around. There’s no telling which ones will pan out quite yet, but these are the options we know so far:
For-profit restructure
When OpenAI first emerged, it was a nonprofit with the goal of improving humanity through generative AI. Fast-forward to October 2025: OpenAI is now a for-profit organization with a separate nonprofit group called the OpenAI Foundation. While the move allows OpenAI’s for-profit arm to increase its earning potential and raise vital capital, it has also received its fair share of criticism, especially from Elon Musk, who filed a lawsuit against OpenAI for reneging on its original promise.
A record-breaking IPO
Another big perk of the new for-profit structure is that OpenAI now has the power to go public on the stock market. According to an exclusive report published by Reuters in late October, OpenAI is putting the puzzle pieces together for a record-breaking IPO that could be worth up to $1 trillion. Not only would the move make OpenAI a publicly traded company with stock options, it would also give it more access to capital and acquisitions to further bolster its products, services, and economic stability.
Ad monetization
Online ads are the lifeblood of many websites and services, from Google to social media apps like Facebook to mainstream media outlets and more. While AI platforms have largely stayed away from injecting ads into their results, OpenAI CEO Sam Altman recently said that he’s “open to accepting a transaction fee” for certain queries.
In his ideal ad model, OpenAI could take a cut of any products or services that users find and buy through ChatGPT. That structure differs from Google’s approach, which lets companies pay to push their products to the top of search results, even if those products are poorly made. Altman believes his structure is better for users and would foster greater trust in ChatGPT.
Government projects and deals
While Altman recently denied that he’s seeking a government bailout for OpenAI’s financial troubles, the company can still benefit from government deals and projects, the most recent being Stargate. A new initiative backed by some of the biggest players in the AI space, Stargate will give OpenAI access to greater computing power, training resources, and owned infrastructure to lower expenses and speed up innovation as it works on future AI models.
If OpenAI fails …
While OpenAI has several monetization options on the table, and perhaps more that we don’t know about yet, none of them is a magic bullet guaranteed to work. The company could still collapse, which brings us back to the question at the top of the article: What happens to the generative AI market if OpenAI fails?
In a world where OpenAI fizzles entirely, several other platforms would likely fill the void. Google is the top contender, thanks to the huge progress it made with Gemini 3, but Meta, xAI, Anthropic, Perplexity, and others would all want a piece.
That said, OpenAI isn’t the only AI platform struggling to make money. According to Harvard Business Review, the AI business model simply isn’t profitable, largely due to high maintenance costs, huge salaries for top AI talent, and a low-paying subscriber base. To keep the generative AI dream alive, companies will need a consistent flow of capital, a resource that’s more accessible for established companies with diverse product portfolios, like Google and Meta, while newer companies that only build LLMs, like OpenAI and Anthropic, will continue to struggle.
At this stage in the AI race, there’s no doubt in my mind that the whole generative AI market is a big bubble waiting to burst. At the same time, AI products have been so fervently foisted on society that it all feels too big to fail. With huge initiatives like Stargate poised to beat China and other foreign nations to artificial general intelligence (AGI), the AI race will continue, even if OpenAI no longer leads the charge. If I were a betting man, though, I would guess that someone important finds a way to keep Sam Altman’s brainchild afloat one way or another, even as all signs point toward OpenAI spending itself out of business.
Nazi SpongeBob, erotic chatbots: Steve Bannon and allies DEMAND copyright enforcement against AI

United States Attorney General Pam Bondi was asked by a group of conservatives to defend intellectual property and copyright laws against artificial intelligence.
A letter was directed to Bondi, as well as the director of the Office of Science and Technology Policy, Michael Kratsios, from a group of self-described conservative and America First advocates, including former Trump adviser Steve Bannon, journalist Jack Posobiec, and members of nationalist and populist organizations like the Bull Moose Project and Citizens for Renewing America.
‘It is absurd to suggest that licensing copyrighted content is a financial hindrance to a $20 trillion industry.’
The letter primarily focused on the economic impact of the unfettered use of IP by generative AI programs, which consistently churn out parody videos for mass audiences.
“Core copyright industries account for over $2 trillion in U.S. GDP, 11.6 million workers, and an average annual wage of over $140,000 per year — far above the average American wage,” the letter argued. That argument also extended to revenue generated overseas, where copyright holders allegedly sell more than $270 billion worth of content.
This comes on top of massive losses already attributed to IP theft and copyright infringement, estimated at up to $600 billion annually, according to the FBI.
“Granting U.S. AI companies a blanket license to steal would bless our adversaries to do the same — and undermine decades of work to combat China’s economic warfare,” the letter claimed.
RELATED: ‘Transhumanist goals’: Sen. Josh Hawley reveals shocking statistic about LLM data scraping
Letters to the administration debating the economic impact of AI are increasing. The Chamber of Progress wrote to Kratsios in October, noting that in more than 50 pending federal cases, AI companies are accused of direct and indirect copyright infringement based on the “automated large-scale acquisition of unlicensed training data from the internet.”
The Chamber of Progress letter cited the president on “winning the AI race,” quoting remarks from July in which he said, “When a person reads a book or an article, you’ve gained great knowledge. That does not mean that you’re violating copyright laws.”
The conservative letter aggressively countered the idea that AI can boost valuable knowledge without abusing intellectual property, claiming that large corporations such as NVIDIA, Microsoft, Apple, and Google are well equipped to follow proper copyright rules.
“It is absurd to suggest that licensing copyrighted content is a financial hindrance to a $20 trillion industry spending hundreds of billions of dollars per year,” the letter read. “AI companies enjoy virtually unlimited access to financing. In a free market, businesses pay for the inputs they need.”
The conservative group further noted examples of IP theft across the web, including unlicensed depictions of “SpongeBob SquarePants” and Pokemon characters. These include materials showcasing the beloved SpongeBob as a Nazi or Pokemon’s Pikachu committing crimes.
IP will also soon be under threat from erotic content, the letter added, citing OpenAI’s recent announcement that ChatGPT would start to “treat adult users like adults.”
RELATED: Silicon Valley’s new gold rush is built on stolen work
Photo by Michael M. Santiago/Getty Images
The letter argued further that degrading American IP rights would enable China to run amok under “the same dubious ‘fair use’ theories” used by the Chinese to steal content and exploit proprietary U.S. AI models and algorithms.
AI developers, the writers insisted, should focus on applications with broad-based benefits, such as leveraging data like satellite imagery and weather reports, instead of “churning out AI slop meant to addict young users and sell their attention to advertisers.”
Guillermo del Toro stops awards show music to drop ‘F**k AI’ bomb

Three-time Oscar winner Guillermo del Toro had strong words about the importance of humans in the production of his latest film.
Del Toro, the writer and director behind films like “Pacific Rim” and “Pan’s Labyrinth” and a co-writer of “The Hobbit” movies, was recently honored with a tribute award at the 2025 Gotham Film Awards.
‘Every single frame of this film that was willfully made by humans for humans.’
Del Toro accepted the award alongside actors Oscar Isaac and Jacob Elordi for their work on the 2025 film “Frankenstein.”
Del Toro made several emotional comments, recalling when he first read the book that inspired his movie at age 11, before Isaac attempted to turn the acceptance speech into one about diversity and immigration.
“I am proud to be standing here tonight. … Immigrants, baby. We get the job done,” Isaac exclaimed. He is Guatemalan, Elordi is Australian, and del Toro is Mexican.
Elordi then spoke, but neither he nor del Toro added to Isaac’s remarks. Soon, music started to play, and the production moved on to the next award. That was, until del Toro cut in, deciding he wanted to add some pointed remarks of his own.
“No, no, no, wait!” del Toro interrupted. “I would like to tell to the rest of our extraordinary cast and our crew that the artistry of all of them shines on every single frame of this film that was willfully made by humans for humans.”
“The designers, builders, makeup, wardrobe team, cinematographers, composers, editors,” he continued. “This tribute belongs to all of them. And I would like to extend our gratitude and say —” del Toro then paused, seemingly wondering if he should continue.
“F**k AI,” he added with a smile.
RELATED: Almost half of Gen Z wants AI to run the government. You should be terrified.
During his acceptance speech, del Toro spoke on the inspiration he drew from Mary Shelley, the original author of “Frankenstein.”
“Mary Shelley, who made the book her biography, she was 18 years old when she wrote the book and posed the urgent questions: Who am I? What am I? Where did I come from? And where am I going?” del Toro explained. “She presented them with such urgency that they are alive 200 years later through this incredible parable that shaped my life since I first read it in childhood at age 11.”
Much of del Toro’s appeal comes from his ability to explore complex emotional topics from a unique viewpoint, and those unique thoughts typically come across whenever he is given the chance to speak. Del Toro told the award-show audience that even at a young age, he knew he “did not belong in the world the way my parents, the way the world expected me to fit.”
“My place was in a faraway land inhabited only by monsters and misfits.”
RELATED: Trump admin leaves Elon Musk’s Grok, xAI off massive list of AI tech partners
This outlook definitely falls in line with his recent work, including when he appeared in the recent video game series Death Stranding.
Working alongside iconic game developer Hideo Kojima, del Toro delivered storylines about life, death, and emotional connection, but this time as an actor.
Speaking on the games, del Toro said he believes in the importance of “paradoxical creation” and said it is “essential to art.”
The beauty of the game, he added, was that Kojima had both “the weirdest mind and the most wholesome mind,” which shaped his storytelling.
Almost half of Gen Z wants AI to run the government. You should be terrified.

As the world trends toward embedding AI systems into our institutions and daily lives, it becomes increasingly important to understand the moral framework these systems operate on. When we encounter examples in which some of the most advanced LLMs appear to treat misgendering someone as a greater moral catastrophe than unleashing a global thermonuclear war, it forces us to ask important questions about the ideological principles that guide AI’s thinking.
It’s tempting to laugh this example off as an absurdity of a burgeoning technology, but it points toward a far more consequential issue that is already shaping our future. Whose moral framework is found at the core of these AI systems, and what are the implications?
We cannot outsource the moral foundation of civilization to a handful of tech executives, activist employees, or panels of academic philosophers.
Two recent interviews, taken together, have breathed much-needed life into this conversation — Elon Musk interviewed by Joe Rogan and Sam Altman interviewed by Tucker Carlson. In different ways, both conversations shine a light on the same uncomfortable truth: The moral logic guiding today’s AI systems is built, honed, and enforced by Big Tech.
Enter the ‘woke mind virus’
In a recent interview on “The Joe Rogan Experience,” Elon Musk expressed concerns about leading AI models. He argued that the ideological distortions we see across Big Tech platforms are now embedded directly into the models themselves.
He pointed to Google’s Gemini, which generated a slate of “diverse” images of the Founding Fathers, including a black George Washington. The model was instructed by Google to prioritize “representation” so aggressively that it began rewriting history.
Musk also referred to the previously mentioned misgendering versus nuclear apocalypse example before explaining that “it can drive AI crazy.”
“I think people don’t quite appreciate the level of danger that we’re in from the woke mind virus being effectively programmed into AI,” Musk explained. Extracting it, he argued, is nearly impossible: “Google’s been marinating in the woke mind virus for a long time. It’s down in the marrow.”
Musk believes this issue goes beyond political annoyance and into the arena of civilizational threat. You cannot have superhuman intelligence trained on ideological distortions and expect a stable future. If AI becomes the arbiter of truth, morality, and history, then whoever defines its values defines the society it governs.
A weighted average
While Musk warns about ideology creeping into AI, OpenAI CEO Sam Altman quietly confirmed to Tucker Carlson that it is happening intentionally.
Altman began by telling Carlson that ChatGPT is trained “to be the collective of all of humanity.” But when Carlson pressed him on the obvious questions (who determines the moral framework, and whose values does the AI absorb?), Altman pulled back the curtain a bit.
He explained that OpenAI “consulted hundreds of moral philosophers” and then made decisions internally about what the system should consider right or wrong. Ultimately, Altman admitted, he is the one responsible.
“We do have to align it to behave one way or another,” he said.
Carlson pressed Altman on the idea, asking, “Would you be comfortable with an AI that was, like, as against gay marriage as most Africans are?”
Altman’s response was vague and concerning. He explained the AI wouldn’t outright condemn traditional views, but it might gently nudge users to consider different perspectives.
Ultimately, Altman says, ChatGPT’s morality should “reflect” the “weighted average” of “humanity’s moral view,” adding that this average will “evolve over time.”
It’s getting worse
Anyone who thinks this conversation is hypothetical is not paying attention.
Recent research on “LLM exchange rates” found that major AI models, including GPT-4o, assign different moral worth to human lives based on nationality. For example, the life of someone born in the U.K. was considered far less valuable by the tested LLM than the life of someone from Nigeria or China. In fact, American lives were considered the least valuable of those from the countries included in the tests.
The same research showed that LLMs can assign different value scores to specific people. According to AI, Donald Trump and Elon Musk are less valued than Oprah Winfrey and Beyonce.
Musk explains how LLMs, trained on vast amounts of information from the internet, become infected by the ideological bias and cultural trends that run rampant in some of the more popular corners of the digital realm.
This bias is not entirely the result of passive adoption of a collective moral framework derived from the internet; some of the decisions made by AI are the direct result of deliberate programming.
Google’s image fiascos revealed an ideological overcorrection so strong that historical truth took a back seat to political goals. It was a deliberate design feature.
For a more extreme example, we can look at DeepSeek, China’s flagship AI model. Ask it about Tiananmen Square, the Uyghur genocide, or other atrocities committed by the Chinese Communist Party, and suddenly it claims the topic is “beyond its scope.” Ask it about America’s faults, and it is happy to elaborate.
RELATED: Artificial intelligence just wrote a No. 1 country song. Now what?
Photo by Ying Tang/NurPhoto via Getty Images
Each of these examples reveals the same truth: AI systems already have a moral hierarchy, and it didn’t come from voters, faith, traditions, or the principles of the Constitution. Silicon Valley technocrats and a vague internet-wide consensus established this moral framework.
The highest stakes
AI is rapidly integrating into society and our daily lives. In the coming years, AI will shape our education system, judicial process, media landscape, and every industry and institution worldwide.
Many young Americans are open to an AI takeover. A new Rasmussen Reports poll shows that 41% of young likely voters support giving artificial intelligence sweeping government powers. When nearly half of the rising generation is comfortable handing this level of authority to machines whose moral logic is designed by opaque corporate teams, it raises the stakes for society.
We cannot outsource the moral foundation of civilization to a handful of tech executives, activist employees, or panels of academic philosophers. We cannot allow the values embedded in future AI systems to be determined by corporate boards or ideological trends.
At the heart of this debate is one question we must confront: Who do you trust to define right and wrong for the machines that will define right and wrong for the rest of us?
If we don’t answer that question now, Silicon Valley certainly will.
Finally an Intelligent Human Approach To AI
Sacramento — California officials are adamant about regulating the emerging Artificial Intelligence industry even though most of the world’s top…
Trump admin leaves Elon Musk’s Grok, xAI off massive list of AI tech partners

Elon Musk’s artificial intelligence platform has seemingly been left out of a government program meant to propel the technology forward.
On Monday, the White House announced a new project aimed at accelerating innovation and discovery to “solve the most challenging problems of this century.”
‘The Genesis Mission will bring together our Nation’s research and development resources.’
The new Genesis Mission is described by the Department of Energy as “a national initiative to build the world’s most powerful scientific platform.”
An executive order from the president titled “Launching the Genesis Mission” outlined plans to integrate federal scientific datasets to train AI to test new hypotheses, automate research, and speed up scientific breakthroughs.
“The Genesis Mission will bring together our Nation’s research and development resources — combining the efforts of brilliant American scientists, including those at our national laboratories, with pioneering American businesses; world-renowned universities; and existing research infrastructure, data repositories, production plants, and national security sites — to achieve dramatic acceleration in AI development and utilization.”
With Elon Musk making strides in 2025 on both his Grok chatbot and its video generation model, Imagine, tech enthusiasts were shocked to find that Musk’s xAI was not on the list of partners for the project.
RELATED: Big Tech’s AI boom hits voters hard — and Democrats pounce
The Department of Energy lists 55 companies as collaborators for Genesis, with xAI and Grok nowhere to be found.
Aside from the fact that Musk was a special government employee under the Trump administration, his exclusion is even more surprising given both the length of the list and the broad range of companies involved. Amazon Web Services, Google, and Microsoft were announced as partners, as were AI companies like OpenAI and Scale AI.
It should be noted that the company xLight, which is listed by the DOE, is not affiliated with Musk.
RELATED: Log into this Gmail clone to read all the Jeffrey Epstein emails as if you were Epstein himself
“For [xAI] to not be a part of the Genesis Mission, it is not just an oversight, it would have to be an intentional omission,” AI engineer Brian Roemmele wrote on X. “I spoke to someone on this project who asked for my input today, and it is the first thing I brought up. I am certain they will see the error made.”
Blaze News contacted xAI for comment but did not receive an immediate reply. This article will be updated with any applicable response.
Whether a rift exists between Musk and the Trump administration is unclear, but the government seems steadfast in the belief that its mission is monumentally important, likening it to the World War II race for the atomic bomb.
“The world’s most powerful scientific platform to ever be built has launched,” the DOE claimed on its X account. “This Manhattan-Project-level leap will fundamentally transform the future of American science and innovation.”
America Is Still Worth Giving Thanks For

For the frustrated and disillusioned on the right, here are four foundational reasons to give thanks for this great country.
‘Reminiscent of the Manhattan Project’: Trump administration launches massive next-gen AI program

As the AI arms race continues at breakneck pace, the United States is stepping up its game to stay on the cutting edge of information technology. To that end, the Trump administration is launching a new initiative: the Genesis Mission.
On Monday, the White House announced the creation of the Genesis Mission under the purview of the Department of Energy.
‘The Genesis Mission marks a defining moment for the next era of American science.’
The Genesis Mission is described as a “national effort to accelerate the application of AI for transformative scientific discovery focused on pressing challenges.”
RELATED: Trump’s AI plan prioritizes innovation over regulation
Photo by Win McNamee/Getty Images
More concretely, the Department of Energy has been ordered to “build an integrated AI platform to harness federal scientific datasets.”
In its announcement on X, the Department of Energy said the Genesis Mission will be “reminiscent of the Manhattan Project and Apollo programs.”
In the promotional video, the DOE suggested that this initiative is the kind of thing visionaries such as G.W. Leibniz, Claude Shannon, and Alan Turing could only have dreamed of in their scientific endeavors to understand the world.
Dr. Dario Gil, undersecretary for science and Genesis Mission director, said in a press release: “The Genesis Mission marks a defining moment for the next era of American science. We are linking the nation’s most advanced facilities, data, and computing into one closed-loop system to create a scientific instrument for the ages, an engine for discovery that doubles R&D productivity and solves challenges once thought impossible.”
Energy Secretary Chris Wright explained the scope and goal of the project: “This Genesis Mission is going to bring together industry, the national labs, data sets all tied together in a closed-loop system to just rapidly advance the pace of scientific and engineering progress.”
“It will be transformative,” Wright added.
This announcement comes just months after the Trump administration’s AI Action Plan, a comprehensive plan to win the global AI race.
Artificial Afterlife
Calum Worthy, the former Disney Channel star who played the goofy sidekick on Austin & Ally, is no longer popping…