Category: AI
Anthropic says its own new model is too dangerous for the public — but not these Big Tech companies

Anthropic is sending out a warning that its artificial intelligence model is sophisticated enough to undo decades of research.
The company operates Claude, the AI chatbot that has been ripped off and turned into a free, public model, and it is hoping to partner with a consortium of tech companies to button up security measures ahead of the new model's release.
‘It has found vulnerabilities, and in some cases crafted exploits.’
Anthropic’s Mythos model of Claude AI will be available only to 40 select companies, to be used as a force for good, the company claims.
It represents “the starting point for what we think will be an industry change point, or reckoning, with what needs to happen now,” said Logan Graham, head of Anthropic’s vulnerability testing team.
The company fears that its new AI model is so good at finding cracks in cybersecurity that it must only be shared with companies it deems capable and responsible enough to prepare for possible attacks when Mythos goes public.
“This model is good at finding vulnerabilities that would be well understood and findable by security researchers,” Graham said. “At the same time, it has found vulnerabilities, and in some cases crafted exploits, sophisticated enough that they were both missed by literally decades of security researchers, as well as all the automated tools designed to find them.”
RELATED: How to power the AI race without losing control
Samyukta Lakshmi/Bloomberg/Getty Images
Anthropic will reportedly commit up to $100 million in credits for the project, an amount equivalent to what it would typically charge for that volume of its chatbot’s usage.
Labeled Project Glasswing, the initiative to shore up cybersecurity will grant Mythos access to handpicked companies chosen largely from Big Tech like Amazon, Apple, Google, and Microsoft. The group is rounded out by internet infrastructure and cybersecurity giants like Broadcom, Cisco, CrowdStrike, Nvidia, and Palo Alto Networks, along with financial titan JPMorgan Chase and key open-source nonprofit the Linux Foundation.
This is not the first time an AI company has warned its product is too dangerous for the public, and looking back, readers can gauge whether or not Claude may be as dangerous as its creators purport it to be.
In 2019, OpenAI sent out a warning ahead of its release of GPT-2, claiming that its capabilities — now vastly eclipsed by later models — could be used to mass-produce propaganda or misleading text.
As Wired reported at the time, OpenAI said GPT-2 was too risky to be released to the general public.
RELATED: Claude, Anthropic’s AI assistant, slammed by Elon Musk for anti-white responses to simple prompts
Claude has been in the news for alleged missteps, leaks, and accidental postings throughout the past year, and while it may not be a household name yet, it has raced its way through the tech sector as a go-to for “agentic” work building software, apps, and even companies.
In addition to its model being open-sourced and used by the general public for free, the company has been noted for “accidental” postings of its own code.
Anthropic “accidentally uploaded a file to a public repository that’s just meant to help developers understand how to use their product” and “exposed some of the source code of Claude,” reporter Aaron Holmes explained recently.
Proprietary information was further leaked in another alleged accidental posting, this time through a blog draft that revealed “internal source code.”
The company seems poised for consistent marketing battles, both willing and unwilling, from its high-stakes lawsuit against the federal government labeling it a supply chain risk to the blowback it has received from putting a woman closely linked to the cultish Effective Altruism movement in charge of its AI’s “Constitution.”
The top 5 dangers of UBI

Social media is rife with warnings that AI will take everyone’s jobs within the next one to five years. If true, mass unemployment will become a mainstay of modern life, sparking questions as to how civilization as we know it will survive. The big-brained elite think they have a solution through universal basic income — with some optimists like Elon Musk claiming that high basic income is the wave of the future — but this idealistic concept poses several dangers severe enough that they could dismantle America and bring about the end of the world.
1. The death of capitalism
Let’s get this one out of the way first: UBI is a gateway to socialism. In a world where the people earn nothing and everything of value is handed down from on high, the capitalist system that made this country great ceases to exist.
Forced dependence, by any other name, is a form of slavery.
Without a consistent job or a way to earn a steady salary, the people must become dependent on the elite who control the money and dole it out at their discretion. Who exactly is expected to do this honestly and fairly? The government has shown itself to be an unreliable steward, especially on the left as the pursuit of equity ensures some groups — like white, straight men — are intentionally marginalized in favor of minority groups. Private companies don’t seem like good benefactors either, as many of them are currently firing employees in favor of AI, simply to keep more money for themselves.
Even if the UBI rollout magically goes off without a hitch, capitalism stands to face another hurdle. People are less likely to buy products and services when they live on a basic fixed income. In a study conducted in 2024, UBI recipients were most likely to spend UBI on necessities, like food and transportation, while withholding their dollars from what can be seen as more frivolous expenses that drive the American economy.
2. Financial inequity
The left’s disdain for wealthy Americans is well-known, with politicians regularly calling for the rich to “pay their fair share,” because why should you keep your money when the government can have it instead? Right now, the left tries to confiscate as much of the people’s earnings as possible through taxes — like California’s outrageous wealth tax — and if given the chance, they’d gladly redistribute those funds to groups that didn’t earn it.
RELATED: Why doesn’t money make you happy?
MicrovOne/Getty Images
Universal basic income would install a fast lane to the left’s unofficial wealth redistribution program. Once in power, they would get to decide which groups receive UBI, as well as the amounts that are distributed. In a left-leaning world, that could mean minority groups get more basic income while “privileged” groups receive less, finally giving them the power to push the “equity” they’ve chased since the Biden administration.
3. The end of the American dream
While Elon Musk’s “high basic income” is a novel idea, the reality of a socialist system means that most of us will get a meager allowance while the elite keep the lion’s share for themselves. That will create a larger divide between the upper class and the lower class. At the same time, the middle class, who can’t work, can’t earn money, and can’t get a leg up, will also fall into the lower-class bracket.
Under UBI, the middle class will be hollowed out, permanently relegating the majority of Americans to poverty. Even worse, this new system will ensure that no one can escape the lower class simply because they don’t have a way to earn more money than the elites are willing to give. Job scarcity and financial dependence will keep the poor in check, and the American dream will cease to exist.
4. Freedom isn’t free
Our forefathers promised the people life, liberty, and the pursuit of happiness. They made a social contract, one that still stands to this day. But if the jobs go away, UBI is instated, and the people must depend on someone else for their next paycheck, the Declaration of Independence loses its power.
Simply put, the people can’t be free if we’re forced to depend on politicians, benefactors, or elitists to provide our way of life. Forced dependence, by any other name, is a form of slavery. Universal basic income gives the elite the power to take our rights and render our founding documents null and void.
5. One step closer to the end times
Last but not least, UBI is one of the final levers required to spread the mark of the beast, the precursor to the end times.
In the New International Version of the Bible, Revelation 13:16-17 says: “It also forced all people, great and small, rich and poor, free and slave, to receive a mark on their right hands or on their foreheads, so that they could not buy or sell unless they had the mark, which is the name of the beast or the number of its name.”
This doesn’t just mean you can’t buy or sell products unless someone says so. It also means you would need the mark to receive UBI payments.
To put it bluntly, it’s easier to force the people to sell their souls when their means to work, earn money, and be free are all taken away. Even if UBI isn’t the mark itself, it’s a Trojan horse that will usher in top-down control that can be exploited by the most evil forces our world has ever known. It’s exactly what the devil wants and needs before the book of Revelation comes to pass.
Is universal basic income inevitable?
In a word, no, not yet. The scenarios above can only happen if two things about the ongoing AI race are true:
- AI will be effective enough to fully replace human jobs, a feat that’s proving difficult with continuous hallucinations, mistakes, and more.
- AI will have the power to produce endless mountains of cash. There can only be enough basic income for everybody — even in small amounts — if AI can print infinite money.
Assuming these are true, more roadblocks stand in the way of an AI-controlled economy.
A crippled economy
Businesses are currently run by people who buy products and services from other human-led companies. Some businesses sell products to each other (B2B), while other businesses sell straight to consumers (B2C). This cycle is the beating heart of capitalism.
If companies are suddenly all run by the same AI platforms, they’ll no longer need to buy digital services from each other to get work done. They can simply use AI to build custom versions for their own companies at little or no extra cost, thus cutting out third-party vendors and partners, which will ultimately make some companies obsolete. In fact, this loophole has the power to take down the entire digital B2B market.
On the commerce side, consumers face a different problem. They can’t use AI to manufacture physical products for themselves — like iPhones, PCs, and game consoles — but under the universal basic income strategy, they are more likely to hold their money for necessary purchases than to spend it like they do today. This monumental shift in spending habits could also cripple companies and the market, or at the very least, it could stifle year-over-year growth.
In short, universal basic income, ushered in by the revolution of AI, would be a huge disaster for American workers, the American economy, and the American dream. All of it is in jeopardy unless the government passes regulations that prevent mass job loss. Luckily, after kneecapping the states’ ability to regulate AI via executive order, the federal government is finally stepping up by introducing the National AI Legislative Framework and the Trump America AI Act. More on that soon.
West Virginia Republicans are betraying their voters for AI special interests

There is a reason why most red-state Republican leaders fail to reflect the political values of their constituents. They represent the special interests they work for rather than the whole of the people.
Nowhere is this more evident than with the ravaging of West Virginia by generative AI data centers, promoted by people like House of Delegates Speaker Roger Hanshaw, who legally represents special interest groups fighting poor, local communities in court.
The same man who was instrumental in stripping localities of their ability to block data centers is now representing the people behind those data centers in court.
Remember the provision in the One Big Beautiful Bill Act of 2025 that originally attempted to strip all state and local governments of any ability to block data centers from being built? Well, last year, West Virginia enacted just such a preemption at the state level. Hanshaw shepherded HB 2014 to Republican Gov. Patrick Morrisey’s desk.
Among many special tax and regulatory favors offered to data centers, this bill removed local jurisdiction over the siting, zoning, and operating of certified high-impact data centers and microgrids.
Thus, companies like Google, Meta, and OpenAI could work with state politicians bought into their pay-for-play and force their way into any community. And what better person to be fighting for them than the speaker of the House?
While serving as speaker, Hanshaw filed a notice of appearance in the appeal to the Department of Environmental Protection’s Air Quality Board on behalf of his client MGS CNP1 LLC, an affiliate of Houston-based Fidelis New Energy working on a data center project in Mason County.
This was in the middle of the session and just one week after the state House of Delegates passed legislation making it easier for these projects to obtain certification with the Department of Commerce.
Then, just two days after the session ended, Hanshaw took on a case through his work at Bowles Rice for Fundamental Data, the company working on powering the data center bonanza in Tucker County.
So the same man who was instrumental in stripping localities of their ability to block data centers is now representing the people behind those data centers in court against local community groups appealing the DEP’s permit issuance.
It was the Tucker County fight that led me to speak out nationally against this mindless business model of raping red-state land, power, and water for a form of generative AI that serves nothing but chatslop and the surveillance state.
Last August, I vacationed in Tucker County, home to the gorgeous Blackwater Falls State Park and Canaan Valley. The county voted for Trump by a 50-vote margin; its people are the forgotten men that MAGA was supposed to represent.
RELATED: How to power the AI race without losing control
Rudall30/Getty Images
I spoke with several locals who were irate beyond words about the injustice occurring in a state with barely any Democrat elected officials.
What’s worse is that West Virginia is also being violated with endless transmission lines to power the blue-state “data center alley” in northern Virginia. According to a report from the Institute for Energy Economics and Financial Analysis, West Virginia energy consumers will be expected to pay $572 million in higher rates to fund the rope to hang themselves.
What is so offensive is that these projects are not even creating jobs. According to the February JOLTS report from the BLS, construction remains in its deepest downturn since the Great Recession, despite these so-called data center projects. Oracle, which sits at the center of the cloud computing behind these data centers, is laying off 18% of its workforce.
Shockingly, Hanshaw and his minions attempted to pass even greater handouts for data centers, offered to no other industry, on top of what was already in HB 2014.
This session, they introduced SB 623, which offered a complete property tax exemption and sales tax exemption on all data center equipment. They also introduced HB 4013, which would have created a new tax credit available to data centers to offset all state income, sales/use, franchise, and payroll withholding taxes based on capital investments, construction costs, and wages.
How many jobs did they have to create to qualify? Just 10! Which, of course, is a tacit admission that these behemoths don’t create many jobs, despite their enormous footprint, cost, and consumption of power.
In other words, Agenda 2030 is being fulfilled right under our noses in a state where Republicans control both houses of the legislature with 32-2 and 91-9 majorities.
What West Virginia, with its mind-numbing GOP majorities, shows is that the lack of conservative outcomes under GOP control is not due to a lack of power or votes but too much access to money and special interests.
Sam Altman described as ‘sociopath’ by board member in brutal insider report: ‘He’s unconstrained by truth’

OpenAI CEO Sam Altman was dragged through the mud in a new in-depth report that features former colleagues and current board members referring to him as a sociopath and a liar.
Altman, 40, has yet to respond to claims made in a recent report, some of which were uncovered in secret memos to OpenAI’s board members.
‘He is a sociopath. He would do anything.’
According to the New Yorker, OpenAI’s chief scientist, Ilya Sutskever, sent the memos to three other board members in 2023. One of the memos about Altman began with a list titled “Sam exhibits a consistent pattern of.” The first item on the list was “lying.”
The memos also alleged that Altman misrepresented facts to executives and board members while deceiving them about safety protocols. Unfortunately for Altman, the claims did not stop there.
“He’s unconstrained by truth,” a board member told the New Yorker. “He has two traits that are almost never seen in the same person. The first is a strong desire to please people, to be liked in any given interaction. The second is almost a sociopathic lack of concern for the consequences that may come from deceiving someone.”
The outlet said that the unnamed board member was not the only person to describe Altman as “sociopathic” without being prompted. Not long before his 2013 suicide, according to the New Yorker, coder Aaron Swartz warned at least one friend about Altman, whom Swartz had known from their time together at Y Combinator. His warning: “You need to understand that Sam can never be trusted. He is a sociopath. He would do anything.”
Sutskever additionally implied that he did not think Altman should have power over others, saying, “I don’t think Sam is the guy who should have his finger on the button.”
Others described him as more ambitious than anything else.
RELATED: Sam Altman tells BlackRock he wants AI on a meter ‘like electricity or water’
The New Yorker just dropped a massive investigation into Sam Altman, based on over 100 interviews, the previously undisclosed “Ilya Memos,” and Dario Amodei’s 200+ pages of private notes. It’s the most detailed account yet of the pattern of behavior that led to Sam’s firing and… pic.twitter.com/vX5xIp5DnI
— Ryan (@ohryansbelt) April 6, 2026
Former OpenAI board member Sue Yoon said Altman was “not this Machiavellian villain” but was able to convince himself of his own sales pitches.
“He’s too caught up in his own self-belief,” she reportedly said. “So he does things that, if you live in the real world, make no sense. But he doesn’t live in the real world.”
Other anonymous colleagues cited by the New Yorker said that Sutskever and similar detractors were simply aspiring to take Altman’s throne. Still, even many neutral comments did not help Altman’s portrayal in the report.
“He’s unbelievably persuasive. Like, Jedi mind tricks,” a tech executive colleague of Altman’s reportedly said. “He’s just next-level.”
At the same time, OpenAI is allegedly in the midst of unleashing superintelligence that Altman himself says will be so disruptive that it will require a new social contract.
RELATED: Sexting with chatbots is too far, OpenAI decides
Anna Moneymaker/Getty Images
Altman told Axios that there would be widespread job loss and a threat of cyberattacks coupled with social unrest.
“I suspect in the next year,” he said, “we will see significant threats we have to mitigate from cyber.”
Altman proposed a new deal with citizens that includes a public wealth fund, taxes on “automated labor,” a 32-hour workweek, and the “right to AI.”
That confirms previous reports that Altman wanted to put AI on a meter like electricity or water, to both democratize its usage and limit the possibility of overburdening the electrical grid.
OpenAI did not respond to Return’s request for comment about the claims made against Altman or about who was making them.
States should work with AI, not against it

For decades, Americans have been conditioned to fear AI. From big-budget blockbusters portraying apocalyptic scenarios to TV shows and books that cast AI in a negative light, the technology has been the villain ever since HAL refused to open the pod bay doors.
This Hollywood-driven fear has effected real policy change at the state level. The problem is that many of these policies are overly restrictive and come from a place of fear rather than objectivity.
AI innovators should have one set of rules to follow nationwide, rather than being forced to tailor products and services according to a patchwork of laws.
They come from an understandable place, of course. AI has been known to hallucinate legal cases and run roughshod over privacy law, and it can be used in abusive and hurtful ways. It is imperative that humans remain involved in decision-making and implement strong safeguards against misuse. The White House recently called for such policies in the National AI Legislative Framework.
But the Trump administration has also recognized that regulations can be a hindrance.
This is why President Trump issued an executive order to establish a federal framework for AI regulation last December. “My Administration must act with the Congress to ensure that there is a minimally burdensome national standard — not 50 discordant State ones,” he wrote in the order. “The resulting framework must forbid State laws that conflict with the policy set forth in this order. … A carefully crafted national framework can ensure that the United States wins the AI race, as we must.”
The order also directed the secretary of commerce to publish a report examining AI regulations from coast to coast. It will identify state AI laws the administration considers “onerous,” creating a targeting map that will inform the priorities of the Justice Department’s AI Litigation Task Force.
Colorado — which is already in the administration’s crosshairs, according to the executive order — and other states whose laws make the list (such as California, New York, and Illinois) could lose significant federal dollars.
Although President Trump’s order targets states, cities aren’t in the clear. The DOJ recently created a new Enforcement and Affirmative Litigation Branch within the Civil Division that is tasked with “filing lawsuits against states, municipalities, and private entities that interfere with or obstruct federal policies,” underscoring the administration’s intent to challenge local laws that appear to violate the Supremacy Clause.
RELATED: California’s next dumb tech idea: Show your papers to scroll
Samuel Boivin/NurPhoto/Getty Images
Centralizing AI oversight makes sense. Without a deep understanding of artificial intelligence and machine learning, city and state leaders can inadvertently hinder progress in the field of technology (such as restricting the use of aged, anonymized data for algorithm training).
Regardless of the federal funding at stake, city and state statutes governing AI should be reviewed for conflicts with federal policy, which is being carefully designed to allow growth across industries where, today, progress is often powered by AI.
For the good of America’s economic engine, AI innovators should have one set of rules to follow nationwide, rather than being forced to tailor products and services according to a patchwork of laws.
The future is here, and we should not be afraid of it. AI is a powerful driver for progress in business, science, medicine, and a variety of other fields. Efficiency, accuracy, productivity, creativity, and analysis are magnified and elevated by this technology.
Cities and states should seek to harness this tool and use it for their people. The way forward is smart, federally driven guardrails that allow innovation to flourish, not a giant stop sign.
AI needs so much computing power, it’s being taken away from gamers

AI is completely decimating computer component supply chains, causing mass RAM shortages and increasing prices for new products. While most premium consumer electronics are feeling the heat from these constraints, the gaming industry is getting hit particularly hard. Along with new consoles from Valve’s Steam hardware division, Nvidia’s gaming GPU road map is floundering, and no reprieve is in sight.
Nvidia’s stunted GPU road map
Nvidia is one of the hottest companies on the planet right now, rising up the valuation charts to fifth place after spending years below the top 10. Most of this growth was driven by its GPUs tuned for AI, but although the company has taken a liking to its spot atop the AI hierarchy, its humble beginnings took root in the gaming industry.
Buy your new gear now. Right now. If you can find it.
Nvidia makes some of the best gaming GPUs money can buy, and its products are the gold standard that most game developers use when crafting their games. Unfortunately, gaming hardware just isn’t as lucrative as serving an entire roster of Big Tech giants willing to spend billions on the best gear to train their large language models.
RAM shortages have caused Nvidia to make a hard decision — keep printing money on the backs of Big Tech or pinch pennies with gamers who want the best graphics. It chose the former.
The latest reports reveal a bleak outlook for Nvidia’s gaming GPU lineup. The first red flag came when the company skipped unveiling new GPUs at CES in January, a move that is very unlike Nvidia. We’ve since learned that the RTX 50 Super series refresh that was on the way is now delayed. Adding insult to injury, the next-generation RTX 60 series was pushed back even further, rolling to 2027 or maybe even 2028.
That means Nvidia’s gaming GPUs are virtually stuck in limbo, forcing gamers to purchase the same equipment that’s already a year old and aging quickly. Now, that doesn’t mean the 50-series cards are lacking in performance; they’re still very capable. But it does mean that innovation in the industry will stall until Nvidia remembers that it used to be a gaming company before its ostentatious affair with AI.
Steam Machine delays
Valve, meanwhile, has been on a roll lately, with its first-ever handheld gaming computer, the Steam Deck, earning critical acclaim among gamers everywhere. The launch went so well that Valve decided to take a second stab at a full TV console, once again dubbed the Steam Machine.
JianGang Wang/Getty Images
The device is said to be a PC/console hybrid powered by SteamOS, Valve’s Linux-based gaming platform that, in many ways, offers better gaming performance than Windows. Without a crystal ball, it’s impossible to predict whether the Steam Machine sequel will be received better than the original, but if the Steam Deck’s success is any indication, Valve could have a breakout hit on its hands.
The only problem is that the Steam Machine, which was set to release in the first half of 2026, has now been delayed, thanks to — you guessed it — RAM shortages. Some estimates suggest the console will now arrive mid-year, but Valve hasn’t confirmed this timeline. The company has also refrained from announcing an official price, citing fluctuating RAM costs that could drive the final MSRP higher.
Making matters even worse, the Steam Deck has also curiously disappeared from shelves in recent weeks, sparking concerns over Valve’s entire console business.
OEMs fight back
Some OEMs are trying to find ways around the RAM shortages in order to keep their product road maps alive, but the results could be detrimental to their brands. PC manufacturers like Dell, ASUS, and HP are reportedly looking to lesser-known Chinese companies outside their usual supply chains to provide RAM for their laptops.
While this could cut down on RAM costs and boost availability, the memory from these Chinese suppliers is untested in name-brand computers at scale. That means performance could suffer, and it could even open these laptops to security risks.
What are gamers to do?
Needless to say, all of this puts gamers in a tough position. With new hardware delays, market scarcity, potential shoddy RAM options, and rising prices, it’s growing more difficult for gamers to upgrade their existing hardware or make repairs as old components start to break.
The worst part is that RAM shortages are expected to last into 2028. As they drag on, fewer products will be available, and prices on current hardware will jump to even more unreasonable levels. There’s just not enough supply to meet demand, and that could make it impossible for gamers to get the gear they need.
Now you have three options:
- Pray that your current rig holds out until the end of the decade when, hopefully, these issues are resolved.
- Try cloud gaming. It might be easier to rent a rig until this all gets sorted out. But in doing that, you own less of your gaming experience, leaving yourself open to the dictates of companies that could eventually require biometric authentication for access, as is the case with Discord’s new ID-enforced age restrictions.
- Buy your new gear now. Right now. If you can find it.
Why you should buy now, if you can
If you want a 50 series GPU or a brand-new Steam Deck, you might be out of luck. But if a gaming laptop is what you’re after, there’s hope.
Because Nvidia didn’t release new GPUs for 2026, most of the “new” gaming laptops launching this year are minor refreshes. Instead of waiting for those refreshes to drop, you can pick up last year’s models with the same GPUs, which are still available and ripe for the picking.
I took advantage of this loophole myself, snatching up a 2025 ROG Zephyrus G14 with a stellar 5070 Ti that was made with premium parts from a time before the RAM shortages. It’s a smarter option than springing for the marginally better 2026 version with an inflated price tag, internals from a third-rate Chinese supplier, and, more than likely, a delayed release date. Given the way the market has shaken out, I couldn’t be happier with my decision.
Gamers have to choose what’s best for them, but one thing is clear: If you don’t buy new hardware now, you might be waiting until the turn of the decade for better upgrades to come along, and in the fast-paced world of video games, that’s a long time to wait indeed.
How the military is computing the killing chain

In 2025, the nomenclature caught up with the reality. For decades, the United States had operated under the fiction of a Department of Defense, a name that suggested protection, reaction, and a reluctance to engage. When Secretary Pete Hegseth signed the memoranda that would redefine the American military for the algorithmic age, the letterhead had changed. It was the Department of War again.
The revival of the old title was not merely cosmetic. It was an unapologetic signal, a shift from a defensive posture to a mission-focused one. Then between late 2025 and early 2026, Hegseth released a flurry of new memos announcing that the United States intended to become an “AI-first” war-fighting force. The language was clipped, urgent, and devoid of the hand-wringing that usually accompanies the introduction of new lethal means. The department now treats AI not as a support tool but as a core element of warfare, intelligence, and organizational power.
There is a simulation engine that alludes without irony to Orson Scott Card’s novel about child soldiers fighting insectoid aliens.
Reading through these documents, one is struck by the anxiety of the “algorithm gap,” which echoes the “missile gap” of the Cold War, with the stakes shifted from megatonnage to processing speed. The prevailing sentiment is that falling behind an adversary’s AI capabilities would be as catastrophic as falling behind in nuclear weapons. The Department of War does not intend to be a laggard. “Speed and adaptation win,” one memo states.
To achieve this speed, the Department has declared war on its own bureaucracy. The memos speak of a “wartime approach” to innovation, dismantling the risk-averse culture that has defined Pentagon procurement for half a century. The endless committees and boards have been dissolved, replaced with a “CTO Action Group” empowered to make quick calls. The ethos is that of Silicon Valley, grafting Mark Zuckerberg’s call to “move fast and break things” onto an institution whose business is to break things in a more literal sense.
The specific initiatives, what the Department calls “Pace-Setting Projects,” read like the chapter titles of a science-fiction novel. There is “Swarm Forge,” a project designed to pair elite war-fighters with technologists to experiment with drone swarms. There is “Ender’s Foundry,” a simulation engine meant to war-game against AI adversaries, a name that alludes without irony to Orson Scott Card’s novel about child soldiers fighting insectoid aliens. There is “Open Arsenal,” which promises to turn intelligence into weapons in hours rather than years.
Photo by ANDREW CABALLERO-REYNOLDS / AFP via Getty Images
What is being built here is “civil-military fusion,” a concept the Chinese have long championed and which the United States is now adopting with a convert’s zeal. The Department is actively courting the private sector, mentioning commercial AI models such as Google’s Gemini and xAI’s Grok. It is bringing in tech executives to run the show, with a new chief technology officer empowered to clear bureaucratic blockers.
The transformation is not limited to the battlefield but permeates the “enterprise,” a sterile word for the three million personnel who make up the Department’s nervous system. The vision is total: Under a program called GenAI.mil, every analyst, logistician, and staff officer will be issued a secure AI assistant to draft reports and code software. The goal is to embed AI systems across war-fighting, intelligence, and support functions until the distinction between soldier and data processor dissolves. The focus is on “decision superiority,” out-thinking the opponent at every turn.
The drive for decision superiority leads to a profound shift in the role of human judgment. The memos describe “Agent Network,” a project to develop AI agents for battle management “from campaign planning to kill chain execution.” They speak of “interpretable results,” a concession to the idea that humans should know why the machine decided to fire. The momentum is toward “human on the loop,” in which a human may abort an attack, rather than “human in the loop,” in which the human must initiate it. We are entering an era of “hyper-war,” in which AI systems could escalate a conflict in seconds, before a human commander can pour a cup of coffee.
The Department is betting that American ingenuity, harnessed in code, will secure the future, that it can maintain “America’s global AI dominance” through force of will and capital. The memos outline a future in which algorithms join soldiers on the battlefield, data platforms become as crucial as tanks, and decisions are increasingly informed by machines. It is a grand experiment in efficiency. We have decided that if warfare is now a battle of algorithms, we intend to algorithmically outgun the world. The name on the building has changed to reflect the reality: We are no longer defending. We are computing the kill.
AI in education: Innovation or a predator’s playground?

For years, parents have been warned to monitor their children’s online activity, limit social media, and guard against predatory digital spaces. That guidance is now colliding with a very different message from policymakers and technology leaders: Artificial intelligence must be introduced earlier and more broadly in schools.
When risky platforms enter through schools, they inherit an unearned legitimacy, conditioning parents to trust tools they would never allow at home.
On its face, this goal sounds reasonable. But what began as a policy push has quickly turned into something far more concerning — a rush by major tech companies to brand themselves as “AI Education Partners,” gaining access to public education under the banner of innovation, often without parents being fully informed or given the ability to opt out. When risky platforms enter through schools, they inherit an unearned legitimacy, conditioning parents to trust tools they would never allow at home.
AI in education is being sold as inevitable and benevolent. Behind the buzzwords lies a harder truth: AI is becoming a back door for Big Tech to access children and sidestep parental authority.
Platforms already under fire for child safety
At the center of this debate are three companies — Meta, Snap, and Roblox — all now positioning themselves as AI education partners while facing active litigation and investigations tied to child exploitation, predatory behavior, and failures to protect minors.
Meta is facing lawsuits and regulatory actions related to child exploitation, unsafe platform design, and illegal data practices. Internal company documents revealed that Meta’s AI chatbots were permitted to engage minors in flirtatious, intimate, and even health-related conversations — policies the company only revised after media exposure.
European consumer watchdogs have also accused Meta of sweeping data collection practices that go far beyond what users reasonably expect, using behavioral data to profile emotional state, sexual identity, and vulnerability to addiction. Regulators argue that meaningful consent is impossible at such a scale. Meta has also claimed in U.S. courts that publicly available content can be used to train AI under “fair use,” raising serious questions about how student classroom work could be treated once ingested by AI systems.
Snapchat is facing lawsuits from multiple states, including Kansas, New Mexico, Utah, and others, alleging that its platform exposes minors to drug and weapons dealing, sexual exploitation, and severe mental health harm. In January 2025, federal regulators escalated concerns by referring a complaint involving Snapchat’s AI chatbot to the Department of Justice.
Despite this record, Snap signed on as an AI education partner, promising “in-app educational programming directed toward teens to raise awareness on safe and responsible use of AI technologies.”
Roblox, long flagged by parents for safety concerns, is being sued by multiple states, including Iowa, Louisiana, Texas, Tennessee, and Kentucky, over allegations that it enabled predators to groom and exploit children. Yet Roblox now seeks classroom access as an “AI learning” platform.
If these platforms are too dangerous for children at home, they are too dangerous to normalize at school. Allowing companies with a history of child-safety failures to integrate themselves into classrooms is negligent and dangerous.
The contradiction no one wants to address
The danger becomes clearer when you step outside the classroom.
Across the country, states including Florida, Tennessee, Louisiana, and Connecticut are restricting minors’ access to social media through age verification, parental consent, and limits on addictive features. At the federal level, the bipartisan Kids Off Social Media Act seeks to bar social media access for children under 13 and restrict algorithmic targeting of teens.
For more than a century, the Supreme Court has recognized that parents — not the state and not corporations — hold the fundamental right to direct their children’s education.
When Big Tech gains access to classrooms without transparency or consent, that authority is eroded. Parents are told to restrict social media at home while schools integrate the same platforms through AI. The result is families being sidelined while Big Tech reduces their children to data sources.
RELATED: Why every conservative parent should be watching California right now
Photo by AaronP/Bauer-Griffin/GC Images/Getty Images
This dangerous escalation must meet a clear boundary. Some platforms endanger children, others monetize them, and some expose their data. None of them belong in classrooms without strict, enforceable guardrails.
Parents do not need more promises. They need enforceable limits, transparency, and the unquestioned right to say no. The Constitution has long recognized that the right to direct a child’s education belongs to parents, not Silicon Valley. That authority does not stop at the classroom door.
If artificial intelligence is going to enter our classrooms, it must do so on the terms of families, not tech companies.