Anthropic says its own new model is too dangerous for the public — but not these Big Tech companies
Anthropic will reportedly commit up to $100 million in credits to the project, meaning it will forgo the amount it would typically charge for that volume of its chatbot's usage.
Labeled Project Glasswing, the cybersecurity initiative will grant access to Mythos to a handpicked group of companies drawn largely from Big Tech, including Amazon, Apple, Google, and Microsoft. The group is rounded out by internet infrastructure and cybersecurity giants like Broadcom, Cisco, CrowdStrike, Nvidia, and Palo Alto Networks, along with financial titan JPMorgan Chase and the Linux Foundation, a key open-source nonprofit.
This is not the first time an AI company has warned that its product is too dangerous for the public, and looking back, readers can gauge whether Claude is as dangerous as its creators purport it to be.
In 2019, OpenAI sent out a warning ahead of its release of GPT-2, claiming that its capabilities — now vastly eclipsed by later models — could be used to mass-produce propaganda or misleading text.
As Wired reported at the time, OpenAI said GPT-2 was too risky to be released to the general public.
RELATED: Claude, Anthropic’s AI assistant, slammed by Elon Musk for anti-white responses to simple prompts
Claude has been in the news for alleged missteps, leaks, and accidental postings throughout the past year, and while it may not be a household name yet, it has raced its way through the tech sector as a go-to for “agentic” work building software, apps, and even companies.
Beyond its model being open-sourced and freely available to the general public, the company has drawn attention for "accidental" postings of its own code.
Anthropic “accidentally uploaded a file to a public repository that’s just meant to help developers understand how to use their product” and “exposed some of the source code of Claude,” reporter Aaron Holmes explained recently.
Proprietary information was further leaked in another alleged accidental posting, this time through a blog draft that revealed “internal source code.”
The company seems poised for a steady run of publicity battles, both sought and unsought, from its high-stakes lawsuit against the federal government over being labeled a supply chain risk to the blowback it has received for putting a woman closely linked to the cultish Effective Altruism movement in charge of its AI's "Constitution."