Anthropic says its own new model is too dangerous for the public — but not these Big Tech companies
Samyukta Lakshmi/Bloomberg/Getty Images
Anthropic will reportedly commit up to $100 million in credits to the project, equivalent to what it would typically charge for that volume of its chatbot's usage.
Labeled Project Glasswing, the initiative to shore up cybersecurity will grant Mythos access to handpicked companies chosen largely from Big Tech like Amazon, Apple, Google, and Microsoft. The group is rounded out by internet infrastructure and cybersecurity giants like Broadcom, Cisco, CrowdStrike, Nvidia, and Palo Alto Networks, along with financial titan JPMorgan Chase and key open-source nonprofit the Linux Foundation.
This is not the first time an AI company has warned that its product is too dangerous for the public, and the precedent offers readers a way to gauge whether Claude is as dangerous as its creators claim.
In 2019, OpenAI sent out a warning ahead of its release of GPT-2, claiming that its capabilities — now vastly eclipsed by later models — could be used to mass-produce propaganda or misleading text.
As Wired reported at the time, OpenAI said GPT-2 was too risky to be released to the general public.
RELATED: Claude, Anthropic’s AI assistant, slammed by Elon Musk for anti-white responses to simple prompts
Claude has been in the news for alleged missteps, leaks, and accidental postings throughout the past year, and while it may not be a household name yet, it has raced its way through the tech sector as a go-to for “agentic” work building software, apps, and even companies.
In addition to its model being open-sourced and used by the general public for free, the company has been noted for “accidental” postings of its own code.
Anthropic “accidentally uploaded a file to a public repository that’s just meant to help developers understand how to use their product” and “exposed some of the source code of Claude,” reporter Aaron Holmes explained recently.
Proprietary information was further leaked in another alleged accidental posting, this time through a blog draft that revealed “internal source code.”
The company seems poised for ongoing public-relations battles, some chosen and some not, from its high-stakes lawsuit against the federal government over being labeled a supply chain risk to the blowback it has received for putting a woman closely linked to the cultish Effective Altruism movement in charge of its AI's "Constitution."