The founders demanded the Bill of Rights. AI also needs one.

In September 1787, the Constitutional Convention in Philadelphia came to a close. Delegates had spent months debating and negotiating the structure for a new American government. When the final document was presented for signatures, most of the delegates agreed to support it. But one of the most influential figures in the room refused.
George Mason of Virginia would not sign the Constitution.
Mason’s refusal did not stem from radical opposition to the new proposed government. In fact, he played a major role in shaping America’s early political philosophy. Yet when the convention concluded, Mason believed something essential was missing. The proposed Constitution created a powerful federal government, but it contained no explicit protections for individual liberty. Without a Bill of Rights, Mason warned, citizens would have little protection against abuses of power.
History ultimately proved his concerns justified. Mason’s refusal helped spark the debate that led to the adoption of the Bill of Rights a few years later. His message was simple. When a new, powerful institution is created, the protection of liberty cannot be an afterthought.
A new power is emerging
More than two centuries later, the United States again stands at the edge of a transformative moment. Today, the institution taking shape is artificial intelligence, and it may prove just as consequential to society as the government forged in the late eighteenth century.
The most advanced AI systems are already beginning to shape our culture and how people access information, businesses make decisions, institutions function, and public discourse unfolds. These systems are being integrated into everything from banking and education to media and health care. In many cases, AI models act as intermediaries between humans and the world of information around them.
This development carries enormous promise. Artificial intelligence could accelerate medical research, improve productivity, and unlock scientific discoveries that once seemed impossible.
At the same time, the growing influence of AI raises an important question. What values will guide the systems that increasingly shape our society?
AI is not neutral by default. Every model reflects decisions made by its designers. The data used to train it, the rules used to filter its responses, and the priorities embedded in its algorithms all influence how it interacts with users. Beyond just answering questions and responding to prompts, these systems influence what information people encounter and how issues are understood.
In other words, the institutions building AI today are quietly creating the informational infrastructure of the future.
Where are the safeguards for freedom?
George Mason understood that powerful institutions require clear limits. His concern centered on ensuring that a strong central government would respect the rights of the people it serves.
Artificial intelligence deserves the same scrutiny.
Recent controversies surrounding AI tools have revealed how easily political or ideological assumptions can shape technological systems. A growing body of studies has found that many leading AI models tend to reflect left-leaning political assumptions in their outputs, raising concerns about viewpoint bias. Major AI platforms have faced backlash for producing historically inaccurate outputs to satisfy modern ideological expectations, as seen in widely publicized image-generation failures.
Social media platforms, powered by similar AI-driven algorithms, already curate what users see, amplifying certain viewpoints while quietly burying others. Even leaders within the AI industry have acknowledged the risk that these systems could influence public discourse in ways that are difficult for users to detect.
More egregious examples can be seen with Chinese AI models, such as DeepSeek, which have been shown to avoid or redirect discussion on topics that conflict with official government positions, reflecting the priorities of the state rather than the pursuit of truth.
Taken together, these examples demonstrate how AI can be shaped to filter reality itself, whether by governments, corporations, or the assumptions embedded by developers.
These examples illustrate a basic reality. Artificial intelligence can either serve as a tool for expanding human freedom or as an instrument for shaping and controlling public discourse and, by extension, society. The outcome will depend on the values embedded in these systems today.
A meaningful step forward would be the adoption of clear, principled guidelines for building and deploying these systems. At minimum, AI development should prioritize truth-seeking over narrative-shaping, ensuring that systems are designed to inform rather than steer users toward predetermined conclusions.
Developers should also commit to transparency in training data sources, so the public has a clearer understanding of what informs these models.
Just as important, developers should resist coercion from governments or corporations seeking to suppress lawful speech or manipulate outcomes. They should reject internal policies that seek to bury dissenting views under the vague banner of “safety,” a term that too often masks subjective judgment.
These principles may not solve every problem, but they would begin to align AI with the values of a free society.
George Mason’s warning for the AI age
George Mason refused to sign the Constitution because he believed liberty needed stronger protection before the new federal government took effect. His insistence on a Bill of Rights helped ensure that the American experiment would endure by providing explicit protections for individual freedom.
The United States now faces a similar moment as artificial intelligence becomes woven into the fabric of modern life. AI will influence how people learn, communicate, and understand the world. The values guiding these systems will shape society in ways that are difficult to predict.
Before this technological infrastructure becomes fully embedded in our daily lives, it is worth asking a question that George Mason would likely recognize.
If artificial intelligence is going to help shape the future of our society in profound ways, should it not also be built to respect the same freedoms that Americans have fought for since the founding of the republic?
The founders believed liberty required clear protections before a new, powerful structure was fully unleashed. As we enter the age of artificial intelligence, their lesson remains as relevant as ever.
AI is powerful. It is not wise.

Artificial intelligence has taken the wired world by storm, but the backlash came almost as fast. Progressives complain about job losses, environmentalists question the ecological impacts of large data centers, and local activists clamor for assurances that household utility bills won’t skyrocket because of the centers’ voracious electricity demands. Others simply worry that the technology will overwhelm humans’ ability to control it.
At least in part, these reactions stem from the overselling of AI.
AI is super cool, but it’s not superhuman, nor is it superintelligent. AI is simply very fast processing of vast amounts of data.
Intelligence, knowledge, understanding, and wisdom are distinct concepts. The distinctions among them elucidate the scope and limits of both human and electronic “intelligence.”
Intelligence is the ability to process information into an internally coherent framework, one that adds to or detracts from knowledge depending on how accurate it is. Knowledge is the accumulation of information organized into coherent frames or models that help us understand. Understanding is awareness of the significance, purpose, or meaning of accumulated knowledge.
And wisdom is judgment seasoned by experience and the awareness that intelligence, knowledge, and understanding are limited, inherently flawed, and useful only to the extent that they advance a worthwhile purpose.
Nearly 2,500 years ago, the Oracle of Delphi reportedly declared that no man was wiser than Socrates. Socrates claimed to be stunned by this because he was keenly aware of how much he didn’t know. But after talking to others widely acclaimed to be knowledgeable, such as the leading politicians, poets, philosophers, and artisans of his day, he discerned this Delphic wisdom: Those claiming knowledge were ignorant of their own ignorance, whereas Socrates knew he knew nothing.
For this insight, Socrates was put to death for impiety and corrupting the youth of Athens, thereby proving for all time both the foolishness of his accusers’ certainty and the wisdom of Socratic questioning.
This bears repeating today, as we enter the age of artificial intelligence: It’s wise to question the “intelligence” of machines, the “knowledge” they propagate, and our understanding of the significance and limits of the technology.
AI models are amazing and useful despite being incomprehensible to most of us, but AI is not infallible. AI will expand human knowledge and understanding of the world only if and to the extent that human users are encouraged to question AI results, processes, and functions.
People make mistakes, as do those who make and train the machines. Still, people tend to trust machines more than people, especially for information that is hard for humans to process. For example, tennis players have more faith in electronic line calls than in human ones, although that faith in the new technology has been shaken by errors, such as electronic calls that conflict with visible ball marks.
As AI use spreads, people will increasingly rely on AI and trust its results for routine tasks, like Google searches, while remaining more skeptical of AI for complex tasks and reluctant to let it act on their behalf without human intervention.
It’s wise to question AI’s results; errors are common even in routine searches.
Examples of AI errors, hallucinations, and political bias are common. A Northwestern University business school professor of my acquaintance recently asked ChatGPT for advice evaluating investment alternatives. ChatGPT recommended that he invest in a particular fund and described in detail that fund’s returns, risks, and assets. When the professor went to invest in ChatGPT’s recommended fund, he discovered that the fund did not actually exist; ChatGPT made it all up, a phenomenon commonly referred to as “AI hallucination.”
Indeed, AI can screw up even mundane tasks: In my research for this piece, a Google AI summary ascribed quotes to Socrates that are not supported by any historical record.
Artificial intelligence, like human intelligence, is prone to error and is not always reliable, but that is to be expected, especially in a fledgling technology. AI is artificial intelligence, not artificial knowledge, understanding, or wisdom. AI is a processor, a very fast processor, that organizes and distills information, and organized information is easier for humans to evaluate and use than vast amounts of unorganized information.
Properly understood, AI supplements rather than replaces human intelligence, knowledge, and understanding. The limitations and faults within these amazing models also remind us that human intelligence is limited: it imperfectly organizes the imperfect data available to it and frames that data subjectively, not objectively.
Many of us expect the machines that humans make to have "better" intelligence than that of their human creators: more objective, more comprehensive, more insightful. This is a naïve hope. In one sense, it is "better": AI organizes more information faster than humans can. But who do people think programmed the thing? Every AI model is regurgitating imperfect information collected, created, and input by imperfect, subjective human beings.
What to make of all this?
First, perhaps the math nerds creating AI are mistakenly training machines to handle information processing on human topics as if they were math problems with a specific answer. Perhaps instead, machines should be trained to suggest questions to consider instead of answers to accept with respect to human inquiries relating to politics, economics, psychology, child-rearing, crop science — the full range of arts, humanities, and social sciences.
Second, people training these machines should be explicit about the biases and perspectives being built into how the AI organizes, sorts, and frames information. My own bias on this topic is that I believe American AI companies should be building AI with quintessentially American framing.
Third, AI creators should consider the political, regulatory, and legal risks of "overselling" what AI is and what it can do. For example, should AI creators anticipate a duty to warn users of shortcomings in AI's results, or a need to disclaim warranties?
Fourth, AI creators need to consider improving the quality of the data on which the systems are trained, recognizing that many online data sources intentionally mislead to advance political agendas. Perfectly “unbiased” information is impossible to obtain, but some information is more accurate and less biased than other information; trainers should exercise better judgment about data.
The creation of AI large language models is an incredible feat of engineering. The technology is quite useful and will soon be essential, but it is still a product of human invention. As such, we need to recognize that AI is ultimately just the latest and greatest, but still imperfect, tool invented and used by Homo sapiens to make life better for Homo sapiens.
Editor’s note: This article was originally published by RealClearPolitics and made available via RealClearWire.