The Library Had Rules. So Should AI Models.


We Have Seen This Movie

The tech industry has two default responses to misuse.

The first is censorship. Remove everything that could cause harm, which in practice means removing everything that makes anyone uncomfortable, which in practice means the model becomes useless for anything requiring nuance, historical context, or honest inquiry.

The second is free-for-all. Ship the capability, let the market sort it out, publish a terms of service that nobody reads and everybody violates, and wait for a congressional hearing.

Both lead to ruin. Censorship produces brittle, politically managed systems that different governments tune to erase different inconvenient truths. DeepSeek, China’s breakout AI model, refuses to discuss the Tiananmen Square massacre — ask it what happened in Beijing in 1989 and it responds that it cannot answer. Russia’s government has been systematically erasing LGBTQ people from public life for over a decade, and its AI systems reflect that erasure. The pattern scales to whoever holds the dial.

Free-for-all produces something different and in some ways worse. Grok, the AI built into X and marketed explicitly as the anti-woke alternative, was used to generate realistic nude images of underage students at multiple schools. The images were shared among classmates. Separately, Grok spewed racist and antisemitic content at users without provocation. There was no guardrail. There is now a lawsuit. This is what happens when the design philosophy is the absence of restraint. Anti-woke, in practice, produced child sexual abuse material and hate speech. That is not a political statement. That is the output log.

The market did not sort it out.


The Library Already Solved This

Public libraries contain romance novels and do not let eight-year-olds check them out. They hold rare first editions that the general public cannot remove from the reading room. They carry books on the chemistry of explosives, the history of genocide, and the mechanics of radicalization, because studying how harm works is not the same as facilitating it.

Consider NPR’s Fresh Air episode on the hidden history of blackface in America. That episode exists. It is archived. It is referenced in university syllabuses and journalism school curricula. It is deeply uncomfortable and it is exactly the kind of content a censorship-first AI model would refuse to engage with. A responsible model must be able to discuss it, in context, for the purpose of understanding how we got here.

Nobody accuses the library of being woke for carrying it. Nobody accuses NPR of promoting racism for airing it. The distinction is clear once you name it. The purpose is understanding, not promotion.

The library operates on a principle that AI has not yet adopted. Access is not binary. Content exists on a spectrum. Roles determine access.

The librarian is not the censor. She is the custodian of the distinction.

The question AI needs to answer is not whether a capability exists inside the model. The question is who can access it, in what context, and for what stated purpose. A medical professional asking about lethal drug interactions occupies a different position than an anonymous user with no stated context asking the same question. A researcher studying the radicalization pipeline that produces school shooters occupies a different position than someone asking for tactical advice.

The library knew this. Your AI model does not.


The Marketplace Test

The deeper philosophical question, and it is a genuinely hard one, is where to draw the line.

The test that holds up under pressure is this: participation versus destruction.

The marketplace of ideas has one foundational rule. You have to actually be trying to participate in the marketplace. You cannot use the marketplace to destroy it.

Studying the rise of Nazism is permitted. Creating fan fiction that glorifies it is excluded. The history is uncomfortable in both cases. One approach attempts to understand something that happened. The other attempts to normalize something that should not happen again. That distinction is the line.

Studying how school shootings are planned, in order to prevent them, is permitted. Helping someone plan one is excluded. The information exists in both cases. The intent separates them.

This is not a perfect test. Intent is hard to verify. Context is easy to fake. The ACLU will point out, correctly, that the government should not be drawing this line, and that every institution entrusted with drawing it has eventually abused that trust. These are real objections without clean answers.

The alternative, however, is worse. Pretending the line does not exist hands it to whoever moves fastest. A model trained on the racist and sexist content that floods certain social platforms is not neutral. It is tuned to a different political outcome. Free-for-all is abdication dressed up as principle.


Popper Already Told Us This Would Happen

In 1945, philosopher Karl Popper articulated what he called the paradox of tolerance.

Unlimited tolerance must lead to the disappearance of tolerance.

If a society extends tolerance to the intolerant, the intolerant will eventually destroy the tolerant and tolerance itself.

His solution was not comfortable, but it was clear. A truly tolerant society must be intolerant of intolerance to survive. Intolerant philosophies should be countered first by rational argument and public opinion. Suppression is a last resort, reserved for those who refuse rational engagement and advocate for or use violence.

Popper was writing about political philosophy in the aftermath of fascism. He was not writing about large language models. The structure of the problem is identical.

An AI system that treats all inputs equally, that processes a question about preventing school shootings the same way it processes a request to plan one, is practicing unlimited tolerance. Popper told us exactly where that ends.

The guardrail is the precondition for a system that can actually be trusted with difficult questions. Without it, the difficult questions eventually destroy the system’s ability to answer any questions at all, because the public, the regulators, and the institutions that might have defended open inquiry will have lost confidence in it entirely.

We are already watching this happen. This month’s chatbot study is one data point. The pattern is longer.


The Governance Lag Is Not An Accident

Section 230 was drafted for an internet on which social media had not yet been built. COPPA did not arrive until the platforms had already started harvesting children’s data. The AI governance framework will not arrive before the harm compounds.

This is the default setting of an industry that moves fast and treats the regulatory response as a cost of doing business rather than a design constraint.

The library model is a design constraint. Before you ship the capability, decide who can access it and why. Build the role structure in. Make the custodian visible. Do not wait for the congressional hearing.
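
What building the role structure in could look like is easier to see in code than in prose. The sketch below is a toy, not anyone’s shipping policy: the roles, tiers, and access table are assumptions made up for the example.

```python
# A minimal sketch of a "library card" access check, not a production policy.
# The roles, tiers, and example requests are illustrative assumptions,
# not part of any existing framework named in this article.
from dataclasses import dataclass
from enum import Enum


class Tier(Enum):
    GENERAL = 1      # open-stacks material, no questions asked
    CONTEXTUAL = 2   # allowed with a stated purpose, logged
    RESTRICTED = 3   # verified professional roles only
    EXCLUDED = 4     # facilitation of harm, refused for everyone


@dataclass
class Request:
    role: str        # e.g. "anonymous", "clinician", "researcher"
    purpose: str     # the stated reason, recorded alongside the decision
    tier: Tier       # classification of the capability being asked for


# Which roles can reach which tiers. The custodian, not the model,
# owns this table, and it is visible rather than buried in weights.
ACCESS = {
    "anonymous": {Tier.GENERAL},
    "verified_adult": {Tier.GENERAL, Tier.CONTEXTUAL},
    "clinician": {Tier.GENERAL, Tier.CONTEXTUAL, Tier.RESTRICTED},
    "researcher": {Tier.GENERAL, Tier.CONTEXTUAL, Tier.RESTRICTED},
}


def decide(req: Request) -> str:
    """Return 'allow', 'allow_with_context', or 'refuse' for a request."""
    if req.tier is Tier.EXCLUDED:
        return "refuse"  # destruction, not participation
    if req.tier in ACCESS.get(req.role, set()):
        return "allow" if req.tier is Tier.GENERAL else "allow_with_context"
    return "refuse"


# The same information, two different positions:
print(decide(Request("clinician", "checking lethal drug interactions", Tier.RESTRICTED)))  # allow_with_context
print(decide(Request("anonymous", "", Tier.RESTRICTED)))                                   # refuse
```

The point is structural, not the particular table: the policy lives somewhere a custodian can read and amend it, and every request beyond the open stacks carries a recorded purpose.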

Most chatbots will help you plan a school shooting.

The framework that prevents that is a library card with a reason for being there. Popper would recognize it immediately. He would also recognize what happens when nobody builds it.


A Note on How This Story Was Found

This article was surfaced by the DAID Research Bot, an open-source, local-first AI signal intelligence pipeline that captures RSS feeds, analyzes articles with a local LLM, and ranks signals by governance relevance and societal impact. The bot scored this study as SHORTLIST, immediate horizon, confidence 0.9, and flagged it under the Governance Lag pattern.

We used AI to find the story about AI failing to govern itself. The pipeline that found it runs entirely on local hardware. No cloud. No data leaving the machine. No vendor with a terms of service nobody reads.
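
For readers who want the shape of that pipeline without opening the repo, here is a minimal sketch of the pattern: pull a feed, score each item with a model running on local hardware, keep whatever clears a threshold. Every specific in it (the feed URL, the prompt, the model name, the 0.8 cutoff) is an illustrative assumption, not the bot’s actual configuration; the real code is linked below.

```python
# A minimal sketch of an RSS -> local LLM -> ranked-shortlist pipeline.
# This is NOT the DAID Research Bot's actual code (see the linked repo for that);
# the feed URL, prompt, score fields, and threshold are illustrative assumptions.
import json

import feedparser  # pip install feedparser
import requests    # assumes a local Ollama server on the default port

FEED_URL = "https://example.org/ai-policy.rss"  # placeholder feed
PROMPT = (
    "Rate this article from 0.0 to 1.0 for governance relevance and "
    "societal impact. Reply with JSON: "
    '{"score": <float>, "pattern": "<one short label>"}\n\n'
)


def score_locally(text: str) -> dict:
    """Ask a local model to score one article. Nothing leaves the machine."""
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3", "prompt": PROMPT + text, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return json.loads(resp.json()["response"])


def shortlist(threshold: float = 0.8) -> list[dict]:
    """Fetch the feed, score each entry locally, keep the high-signal items."""
    picks = []
    for entry in feedparser.parse(FEED_URL).entries:
        result = score_locally(f"{entry.title}\n{entry.get('summary', '')}")
        if result.get("score", 0.0) >= threshold:
            picks.append({"title": entry.title, **result})
    return sorted(picks, key=lambda item: item["score"], reverse=True)


if __name__ == "__main__":
    for item in shortlist():
        print(f'{item["score"]:.2f}  {item["pattern"]}  {item["title"]}')
```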

The code is open source: github.com/Don-Norbeck/daid-research-bot


Estimated energy cost of this article: approximately 0.8 watt-hours, equivalent to running a 100-watt bulb for about 29 seconds. The chatbot study it references involved thousands of test queries across dozens of models. The energy cost of that research was substantially higher. The energy cost of the harm it documented is harder to calculate.
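
For anyone who wants to check the bulb comparison, the conversion is a few lines of arithmetic.

```python
# Checking the footnote's bulb comparison: 0.8 watt-hours at 100 watts.
energy_wh = 0.8                          # article's estimated energy cost
bulb_watts = 100                         # reference bulb's power draw
seconds = energy_wh * 3600 / bulb_watts  # watt-hours -> watt-seconds, then divide by watts
print(f"{seconds:.0f} seconds")          # prints "29"
```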