NO RULES = Spoiled AI Kids: Why AI Needs Boundaries

Why AI Can’t Be Treated Like the Early Internet: The Case for Smarter, Not Looser, Regulation

The fight over AI rules echoes a 1990s debate on internet taxes. But this time, a hands-off approach risks more than lost revenue—it could cost America its ethical leadership and public trust.

🏛️ The National AI Regulation Fight Is Heating Up

A new proposal circulating in the U.S. Senate—backed by allies of former President Trump—would ban states from enacting any AI-specific laws for the next 10 years, effectively handing regulatory control to federal agencies. The bill ties this moratorium to billions in AI infrastructure and research funds, forcing states to either accept regulatory silence or lose access to critical economic support.

The justification? Echoes of 1998.

Proponents are invoking the 1998 Internet Tax Freedom Act, which barred states from taxing internet access and from imposing new, discriminatory taxes on online commerce; together with the nexus rules of that era, it left most internet purchases effectively untaxed. That breathing room, they claim, is what let digital commerce grow, and they argue AI now deserves the same space.

But this analogy falls apart under scrutiny.

🕸️ 1998: The Internet Sales Tax Moratorium Was About Simplicity—Not Safety

The Internet Tax Freedom Act was passed during the dot-com boom to keep thousands of states and municipalities from layering overlapping or inconsistent taxes onto a young online economy. The fear was not harm to the public; it was administrative complexity for small startups.

No one argued that a lack of internet tax would lead to biased decisions, mass surveillance, or automated discrimination.

AI is an entirely different class of technology. It’s not just a platform for sales—it’s an active agent in decision-making, surveillance, hiring, policing, finance, and even warfare. To treat it like e-commerce is to ignore its cognitive, ethical, and social stakes.

⚖️ States Are Leading Where the Feds Stall

In the vacuum of federal action, states have become the laboratories of AI governance:

  • Connecticut is pushing for algorithmic transparency in hiring and financial services.
  • Illinois already regulates biometric data collection.
  • California is exploring impact audits for high-risk AI systems in education and healthcare.

A 10-year moratorium would halt these experiments and freeze any local response to emerging AI harms. If a biased hiring algorithm shuts out candidates of color or a police AI misidentifies someone on the street, state regulators would be legally barred from stepping in.

This isn’t a question of regulatory fragmentation; it’s a question of basic democratic oversight.

🌍 Global Standing at Risk

While the U.S. considers deregulation, the EU’s AI Act has already been adopted, with tiered risk classifications, registration mandates, and accountability requirements for high-impact AI systems. Canada, the UK, and even China are moving fast to define rules.

If the U.S. goes “hands-off,” it risks two outcomes:

  • Losing influence over global AI standards, ceding the moral and technical high ground.
  • Becoming a haven for unethical AI development, damaging trust at home and abroad.

In an AI-driven world, credibility matters. And credibility is built through transparency, trust, and enforceable safeguards.

🚧 Deregulation Favors the Few—At the Expense of the Many

Just as the era of untaxed e-commerce let giants like Amazon skip collecting state and local sales taxes for decades, starving schools, libraries, and public services, an AI moratorium would concentrate power in the hands of a few major companies while stripping communities of the right to protect themselves.

States and cities would be unable to:

  • Challenge opaque AI used in education systems
  • Ban emotion-recognition software in public housing
  • Require disclosures in AI-based tenant screening
  • Audit predictive policing models

And citizens? They’d have no democratic recourse against AI decisions that shape their lives.

🧭 What We Need Instead: Guardrails, Not Gridlock

We don’t need a regulatory free-for-all. We need a coordinated framework:

  • Federal minimums for safety, accountability, and equity.
  • State-level innovation and enforcement capacity.
  • Shared registries and audit trails across jurisdictions.
  • Transparency standards baked into every AI deployment.

This isn’t just a policy debate—it’s a test of whether we govern technology, or it governs us.

Final Thoughts

The sales tax moratorium gave birth to trillion-dollar companies—but also fueled inequality, undermined public budgets, and allowed unchecked digital power to reshape economies. Let’s not make the same mistake with AI.

This time, the stakes are higher. AI doesn’t just sell books. It decides who gets a job, who gets watched, who gets care, and who gets excluded.

Let’s learn from history. Not repeat it.

💡 Energy Usage Disclosure:
Estimated energy consumed in writing and researching this article: 0.045 kWh, equivalent to running a 100W lightbulb for 27 minutes.