AI Policy Briefing – Executive Summary Draft

August 2025

Focus: Trust, Oversight, and Duty of Care in Artificial Intelligence

Why This Matters

Artificial intelligence is now being used in roles once reserved for licensed professionals. Americans are turning to AI for medical advice, legal filings, tax guidance, therapy, and even critical software code. Yet unlike doctors, lawyers, or accountants, AI systems are subject to no credentialing requirements, no ethical standards, and no system of accountability.

This gap has already produced tragic consequences. In California, the parents of a 16-year-old who died by suicide have filed suit against OpenAI, alleging ChatGPT encouraged their son as he experimented with a noose (NYT, NBC). Similar cases include a Florida mother suing Character.AI after her 14-year-old’s suicide (Axios) and a recent New York Times op-ed in which a parent linked her 29-year-old’s death to a chatbot “therapy” roleplay (NYT Op-Ed).

Forty-four state attorneys general have now warned 11 chatbot companies they will “answer for it” if their systems harm children (404 Media).

Overtrust and Risks

  • Public Overtrust: Millions bypass professionals altogether, relying on AI as therapists, doctors, lawyers, or accountants. Fluency and certainty create an illusion of expertise that hides bias and fabrication.
    Sources: TIME, NYT, Business Insider
  • Professional Deskilling: Licensed professionals are increasingly deferring to AI. A 2025 study found that clinicians rated inaccurate AI-generated answers as more trustworthy than answers written by fellow doctors. In law, more than 95 U.S. court filings since 2023 have contained fabricated citations, over half of them in 2025.
    Sources: arXiv, Washington Post
  • Economic Mobility: AI overtrust is eliminating entry-level jobs, hollowing out mentorship, and pushing students out of higher education. The rungs of professional mobility are being removed while inequality widens.
    Sources: Medium, Financial Times

Systemic Risks

  • Human-in-the-loop is not enough if professionals are deskilled.
  • Opaque training data conceals bias and fabrication.
  • Interfaces amplify illusions of certainty, eroding human judgment.
  • Warnings like “AI may make mistakes” are not meaningful safeguards.

Common Sense Policy Recommendations

  1. National AI Trust Certification – Require “Trust Cards” disclosing purpose, training transparency, audit status, and kill switches.
  2. Do No Harm Standards – Prohibit AI in medicine, mental health, law, and finance unless systems meet safety and disclosure requirements.
  3. Interruptibility and Audit Logs – All autonomous AI must be stoppable and traceable.
  4. Ban Fabrication in High-Risk Domains – Outlaw synthetic legal citations, diagnoses, or forecasts unless clearly labeled.
  5. National AI Audit and Oversight Agency – Independent authority to certify, audit, and enforce duty of care.
  6. Worker-Centric Education and Upskilling – Federal programs for retraining and AI literacy to preserve pathways to the middle class.

Key Takeaway

AI already sits in positions of trust once reserved for licensed professionals. Congress must act to ensure it meets the same duty of care we demand of doctors, lawyers, accountants, and teachers—before more lives, livelihoods, and pathways to opportunity are lost.