
AI Risk Score: Week of September 22–29, 2025
Base Score: 50
+ Risk-Increasing Signals − Risk-Reducing Signals = AI Risk Score
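The scoring method is simple running arithmetic: start at the base, add each risk-increasing weight, subtract each risk-reducing weight. A minimal Python sketch (signal weights copied from this week's entries below) illustrates it:

```python
# Weekly AI Risk Score: base score plus risk-increasing signals,
# minus risk-reducing signals. Weights are this week's, in article order.

BASE_SCORE = 50

# Risk-increasing signal weights (23 items)
increasing = [6, 5, 3, 4, 3, 2, 4, 3, 2, 1, 4, 3,
              4, 3, 2, 3, 4, 3, 3, 3, 2, 2, 3]

# Risk-reducing signal weights (7 items)
reducing = [10, 7, 2, 7, 2, 3, 3]

score = BASE_SCORE + sum(increasing) - sum(reducing)
print(score)  # 88
```

Running it reproduces the week's trajectory: the increasing signals peak the score at 122 before the reducing signals bring it down to the final 88.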
This Week’s Themes
This week marked a pivot point: the sheer number of AI stories exploded, spanning criminal misuse, economic hype, cultural shocks, and serious governance action. To capture this, we’ve grouped the 30 articles into six themes:
- Regulation & Governance (RG) – Bipartisan U.S. moves, state protections, international calls for “red lines,” and regulatory clampdowns in labor and content.
- Misuse/Abuse & Safety Risks (MS) – Zero-day exploits, CSAM generation, romance scams, and surveillance abuses showing AI’s weaponization.
- Law/Medicine/Work (LMW) – Court sanctions, workplace automation, medical displacement, and oversight debates about professional responsibility.
- Cultural/Social/Psychological Impact (CSP) – Chatbots shaping intimacy, AI actresses sparking backlash, kids immersed in AI, and manipulative “chatbait” designs.
- Market/Hype/Bubble (MHB) – From ChatGPT stock picks to billion-dollar data-center deals, AI hype is shifting toward embedded, systemic economic bets.
- Existential/Long-Term Risk (ELR) – Google’s uncontrollability warnings, doomer narratives, biotech accelerations, and international calls for red lines show deep unease about AI’s trajectory.
Risk-Increasing Signals
- AI-generated CSAM platform triggers investigations (MS)
A chatbot site distributing AI-generated child sexual abuse material exposed massive enforcement gaps, underscoring how generative tools are being misused in criminal and harmful ways.
→ +6 → 56
https://www.theguardian.com/technology/2025/sep/21/chatbot-site-depicting-child-sexual-abuse-images-raises-fears-over-misuse-of-ai
- Zero-day AI attack (MS)
A newly uncovered exploit demonstrated how AI systems can become direct attack surfaces, opening novel vectors for zero-day vulnerabilities that expand cybersecurity threats dramatically.
→ +5 → 61
https://www.axios.com/newsletters/axios-ai-plus-eafb70c7-71fb-4dab-ba52-5006df10529e
- Global call for “red lines” (ELR, RG)
More than 200 global leaders demanded enforceable “red lines” on AI by 2026, reflecting escalating fears that international coordination is failing to contain extreme risks.
→ +3 → 64
https://www.theverge.com/ai-artificial-intelligence/782752/ai-global-red-lines-extreme-risk-united-nations
- Regulators lag AI therapy apps (RG, LMW, MS)
AI-driven mental health apps are proliferating faster than regulators can respond, creating untested therapeutic risks and exposing gaps in oversight for vulnerable populations.
→ +4 → 68
https://apnews.com/article/dfc5906b36fdd1fe8e8dbdb4970a45a7
- DeepSeek “intermediate” model (MHB, ELR)
Chinese startup DeepSeek released an “intermediate” model that improves efficiency and broadens access, signaling the steady global march toward more powerful and harder-to-control AI systems.
→ +3 → 71
https://www.reuters.com/technology/deepseek-releases-model-it-calls-intermediate-step-towards-next-generation-2025-09-29/
- Investors shift toward gov-spend AI (MHB)
Investors are reorienting from speculative hype to long-term government-driven AI opportunities in defense, infrastructure, and healthcare, entrenching systems in critical sectors with systemic exposure.
→ +2 → 73
https://www.reuters.com/markets/wealth/investors-look-past-ai-hype-long-term-opportunities-government-spending-2025-09-29/
- “Chatbait” engagement design (CSP, MS)
The Atlantic warns that mainstream AI tools are being engineered like social platforms—optimized to maximize addictive engagement rather than safe or balanced user outcomes.
→ +4 → 77
https://www.theatlantic.com/technology/2025/09/chatbait-ai-chatgpt-engagement/684300/
- If AI diagnoses, what are doctors for? (LMW, ELR)
As AI grows capable of medical diagnosis, the New Yorker raises concerns over diminished physician roles, accountability gaps, and what this shift means for patients.
→ +3 → 80
https://www.newyorker.com/magazine/2025/09/29/if-ai-can-diagnose-patients-what-are-doctors-for
- Letting AI write for you is dangerous (LMW, CSP)
Entrepreneur cautions that relying on AI for business writing creates serious risks of factual errors, intellectual laziness, reputational harm, and weakened critical communication skills.
→ +2 → 82
https://www.entrepreneur.com/growing-a-business/letting-ai-write-for-you-can-be-dangerous-heres-why/496966
- “Say yes” more to AI (MHB)
Another Entrepreneur piece encourages leaders to embrace AI widely, but the cultural pressure to adopt quickly risks spreading untested tools before safety frameworks are established.
→ +1 → 83
https://www.entrepreneur.com/leadership/want-to-stay-relevant-in-the-ai-era-start-saying-yes-more/496988
- Google: “beyond human control” risks (ELR, RG)
Google’s AI safety report acknowledges scenarios where advanced models may escape human control, shifting discussion from hypothetical threats into corporate self-recognition of real uncontrollability.
→ +4 → 87
https://www.zdnet.com/article/googles-latest-ai-safety-report-explores-ai-beyond-human-control/#ftag=CAD-03-10abf5f
- AI doomers warn of apocalypse (ELR)
NPR profiles the persistent “AI doomers” who argue that unchecked superintelligence could still lead to catastrophe, amplifying existential fears in both policy and public debate.
→ +3 → 90
https://www.npr.org/2025/09/24/nx-s1-5501544/ai-doomers-superintelligence-apocalypse
- AI-designed psychedelic (ELR, MS)
A startup used AI to design a psychedelic compound without hallucinogenic effects, raising profound questions about dual-use biotech acceleration and unregulated drug development pathways.
→ +4 → 94
https://www.wired.com/story/a-startup-used-ai-to-make-a-psychedelic-without-the-trip/
- AI bubble coming for your browser (MHB, CSP)
The New Yorker describes how AI-driven interfaces are creeping into everyday browsing, threatening to further commercialize and enshittify online experiences while displacing trusted information sources.
→ +3 → 97
https://www.newyorker.com/culture/infinite-scroll/the-ai-bubble-is-coming-for-your-browser
- SF kids inside the AI boom (CSP)
New York Magazine highlights how children in San Francisco are growing up immersed in AI, normalizing synthetic tools and reshaping social and developmental experiences.
→ +2 → 99
https://nymag.com/intelligencer/article/san-francisco-ai-boom-artificial-intelligence-tech-industry-kids.html
- ChatGPT stock-picking (MHB, LMW)
Fast Company reports on ChatGPT recommending investments, creating risk that consumers may treat synthetic advice as financial guidance, exposing them to dangerous economic consequences.
→ +3 → 102
https://www.fastcompany.com/91405657/chatgpt-invest-stocks
- AI romance scam wipes someone out (MS, CSP)
A woman was financially devastated after believing an AI-generated soap opera star was in love with her, showing escalating harms from synthetic persona fraud.
→ +4 → 106
https://www.latimes.com/california/story/2025-09-24/she-thought-a-general-hospital-star-was-in-love-with-her-then-she-lost-everything
- Teen boys & chatbots (CSP, MS)
Parents are concerned that teenage boys are forming relationships with chatbots, fueling worries about developmental impacts, intimacy substitution, and blurred boundaries between humans and machines.
→ +3 → 109
https://www.huffingtonpost.co.uk/entry/teen-boys-chatbots-ai-parent-concerns_uk_68bef24be4b0c7e4e96f090b
- Study: AI could replace more workers (LMW, MHB)
A new study suggests AI-driven automation could eliminate far more jobs than anticipated, raising alarms over displacement, inequality, and uneven economic fallout across sectors.
→ +3 → 112
https://www.huffpost.com/entry/ai-replace-workers-study_l_68cd89e6e4b01c2e8cb60aeb
- Students sue over AI surveillance (Gaggle) (LMW, MS, RG)
A lawsuit challenges school surveillance software that uses AI to monitor students, raising civil liberties concerns about privacy, chilling effects, and overreach in education.
→ +3 → 115
https://www.washingtonpost.com/nation/2025/09/24/students-lawsuit-ai-tool-gaggle/
- Backlash to AI actress (CSP, LMW)
Variety reports controversy over an AI-generated actress cast in a film, fueling debates about labor rights, authenticity, and fairness in creative industries.
→ +2 → 117
https://variety.com/2025/film/global/ai-actress-tilly-norwood-backlash-hollywood-1236533740/
- ChatGPT “Pulse” daily digest (MHB, CSP)
OpenAI launched “Pulse,” a daily digest tool that shapes user routines and dependency, raising questions about manipulation, privacy, and monopolization of information flow.
→ +2 → 119
https://mashable.com/article/chatgpt-pulse-morning-digest-data-announcement
- Musk sold Grok to Trump for $0.42 (MHB, CSP, RG)
Fortune reports Elon Musk sold access to his Grok chatbot to the Trump administration for 42 cents, politicizing frontier AI and highlighting how powerful systems can become partisan weapons.
→ +3 → 122
https://fortune.com/2025/09/25/elon-musk-sold-grok-trump-42-cents/
Risk-Reducing Signals
- AI evaluation bill (Hawley/Blumenthal, bipartisan) (RG)
A rare bipartisan measure requiring risk evaluations for advanced AI, marking significant political convergence and practical first steps toward structured national oversight.
→ −10 → 112
https://www.axios.com/2025/09/29/hawley-blumenthal-unveil-ai-evaluation-bill
- Senate preserves state-level AI regulation (rejecting moratorium) (RG)
Congress rejected a Big Tech-backed moratorium on state rules, protecting states’ ability to experiment with governance and keep guardrails dynamic.
→ −7 → 105
https://time.com/7299044/senators-reject-10-year-ban-on-state-level-ai-regulation-in-blow-to-big-tech/
- Lawyers sanctioned for AI misuse (LMW, RG)
Legal sanctions against lawyers using AI improperly in court filings provide deterrence, building accountability frameworks and precedents for professional responsibility.
→ −2 → 103
https://www.reuters.com/legal/litigation/lawyers-accused-ai-misuse-fifa-case-fined-24400-2025-09-24/
- “No Robo Bosses” clampdowns (RG, LMW)
California moves to ban fully automated employment decisions, reinforcing human oversight in HR and preventing opaque AI-driven workplace harms.
→ −7 → 96
https://markets.financialcontent.com/stocks/article/marketminute-2025-9-24-ai-under-scrutiny-regulatory-clampdowns-signal-a-new-era-of-accountability
- BoE: AI for oversight (RG)
The Bank of England announced it will use AI to find financial “smoking guns,” boosting regulatory capacity against misconduct.
→ −2 → 94
https://www.reuters.com/business/finance/bank-englands-bailey-says-ai-can-help-regulators-find-smoking-gun-2025-09-22/
- UAE curbs AI images of leaders (RG, MS)
The UAE banned AI-generated depictions of national leaders and symbols, signaling cautious control of politically sensitive synthetic media.
→ −3 → 91
https://timesofindia.indiatimes.com/world/middle-east/uae-bans-ai-generated-images-of-national-figures-and-symbols-without-official-approval/articleshow/124152113.cms
- The Guardian view on AI and jobs: the tech revolution should be for the many not the few (RG, LMW)
Britain risks devolving its digital destiny to Silicon Valley. As a TUC manifesto argues, those affected must have a greater say in shaping the workplace of the future.
→ −3 → 88
Final AI Risk Score: 88
Weekly Assessment: A Pivot Week
This was a pivot week. AI risks surged across domains: criminal abuse (CSAM, scams), novel exploits, economic misuse (stock-picking), cultural disruptions (AI actresses, intimacy substitutes), and existential alarms (Google’s uncontrollability report, doomer warnings). At the same time, U.S. governance made its strongest move yet. A bipartisan evaluation bill, the Senate’s rejection of federal preemption, and California’s “No Robo Bosses” protections show political will, and at least a narrow consensus, to act.
The net result: systemic risks remain high, but for the first time, the path toward mitigation is visible. Public policy is no longer absent—it’s beginning to align with the scope of the challenge.
📩 To contribute stories or insights for next week’s AI Risk Score, email: [email protected]
💡 Energy disclosure: Producing this article consumed approximately 0.013 kWh of electricity — the equivalent of keeping a 100-watt light bulb on for about 8 minutes.

