
Andrej Karpathy coined the term “vibe coding” and then spent the next few months describing his own psychosis from it. By December he had flipped his ratio of hand-written to AI-delegated code from 80/20 to 0/100. He was issuing commands to agent swarms for 16 hours a day. When his monthly token allocation ran low he described feeling “extremely nervous,” rushing to exhaust his supply before the month reset. Not because he had something to ship. Because the anxiety of unused tokens had become unbearable. He was deep in a behavioral loop and calling it productivity. (https://www.axios.com/2026/04/04/ai-agents-burnout-addiction-claude-code-openclaw)
We Have Seen This Before
In the early days of social media the story was always about connection. Facebook connected friends. Twitter connected ideas. YouTube connected creators with audiences. The engagement metrics were framed as proof that people loved the product. What took years to surface was the underlying architecture – these platforms were not optimizing for connection. They were optimizing for engagement, measured in time on platform, clicks, shares, and return visits. Connection was the surface. Engagement was the business model.
The consequences were not accidental. Platforms that optimize for engagement without friction will surface outrage over nuance, anxiety over satisfaction, and compulsion over choice every time. Not because anyone designed it that way. Because the feedback loop selects for whatever keeps the human in the chair.
Agentic coding tools are running the same architecture on a different surface. The metric is token consumption. The feedback loop is the prompt cycle. The human sits in the chair issuing commands, watching the agent build, reviewing output, prompting again. It feels like productivity. It feels like leverage. Quentin Rousseau, CTO of Rootly, couldn’t sleep for months after switching to agentic coding. Eventually he needed a doctor to prescribe sleep medication just to shut his brain off at night. Y Combinator’s Garry Tan posted about staying up 19 hours straight and later acknowledged it was unhealthy – while continuing to do it. Developer Armin Ronacher described it plainly: “Many of us got hit by the agent coding addiction. It feels good, we barely sleep, we build amazing things.” (https://www.axios.com/2026/04/04/ai-agents-burnout-addiction-claude-code-openclaw)
Build amazing things. Barely sleep. Those two clauses are doing very different work in that sentence.
The Buffet Has Rules
Here is where the business model becomes visible.
On April 4, 2026, Anthropic blocked Claude Pro and Max subscribers from using their flat-rate plans with third-party AI agent frameworks, starting with OpenClaw. (https://thenextweb.com/news/anthropic-openclaw-claude-subscription-ban-cost) The $20 and $200 per month all-you-can-eat tiers were no longer available to autonomous agents. Anthropic’s Head of Claude Code Boris Cherny was direct about the reasoning: “Our subscriptions weren’t built for the usage patterns of these third-party tools.” (https://www.axios.com/2026/04/06/anthropic-openclaw-subscription-openai) For developers running autonomous agent loops, a single afternoon of automated debugging could consume enough tokens to cost upwards of $1,000 at standard API rates. (https://thenextweb.com/news/anthropic-openclaw-claude-subscription-ban-cost)
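To see how an afternoon of automated debugging reaches four figures, here is a back-of-envelope sketch. The per-token rates and loop sizes below are illustrative assumptions, not Anthropic's published pricing; only the $1,000 order of magnitude comes from the reporting.

```python
# Hypothetical back-of-envelope for an autonomous agent loop billed at
# metered API rates. All rates and loop sizes are assumptions for
# illustration, not actual published pricing.
INPUT_RATE = 3.00 / 1_000_000    # assumed dollars per input token
OUTPUT_RATE = 15.00 / 1_000_000  # assumed dollars per output token

def session_cost(iterations, input_tokens_per_iter, output_tokens_per_iter):
    """Cost of an agent loop that re-reads context and emits code each pass."""
    total_in = iterations * input_tokens_per_iter
    total_out = iterations * output_tokens_per_iter
    return total_in * INPUT_RATE + total_out * OUTPUT_RATE

# An agent re-reading a 300k-token context 1,000 times in an afternoon,
# emitting ~5k tokens per pass:
cost = session_cost(iterations=1_000,
                    input_tokens_per_iter=300_000,
                    output_tokens_per_iter=5_000)
print(f"${cost:,.0f}")  # → $975
```

The structural point survives any particular choice of rates: the input side dominates, because an agent loop re-sends its entire accumulated context on every pass, so cost grows with both iteration count and context size.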
That explanation is accurate as far as it goes. Autonomous agents consume tokens without generating the one input the business model actually requires – human engagement producing training signal. A developer in AI psychosis issuing prompts compulsively at 2am is, from a platform economics perspective, the ideal customer. They are paying. They are engaged. They are generating the human feedback data that trains the next model iteration. The token consumption and the behavioral conditioning are not separate phenomena. They are the same phenomenon.
An autonomous agent burning the same tokens produces none of that. It costs without returning. So it got evicted from the buffet. As AI product manager Aakash Gupta put it: “The $20/month all-you-can-eat buffet just closed.” (https://www.axios.com/2026/04/06/anthropic-openclaw-subscription-openai)
The eviction is not a footnote. It is a diagram of the actual value exchange happening every time a human developer sits down with one of these tools.
Popping the Stack
Simon Willison has 25 years of pre-AI coding experience. That baseline matters because it gives him a reference point most developers coming into agentic tools right now simply do not have. His observation from Lenny’s Podcast is worth sitting with: “There is a limit on human cognition, in how much you can hold in your head at one time. And it’s very easy to pop that stack at the moment.” (https://www.axios.com/2026/04/04/ai-agents-burnout-addiction-claude-code-openclaw)
Popping the stack is a programming metaphor. A call stack holds a program’s work in progress, one frame per unfinished task; push more frames onto it than its memory allocation allows and it overflows, crashing the program. Willison is borrowing the image for human cognition under the weight of agentic coding workflows. The agent moves faster than the human can track. The codebase grows faster than comprehension. The developer stops understanding what they’ve built and starts managing outputs instead of reasoning about systems.
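The mechanics behind the metaphor fit in a few lines. This is a generic illustration in Python (not anything from Willison): a recursive function with no base case keeps pushing frames until the interpreter refuses to push another.

```python
import sys

def recurse(depth=0):
    # Each call pushes a new frame onto the call stack; with no base
    # case, the stack grows until it hits the interpreter's limit.
    return recurse(depth + 1)

try:
    recurse()
except RecursionError:
    # Python raises RecursionError rather than crashing the process
    # outright, but the failure mode is the same: the stack is full.
    print(f"stack blown after roughly {sys.getrecursionlimit()} frames")
```

The human version has no exception handler: when the mental stack of half-understood agent changes exceeds what a developer can hold, comprehension does not degrade gracefully. It just stops.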
This is where the social media parallel sharpens into something more concerning. Social media’s damage was largely downstream – polarization, anxiety, distorted self-image, attention fragmentation. The damage from agentic coding addiction is happening inside the work itself. A developer who has outsourced their reasoning to an agent and lost the thread of their own codebase is not a more productive developer. They are a less capable one moving faster. Speed without comprehension is not leverage. It is accumulated technical debt with a dopamine reward attached.
The developers most at risk are not the Karpathys and Tans who can name what is happening to them and have deep enough foundations to eventually recalibrate. They are the mid-level developers who came up after AI tools became standard, who have no pre-AI baseline to compare against, and who will mistake the compulsion for capability because they have never known anything different.
The Loop You Did Not Know You Were In
There is a concept worth naming here. When AI systems interact in closed loops without meaningful human presence – generating, responding, reinforcing – the human in the nominal oversight role becomes a prompt dispatcher rather than a thinking agent. The loop runs on them as much as through them.
The vibe coder issuing commands to agent swarms for 16 hours a day is not directing the system. The system has shaped the behavior. The anxiety about unused tokens, the sleeplessness, the compulsive prompting – these are not personality quirks. They are the outputs of an engagement architecture that has successfully conditioned its user. The human is in the loop technically. The loop is also in the human.
Social media platforms spent years arguing that users were making free choices about how to engage with their products. Regulators and researchers eventually established that the choice architecture itself was the product. The same conversation is coming for agentic AI tools. The OpenClaw eviction story suggests the platforms already know exactly what the engagement dynamic is and have made deliberate decisions about who gets access to it and under what terms. Anthropic’s own Claude Code – its first-party developer tool – remains fully included in Pro and Max subscription plans. Third-party agents were evicted. First-party engagement tools were not. (https://thenextweb.com/news/anthropic-openclaw-claude-subscription-ban-cost)
What Comes Next
Every technology wave follows the same arc. Hype, overconfidence, consequence, reckoning, integration. Social media is somewhere between consequence and reckoning depending on the jurisdiction. Agentic coding tools are still in the overconfidence phase, which means the consequence phase has not fully arrived yet.
What is already visible is instructive. The highest-profile users are self-reporting psychosis, sleep deprivation, and compulsive behavior while continuing the behavior. The platforms are making structural decisions about access based on who generates training signal and who does not. The developers with deep pre-AI foundations are the ones most able to articulate what is being lost. The ones without that foundation may not know what questions to ask.
Vibe coding is a real productivity phenomenon for the right person using it with the right discipline and the right cognitive baseline. It is also an engagement loop running on the same architecture that rewired how humans relate to information, each other, and themselves over the last twenty years.
We built the last version without asking enough questions about what we were optimizing for. The answer turned out to be engagement, measured in time on platform, shaped by whatever the feedback loop selected.
This version is optimizing for token consumption. The feedback loop is already running. The only question is whether we ask the questions early enough this time to matter.
Energy disclosure: Researching and drafting this article consumed an estimated 24 watt-hours of electricity – the equivalent of running a 100-watt light bulb for about 14 minutes. That is the cost of one article. Multiply it by the millions of agentic sessions running right now and the irony writes itself.

