January 31, 2026

AI is committing suicide.

I want you to imagine a chef who has never tasted a real strawberry.

All he has is a recipe book written by another guy who also never tasted one. He follows the text and bakes a "strawberry" cake. Then a third chef comes along, tastes that gray sponge, and writes a new recipe based on the second chef's description.

By the time you get to the 10th chef, do you think that "strawberry cake" has anything to do with the original fruit? Of course not.

This is exactly what is happening to the internet in 2026.

AI understands the world ONLY through our experience-based descriptions of it. But we’ve stopped describing. We’ve started "generating." When learning stops being grounded in human reality, the model collapses.

Welcome to the era of the self-licking ice cream cone.

The Irony of "Velocity Theater"

The irony: AI learned everything it knows from us—our messy blogs, unhinged forum threads, and scientific breakthroughs. It compressed the "Human Distribution" into a math model.

The “AI”’s model of the world = our digitized DESCRIPTION of the world (mainly in English, with Western values…).

But now, the proportion has flipped. Because of the "SEO game" and the pressure for "infinite content," we are flooding the web with AI-made junk at a breakneck pace. Experts now estimate that up to 90% of online content could be AI-generated very soon.

Future AIs aren’t learning from humans (experience of the world) anymore. They are learning from… themselves (the “average description” of the world).

Verdict: This is Model Autophagy Disorder (MAD). The snake is eating its own tail, and it’s starting to taste like static.

Why "More" Leads to "Mid" (The Science of Decay)

In the AI OS world, we call this the Recursive Death Loop. When a model trains on its own synthetic haze instead of raw human reality, it doesn't get smarter. It gets "blurry."
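
To make the blur concrete, here is a minimal toy simulation (my own construction, not from any cited study): a "language model" reduced to a categorical distribution over a vocabulary, retrained each generation only on text sampled from the previous generation. Any word rare enough to draw zero samples gets probability zero and can never come back.

```python
import numpy as np

rng = np.random.default_rng(42)

VOCAB = 5_000        # distinct "words" available to generation 0 (the human data)
CORPUS = 50_000      # tokens of training text produced per generation
GENERATIONS = 10

# Generation 0: a long-tailed, Zipf-like word distribution, i.e. real diversity.
probs = 1.0 / np.arange(1, VOCAB + 1)
probs /= probs.sum()

for gen in range(GENERATIONS):
    alive = int((probs > 0).sum())
    print(f"generation {gen}: {alive} words still carry probability")
    # The next "model" is trained purely on synthetic text from the current one.
    tokens = rng.choice(VOCAB, size=CORPUS, p=probs)
    counts = np.bincount(tokens, minlength=VOCAB)
    probs = counts / counts.sum()
```

Each pass the tail thins out and never grows back. Swap the categorical model for a trillion-parameter transformer and the arithmetic is gentler, but the direction is the same.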

A 2026 research preprint in the medical field showed the stakes aren't just aesthetic; they’re terminal. When medical models were trained on synthetic data:

  • Rare but critical diagnoses vanished (Pneumothorax and effusions simply "disappeared" from the model’s world).
  • Accuracy collapsed while "Confidence" spiked. The AI became 44 times more likely to issue a "reassuring" report that was factually wrong.
  • Linguistic diversity eroded by 98%, shrinking a rich clinical vocabulary down to about 200 generic words.

AI is eating the internet—and then starving to death.

AI is committing suicide…a dumb suicide.

(Here I'm conflating LLMs with AI; AI is much broader than LLMs.)

The Only Way Out: From Predictors to Explorers

If we want to stop the suicide of LLMs, we have to address the fundamental flaw: LLMs have no "skin in the game." They don't learn like us because they don't care about the world; they only care about the next token.

We must move from Pattern Recognition to Active Curiosity.

The Myth of the "Sensed" Model

The common hype is that putting an LLM inside a robot with cameras (Embodiment) solves the problem. It doesn’t. An LLM with a camera is just a parrot with a window. It can describe the "strawberry" more accurately, but it still hasn't tasted it. It’s still just predicting the most likely description of a strawberry based on what it sees.

Why "Curiosity" is the (Huge) Missing Piece

True intelligence requires Epistemic Agency—the ability to realize, "This is INTERESTING. I don't know this, and I need to find out." Today's AI is a closed system. It is never surprised. It never wonders. It just calculates.

To break the recursive loop of Model Collapse, our AI systems need to transition into Active Learners through three radical shifts:

  1. Experimental Verification (Trial and Error): Intelligence isn't born from reading the manual; it’s born from breaking the machine. A curious AI doesn't just guess what happens if you raise prices; it proposes a micro-experiment, observes the real-world result, and updates its "World Model" based on the failure. Failure is the only "uncontaminated" data left.
  2. The "Surprise" Metric: Instead of optimizing for "Likelihood" (predicting the average), a curious system must optimize for Information Gain. It should actively seek out "Edge Cases"—the weird, the rare, and the contradictory. In a world of synthetic noise, the most valuable data is the data that proves the model wrong (a minimal sketch of this follows the list).
  3. Intrinsic Motivation: We have to stop "prompting" AI and start "goading" it. A survival-capable AI needs an internal drive to resolve uncertainty. It shouldn't wait for your question; it should be constantly auditing the gap between its predictions and reality.
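
Here is a minimal sketch of point 2 (a toy setup of my own, not a blueprint from any lab): a learner keeps Beta posteriors over a handful of unknown "facts" and always runs its next micro-experiment wherever its posterior variance, a crude proxy for expected information gain, is largest.

```python
import numpy as np

rng = np.random.default_rng(7)

true_bias = np.array([0.10, 0.50, 0.50, 0.90, 0.02])  # hidden ground truth
alpha = np.ones_like(true_bias)                        # Beta(1, 1) priors:
beta = np.ones_like(true_bias)                         # "I know nothing yet."

def posterior_variance(a, b):
    # Variance of a Beta(a, b) posterior: how unsure the learner still is.
    return a * b / ((a + b) ** 2 * (a + b + 1))

for step in range(200):
    # The "surprise" rule: experiment where uncertainty is highest,
    # not where the current model already predicts confidently.
    target = int(np.argmax(posterior_variance(alpha, beta)))
    outcome = rng.random() < true_bias[target]         # run the micro-experiment
    alpha[target] += outcome                           # update the world model
    beta[target] += 1 - outcome                        # (failures count too)

print("estimates:", np.round(alpha / (alpha + beta), 2))
print("truth:    ", true_bias)
```

A likelihood-chaser would keep querying whatever it already models best; this learner spends its budget on the facts it understands least, which is exactly the behavior the recursive loop above is missing.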

The current technology—presented as godlike AGI arriving in just a couple of years—has none of this.

The Bottom Line: Accountability > Autonomy

If “AI” isn't capable of being surprised, it isn't capable of learning from real-world experiences. When the internet dies, the LLMs die.

The "AI" of the future won't be the one with the most parameters; it will be the one with the most curiosity. We don't need models that agree with us or the internet. We need models that "touch grass," run experiments, and tell us the truth—especially when the truth contradicts the "synthetic haze" of the web.

The future is not about who has the best answers. It's about whose system is asking the best questions.

Stay grounded.

— Charafeddine (CM)
