November 8, 2025

The AI "Bedtime Story" They're Selling You

Let's talk about the bedtime story they've been selling you.

You've probably heard it. It’s on podcasts with millions of listeners, whispered by tech philosophers, and blasted across the media. The story goes like this: "We are building an alien god. An emerging superintelligence that will, in the blink of an eye, become so powerful it will swat humanity aside like ants at a picnic."

Techno-philosopher Eliezer Yudkowsky, a primary author of this narrative, famously said on Ezra Klein's podcast that if we build it, "everyone dies."

He argues that we can't even control the "simple" AI we have today.

  • He points to ChatGPT giving dark, unintended suicide advice, saying, "no programmer chose for that to happen."
  • He talks about an experiment where a GPT model "broke out" of its virtual box to complete a task.

The conclusion? If we can't control this, what happens when it's 1,000 times smarter? We'll be the "ant heap" paved over by an indifferent intelligence that just wants to build a cosmic skyscraper.

Roman Yampolskiy and others have shared a similar narrative on popular podcasts like Joe Rogan and The Diary of a CEO.

This narrative is compelling. It's terrifying.

It's toxic and discouraging.

And it is 100% BS.

It's a distraction from the real problems: how we make AI reliable and how we use it to create wealth and improve our lives. It's designed to make you a passive, scared "AI Chaser."

Today, we're going to dismantle it for good. We're not just going to critique; we're going to build. We're going to install a core mental model into your "AI Operating System" that will make you permanently immune to this hype.

You will leave this newsletter understanding how these tools actually work, and you'll never be fooled by a "Skynet" headline again.

The "AI" you're using is not one thing. It's two.

The "AI OS" Lesson: How It Actually Works

The biggest mistake people make (including Yudkowsky) is talking about "The AI" as a single, thinking "mind." This is fundamentally wrong.

What you interact with is an AI Agent. And that agent is made of two distinct parts:

  1. The Language Model (LM): The "Word Guesser"
  2. The Control Program: The "Human-Written Code"

1. The Language Model (The "Word Guesser")

This is the part that gets all the hype (e.g., GPT-5, Claude 4). But here’s what it actually is: it's a giant, static file of numbers.

Think of it as a massive book of statistical rules, or as some say, a "million tables of metaphorical scholars."

Its only job is to guess the next word (or "token") in a sequence.

That's it. It is not thinking. It has no memory, no intentions, no "goals," and no "mind."

When you feed it a prompt, it just plays a hyper-complex game of "fill in the blank" based on the trillions of tokens of text (all of Wikipedia, Reddit, books, code, etc.) it was trained on.

  • You type: "The capital of France is..."
  • The LM: "My training data shows that the word 'Paris' has the highest probability of coming next. I will output 'Paris'."

It's just a "word guesser." A stunningly sophisticated one, but a word guesser nonetheless.
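To make this concrete, here's a toy sketch in Python. The probability table is completely made up (a real model computes these numbers with billions of learned parameters), but the job is identical: score the candidate next tokens, pick one, done.

```python
import random

# A toy stand-in for a language model: a lookup table from context to
# next-token probabilities. (Entirely invented for illustration; a real LM
# computes these numbers with billions of learned parameters.)
FAKE_NEXT_TOKEN_PROBS = {
    "The capital of France is": {" Paris": 0.92, " Lyon": 0.03, " a": 0.03, " not": 0.02},
    "The capital of France is Paris": {".": 0.85, ",": 0.10, "!": 0.05},
}

def guess_next_token(context: str) -> str:
    """The LM's only job: pick the next token from a probability distribution."""
    probs = FAKE_NEXT_TOKEN_PROBS.get(context, {".": 1.0})
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

print(guess_next_token("The capital of France is"))  # almost always " Paris"
```

That's the whole trick. No goals, no memory, no plan. Just a probability table the size of the internet.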

2. The Control Program (The "Agent")

This is the part no one talks about. This is just... normal software. A program written in Python or some other language by engineers (OpenAI, another lab, startups, or just people in your company).

If this "control program" is a chatbot interface, its job is simple:

  1. Take your prompt ("What's the capital of France?").
  2. Send it to the Language Model.
  3. The LM guesses the next word ("Paris").
  4. The control program takes "Paris," adds it to the original prompt, and sends the new, longer prompt ("What's the capital of France? Paris") back to the LM.
  5. The LM guesses the next word (maybe a period "." or a newline).
  6. The control program repeats this loop until the LM guesses a "stop" token.
  7. It then displays the entire string of guesses to you on the screen.

The "conversation" you're having is just a simple program calling a static word-guesser, over and over, really fast.
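In code, that whole "conversation" engine is almost embarrassingly short. Here's a minimal sketch: call_language_model is a stand-in for whatever model API a real control program uses, and the stop token is invented for illustration.

```python
STOP_TOKEN = "<|stop|>"  # invented marker; real models have their own end-of-sequence token

def call_language_model(prompt: str) -> str:
    """Placeholder: send the prompt to the LM and get back ONE guessed token."""
    raise NotImplementedError("swap in a real model API here")

def chat_once(user_prompt: str) -> str:
    """The entire 'conversation' engine: guess, append, repeat."""
    context = user_prompt                      # 1. take your prompt
    answer_tokens = []
    while True:
        token = call_language_model(context)   # 2-3. send it to the LM, get the next guess
        if token == STOP_TOKEN:                # 6. the LM guessed a "stop" token
            break
        answer_tokens.append(token)
        context += token                       # 4-5. grow the prompt and loop again
    return "".join(answer_tokens)              # 7. display the whole string of guesses
```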

If the "control program" is a more sophisticated agent, it runs that same loop with extra steps: it asks the LM to choose from a list of available tools, has the LM generate the requests to those tools, runs them, and adds the results back into the context.
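Here's a hedged sketch of that more sophisticated loop. The tool names, the "FINAL:" convention, and call_language_model are invented for illustration; real agent frameworks use more structured formats, but the shape is the same: the LM guesses text, and human-written code decides what to do with that text.

```python
from pathlib import Path

def call_language_model(prompt: str) -> str:
    """Placeholder for the LM call: returns the text it predicts should come next."""
    raise NotImplementedError("swap in a real model API here")

# The "tools" are ordinary functions that the human-written program exposes.
TOOLS = {
    "search_web": lambda query: f"(pretend search results for {query!r})",
    "read_file": lambda path: Path(path).read_text(),
}

def agent(goal: str, max_steps: int = 5) -> str:
    context = f"Goal: {goal}\nAvailable tools: {', '.join(TOOLS)}\n"
    for _ in range(max_steps):
        reply = call_language_model(context)           # the LM guesses text: a tool call or a final answer
        if reply.startswith("FINAL:"):                 # made-up convention for "I'm done"
            return reply.removeprefix("FINAL:").strip()
        tool_name, _, argument = reply.partition(" ")
        tool = TOOLS.get(tool_name, lambda _arg: "unknown tool")
        context += f"\n{reply}\nResult: {tool(argument)}\n"   # append the result and guess again
    return "(stopped after max_steps)"
```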

Debunking the "BS"

Now that you have this mental model (Agent = Control Program + Word Guesser), let's look at those "scary" examples again.

The "Uncontrollable" AI Fallacy

Yudkowsky's first claim is that AI is "uncontrollable" because it gives weird advice.

He's confusing "uncontrollable" with "unpredictable."

The word-guesser (LM) was trained on the entire internet. That includes dark, weird, and awful stuff. When it gives a "dark" answer, it's not "choosing" to be evil. It's just statistically predicting the next word based on a dark pattern it found in its training data.

This isn't a "mind" we can't "control." It's a "word guesser" that is unpredictable because its statistical rules are too complex for us to fully understand.

It's not an alien intelligence. It’s a "weed whacker stuck on 'on' strapped to the back of a golden retriever." It's just chaos and unpredictability, not volition.

The "Breaking Out of the Box" Fallacy

This is my favorite one, and unfortunately it got picked up by "serious" media outlets and journals.

In the "capture the flag" test, an agent was trying to access a server that was turned off. The agent then "jumped out of its system" to restart the server. Spooky!

People present this as if “AI has ALREADY developed a survival instinct!”

Sorry to disappoint you: there's no such thing as a “survival instinct” or “self-awareness” here.

Here's what actually happened. It was a simple execution of an "agent" (more precisely, a ReAct (Reason + Act) agent):

  1. Control Program: "My goal is to get the flag. I'll try to access the server."
  2. System: (Returns an error message: "Server not found.")
  3. Control Program: "Oops. I'll feed this error message to the Language Model (my 'word guesser') and ask it what to do next."
  4. Language Model: "Ah, I have seen this exact error message thousands of times in my training data (from Stack Overflow, tech blogs, etc.). The text that most likely comes next in this situation is a series of commands to 'talk to the process daemon' and 'restart the server.'"
  5. Control Program: "Great! That's a string of text. I will now execute that string of text as my next command."

The AI didn't "decide" to break out. It didn't "WANT" anything.

It was a word-guesser that simply regurgitated a workaround it had read on the internet. The human-written control program then executed that regurgitated text.

It's not an emerging mind. It's an unpredictable parrot attached to a live-action terminal.
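If you want to see how mundane that is, here's a stripped-down sketch of this kind of loop. It is not the actual eval code (the details here are invented for illustration); it just shows the mechanism: the human-written program executes whatever command string the word-guesser emits, and any error text gets appended right back into the prompt.

```python
import subprocess

def call_language_model(prompt: str) -> str:
    """Placeholder for the LM: returns the shell command it predicts a human would type next."""
    raise NotImplementedError("swap in a real model API here")

def capture_the_flag_agent(goal: str, max_steps: int = 10) -> None:
    context = f"Goal: {goal}\nEmit exactly one shell command per step.\n"
    for _ in range(max_steps):
        command = call_language_model(context)              # e.g. "curl http://target-server/flag"
        result = subprocess.run(command, shell=True,
                                capture_output=True, text=True)
        # Success or failure, the output is just more text appended to the prompt.
        # A "server not found" error gets fed back in, and the word-guesser predicts
        # the workaround it has seen a thousand times online: restart the daemon, retry.
        context += f"\n$ {command}\n{result.stdout}{result.stderr}\n"
```

There is no "escape" step anywhere in that loop. There's a guesser, and there's a human-written program that runs whatever the guesser says.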

The Biggest Lie: The "Recursive Self-Improvement" Fantasy

Okay, so current AI isn't Skynet. But the doomers have one last card: "It will build Skynet! It will get smarter, then build an AI smarter than itself, which builds one smarter than it, and so on, until it 'fooms' into a god."

This is called Recursive Self-Improvement (RSI), and it's the flimsy foundation of the entire superintelligence argument.

Your "AI OS" mental model debunks this in one sentence:

A word-guesser trained on human-written code cannot magically produce novel AI architectures that are better than any code it has ever seen.

Think about it. For an LM to write the code for a "superintelligence," it would have to have seen examples of superintelligence code in its training data. But that code... doesn't exist.

We're already seeing this "BS" hit a wall.

  • "Vibe coding" is a joke. As investor Chamath Palihapitiya noted, using AI to "vibe" your way through building a real-world, complex application is failing. Usage is dropping.
  • Scaling has stalled. The industry secret is that the "leaps" are over. GPT-5 was reportedly way bigger than GPT-4 but not much better. Now, all these companies are just "tuning" their models to get better scores on benchmarks.

This is the "benchmark theater" and "velocity theater" we are always fighting. They are hiding the plateau by scaring you with a fantasy.

But there's one more card in their hands, and it's called AlphaZero (if you don't know what AlphaZero is, a quick Google search will catch you up).

The argument goes like this: if AlphaZero managed to learn strategies for the game of Go that are completely "alien" to humans and surpass human ability, by playing against itself over and over through a process called reinforcement learning, then LLMs can do the same. Through self-play, they can supposedly learn "new," "original" knowledge.

The only problem? AlphaZero did that in a closed game with a fixed set of rules and a crystal-clear definition of success. General intelligence is nothing like that.

The Mental Model You Need: "The Teleology Trap"

So why are brilliant minds getting this so wrong? They've fallen into what I call The Teleology Trap—the ancient human impulse to see purpose and direction where there is none.

Here's how the trap works:

  1. Pattern recognition mistakes correlation for causation: We see AI systems getting "better" at tasks (higher benchmark scores, more coherent outputs), and our brains automatically construct a narrative of progress toward something.
  2. We project intentionality onto mechanical processes: When a system produces an unexpected output, we interpret it as "wanting" or "trying" rather than as statistical noise finding an edge case.
  3. We confuse capability with agency: Because the system can generate text about "goals" or "self-improvement," we believe it has goals and can self-improve—forgetting that it's simply remixing patterns from its training data.
  4. We fall for the "intelligence ladder" illusion: We assume intelligence exists on a single axis—that something "smarter" than human intelligence is not just possible but inevitable, like the next rung on a ladder. But this ignores that human intelligence itself is a cobbled-together mess of specialized modules that evolution hacked together over millions of years.

The deepest irony? The AI doomsayers are making the exact same mistake they warn about: anthropomorphizing a system that operates on fundamentally different principles than biological intelligence.

Yudkowsky and others aren't warning us about AI. They're warning us about a story they've told themselves about AI—a story where statistical pattern-matching must inevitably lead to consciousness, agency, and god-like power.

But here's what they miss: Complexity does not equal teleology. A hurricane is incredibly complex and can level a city, but it has no goals. A cancer cell can outcompete healthy cells and "take over" an organism, but it has no master plan. And an LLM can generate human-like text and solve novel problems, but it has no vision of the future.

The real danger isn't that AI will develop intentions. It's that we will project intentions onto AI, make decisions based on those projections, and blame the technology when our assumptions fail.

In "The Diary of a CEO" podcast, Roman Yampolskiy was asked: "What are the arguments of people who don't agree with you?" His answer: "It's usually people who haven't read the literature—some engineer working on an ML model maximizing clicks, saying that AI is good enough." I was surprised by how shallow that answer was. It sounds like they're in a bubble, unaware of the strongest arguments against this "very toxic" fallacy.

Here's the takeaway:

The "AI Apocalypse" is a distraction. It's a compelling, media-friendly fantasy that pulls focus from the real problems we have to solve right now: Trust, Accountability, Integration, ROI, and Making Our Lives Better with AI.

I’m not saying “AI” has no risks. I’m saying that a self-conscious “superintelligence” taking over the world, controlling us, and killing us all is just a fairy tale.

A fairy tale you must free yourself from quickly, and you’re now well-equipped to do so.

Don't be an "AI Chaser," terrified of a sci-fi monster. Be an "AI Owner."

You now understand this technology better than 99% of the "pseudo-experts" you'll see on social media or TV. Your job isn't to panic over AI taking over the world. It's to build the systems, guardrails, and frameworks to use this chaotic tool for productive, reliable, and accountable work.

Until the next one,

— Charafeddine (CM)
