August 2, 2025

Do you believe in “AGI”?

“Whenever I hear someone say, ‘AGI will soon be conscious and enslave us,’ my brain flashes a giant 404: Not Found.”

If you’ve ever felt the same jolt of confusion (or mild panic), you’re in good company. Many smart and famous people talk confidently about super-smart AI that will someday control humans.

I know why — because of the so-called "scaling laws".

Meanwhile, the rest of us are staring at a system that—let's be honest—predicts the next word really well, "simulates thinking" convincingly, solves problems, clicks buttons, and builds programs, yet behaves mostly like "a giant database with perfect communication skills" rather than a real brain.

That disconnect is the itch this newsletter scratches.

Let’s Name the Beast: What LLMs Actually Are

Large Language Models (LLMs)

Think: “giant libraries that talk back.”

  • Massive info sponge – They soak up absurd amounts of written human knowledge (though text is not the only way to store knowledge).
  • Pattern matchers – The system doesn't just store this information in silos; it connects it in ways that are "meaningful," "useful," and "coherent" (as inferred from how humans describe those qualities in language).
  • Parrots with style – Based on those patterns, the system can remix that vast information in a meaningful, coherent, and persuasive way to satisfy the user's request.

LLMs represent a “revolution” in information processing—storing vast knowledge and actively using it to solve problems, unlike traditional systems that merely search and retrieve data.
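To make the "predict the next word" framing concrete, here's a minimal sketch using Hugging Face's transformers library with the small GPT-2 model (my choice of model and prompt is purely illustrative):

```python
# Minimal next-word prediction demo (downloads the small GPT-2 model on first run).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The capital of France is"
result = generator(prompt, max_new_tokens=5, do_sample=False)
print(result[0]["generated_text"])

# The model doesn't "look up" an answer in a table; it emits whichever
# tokens are statistically most likely to follow the prompt.
```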

Agents

Think: “LLMs with a to-do list.”

  • An LLM at the core (essentially a "database that excels at communication")
  • It decides which tools to use by generating text
  • It issues code-based requests to those tools based on those decisions
  • It produces a final output from the results of those requests

That’s it—no hidden death-ray mode.
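In code, that loop is surprisingly mundane. Here's a toy sketch (the function names, the JSON action format, and the tools are hypothetical placeholders, not any real framework's API):

```python
import json

# Hypothetical toy tools; real agents wire these to APIs, browsers, shells, etc.
TOOLS = {
    "calculator": lambda expr: str(eval(expr)),  # toy only: never eval untrusted input
    "search": lambda query: f"(pretend search results for: {query!r})",
}

def call_llm(messages):
    """Placeholder for a real LLM API call that returns generated text."""
    raise NotImplementedError

def run_agent(task, max_steps=5):
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = call_llm(messages)               # 1. the LLM generates text...
        action = json.loads(reply)               # 2. ...naming a tool and its input
        if action["tool"] == "finish":
            return action["answer"]              # 4. final output for the user
        result = TOOLS[action["tool"]](action["input"])  # 3. code-based request
        messages.append({"role": "assistant", "content": reply})
        messages.append({"role": "user", "content": f"Tool result: {result}"})
    return "Step limit reached without an answer."
```

No death-ray anywhere in that loop: just text generation, a dictionary lookup, and a function call.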

Key take-away: LLMs aren’t magic brains. They’re a new, hyper-actionable way to structure and render human information.

The same is true for other AI modalities: image, sound, and video generation.

Experts in labs and Silicon Valley are excited about the General Superintelligence race. Even countries like the US and China are locked in a "compute," "chips," and "minerals" war to win the "AI race".

WHY? The rationale is simple: AI labs have observed a "scaling law" — as training scales up, these systems get steadily better, and the length of tasks they can complete (measured in the time it takes a human) reportedly doubles every 7 months.

(Of course, everything depends on how you measure these improvements—using the so-called benchmarks...)

Therefore, many argue it's just a matter of time and "available" compute to reach a "General Superintelligence" smarter than all humans combined (around 2030 according to Elon Musk).
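Taking the doubling claim at face value, the back-of-envelope arithmetic behind that optimism looks like this:

```python
# Back-of-envelope: if task horizon doubles every 7 months, how much growth
# does that imply between 2025 and 2030? (Assumes the trend simply holds,
# which is exactly what's in dispute.)
months = (2030 - 2025) * 12      # 60 months
doublings = months / 7           # ~8.6 doublings
print(f"{2 ** doublings:.0f}x")  # ~380x longer tasks by 2030
```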

But let's first define intelligence — in a very narrow (AI) sense, an "intelligent" system:

  • Has a persistent memory
  • Understands the physical world
  • Can plan and execute a plan

Current AI systems have none of these (we're only at the genesis of the third point).

This is just one definition among many. I think we all agree that intelligence is a far more complex concept — there are many forms of intelligence — emotional, social, behavioral...

So would anyone explain to me how "databases with excellent communication skills" (even with "agentic capabilities") could possibly surpass all humans combined across the entire spectrum of intelligence?

I have no doubt that those systems will "know" things and "answer questions" better than most or all humans combined. But this is not surprising (we already have such systems — a powerful calculator beats a room full of people racing through a complex equation).

Some people even talk about AI enslaving humanity and "conscious AI." This, in my opinion, is where we cross into science fiction or human hallucination.

I know what most people would say: "But systems are getting better, and today's technology is nowhere near what we'll have in a year."

Let's break down what AI is actually improving at.

Where the Bots Shine (and Flop)

| Getting "Exponentially" Better | Still Pretty Meh |
| --- | --- |
| Coding and debugging | Original poetry |
| Math proofs & structured logic | Fresh marketing angles |
| Clear-pattern images/videos | Deep philosophy |
| Puzzle games with rules | Inventing new scientific paradigms |

Everything on the left has a crisp, objective answer. Predict-the-next-word works great. Everything on the right is fuzzy, taste-driven, or open-ended—and the wheels fall off fast.

Example: Give an LLM 50,000 Sudoku puzzles—watch it nail Sudoku. Ask it to write the next Lord of the Rings or your marketing copy—you’ll be disappointed.

A Little Time-Travel Thought Experiment

Picture this:

  • Year: 1900
  • Training data: Every scrap of human writing up to that point
  • Hardware: Somehow as beefy as 2025 GPUs
  • Question: Would LLM-based “AGI” come up with general relativity or quantum mechanics for us?

My opinion: surely not. I can't prove it.

Silicon Valley can't prove that it would either.

Those breakthroughs required creative leaps, not merely bigger pattern matching. Predicting the next 1900-era word doesn’t hand you Einstein’s 1915 field equations.

My Contrarian Cheat-Sheet on AGI & Super-Intelligence

  1. Imminent AGI? Show me the receipts.
    • Current models bomb tests even slightly off their training set. Generalization is still narrow.
  2. Creativity remains stubbornly human.
    • Moving from Newton to quantum mechanics wasn’t just data compression—it was conceptual heresy.
    • Even today, the best AI-generated content comes from the collaboration between highly skilled creative professionals and these powerful AI tools. Ask yourself: who consistently produces the most impressive AI-generated images, videos, art, and software?
  3. “Super-intelligence” means beating all humans at all tasks.
    • We’re miles away — not with the current tech. (And that’s fine.)
  4. Silicon Valley tunnel vision.
    • Many folks over-index on one flavor of intelligence (logic) and ignore the others (emotional, social).
  5. Consciousness ≠ Side Effect.
    • Treating sentience as a rounding error feels more like faith (religion) than fact. Consciousness is a highly complex and difficult subject.
  6. Gödel’s Theorem.
    • Roger Penrose argues, invoking Gödel's incompleteness theorem, that humans can see truths an algorithm can't prove. That paradox still stands.

How These Models Actually Improve

When discussing AI improvement and surpassing human capabilities, people often point to AlphaGo as a prime example.

AlphaGo developed "original" knowledge and strategies for the game of Go that surpassed everything humans had invented up to that point. This breakthrough happened because AlphaGo played millions of games against itself (self-play), allowing these "original" patterns to emerge.

Could we apply this approach to LLMs?

In theory, yes.

But practically, how would this work? AlphaGo is an expert system specialized in one game with clear rules and outcomes. How could we implement a similar self-improvement mechanism for a "generalist" AI system operating across countless domains?

| Training Mode | Works Great For | Hits a Wall With |
| --- | --- | --- |
| Self-Play (AI vs. itself) | Chess, Go, coding puzzles, math | Novels, branding, existential dread |
| Human Data (learn from us) | Marketing copy, fashion tips, naming babies | Edge-case creativity unless we supply it |

Translation: closed games with a scoreboard? Self-play rockets to “super-human.” Subjective taste? Still needs lots of sweaty human examples—and therefore there’ll be no “originality”.
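To make the scoreboard point concrete, here's a toy sketch of the self-play recipe (the game, policy, and update rule are placeholders, not DeepMind's actual algorithm):

```python
import random

def play_game(policy_a, policy_b):
    """Placeholder: play one full game and return +1 if A wins, -1 if B wins."""
    return random.choice([+1, -1])

def improve(policy, game_records):
    """Placeholder: update the policy from self-play outcomes.
    (AlphaGo combined deep reinforcement learning with tree search here.)"""
    return policy

policy = {"weights": None}  # start from scratch, no human games needed
for generation in range(1000):
    records = [play_game(policy, policy) for _ in range(100)]
    policy = improve(policy, records)

# This loop only works because Go hands you an unambiguous +1/-1 scoreboard.
# There is no such scoreboard for "write a great novel" or "name my brand".
```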

This represents a fundamental trade-off between a "SuperIntelligent" and a "Generalist" system. I don't believe you can achieve both simultaneously.

Human + Machine = Magic

Here’s the most exciting part to me:

AI doesn’t replace us — it augments us.

The real future isn't robots taking our jobs. It's humans using these tools to do more, faster, and to make better, more beautiful, more exceptional things.

When I see what some "artists," "creators," and brilliant "engineers" are creating with AI, I'm speechless.

They're turning these new brushes and tools into expressions of human brilliance.

That's what I think the future of AI is — not a super-AGI enslaving humanity while everyone loses their job and lives on a "universal income".

I call it man-machine symbiosis.

The Bottom Line

AI isn't marching toward human enslavement; it's marching toward ultra-helpful tools and brushes. Use your intelligence to create exceptional, better "human-made" things.

Even now, AI cannot write this newsletter on its own.

When someone talks about AI taking over, just think of it as a fancy calculator waiting for you to press the buttons. Remember—humans are still in charge.

I think the biggest threat to "human civilization" is "human civilization" itself — over-using AI until we become less capable, not AI becoming smarter than us.

Have a great weekend.

Until the next one,

— Charafeddine
