The “Year of Agents” Was a Lie
My friend,
First of all, I want to wish you a wonderful 2026. I hope this year brings you more health, more wealth, and—most importantly—more time to do the things that actually light you up.
You might have noticed I was quiet last week.
I completely unplugged. I spent the last few days of 2025 doing absolutely nothing. And honestly? I resisted the "creator hypocrisy" of sending you an email just to keep up a streak. If I don’t have signal, I’m not going to send you noise. We have enough of that.
But as the new year kicks off, everyone expects the standard "2026 Predictions" post.
Usually, these are lists of tools to download or hype about how AGI is coming next Tuesday.
I want to do something different. I want to look at 2025 with brutal honesty—stripping away the marketing fluff—and use "first principles reasoning" to figure out where the real leverage is for us this year.
Because if you look closely, the narrative has shifted.
The "Velocity Theater" of 2025
Let’s look at the data. If you were scrolling LinkedIn (or X) in 2025, it felt like the world was ending every week.
Here is just a fraction of the "groundbreaking" releases we saw last year:
- Jan: DeepSeek-R1, GPT o3-mini
- Feb: GPT-4.5 (The "Wait, that’s it?" release)
- Mar: Manus AI, Gemini 2.5
- May: Claude 4 (Sonnet & Opus), Veo 3
- Aug: GPT-5 (The one we waited forever for)
- Nov: Gemini 3 Pro, Grok 4.1, Claude Opus 4.5
- Dec: GPT-5.2, DeepSeek-V3.2
On paper? A new benchmark record.
In reality? Diminishing returns.
Let’s be honest with each other. The last time you truly felt a "step-change" in your daily workflow was probably around April or May of 2025 (Gemini 2.5 / Claude 4 era).
Everything after that was "Benchmark Theater."
Models scored higher on obscure math tests, but for you and me—trying to run businesses and solve actual problems—the experience didn't fundamentally change. We hit a plateau where "good enough" became the standard.
And that brings me to the biggest elephant in the room.
The "Year of Agents" Lie
2025 was supposed to be the "Year of Agents." Remember?
We were promised autonomous employees. We were told we’d fire up a terminal, give a vague command like "build me a marketing business," and go to the beach.
It didn’t happen.
Sure, we have "agents" in the technical sense. We have coding tools like CLINE and RooCode (which are excellent). But those aren't autonomous agents; they are highly capable chatbots with tools that need babysitting — a lot of babysitting.
Truly autonomous agents never deployed at scale in the real economy. Why?
It wasn't a capability issue. It was a Trust issue.
Would you give an intern who hallucinates 5% of the time access to your bank account without supervision? Would you let them email your biggest client without you proofreading it?
Of course not.
If this sounds harsh, it shouldn't be new; I've written many letters about it, and you can go back and read them.
The "BS" narrative was that Autonomy > Everything.
The "AI OS" reality is that Accountability > Autonomy.
Until we solve the "Trust Problem"—how to reliably diagnose, monitor, and guardrail these systems—agents are just cool toys that you can't actually employ.
The Broken Internet
While agents struggled, LLMs succeeded at one thing: Breaking the web.
In 2025, the cost of generating content dropped to zero. The result? The internet was flooded with "synthetic slop."
- Google Search became a minefield of AI-generated SEO bait.
- "Truth" became a probability distribution.
- Hallucinations didn't go away.
Hallucinations aren't bugs; they are features.
LLMs don't know the truth. They know probability. Asking an LLM for a factual guarantee is like asking a poet for a rigorous math proof. They fill in the gaps with what sounds right, not what is right.
(You can make them 100% predictable by changing the sampling method, always picking the single most likely next token, but that makes them 100× less interesting.)
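That trade-off is easy to see with a toy example. The sketch below uses a made-up next-token distribution (the probabilities are illustrative, not from any real model) to contrast greedy decoding, which is deterministic, with temperature sampling, which fills the gap with whatever *sounds* right:

```python
import random

# Toy next-token distribution: to an LLM, "truth" is just probability mass.
NEXT_TOKEN_PROBS = {"right": 0.5, "plausible": 0.3, "wrong": 0.2}

def greedy(probs: dict) -> str:
    """Greedy decoding: always pick the most likely token. Predictable, boring."""
    return max(probs, key=probs.get)

def sample(probs: dict, temperature: float = 1.0) -> str:
    """Temperature sampling: interesting, but non-deterministic."""
    tokens = list(probs)
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(tokens, weights=weights, k=1)[0]

# Greedy gives the same answer every single time...
print({greedy(NEXT_TOKEN_PROBS) for _ in range(100)})   # one answer
# ...sampling gives you variety, and occasionally "wrong".
print({sample(NEXT_TOKEN_PROBS) for _ in range(100)})   # several answers
```

Turn the temperature down and the outputs collapse toward the greedy answer; turn it up and "wrong" shows up more often. Hallucination and creativity come from the same dial.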
The economics are starting to crack, too. Training GPT-5 costs hundreds of millions. Serving it to millions of users costs even more over time. Yet the average user expects it for free (or for $20/month).
Investors are getting nervous. The "Unit Economics of Intelligence" aren't adding up yet.
The AI economics are broken; no one is making money out of this…
The 2026 Prediction: The "Trust" Market
So, if text models are plateauing and the internet is noisy, where is the opportunity?
I am placing a massive bet on AI Trust.
I believe "AI Trust" will be a trillion-dollar market, bigger than the models themselves.
As AI continues to disrupt traditional roles, it will create a massive vacuum for a new type of system and professional. We don't need more "Prompt Engineers." We need AI Auditors, Challengers, and Owners.
For example, we need people who can:
- Diagnose system behavior & challenge AI.
- Monitor drift and hallucinations.
- Architect the guardrails that allow businesses to actually use this tech without getting sued.
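To make the third point concrete, here is a minimal sketch of what a guardrail looks like in practice: a deterministic check that sits between the model's draft and the real world, and routes anything suspicious to a human instead of sending it. The banned phrases and the `guardrail` function are illustrative assumptions, not any particular vendor's API:

```python
# A "trust layer" in miniature: verify before acting, never auto-send.
# The policy below is a toy; real guardrails combine many such checks.

BANNED_CLAIMS = ("guaranteed returns", "risk-free", "100% accurate")

def guardrail(draft: str, max_len: int = 2000) -> tuple[bool, list[str]]:
    """Return (approved, reasons). Anything rejected goes to human review."""
    reasons = []
    if len(draft) > max_len:
        reasons.append("draft exceeds length limit")
    lowered = draft.lower()
    for phrase in BANNED_CLAIMS:
        if phrase in lowered:
            reasons.append(f"banned claim: {phrase!r}")
    return (len(reasons) == 0, reasons)

approved, reasons = guardrail("Our fund offers guaranteed returns!")
print(approved, reasons)  # flagged: a human reviews it, the agent does not send it
```

The point isn't the string matching; it's the architecture. The agent proposes, the guardrail disposes, and a human owns the final call. That's what "Accountability > Autonomy" means in code.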
This is the bottleneck. The companies that solve "Trust" will win 2026 (and beyond). If you want to future-proof your career, stop trying to learn every new tool that drops on Tuesday. Start learning how to evaluate, verify, and secure AI workflows.
The Creative Frontier: "Netflix at Home"
While I’m bearish on text generation making huge leaps this year, I am incredibly bullish on Video and Audio.
This is where the curve is still vertical.
In 2026, the "Hollywood Studio in a Laptop" isn't a pipe dream; it's a workflow. We are approaching a point where a single person with taste, agency, and an AI OS can create a Netflix-grade series from their living room.
But here is the catch: The model won't do it for you.
We are moving from "Prompt and Pray" to "Direct and Edit."
The winners won't be the people who type "Make me a cool movie." The winners will be the ones who understand story structure, who understand editing, and who use AI to execute their vision, layer by layer.
I'm truly excited about this, and I hope we reach that level of quality by the end of 2026. There are so many untold stories waiting to be brought to life.
The Verdict for 2026
Many people are screaming about Nvidia's "Blackwell" clusters and how they will save us.
They won't.
Better chips will let models process longer contexts, faster and cheaper, with more nuance. But for 99% of people, that increment won't be noticeable.
It won't solve the fundamental issues of Trust or Utility.
If you want to win in 2026, stop waiting for GPT-6.
Stop waiting for the "God Model" that does everything for you.
- Build Trust Systems: Be the person who makes AI reliable, safe, and accountable.
- Leverage High-End Media: Use the video/audio tools to tell stories that cut through the text-based noise.
It's about building an "AI Operating System" where you trust your tools, verify the output, and focus on the creative work that comes from your craziest ideas and subjective experience of life.
The "Magic" is over. The "Work" begins.
Let's build.
Have a great week,
— Charafeddine (CM)