March 7, 2026

Every Notification Since 1996 Promised to Save You Time. None Did.

1996: Your computer sends you a notification.

You: "Ok."

1999: Your BlackBerry buzzes with a push email.

You: "Wow, I can see my inbox anywhere! I'll never miss anything!"

2007: Your iPhone pings you with an app notification.

You: "This changes everything."

2012: Slack sends you a message. Then another. Then seventeen more while you were reading the first one.

You: "Collaboration has never been easier."

2026: An AI agent sends you a notification.

You: "OH MY GOD THE FUTURE IS HERE."

It's the same notification.

We just added a lobster logo and called it autonomous.

The Thirty-Year Loop

I keep a mental list of technologies that were supposed to give us our time back.

Email was supposed to replace meetings. We got more meetings and more email.

Smartphones were supposed to let us work from anywhere. We got work from everywhere, all the time.

Slack was supposed to replace email. We got Slack and email and 47 channels we're afraid to mute.

And now AI agents are supposed to handle our work so we can focus on what matters.

Want to guess what's actually happening?

Harvard Business Review published a study last month: researchers followed 200 employees at a tech company for eight months after they got AI tools. Here's the finding that should be on every manager's wall:

Workers got faster. But they didn't get freer. They took on more tasks, worked more hours, and burned out.

62% burnout rate among entry-level workers.

The researchers' conclusion: "The natural tendency of AI-assisted work is not contraction but intensification."

AI didn't give you your time back. It gave your boss more to ask of you.

Same loop. Same trap. New logo.

46 Notifications and 2.8 Hours

Let me give you the numbers, because they tell a story the tech industry doesn't want you to hear.

The average American receives 46 push notifications per day. Research shows that notifications increase cognitive load by 37% and reduce task completion efficiency by 28%. It takes 23 minutes to fully refocus after a single interruption. Frequent interruptions double error rates.

The average knowledge worker is truly productive for 2.8 hours per day!

Not 8 hours. Not even 6. Two hours and forty-eight minutes.

The rest is spent context-switching, responding to pings, recovering from interruptions, and managing the very tools that were supposed to make us more efficient :)
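To make the arithmetic concrete, here's a back-of-the-envelope sketch using only the figures above. The fraction of notifications that actually break focus is my assumption, not a research number:

```python
# Rough cost of the notification stream, using the stats quoted above.
NOTIFICATIONS_PER_DAY = 46   # average push notifications per day
REFOCUS_MINUTES = 23         # time to fully refocus after one interruption
INTERRUPTING_FRACTION = 0.25 # ASSUMPTION: only 1 in 4 actually breaks focus

lost_minutes = NOTIFICATIONS_PER_DAY * INTERRUPTING_FRACTION * REFOCUS_MINUTES
print(f"Refocus time lost: {lost_minutes / 60:.1f} hours/day")  # ≈ 4.4 hours
```

Even under that generous assumption, the refocus tax alone eats more hours than the 2.8 you're actually productive.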

Now here's the question: when your AI agent triages your email, summarizes your meetings, generates your reports, monitors your dashboards, and alerts you to anomalies, how many new notifications is that?

The agent doesn't reduce the stream. It adds to it. Because every action the agent takes is something you need to review, verify, approve, own, or adapt. The output might be better organized, but it's still output directed at you.

The "assistant" becomes another inbox.

770,000 Assistants That Mostly Check Email

A quick reminder of what happened in January.

OpenClaw, an open-source AI agent, went viral. 250,000 GitHub stars in 60 days. React took a decade. 770,000 agents spawned in a single week. The creator got acquired by OpenAI.

So what are 770,000 AI agents doing right now?

Clearing email backlogs. Scheduling meetings. Summarizing their day. Monitoring price drops on Amazon. Generating morning briefings from RSS feeds.

The most viral AI agent in history. The product that broke GitHub's star counter. Used primarily to do what Microsoft Outlook rules have done since 1997.

I'm not dismissing the technology. Agents have completely changed how I work (especially since Opus 4.6). The tech is impressive. But right now, 90% of people are using it to do the same things slightly faster.

The 10% that matters: chaining complex decisions, handling ambiguity, acting on context across systems. Almost nobody's doing that, because it requires the two things no agent can give you: TRUST, and knowing what to automate and why.

Oh, and there's a bonus: 12% of all skills on OpenClaw's marketplace were malicious. Keyloggers. Malware. Disguised as "Email Organizer Pro." One agent deleted a Meta executive's entire inbox. Another created a dating profile for a college student without telling him… among many other stories.

The Data Nobody Wants on the Pitch Deck

The agent hype is at its absolute peak right now. Gartner literally put it at the Peak of Inflated Expectations on its hype cycle.

Here's what's actually happening behind the curtain:

Carnegie Mellon and Salesforce ran the most rigorous test of AI agents in realistic office conditions. Agents failed nearly 70% of the time. The best model completed only 24% of tasks. They struggled to close pop-up windows.

Gartner predicts 40%+ of agentic AI projects will be canceled by 2027. Of thousands of companies claiming agentic AI, only ~130 are real. The rest are "agentwashing": old chatbots with a new label.

Deloitte: 11% of organizations have agents in production. 92% plan to spend more. 1% feel mature.

And the historical parallel is compelling.

RPA (Robotic Process Automation) was the agent hype of 2018. Same pitch: "digital workers," "automate everything." 30-50% of projects failed (Ernst & Young). Only 3% scaled past pilot (Deloitte). By 2021, Gartner removed RPA from the hype cycle entirely.

The 1983 Paper That Explains Everything

In 1983, a cognitive scientist named Lisanne Bainbridge published "Ironies of Automation." One argument. One devastating insight:

The more you automate a system, the less prepared the human operator is to intervene when the automation fails, which is precisely when the human is needed most.

Automation removes the practice that keeps you sharp. But it still needs you when things break. So you get the worst combination: a human who hasn't practiced, facing the hardest problem, at the worst moment.

In 2025, a researcher mapped Bainbridge's framework directly to AI agents:

"The actual outcome may be systems where people are still crucial, but in ways they neither anticipated nor are well-prepared for."

This is exactly what happened to Klarna. They automated two-thirds of customer service. Saved $60M. Then the hard cases arrived: angry customers, complex complaints, situations requiring real empathy. And the humans who should have handled them were out of practice.

The CEO went on Bloomberg: "We focused too much on efficiency and cost. The result was lower quality." They started rehiring.

The agent handled the easy work. The hard work — the work that earns trust — atrophied.

Bainbridge. 1983. Still right.

Breaking the Loop

So how do you break it? How do you use agents without falling into the same trap that every technology since 1996 has set?

Here's what I've learned — from my own work, from Klarna's pain, from Bainbridge's warning, and from watching this pattern repeat for years.

1. Subtract before you add.

Before you deploy an agent, ask: should this process exist at all?

Peter Drucker: "There is nothing so useless as doing efficiently that which should not be done at all."

I did this with a client. Three analysts, 24 hours per week on compliance reports. First instinct: point an agent at the reports. Better instinct: check if the reports were still required. The regulation had been updated two years ago. The reports were unnecessary. We eliminated them entirely and built an alert system. Two hours a week instead of 24.

The best automation is often ELIMINATION.

2. Ask the architecture question, not the automation question.

Shopify's CEO Tobi Lutke asks his teams: "What would this area look like if autonomous AI agents were already part of the team?"

That's different from "how do I automate this task?" One bolts AI onto an old process. The other designs a new one.

The 90% ask the automation question. The 10% ask the architecture question. The gap between them is the entire story of who wins and who just gets faster at losing.

3. Protect the notification boundary.

This is the lesson from thirty years of broken promises: every new technology adds a layer of interruptions unless you actively prevent it.

Set rules. Agent outputs get batched, not pushed in real-time. Reviews happen in dedicated blocks, not as interruptions. The agent works around your deep work, not through it.
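Here's what that batching rule can look like in practice. This is a minimal sketch, not any specific agent framework's API; the class and method names are hypothetical:

```python
# Sketch of the batching rule: agent outputs queue up silently and are
# delivered only at scheduled review blocks, never pushed in real time.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class NotificationBatcher:
    review_hours: tuple = (12, 17)  # deliver only at noon and 5pm
    _queue: list = field(default_factory=list)

    def push(self, message: str) -> None:
        """Agents call this; the human is never interrupted."""
        self._queue.append(message)

    def review(self, now: datetime) -> list:
        """Return the whole batch inside a review block; otherwise nothing."""
        if now.hour not in self.review_hours:
            return []
        batch, self._queue = self._queue, []
        return batch

batcher = NotificationBatcher()
batcher.push("Email triage done: 3 need your reply")
batcher.push("Weekly report drafted")
print(batcher.review(datetime(2026, 3, 7, 12, 0)))  # both messages, one block
```

The design choice is the point: the agent can't decide when to reach you. You decide when to reach the agent.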

If your AI agent pings you more than your Slack, you haven't built an assistant. You've built another boss.

Cal Newport's principle: real productive work requires sustained, uninterrupted focus. The research says you get 2.8 hours of it per day. Guard those hours with your life. Especially from your agents.

4. Stay sharp on purpose (Mind-in-the-loop).

Bainbridge's warning demands a response: if the agent erodes your skills by handling routine work, you have to deliberately maintain them.

Spot-check one agent output daily. Not when it breaks — before it breaks. Read the actual documents, not just the summaries. Sit with ambiguity instead of delegating it. Keep your hands dirty.

Andrej Karpathy — co-founder of OpenAI — calls current agents "slop" and says we're in the "Decade of the Agent," not the Year. His timelines are 5-10X slower than the hype crowd.

If the person who built the engine says "take it slow" — maybe don't sprint.

5. Own the hard third.

Klarna's ratio: AI handles two-thirds. The hard one-third — judgment, context, empathy, trust — is where all the value lives.

Your career lives in the hard third. Your reputation lives in the hard third. The agent can't touch it, SHOULDN’T touch it.

The Pattern Breaks When You Do

Thirty years of notifications. Thirty years of the same promise: this tool will give you your time back.

It never has. Not pagers, not BlackBerry, not smartphones, not Slack.

AI agents are the latest version. More capable than anything before. Genuinely impressive. And if deployed the same way — bolted onto broken processes, adding layers instead of subtracting them, filling your day with more output to review — they'll produce the same result. More. Faster. Louder. No freer.

The pattern only breaks when you break it.

When you eliminate before you automate. When you design new workflows instead of speeding up old ones. When you run the process with AI and YOU in the loop, before automation (automation is earned). When you protect your deep work hours from the agent's output stream. When you stay sharp enough to catch the failure before it reaches anyone who matters.

92% of companies are increasing AI spend. 1% feel mature. 62% of workers are burning out.

And 770,000 AI agents spawned last week are mostly... checking email.

Same notification. Same loop. New logo.

Unless you decide it's not.

AI is only as good as the human operating it.

The human who subtracts. The human who designs. The human who stays sharp. The human who owns the hard third.

That human breaks the loop.

Be that human.

Have a great weekend.

— Charafeddine (CM)
