October 25, 2025

Cognitive Atrophy (is this you?)

We all have a version of this.

Three months ago, a brilliant product manager walked into a leadership meeting with “his” strategy memo. It was tight, punchy, and… suspiciously perfect.

“Walk us through your logic,” the VP said.

Silence. Then the PM glanced at his laptop like it might whisper the answer.

“It’s… comprehensive,” he finally muttered.

Translation: ChatGPT wrote it. He couldn’t explain his own memo.

Nobody cared that the writing was clean. They cared that he didn’t own the thinking.

No conviction. No persuasion. No ownership. A waste—a costly waste of time. And wasting people's time is the worst thing you can do to your reputation.

That’s the risk we’re facing as humanity. Not robots taking our jobs (or killing us all), but us handing over our brains.

I shared a brilliant comedy sketch on social media a few days ago that parodies this perfectly. Check it out; I guarantee you'll laugh.

You probably also followed Deloitte's recent scandal over its use of AI…

I’m not saying we should stop using AI. That’s just a dumb statement.

There's obviously a sweet spot where you feel fulfilled, the output is 100% you, but ×100 better than what you could've done alone in the same time. Reaching that sweet spot is an art.

Picture this.

Tuesday, 7:12 a.m. "Big AI risk," you type. Cursor blinks like it's judging you. The easy move? Paste a prompt into ChatGPT and call it a day.

Instead, you scribble five bullets, write a scrappy first pass, articulate your own convictions, then ask AI to edit. One hour later it still sounds like you—just clearer. Your brain did the thinking. AI did the polishing. You leave the desk awake, not emptied.

That tiny choice is the whole game.

The Big Idea

The biggest AI risk isn’t automation.

It’s cognitive atrophy—outsourcing micro-decisions until our minds go soft.

Early evidence from MIT’s Media Lab had people write essays with and without ChatGPT while wearing EEGs. The ChatGPT group showed the weakest neural engagement and, across months, underperformed on neural, linguistic, and behavioral measures; many struggled to accurately quote their own writing later. In a swap session, “brain-first” writers who then used LLMs showed patterns closer to search (i.e., still engaged), while “LLM-first” folks switching back to brain-only showed under-engagement — a flip that hints habits matter. It’s a preprint, but it’s a clear signal.

Zoom out: this fits decades of cognitive offloading research. We remember where info lives more than the info itself — the classic “Google effect.” And yes, GPS reliance correlates with worse spatial memory during self-guided navigation.

Workplace angle: GenAI raises output, but can lower intrinsic motivation if you disengage. That’s not destiny; it’s design.

Key takeaway: Productivity ≠ learning. “Looks good” isn’t “lives in your head.”

Receipts (without the hysteria)

  • Lower brain engagement with AI help. In lab conditions, ChatGPT users showed the lowest EEG connectivity; brain-only was strongest. Over ~4 months, heavy AI users underperformed across neural/linguistic/behavioral metrics and reported the least ownership. Some couldn’t recall passages they’d “written.” (Again: preprint; small n.)
  • Cognitive offloading is real. When knowledge is retrievable, we store the directory, not the data.
  • Homogenization risk. AI prompts can boost individual creativity yet reduce variety across people’s outputs (social-dilemma effect).
  • Motivation trade-off. Across >3,500 participants, gen-AI collaboration increased output yet reduced intrinsic motivation on subsequent tasks without AI.

Sanity check: No proven permanent IQ drop. The risk is dependency, not doom. Used well, AI augments—especially for people who think first, then tool.

The literature says no IQ drop, and I accept that. But I'm still convinced the brain works like a muscle. If you don't use it—or use it differently—you lose brain power.

Many of you might be thinking—yeah, yeah, we know this stuff. It was obviously predictable. You don't use your brain, you lose the ability to use it. But what can we actually do about it? And how do you find that "sweet spot" I mentioned?

I don't have a perfect answer.

But I've spent years (at least since ChatGPT launched) training people in companies on how to use GenAI—people who kept telling me either "AI can't do my job" or "OMG I'm going to lose my job tomorrow."

I'm convinced AI will create more jobs than it will destroy… but that's another story.

I've built an operating system that I teach in companies and my bootcamps.

Here are a few core principles.

The 80/20 Rule: Stay in the loop (never outsource the first draft)

  • Think first, then prompt. Outline your angle before you ask.
  • Write a rough pass, then edit with AI. Keep the struggle that creates understanding. No struggle, no understanding. This was natural. Now it requires discipline.
  • Explain it back (no AI). If you can’t teach it (simply), you don’t own it.
  • Schedule "no-AI" reps. Short, sharp sets (see drills below). I know this might sound a bit bizarre now, but I believe we'll need to block "brain workout sessions" in our calendars within the next few years.
  • Audit dependence. List tasks you auto-delegate; reclaim one per week.

The "sweet spot" indicator I teach:

  • If your first draft is AI-generated, you glance at it, say "looks good," and ship it—you don't own it. (You're in the brain-atrophy zone.)
  • If you do everything from scratch, you're still riding a carousel horse while everyone else drives Ferraris.
  • If you've struggled to find sharp angles (with or without AI), challenged the AI, let AI challenge you back, added your perspective, and feel like you've done it ×10 faster—and you own it, and it's a perfect articulation of what you deeply believe—you're in the sweet spot.

Tiny dialogue

  • You: “But AI’s faster.”
  • Me: “Is fast helpful if you can’t defend it in a meeting?”
  • You: “Can’t I just edit the bot?”
  • Me: “Edit what you understand. Delete what you don’t until you do.”

Key takeaway: Struggle = signal. That’s where the learning lives.

Why this keeps happening (and what it steals)

  • Offload direction → lose integration. GPS helps, but your mental map atrophies (though you can get away with it).
  • Retrieval over retention. Easy access nudges memory to store pointers, not concepts.
  • Polish → sameness. AI raises average quality yet narrows collective novelty.

Net: You get speed now at the cost of ownership later—unless you design around it.

Okay, so what do we do? Build a system, not a tool habit

Since 2022, I’ve coached hundreds of managers, developers, lawyers, designers, and analysts on using AI at work. Tools change fast; prompts change faster. Hacks don’t cut it—you need a reliable system.

You need a system so you can stop thinking about how to use AI—what to say, how to prompt, how to make it push back, how to judge one draft when you could generate ten—and focus on thinking with AI. The system handles the mechanics.

When people switch to my Operating System principles, their ability, output quality, and confidence jump. Same for me and my team—less thrash, more results.

Everyone needs an AI Operating System to stop wasting energy on the how and focus on the what.

I built those core principles into LUMEN—a simple OS for using AI without losing your mind.

Here’s the big picture.

L — Layered Assistants

Build a bench of specialists (Email Polisher, Briefing Partner, Meeting Scribe). Stop re-explaining context to one “magic bot.”

  • Marketing: Audience Research / Offer Angles / CTA Refinement
  • Dev: Spec Clarifier / Edge-Case Hunter / Test-Writer
  • Why it works: Context lives with the assistant, so prompts stay task-focused and consistent.
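To make the idea concrete, here's a minimal sketch of a "bench of specialists" in Python. The assistant names and system prompts are illustrative, not prescriptive; the point is that standing context lives with each assistant, so the task prompt stays short.

```python
# Hypothetical sketch: each "assistant" is just a stored system prompt,
# so you never re-explain context to one "magic bot."
ASSISTANTS = {
    "email_polisher": (
        "You polish business emails. Keep my voice, tighten wording, "
        "and flag anything that sounds passive-aggressive."
    ),
    "edge_case_hunter": (
        "You review specs for a backend team. List missing edge cases, "
        "ambiguous requirements, and untested failure modes."
    ),
}

def build_messages(assistant: str, task: str) -> list[dict]:
    """Pair a specialist's standing context with a short, task-only prompt."""
    return [
        {"role": "system", "content": ASSISTANTS[assistant]},
        {"role": "user", "content": task},
    ]
```

Swap in whatever model client you use; the structure is what matters.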

U — Unified Prompt Library

Keep winners; retire the rest. Version prompts like code.

  • LandingPage_Hero_v3 (B2B, skeptical CFO tone)
  • Your library = recurring tasks → steps → a “killer prompt” per step.

M — Mind in the Loop

Human judgment is a feature, not a failsafe bolted on later.

  • Approve tone and numbers before anything ships.
  • Add a contrarian-check (“Where could this be wrong or missing?”).

E — Evaluation by Default (GRAIL Loop)

Generate → Rank with criteria → Aggregate best parts → Iterate → Launch.

  • Two passes cover 90%; run 5–10 when “must be perfect.”
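The loop above can be sketched in a few lines. This is a minimal illustration, assuming `generate` stands in for any model call and `score` encodes your ranking criteria; in practice, the "aggregate" step is you merging the best parts by hand, and "launch" only happens after human review.

```python
import random

def generate(prompt: str) -> str:
    """Placeholder for a real LLM call."""
    return f"draft for: {prompt} (seed {random.random():.3f})"

def grail(prompt: str, score, n_drafts: int = 3, passes: int = 2) -> str:
    """Run the GRAIL loop: Generate, Rank, Aggregate, Iterate, Launch."""
    best = generate(prompt)
    for _ in range(passes):                                    # Iterate
        drafts = [generate(prompt) for _ in range(n_drafts)]   # Generate
        ranked = sorted(drafts + [best], key=score, reverse=True)  # Rank
        best = ranked[0]   # Aggregate: in reality, merge best parts yourself
    return best            # Launch: ship only after your own review
```

Bump `passes` toward 5–10 for must-be-perfect work, exactly as the bullet suggests.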

N — Nucleus OS (Notion or equivalent)

One home for SOPs, assistants, prompts, evaluations, and “swap files.” Use my template as a start. (Hate Notion? Any organized text works.)

Key takeaway: Tools churn. Systems scale. Build your own.

My best work sessions happen when I execute my OS, skip endless prompt tweaking, and focus on evaluating AI's work, challenging AI and pushing it to challenge me.

Sources & further reading

  • MIT preprint on “Your Brain on ChatGPT” (cognitive debt).
  • Time Magazine coverage (June 2025).
  • UCL research: AI boosts individual creativity but reduces variety.
  • HBR: Gen AI raises productivity—and lowers motivation without good design.
  • Classic “Google effect” (2011): when info is available, we remember where, not what.
  • GPS & spatial memory (2020): heavier GPS use → worse internal maps.
  • Polytechnique Insights: cognitive atrophy risk with over-offloading.

Closing thought

You grow physically by doing hard workouts.

You grow mentally by solving hard problems—lifting more with your brain than you usually do.

AI is leverage for lifting more. Not for lifting less.

You need discipline, and the fastest path to discipline is a system.

Keep your mind in the loop, and you'll be the person who can write the memo and defend the logic without peeking at the screen.

People who can lift more (using AI), challenge AI, and own the work are the future of the workplace—not "autonomous agents."

Stay sharp,

— Charafeddine (CM)
