April 11, 2026

You Cannot Automate a Mess.

A friend texted me last week. He runs ops at a growing tech company.

"We just put 12 AI agents on our onboarding flow. It's a mess. Can you take a look?"

I asked one question before I said yes.

"Can you send me how the onboarding process works today? The written version. How it actually runs end to end."

Long pause.

"What do you mean by 'the written version'?"

That was the answer. I didn't need to look at the agents. I already knew what was wrong.

They hadn't deployed AI.

They had deployed AI on top of nothing.

The Call

I jumped on a video call with his team two days later. Six people. Too many browser tabs. That very specific energy of "we've been arguing about this for a month and nobody wants to say the real thing out loud."

The head of ops walked me through what they had built. Twelve agents. One to read signups. One to verify documents. One to provision accounts. One to send welcome sequences. One to route support questions. Three for exceptions. A few orchestrators on top coordinating the rest.

On paper, impressive. Keynote-ready.

"How is it going?"

She didn't answer. She shared her screen and opened the logs.

Welcome emails going to the wrong customers. Accounts getting provisioned twice. An exception handler quietly routing VIP signups into a folder nobody was reading. An agent calmly offering a discount the company had never approved, because nobody had told it the rules.

Three months of operation. A painful amount of refunds. And a long internal doc explaining why "the model isn't quite there yet."

I stopped her.

"The model is fine. The agents are fine. The problem isn't in your stack."

"Then where is it?"

I opened a blank Google Doc, shared it with the team, and said:

"Write me your current onboarding process. All of it. The decisions, the exceptions, the thresholds, who approves what, who handles the weird cases. I'll wait."

Silence.

A full minute of it.

"We don't really have that written down anywhere."

There it was. Diagnosis in one empty doc.

You Cannot Automate a Mess

Here is a sentence I keep repeating in client calls now, and every time I say it, someone in the room goes very quiet.

You cannot automate a mess. You can only run it faster, with more damage, at bigger scale.

This is the part nobody puts on a conference slide. Because it's embarrassing. Because it means admitting that your company, the one with the dashboard and the OKRs and the AI strategy deck, actually runs on a bunch of unwritten habits in the heads of four people who happened to be around when the system was set up.

And this isn't unusual. It's normal. I walk into teams every week, ask the same question, and get one of these answers:

"It's in our workflow tool but nobody has updated it."

"Sofia knows it by heart."

"We used to have one. It got out of sync."

"It's in a deck from 2022 somewhere."

"What do you mean, 'process document'?"

You know…organizational folklore.

Here's the thing you cannot see from the outside. When a human runs a messy, undocumented process, the mess is contained. People fill the gaps with judgment. They know the refund rule technically says 30 days but the real rule is "be flexible with long-time customers." They know never to auto-approve anything on a Friday because Sofia handles the weird cases and Sofia leaves early on Fridays. They know the CFO changed the discount limit last week in a Slack message nobody pinned.

Those thousand invisible adjustments are what keeps the business working. Not the workflow tool. Not the wiki. The unwritten knowledge in people's heads.

Now drop an AI agent into the middle of that.

The agent doesn't know about Sofia. The agent doesn't know about Friday. The agent doesn't know the CFO changed the rule last week. The agent reads the "official" process (which is either wrong or missing) and does exactly what you asked it to do.

Agents need CONTEXT. That is the whole discipline of context engineering.

If all an agent has is organizational folklore, it will run only the written part of it…

A few months later you have agents confidently executing a “process” that was never meant to be trusted in the first place. When it breaks, everyone blames the model. Because the model is the new thing. Because blaming the model is easier than admitting nobody ever sat down and wrote how the business actually works.
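To make the gap concrete, here is a minimal sketch in Python. Every name, rule, and threshold below is hypothetical, invented for illustration; the point is only the shape of the problem: an agent following the "official" 30-day refund rule loses the folklore, while an agent given the documented version of the same rule keeps it.

```python
# Hypothetical refund decision, for illustration only.
# The "official" process says 30 days. The real process lives in
# people's heads until someone writes it down.

OFFICIAL_RULES = {"refund_window_days": 30}

# The same rule after the documentation work: the unwritten judgment
# calls are now explicit, so an agent can actually follow them.
DOCUMENTED_RULES = {
    "refund_window_days": 30,
    "long_time_customer_grace_days": 60,   # "be flexible with long-time customers"
    "long_time_customer_min_months": 24,
    "escalate_on_friday": True,            # Sofia handles the weird cases; she leaves early on Fridays
}

def decide_refund(rules: dict, days_since_purchase: int,
                  customer_months: int, weekday: str) -> str:
    """Return 'approve', 'deny', or 'escalate' using only the written rules."""
    if rules.get("escalate_on_friday") and weekday == "Friday":
        return "escalate"
    window = rules["refund_window_days"]
    # Grace period only exists if someone wrote it down.
    if customer_months >= rules.get("long_time_customer_min_months", 10**9):
        window = rules.get("long_time_customer_grace_days", window)
    return "approve" if days_since_purchase <= window else "deny"

# A 3-year customer asking on day 45, on a Tuesday:
print(decide_refund(OFFICIAL_RULES, 45, 36, "Tuesday"))    # deny — the folklore was lost
print(decide_refund(DOCUMENTED_RULES, 45, 36, "Tuesday"))  # approve — the judgment was captured
```

Same function, same "model." The only difference is which rules made it into the agent's context.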

Garbage in. Garbage out, faster. The mistake, as always, is very convincing, and it wears a suit.

Two Teams. Same Lesson.

The team that skipped the homework

My friend's team. I already told you most of it. Twelve agents, three months, no written process, a lot of apologies going out to customers.

The fix wasn't a better model. It was two weeks of nothing but writing the process. Every rule. Every threshold. Every "ask before doing." Every Friday quirk. Every unwritten rule about which customers get flexibility and why.

We didn't touch the agents for those two weeks. I made that the rule. No "let me just tweak the prompt." No "let's try a new model." No "what if we add another agent." Only one job: write down how the business actually runs.

By the end of week two, the doc was 22 pages. Ugly. Dense. Honest.

We handed it to two of the original agents and ran them on a backlog. Near-zero errors. Same agents. Same models. Totally different result.

The upgrade was in the document. Not the AI.

Remember this: AI engineering for an organization almost boils down to context engineering. Data and processes are a big, critical chunk of that context.

The team that did the homework first

Compare that to a small B2B company I worked with last year. Their head of support sent me her support playbook before I even asked. 38 pages. Updated last month.

I asked how long it took to build.

"A year and a half. We rewrote it twice. We still argue about some of it."

We added a single AI agent to that workflow in two weeks. One agent. It handled 60% of tickets cleanly, flagged 30% for humans with a clear reason, and pushed the hard 10% to seniors with full context.

Her team had been writing the AI OS layer for eighteen months. They just didn't know that was the name for it.

When the agent arrived, it had something to stand on. So it stood.

Same tools. Same market. Completely different outcome.

The Uncomfortable Truth

I'll say the part nobody wants to hear.

Most companies didn't know how their own processes worked before AI.

The processes just…happened.

They were getting away with it because humans are unreasonably good at patching organizational chaos with judgment, side conversations, and "I'll handle it."

Your AI Is Only As Smart As Your Worst Process

The agents are the x-ray. The ugly things on the screen were always inside your body, quietly working around the damage. You just never had a machine sharp enough to expose them. Now you do.

This is why I keep telling people the same thing.

Before you grant autonomy, you have to earn accountability.

Accountability is not a job title. Accountability is a document. A document that says: in our company, the way we decide X is this, the way we handle exception Y is this, the rule for Z is this, and this is what "done well" looks like.

If you can't write that document for a process, you are not ready to automate that process. Not because the AI isn't good enough, but because you don't know what correct looks like. And you cannot ask a machine to execute something you cannot define.

The protocol is the AI OS layer. Not the model. Not the platform. Not the vector database. Not the latest “agentic” architecture.

The document.

Read that again.

Process First. Agent Second.

The principle is simple.

Process first. Agent second. Always. No exceptions.

Before you add a single agent to a workflow, you should be able to run that workflow yourself, end to end, using only the written process, without asking anyone how any step works.

If you can't, the process isn't ready. And if the process isn't ready, no agent on earth is going to fix it. The agent is just going to execute the gaps faster.

Build the tracks before you launch the train. Obvious, right? Except in AI, everyone wants the train first, because the train photographs well. The tracks are boring. The tracks are what you build in the first few weeks while the team argues about edge cases they've been silently disagreeing on.

I promise you, the tracks are the WHOLE GAME.

What to Do Monday Morning

Short version of what I'd actually do.

1. Pick one workflow. Just one.

Not "AI transformation." Not "automate the whole thing." One workflow you already run, where the stakes are real. Onboarding. Ticket triage. Proposal drafting. One.

The most common mistake is picking ten. Pick one. Finish it. Then pick the next.

2. Sit with the people who actually do the work.

Not the managers who THINK they know how it runs. The two or three people who actually run it every day. Watch them work. Write down every decision, every exception, every time they mutter "ah, this one is weird" and open Slack.

Most of your process is hiding in the sentences that start with "well, normally we do X, but if...". Those sentences are the real process. Capture them.

3. Write the doc like it's for a new hire with zero context.

Every rule. Every threshold. Every judgment call. Every "ask before doing."

If two people on your team disagree on a step, that's a signal, not a formatting issue. Resolve it on paper before any agent touches it. The disagreements are where all the expensive mistakes live.

4. Run it with a human first.

Before you touch an agent, have a teammate run the workflow using only the written doc. If they struggle, the doc is incomplete. If they have to ask for help, something is missing. Keep iterating until a new teammate can follow the doc without any support.

This is the step almost everyone skips. It's also the step that decides whether the agent succeeds.

5. Then, and only then, add one agent.

Not twelve. One. Start with the part of the process where the rules are clearest, the stakes are lowest, and the feedback loop is fastest. Let the agent earn trust at the edges before it touches the middle.

This is the prove, then automate idea I keep writing about. It isn't a slogan. It's the only thing that consistently works in the field.
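The "one agent, clearest rules first" pattern from step 5 can be sketched in a few lines of Python. The categories and thresholds here are hypothetical placeholders, not recommendations; in practice they come from your written process doc. The shape mirrors the B2B support story earlier: resolve the clear cases, flag the uncertain ones with a reason, escalate the rest with context.

```python
# A minimal sketch of the "one agent, clearest rules first" pattern.
# All categories and thresholds are hypothetical; the real values come
# from your written process doc, not from this file.

def triage_ticket(category: str, confidence: float) -> dict:
    """Route one support ticket: resolve it, flag it for a human, or escalate it."""
    # Start where the rules are clearest and the stakes are lowest.
    CLEAR_CATEGORIES = {"password_reset", "invoice_copy"}

    if category in CLEAR_CATEGORIES and confidence >= 0.9:
        return {"action": "resolve",
                "reason": f"documented playbook covers {category}"}
    if confidence >= 0.6:
        return {"action": "flag_human",
                "reason": "rule matched but confidence is below the auto-resolve threshold"}
    return {"action": "escalate_senior",
            "reason": "no documented rule covers this case"}

print(triage_ticket("password_reset", 0.95))  # resolve
print(triage_ticket("billing_dispute", 0.7))  # flag_human
print(triage_ticket("legal_threat", 0.3))     # escalate_senior
```

Notice that every branch returns a reason. That is the agent earning trust at the edges: a human can audit why each ticket went where it went, against the written rules.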

Back to My Friend

He texted me this Monday.

"We processed 900 onboardings this week with two agents. Zero refunds. Team is calmer than they've been in six months."

I smiled.

Here's what actually happened. The agents didn't change. The models didn't change. The platform didn't change. The only thing that changed was that the team finally wrote down how their business actually works. For the first time.

That doc is now the most valuable thing they own. More valuable than the tools. More valuable than the agents themselves. Because tools can be swapped. Agents can be swapped. Models can be swapped. The process stays.

The document is the AI OS layer.

The Close

Every process improvement movement of the last thirty years, from Six Sigma to RPA, was trying to get companies to write down how they actually worked. Most of them got ignored, half-done, or quietly mocked as corporate theater.

AI is the first one where ignoring the homework has immediate, visible, expensive consequences. Because AI does exactly what you asked, at the speed you asked, on the process you gave it. If the process is a mess, the mess is what scales.

Your AI is only as smart as your worst process. Not your best one. Not your average one. Your worst one. That's where the agent trips. That's where the damage compounds.

The fix is not a new model. The fix is not a bigger context window. The fix is the slow, unglamorous work of sitting with the people who actually do the work and writing down how they do it. Until the doc is honest enough that a new hire (or an agent) could follow it without asking anyone for help.

Process first. Agent second. In that order. Always.

AI is only as good as the human operating it.

Build the tracks. Then launch the train.

Have a great weekend.

— Charafeddine (CM)
