A Company in 2030 Might Run on Text Files
Here’s the picture I can’t get out of my head.
It’s 8:12 AM in a company in 2030.
A sales manager opens her laptop and types:
“Who do I need to follow up with today?”
No CRM dashboard.
No 14-tab disaster.
No digging through Slack, email, and whatever horrible internal tool IT forced on everyone three years earlier.
Her local agent answers:
“You have 6 deals worth prioritizing. Two are at risk because legal is waiting on procurement language. One account went quiet for 9 days after pricing was sent. I drafted follow-ups for all three situations. Want me to show them?”
She says yes.
It drafts them.
Not generic templates. Not “Hope this email finds you well” slop. Actual drafts based on the company’s history, pricing rules, tone, previous conversations, and the current state of those accounts.
She reviews. Edits two lines. Sends.
Then someone from HR does something similar.
Then someone in finance.
Then someone in ops.
Then someone in marketing asks their agent to turn last week’s campaign data into a deck for the Monday meeting, while pulling approved numbers from the company system and checking whether any claims in the presentation need legal review.
And the weird part is this:
Almost none of them are “using software” the way we think about software today.
They are working with agents sitting on top of context, permissions, and data.
That sounds futuristic until you realize I already work like this on my own laptop.
That’s what has been messing with my head all week.
Last week I wrote to you about my own setup.
A folder.
A bunch of text files.
An agent on top of it.
That simple system has already replaced a silly amount of tool-switching in my day. My CRM is basically text. My projects are text. My notes are text. My agent reads, updates, drafts, organizes, and helps me move.
It feels absurdly efficient.
Then two things happened.
First, Deep Agent launched an open source version of this general idea. Second, a friend showed me conversations running on a Qwen model deployed directly on his machine.
And suddenly I stopped thinking about my laptop.
I started thinking about the IT department of a company in 2030.
Not in a hype way.
Not in an “everything changes tomorrow” way.
Just in a very practical way.
What happens when the thing that works freakishly well for one person starts getting built seriously for 500 people? Or 5,000?
That question matters a lot more than “which chatbot is best?”
Because I think the real shift is not chatbot versus chatbot.
It’s this:
Software is becoming data plus context plus permissions plus agents.
And once you see that, it becomes very hard to unsee.
The Weird Realization
Most business software is doing one of two things:
1. storing information
2. helping people retrieve and update that information
That’s it.
Yes, I know. Somewhere a SaaS founder just threw their laptop across the room.
But really, look at it honestly.
A CRM is mostly customer data, notes, activities, stages, tasks.
An HR system is employee records, policies, approvals, documents, history.
A procurement tool is vendors, requests, contracts, approvals, spend records.
A project management tool is tasks, owners, due dates, status, comments.
A lot of what we call “software” is a database with a nice outfit on.
And I’m not saying the outfit never mattered. It did. It still does. Good interfaces reduce confusion. They make systems usable. They standardize work.
But if an agent can understand the data, navigate context, retrieve what matters, update the right records, and generate the right outputs, then the interface stops being the product.
It becomes one possible surface.
Not the center of the whole thing.
That’s the shift.
Not “AI will replace every app.”
More like:
Many apps were valuable because they were the only practical interface to the data. Agents are changing that.
And yes, some data should stay structured.
Invoices, transactions, ERP records, audit logs, payments, inventory. You do not want “vibes” managing those.
But a shocking amount of work inside companies is semi-structured or unstructured anyway.
Notes.
Policies.
Threads.
Documents.
Conversations.
Meeting recaps.
Relationship context.
Decision history.
Internal know-how.
That stuff lives everywhere, badly, and people spend half their life trying to reconstruct it.
So when I say “my CRM is a text file,” I’m not being cute.
I’m saying a lot of valuable business context does not need some precious bloated interface to exist. It needs to be captured properly, versioned properly, permissioned properly, and made usable by agents.
That’s a different world.
A Small Story From 2030
Let’s make this concrete.
A recruiter at a mid-sized company is hiring for three roles.
In 2026, she opens six tools and still somehow misses something important.
In 2030, maybe her morning looks like this:
She opens the company agent workspace and says:
“Show me candidates who are strong for the product ops role, but flag any concerns from previous interview notes. Also tell me which hiring managers are blocking the process.”
The local agent checks approved systems through company middleware.
It reads the candidate records.
It reads interview feedback.
It checks calendar delays.
It notices that one manager keeps asking for “more profile variety” without giving usable criteria. Politely, of course.
It replies:
“You have 9 qualified candidates. Three are strong matches based on prior hiring patterns for similar roles. Two may be at risk because compensation expectations exceed band. The product lead has delayed feedback on four profiles. I drafted a message asking for decision criteria and next-step confirmation.”
She approves it.
Done.
No hunting.
No “Where did we store that?”
No “Can someone give me access?”
No Friday afternoon meeting just to explain what the system already knows.
That is not magic.
That is not AGI.
That is good context, permissions, and orchestration.
That’s also why I think the future of enterprise AI is less about one giant super-agent doing everything and more about a stack.
A very boring, very powerful stack.
The Enterprise Stack of 2030
If I had to describe the core IT stack of a modern company in one sentence, it would be this:
Storage, middleware, master agents, local agents.
That’s the whole picture.
Let me break it down.
1. Storage
The company still stores data.
Obviously.
Some of it is structured databases.
Some of it is files.
Some of it is text.
Some of it is documents, transcripts, contracts, policies, notes, and internal knowledge.
Not everything needs to be squeezed into rigid forms. A lot of business context actually gets worse when people are forced to over-structure it.
So the job becomes:
Store information properly.
Version it.
Label it.
Keep it accessible.
Keep it secure.
That is already most of the battle.
In my own work, simple habits plus git are often enough to version unstructured information. Yes, really. My own system is often just a collection of text files updated by the agent, then committed and pushed occasionally.
Messy in theory.
Shockingly effective in practice.
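For the curious, the whole practice fits in a few lines. This is a toy sketch in Python (the account file, the repo location, and the commit message are all made up for illustration, and it assumes git is installed), but it really is the shape of the workflow: the agent edits a text file, then commits it.

```python
import pathlib
import subprocess
import tempfile

# Make a throwaway repo; in practice this is just your notes folder.
repo = pathlib.Path(tempfile.mkdtemp())
subprocess.run(["git", "init", "-q"], cwd=repo, check=True)

# The "CRM": a plain-text record the agent updates after a call.
record = repo / "accounts" / "acme.md"
record.parent.mkdir()
record.write_text("# Acme Corp\nstage: negotiation\nlast_contact: 2030-03-02\n")

# Version it. The -c flags avoid depending on global git config.
subprocess.run(["git", "add", "."], cwd=repo, check=True)
subprocess.run(
    ["git", "-c", "user.name=agent", "-c", "user.email=agent@local",
     "commit", "-q", "-m", "agent: update acme after call"],
    cwd=repo, check=True,
)

log = subprocess.run(["git", "log", "--oneline"], cwd=repo,
                     capture_output=True, text=True).stdout
print(log)
```

Every change is attributable, diffable, and reversible, which is most of what "versioning properly" means.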
At enterprise scale, it obviously gets more serious. You need policies, retention, approvals, lineage, backup, traceability.
But the principle stays the same.
The company is not mainly “buying software.”
It is maintaining usable context.
2. AI-ready middleware
This is the real heart of it.
The middleware matters more than the flashy demo.
Because the middleware decides whether enterprise agents are useful or dangerous.
This layer would include things like:
Connections to data
APIs, MCP servers, databases, file systems, internal tools, external systems.
Security layers
Because letting an agent read and write across company systems without guardrails is a fantastic way to get fired.
Authentication and role management
This is critical.
Not just what you can do.
What your agent can do.
Those are not the same thing.
And they should not be the same thing.
You may be allowed to approve a vendor. That does not mean your agent should be able to do it silently because you once clicked “yes” in a hurry at 11:47 PM.
For higher-risk actions, the agent should need stronger confirmation.
Not just “approve.”
More like:
“This action modifies a contract record and sends it externally. Re-enter password to authorize.”
Annoying?
A little.
Necessary?
Absolutely.
Because the fantasy version of enterprise AI is “the agent handles everything.”
The real version is “the agent handles a lot, but critical actions require deliberate control and… responsibility.”
Machines can NEVER take responsibility.
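To make the risk-tier idea concrete, here is a minimal sketch of that kind of gate. The action names and tiers are invented for illustration; real middleware would load them from policy config, not hardcode them.

```python
# Hypothetical risk tiers for agent actions.
RISK = {
    "read_record": "low",
    "draft_email": "low",
    "update_contract": "high",
    "send_external": "high",
}

def authorize(action, user_confirmed=False, reauthenticated=False):
    """Gate an agent action by risk tier (illustrative policy, not a real API)."""
    tier = RISK.get(action, "high")  # unknown actions default to high risk
    if tier == "low":
        return True  # the agent may proceed on its own
    # High risk: explicit confirmation AND fresh re-authentication, every time.
    return user_confirmed and reauthenticated

print(authorize("draft_email"))
# A "yes" clicked in a hurry last week does not carry over:
print(authorize("update_contract", user_confirmed=True))
```

The point of the sketch is the asymmetry: low-risk work flows freely, high-risk work always re-engages a human.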
Monitoring and back office
Someone needs to see what agents are doing, what they cost, where they fail, what they access, what they trigger, and what weird things they keep trying to do.
Because yes, they will do weird things.
An AI trust layer
Guardrails, confidence scores, anomaly detection, policy checks, escalation rules, harmful-process detection.
Basically: the police of agents.
Not sexy. Extremely important.
This layer exists because the most dangerous AI mistakes are not the hilarious ones.
They are the polished, plausible, slightly wrong ones.
The ones that look professional.
The ones that sound confident.
The ones that quietly create legal, financial, or operational mess.
So the company will need systems that ask:
Should this task be completed automatically?
Should it be reviewed first?
Does this action look unusual?
Is the agent acting outside normal patterns?
Is the data source stale?
Is there a conflict between systems?
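Those questions are just policy checks. A minimal sketch, with made-up field names and thresholds, might look like this:

```python
def review_decision(task):
    """Route an agent task: run automatically, or escalate for human review.
    Field names and thresholds are invented for illustration."""
    reasons = []
    if task.get("confidence", 0.0) < 0.8:
        reasons.append("low model confidence")
    if task.get("data_age_days", 0) > 30:
        reasons.append("stale data source")
    if task.get("unusual_pattern", False):
        reasons.append("action outside normal patterns")
    return ("escalate", reasons) if reasons else ("auto", [])

print(review_decision({"confidence": 0.95, "data_age_days": 2}))   # -> ('auto', [])
print(review_decision({"confidence": 0.95, "data_age_days": 60}))  # -> ('escalate', ['stale data source'])
```

Nothing clever, just explicit rules. The value is that escalations come with named reasons a human can act on.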
That’s the real enterprise work.
Not “wow, the bot wrote an email.”
3. Master agents
This part gets really interesting.
Inside a company, you do not want every employee’s local agent reinventing the wheel every day.
That would be chaos wearing a clean shirt.
So you probably end up with specialized master agents.
A sales master agent.
A marketing master agent.
An HR master agent.
A procurement master agent.
A finance master agent.
These agents sit closer to company-wide context. They accumulate patterns, knowledge, policies, best practices, reusable workflows, approved templates, historical learning.
They do not replace the employee’s local agent.
They support it.
Think of them as shared intelligence layers for each function.
So instead of every rep independently guessing how to structure a renewal follow-up, the sales master agent knows what has worked, what is approved, what legal language matters, what pricing rules exist, what changed last quarter, what not to say, and where deals usually get stuck.
That matters.
Because the biggest waste in most companies is not lack of software.
It is repeated confusion.
Ten people solving the same problem from scratch.
Twenty teams building their own local workaround.
Thirty meetings to standardize something that should already be encoded in the system.
Master agents reduce that.
They turn scattered institutional memory into usable support.
Not perfect truth. Not central command. Just a smarter shared layer.
4. Local agents
Then there’s the part people actually touch every day.
The local agent.
Not 100 specialized agents living inside the company.
One local agent per employee.
That local agent can spin up specialist agents in parallel, orchestrate them, then shut them down.
(So for all the folks building their “agentic platform” around the idea of hundreds of permanent agents… think again.)
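That spin-up, orchestrate, shut-down pattern is easy to sketch. Everything here is a stand-in (the specialist is a plain function, not a model call), but the shape is the point: short-lived workers, spawned in parallel, gone when done.

```python
from concurrent.futures import ThreadPoolExecutor

def run_specialist(name, task):
    # Stand-in for a short-lived specialist agent.
    # A real one would call a model with its own context and tools.
    return f"{name}: done({task})"

def local_agent(request, specialists):
    # Spin up specialists in parallel, gather their results,
    # then let them end. No permanent fleet to maintain.
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(run_specialist, name, request)
                   for name in specialists}
        return {name: f.result() for name, f in futures.items()}

results = local_agent("prep Monday deck", ["data", "design", "legal"])
print(results)
```

One persistent agent per person, many ephemeral workers underneath it.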
This is the Claude Code part of the vision.
Or whatever that evolves into.
Each employee has an agent sitting on their laptop, desktop environment, or secure company-hosted workspace.
Maybe eventually a local model too, running on-device or through company infrastructure, depending on cost, privacy, and hardware.
The exact deployment detail matters less than the experience.
You talk to it naturally.
It sees your work context.
It helps you do the work.
It drafts.
Retrieves.
Collaborates with master agents.
Updates.
Summarizes.
Prepares.
Routes.
Checks.
Asks before acting when needed.
Maybe it looks like a terminal. Maybe it doesn’t. Frankly, the terminal is just a text box with bad PR. The future version will almost certainly feel friendlier.
But the key idea remains:
The employee works with an agent in their environment, not by bouncing between disconnected apps all day.
When they need team-level context, the agent works with shared drives and shared files.
When they need company-level systems, it communicates through the middleware and master agents.
Same pattern.
Different scope.
That is what makes the whole thing coherent.
So What Does Work Actually Look Like?
This is where it gets fun.
Because once you imagine this setup, daily work gets weirdly simple.
A marketing lead says:
“Take last month’s campaign performance, compare it to the same period last quarter, pull the top three lessons, draft a six-slide internal summary, and flag anything that needs review before I share it with leadership.”
The local agent pulls approved data through the right systems.
The marketing master agent helps with benchmarks and standard reporting logic.
The trust layer notices one campaign source is incomplete and flags low confidence on that comparison.
The agent says:
“I can draft the summary now, but slide 4 contains incomplete attribution data from one source. I recommend review before sharing.”
That’s good.
That’s exactly what you want.
Not fake certainty.
Useful assistance plus informed caution.
Or imagine a sales rep saying:
“Update the account record after today’s call, summarize objections, draft next steps, and tell me whether legal should review anything before I send pricing.”
Again, same story.
Humans and machines collaborate.
Humans focus on human work: judgment, relationships, responsibility, and real problem‑solving.
Machines take the grind: the repetitive steps, the coordination, the drafts, the busywork.
In that world, humans stop being the engine.
They become the source of will and direction.
Why This Feels Obvious to Me Now
Because I’ve already felt the small-scale version of it.
That’s the important part.
This is not me sitting in a chair inventing sci-fi because I had too much coffee.
This is me noticing that once you have an agent working on top of real context, the old app-centric way of thinking starts to feel weirdly inefficient.
You begin asking uncomfortable questions.
Why do I need to open this tool just to see information the agent can already retrieve?
Why am I manually copying notes between systems?
Why am I the middleware?
Why am I translating work between five interfaces that all claim to be “integrated” while clearly hating each other?
That’s what changed for me.
The current software stack often makes humans do the dumbest part of the process.
Find the thing.
Move the thing.
Reformat the thing.
Paste the thing.
Repeat the thing.
An agent with context is just better at that class of work.
Usually faster too.
Which brings me to a tweet-length truth I hate to admit because it sounds rude:
It is often easier to get AI to do something than to tell a person what to do.
Not because people are bad.
Because most work instructions are context-heavy, ambiguous, and annoying.
A well-set-up agent already has the context. That changes everything.
The Catch
You knew there was a catch.
There is always a catch.
Agents are fast.
Sometimes stupidly fast.
That is useful.
It is also dangerous.
Because the error mode is often not spectacular failure.
It’s subtle failure.
The number is slightly outdated.
The draft email sounds right but commits too early.
The summary misses the one caveat that matters.
The candidate looks perfect on paper because the system over-weighted the wrong signal.
The procurement request gets routed correctly but based on the wrong threshold logic.
This is why I keep coming back to the same point, even when it’s less exciting than the demos:
The skill that matters most is judgment.
Not prompting.
Not magic words.
Not “how to get the best result in 7 prompt hacks.”
Judgment.
Knowing what to check.
Knowing when to trust.
Knowing when not to.
Knowing which errors are harmless and which ones become very expensive very quickly.
In a company full of agents, that becomes even more important, not less.
Because every polished output creates temptation.
The temptation is to say:
“Looks good, send it.”
Famous last words.
The real professionals in an agent-heavy company will be the ones who can supervise well.
They will know their domain deeply enough to direct the system and catch subtle nonsense before it escapes into the world.
That’s why I don’t think the winners here are automatically the most technical people.
Plenty of non-technical people will be incredible at this.
Because the bottleneck is often not technical skill.
It’s operational clarity and good taste.
It’s understanding the work itself.
That matters more than people think.
This Is Not “No Software”
Let me be careful here.
I am not saying all software disappears.
That would be a dumb thing to say.
Databases matter.
Systems of record matter.
Interfaces matter in many workflows.
Audit trails matter.
Controls matter.
Specialized enterprise software will still exist (probably).
In many cases it should.
What I am saying is that the center of gravity may shift.
From app-first work to agent-first work.
From humans navigating software manually to agents navigating systems and data on their behalf.
From static interfaces to conversational and task-based execution.
That changes what companies buy.
It changes what IT teams build.
It changes where value sits.
A lot of existing enterprise software may get compressed down into infrastructure, storage, workflow logic, and permissions.
Still important.
Possibly less visible.
And definitely less glamorous.
Which is hilarious, because the future of enterprise AI may be built on the most unsexy ingredients imaginable:
clean data, file systems, auth layers, logging, approvals, monitoring, versioning, policy controls.
The Most Human Part of All This
The irony in all of this is that the more capable AI becomes, the more obvious the human role becomes too.
Taste.
Judgment.
Responsibility.
Clarity.
Domain knowledge.
Trust.
Those things do not go away.
They become the bottleneck.
Same with writing.
Same with content.
Same with company operations.
AI can generate form all day long.
But the valuable part is still the person who knows what should exist in the first place.
The person with the actual idea.
The actual standard.
The actual ability to say:
“No, this is almost right, but not right enough.”
That person becomes more powerful, not less.
I think that’s the real story here.
Not that companies in 2030 will be run by autonomous bots while humans sip matcha and admire dashboards.
More like:
The best people will work with agents the way good operators work with great teams.
Clear direction.
Shared context.
Fast execution.
Strong review.
Real accountability.
That sounds much less dramatic.
And much more likely.
See you next week.
— Charafeddine (CM)