AI Didn’t Remove the Bottleneck. It Moved It.
I have conversations every day with smart people who are struggling hard with AI.
Not struggling with the technology. Struggling with what it means for how they work.
They keep defaulting to the old playbook: lean on your area of expertise, stay inside your circle of competence, trust the experience you’ve built over years in a specific domain. That playbook worked for a long time. I won’t pretend it didn’t.
But it’s not the whole game anymore.
The old model of knowledge work was pretty simple. You picked a lane. You spent years learning the terrain. You built judgment through repetition and pattern-matching. The longer you stayed, the more you knew, and the more you knew, the more valuable you became.
This made sense because the bottleneck was real. Knowledge was hard to get. Synthesis was expensive. If you needed to understand regulatory compliance for financial documents, or the edge cases of a particular API, or how to architect a system that wouldn’t fall over at scale — you needed someone who had been there. No shortcut existed.
Experience in a narrow domain reduced error and increased speed. That’s why companies paid a premium for it.
AI changed the bottleneck.
When I work with LLMs and agents daily, I don’t feel like I’m using better software. I feel like I’m operating in a different epistemic environment. My advantage isn’t primarily what I already know. It’s my ability to reason from first principles, use AI to explore and execute beyond my prior experience, and build validation systems that catch where the model is wrong — and where I’m fooling myself.
That last part is the key. Anyone can ask an LLM a question and get a plausible-sounding answer. The actual skill is knowing how to verify it, stress-test it, set up feedback loops that separate signal from confident nonsense. Not trivial. But learnable, and it compounds fast.
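To make that concrete, here is a minimal sketch of the shape of such a loop, in Python. Everything in it is illustrative: `ask_llm` is a hypothetical stand-in for whatever model call you actually use, and the checks are whatever tests, schemas, or cross-references you trust more than the model.

```python
# Minimal sketch of a validation loop. Nothing here is a real API:
# ask_llm is a hypothetical stand-in for whatever model call you use,
# and the checks are whatever tests or cross-references you trust.

def ask_llm(prompt: str) -> str:
    # Placeholder: wire this to your model of choice.
    raise NotImplementedError

def validate(candidate: str, checks) -> list[str]:
    """Run every check against the candidate; return the names of the ones that fail."""
    failures = []
    for name, check in checks:
        try:
            if not check(candidate):
                failures.append(name)
        except Exception as exc:
            failures.append(f"{name} (raised {exc!r})")
    return failures

def generate_validated(prompt: str, checks, max_attempts: int = 3) -> str:
    """Ask, check, feed the failures back in, and only accept output that passes."""
    feedback = ""
    for _ in range(max_attempts):
        candidate = ask_llm(prompt + feedback)
        failures = validate(candidate, checks)
        if not failures:
            return candidate
        feedback = "\n\nYour previous answer failed these checks: " + ", ".join(failures)
    raise RuntimeError(f"no candidate passed validation in {max_attempts} attempts")
```

The model call is the least interesting line in there. The checks, and the refusal to accept anything that hasn't passed them, are the skill.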
I genuinely believe that in most digital knowledge work, someone who is excellent at problem framing, AI orchestration, and ruthless validation can outperform someone relying purely on accumulated domain depth.
Not always. Not in every domain. But often enough that the structure of work is already shifting beneath us.
The question used to be: What do you already know?
It’s becoming: How fast can you learn, test, and validate what matters?
First-principles thinking was always valuable. It used to be slow. You could reason from first principles all day and still need months of hands-on experience to become competent in a new area. Now you can pair that reasoning with an LLM that gives you immediate access to synthesized knowledge, code generation, and execution — then close the loop with validation that confirms you’re actually right, not just moving fast.
That loop — reason, explore with AI, validate — is the new unit of competence. The people who build that loop well are going to be shockingly productive compared to the people who don’t.
This sounds like liberation. In many ways it is.
It also creates a completely different class of problems that almost nobody is talking about.
The old world was constrained by lack of capability. The new world is constrained by too much simultaneous capability. Very different problem. Most people haven’t caught up to it.
I have ADHD. I’ve always been unusually good at context switching. Moving across domains, juggling threads, holding multiple partially-resolved problems in motion at the same time — this has genuinely been an advantage for most of my career.
Even I’m hitting cognitive limits in an agentic environment.
Here’s what my actual workflow looks like right now: I have agents running in Discord and Slack through OpenClaw. I have terminal sessions running coding tasks with Claude Code and Codex. I have chat threads going in multiple apps. Work is happening in parallel across interfaces that don’t talk to each other, producing output at a rate that would have seemed absurd two years ago.
The problem is no longer “can I do the work?” The problem is “can I maintain coherence across a system of work that’s moving faster than my brain was designed to track?”
Agents don’t just increase output. They increase the number of active processes you’re surrounded by at the same time. You’re not managing one task anymore. You’re managing task trees, async execution, partial outputs, validation loops, handoffs between tools, retries — all unfolding in parallel, all needing attention at unpredictable intervals. One agent finishes and needs review. Another is blocked and needs clarification. A third completed something wrong and you need to figure out why before it cascades.
I don’t have a clean solution yet. I’m either building systems that help me recover once I hit cognitive overload, or systems that cap concurrency before I get there. The right answer probably involves both. I haven’t figured out the balance.
But I think this is one of the next major disciplines in AI-native work. Not prompting. Not model selection. Cognitive load design.
We need systems that know when to compress context, when to escalate, when to queue, when to interrupt, and when to stop adding threads. We need better ways to decide which work deserves attention now and which can safely wait. We need operating models for humans working alongside agents, not just better tools.
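A tiny sketch of what one piece of that could look like, in Python: a hard cap on how many agent tasks run at once, so the queue absorbs the overflow instead of my attention. `run_agent_task` is a hypothetical placeholder, not a real API.

```python
import asyncio

MAX_CONCURRENT = 3  # the cap is a deliberate design decision, not an accident

async def run_agent_task(name: str) -> str:
    # Hypothetical placeholder: swap in whatever actually launches an agent.
    await asyncio.sleep(1)
    return f"{name}: done"

async def run_capped(task_names: list[str]) -> list[str]:
    """Run every task, but never more than MAX_CONCURRENT at the same time."""
    semaphore = asyncio.Semaphore(MAX_CONCURRENT)

    async def guarded(name: str) -> str:
        async with semaphore:  # everything beyond the cap waits here, queued
            return await run_agent_task(name)

    return await asyncio.gather(*(guarded(n) for n in task_names))

if __name__ == "__main__":
    print("\n".join(asyncio.run(run_capped([f"task-{i}" for i in range(10)]))))
```

The semaphore is trivial. Choosing the number, and deciding what waits while it's full, is the actual discipline.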
AI creates abundance. Abundance creates orchestration problems. Orchestration problems are management problems.
This is the part I think is most underappreciated.
Traditional middle management has been evaluated on human dimensions: motivate people, align the team, communicate clearly, resolve conflict, combine specialized individuals into an effective unit.
All of that still matters. The weighting is changing fast.
A good manager in an agentic world isn’t just a people manager. They’re an architect of information flow, operating cadence, system boundaries, and coordination design — across humans and agents simultaneously.
The management skill that rises in value isn’t persuasion. It’s operational design.
Who gets what context, in what format, at what fidelity? Which tasks should be handled by a human, an agent, or a hybrid loop? What are the validation gates? What’s allowed to run in parallel and what must serialize? What gets logged, summarized, routed, or killed? How do you prevent a team from drowning in output that looks like progress but exceeds their capacity to absorb, verify, and act on?
These are management questions now. Not technical questions. Not prompting questions. Management questions about the design of work itself.
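To make it concrete, here's a hypothetical sketch of the kind of artifact those questions produce once you write the answers down. None of these names come from a real tool; it's just the shape of an explicit work policy.

```python
# Hypothetical sketch: operational design written down instead of living in someone's head.
# Task types map to who executes them, what gate validates them, and whether they may
# run in parallel. All names are illustrative.

WORK_POLICY = {
    "customer_facing_copy": {
        "executor": "agent",
        "validation_gate": "human_review",  # a person signs off before it ships
        "parallel": True,
    },
    "schema_migration": {
        "executor": "human_with_agent",
        "validation_gate": "staging_run",
        "parallel": False,  # must serialize: one migration in flight at a time
    },
    "internal_research_summary": {
        "executor": "agent",
        "validation_gate": "spot_check",
        "parallel": True,
    },
}

def validation_gate(task_type: str) -> str:
    """Which gate a task's output must pass before it counts as done."""
    return WORK_POLICY[task_type]["validation_gate"]
```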
Here’s the inversion I keep coming back to.
Historically, you could be a great individual contributor without being much of a manager. Management was a separate track. Some overlap helped, but it wasn’t required.
I don’t think that holds anymore.
Every manager should now be capable of being a strong IC with a fleet of agents. If they can’t operate at that level, they won’t understand the real shape of the work they’re assigning, measuring, or redesigning. They’ll be managing abstractions instead of reality.
The inverse is also true: every IC is now a manager.
Even if you manage zero people, you’re responsible for managing a system of delegated cognition, execution, and feedback. Your personal fleet of agents requires the same things management has always demanded — prioritization, delegation, quality control, coordination. The person who refuses to manage and just wants to “do the work” is going to find that the work itself now requires management.
The baseline skill floor is rising for everyone. It’s rising in a direction nobody was trained for.
The future high performer isn’t just a specialist. Not just a generalist either.
It’s someone who can reason from first principles, decompose problems well, orchestrate AI effectively, validate ruthlessly, and manage cognitive load across a system moving faster than any individual can track alone.
That’s a different kind of competence than what the last era of knowledge work trained us to admire.
I think that’s exactly why so many smart people are struggling with it.
Their skepticism isn’t irrational. Skepticism is necessary — it’s the same instinct that protects you from bad reasoning, hype, and overconfidence. Basic building block of sound epistemology.
But here’s what I keep watching happen: the same skepticism that should be helping them evaluate the new model is instead preventing them from engaging with it seriously enough to validate it at all.
They look at AI output, see the obvious failure modes, and use that to justify staying on the old playbook. What they’re not doing is building the validation systems that would let them see where the new approach actually works — and works significantly better.
AI isn’t just a productivity tool. It’s an epistemic tool. It expands what a single person can investigate, synthesize, and execute by an order of magnitude. But it requires the same skepticism these people already have — just pointed in a different direction. Toward building systems that validate the new way of working, not dismissing it before the experiment begins.
The compounding advantage is shifting. Away from static possession of knowledge. Toward dynamic systems for learning, coordination, and validation.
The winners in the next era of knowledge work won’t be the people with the cleanest titles or the most linear resumes. They’ll be the people who learn fastest, build the best operating systems for themselves and their teams, and figure out how to keep humans and agents productive together without collapsing under their own expanded capabilities.
AI didn’t remove the bottleneck. It moved it.
The people who figure out where it moved first will have an enormous advantage over those still optimizing for the old one.