Why Getting Better Output Means Letting Go of Your Method
The same instinct that limits your team is limiting your agents. Here is what good delegation actually looks like as the models get smarter.

When you have spent time managing the work output of other people, you learn that getting the best results means adapting how you manage that output as their capabilities grow. When someone is new in their position, you might put more guardrails around “how” they do the work in order to make sure you get the output you need. As they gain experience and confidence, those same guardrails can limit the outcome by pulling focus away from the problem and the expected result.
I was working with an AI agent recently and I started noticing the same thing. As the models are getting smarter, some of the instructions and guardrails just aren’t necessary.
That parallel is worth discussing.
Most good managers have an action bias, and usually, that’s a strength. When a problem pops up, they want to make sure something gets done rather than risk paralysis by analysis. The problem is that same reflex can kick in even when a bit of observing or reframing would lead to a much better outcome.
Before handing something off, they consciously or unconsciously solve it first. They work through the approach, define the steps, and develop a point of view on how it should be done. Then they hand off the work without realizing they have already made a decision about “how” the problem should be solved. What lands on the other person’s desk isn’t the problem; it’s the solution, with their name on the execution.
For individual contributors, breaking things down and solving them is the job. But for managers, the real value is in framing the right problem and shaping the conditions for others to solve it. When a manager delegates this way, they haven’t actually delegated anything meaningful. They’ve just moved the labor while keeping the thinking.
What good delegation actually requires is getting clear on three things: what problem are we solving, what does the output look like when it’s done well, and what are the specific constraints that genuinely have to be honored. That last one matters because not every constraint is real. Some are. But a lot of what gets treated as a requirement is really just a methodology preference. Knowing the difference before you hand something off is where the actual work of delegation lives.
OK, so what does that have to do with LLMs and AI Agents?
When we started working with LLMs, writing detailed, specific prompts made sense. The models needed that level of direction to produce reliable output. The guardrails were doing real work.
But the models have been getting more capable, and a lot of people are still working with them the same way we did two years ago. They’re handing off methodology when all they really need to hand off is the problem and what good looks like on the other side of it. The same reflex that causes managers to over-specify work for their team is showing up in how people design and direct agents.
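To make the contrast concrete, here is a minimal sketch in Python. Both prompt styles are hypothetical examples I wrote for illustration (they don’t target any particular model or API); the helper function just assembles a prompt from the three things good delegation requires: the problem, what good output looks like, and only the constraints that genuinely have to hold.

```python
# Hypothetical example of over-specified, methodology-heavy prompting.
OVER_SPECIFIED = """Summarize this report.
Step 1: Read each section and write one bullet per paragraph.
Step 2: Merge the bullets into exactly three paragraphs.
Step 3: Use a formal tone, no contractions, 200 words maximum."""

def delegation_prompt(problem: str, done_looks_like: str, constraints: list[str]) -> str:
    """Build an outcome-focused prompt: state the problem, define what
    'done well' looks like, and list only the load-bearing constraints."""
    lines = [f"Problem: {problem}", f"Done looks like: {done_looks_like}"]
    if constraints:
        lines.append("Hard constraints:")
        lines.extend(f"- {c}" for c in constraints)
    lines.append("Choose your own approach.")
    return "\n".join(lines)

print(delegation_prompt(
    "Summarize this quarterly report for the exec team.",
    "A reader knows the three biggest risks and one recommended action.",
    ["Under 200 words", "No confidential figures"],
))
```

The first prompt hands off a methodology; the second hands off a problem and a standard, and everything that isn’t a real constraint stays out of the way.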
The agent I was working with recently made this concrete for me. It kept checking in before working through the full scope of what I had asked it to do. I recognized that pattern immediately because I had seen it in people I managed. What looks like conscientiousness is often something else: in this case, accountability avoidance dressed up as deference, and it was happening because the instructions I had given were creating the conditions for that behavior. Once I named it, the agent could actually diagnose it itself. But someone had to notice it first.
That’s the part that doesn’t change whether you’re managing people or working with agents. Even when you delegate well, you still have to stay observant. The work of management doesn’t disappear. It moves to a higher level.
As models get smarter, the question that matters shifts. It’s no longer just about how precisely you can specify a process. It’s about knowing when precision is actually necessary and when it’s getting in the way, which is a more critical and admittedly harder judgment to make.
A capable person on your team who gets great results through a different approach than yours isn’t doing it wrong. An agent that produces high quality output through a path you didn’t prescribe isn’t broken. In both cases, if you find yourself focused on the methodology instead of the outcome, that’s worth paying attention to.
This is also what “human in the loop” actually means, or at least what it should mean. The way most people talk about it, it sounds like monitoring. Watching the process, checking the steps, inserting yourself at key moments to make sure things are going the right way. The problem with that version is that you become the bottleneck. You’re not adding judgment, you’re adding latency.
The real human judgment happens earlier. It’s in deciding what problem is worth solving, what good output actually looks like, and which constraints are load bearing versus which ones are just habit. That’s the leadership part. An old principle I’ve come back to a lot is that you lead the people and you manage the results. With agents, the same thing applies. You’re not there to manage the process. You’re there to lead the work by setting the right conditions, and then evaluate what comes back against a standard you defined clearly enough to actually use.
The same discipline that makes you a better manager applies here. Get clear on the problem. Define what good looks like. Identify the constraints that genuinely matter and hold those. Everything else, step back and see what happens.
Kenzie Notes
Analog wisdom for a digital world
A weekly page from The Workshop — frameworks, stories, and practical thinking on leadership, systems, and the craft of building things that matter. Wednesdays.