Leadership in a Human + Agent System

Leadership is no longer about output control. It's about system design.

The traditional model of engineering leadership is pretty well understood: you review output, approve work, direct execution, and stay accountable for what ships. It worked reasonably well when production moved at a human pace—when you could realistically read the code, understand the decisions, and course-correct before things went too far. That model is under a lot of pressure right now, and the response most leaders reach for—reviewing more, moving faster, staying more involved—doesn't actually address the underlying problem.


The Old Model and Why It's Breaking

When a developer can produce in a day what used to take a week, the review-and-approve loop doesn't scale the same way. There's more output, produced faster, with more decisions baked in that never surfaced for discussion. A leader trying to maintain oversight by reviewing everything will either become a bottleneck or start rubber-stamping things they don't fully understand, and both outcomes are bad in different ways. The shift that's required isn't about working harder or faster—it's about changing what leadership is actually responsible for.


Designing the Operating System

If the old job was controlling output, the new job is designing the system that produces it—the human and agent operating system your team runs on. That means thinking carefully about a few questions most teams haven't explicitly answered yet.

What gets automated? Not everything should go through an agent, and not everything that can be automated should be. Deciding this intentionally, rather than letting it happen by default as individuals make their own choices, is a leadership call.

What gets reviewed? When agents can produce a lot of code fast, human review becomes more valuable and more scarce at the same time. Where does it matter most? What's the cost of getting it wrong versus the cost of slowing down? These are judgment calls that need an answer, even if the answer changes over time.

What requires human synthesis? Some decisions genuinely need a person who holds the full context—the business goals, the users, the history of the system, the things that aren't written down anywhere. Identifying where that's true and protecting space for it is part of the job now.

What is non-negotiable? Every team has lines they shouldn't cross, even under time pressure, and if agents are going to operate with any autonomy, those lines need to be explicit and written down somewhere they can actually be enforced.
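One way to make those non-negotiables "explicit and enforceable" is to encode them as a small, machine-checkable policy that runs in CI before agent-authored changes merge. A minimal sketch, assuming hypothetical path names and a hypothetical approval flag (nothing here is a real tool's API):

```python
from fnmatch import fnmatch

# Hypothetical example: paths an agent may never change without an
# approving human reviewer. Each team would define its own list.
PROTECTED_PATHS = ["auth/*", "billing/*", "migrations/*"]

def policy_violations(changed_files, human_approved):
    """Return the protected files touched without human approval."""
    if human_approved:
        return []
    return [
        f for f in changed_files
        if any(fnmatch(f, pattern) for pattern in PROTECTED_PATHS)
    ]

# An agent-authored change touching billing code gets flagged;
# the docs change passes through untouched.
flagged = policy_violations(
    ["billing/invoice.py", "docs/readme.md"],
    human_approved=False,
)
```

The point isn't this particular check; it's that a written, versioned policy file turns "we'd never let an agent touch billing" from tribal knowledge into something the system can actually enforce.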


Trust and Psychological Safety

There's a dynamic that doesn't get enough attention in conversations about AI adoption: what happens when people don't feel safe being honest about how they're working.

If developers feel like they'll be judged for using agents, or that admitting a mistake means admitting they over-delegated to a model, they'll hide it. They'll present output as their own work without disclosing how it was produced, and they won't share the prompt patterns that are working well or the failure modes they've run into. Knowledge becomes opaque exactly when you need it to be shared.

Teams with genuine psychological safety around AI usage tend to develop faster than the sum of their parts. Prompt patterns get shared. Failures get discussed openly, which means they get fixed rather than repeated. People iterate on their practices together rather than each figuring it out alone. A leader who creates those conditions is compounding knowledge across the team in ways that are genuinely hard to replicate—which makes this less of a soft concern and more of a strategic one.


The Competitive Advantage

It's tempting to think the winning companies in this era will be the ones with access to the most powerful models, but that's probably not how it plays out. Model capabilities are converging and access is commoditizing—the same tools are available to almost everyone. What isn't easily replicated is a team that has figured out how to think together: how to share context, learn from each other, and direct agents toward shared goals rather than individual ones. That's organizational capability, and it compounds over time in a way that model access doesn't.

The companies that build the best products in this era won't necessarily be the ones with the smartest individual contributors. They'll be the ones where everyone—including the agents—is working inside the same understanding of what matters and why.

An individual with an agent is faster. A team with aligned agents is exponential.