Leadership in the Age of AI: Why Managers Need to Stay Technical

There is a piece of management advice that circulates widely, feels intuitive, and is quietly becoming one of the more dangerous ideas in enterprise technology leadership.

It goes something like this: once you cross into management, your job is to set direction, develop people, and remove obstacles. The technical details — the actual behavior of the systems, the friction in the workflows, the edge cases in the tools — are your team’s domain now, not yours. Staying too close to that work signals distrust, creates bottlenecks, and distracts you from the “real” job of leadership.

In a stable environment, there’s logic to this. When the underlying systems change slowly, when you can safely assume last year’s mental model still approximates this year’s reality, delegating technical depth is a reasonable strategy for scaling your own attention. You synthesize. You trust your team’s judgment. You operate one level of abstraction above the details.

But that is not the environment we are operating in. And the leaders who haven’t updated this assumption are accumulating a form of strategic debt that will eventually come due.

The false binary and where it breaks down

Many organizations still frame career development in technology as a fork in the road: the individual contributor track, or the management track. The implication embedded in that framing is that technical depth and leadership responsibility exist in tension — that gaining one means ceding the other.

To be fair, the best organizations have evolved past the rigid version of this binary. Principal engineer roles, staff-plus IC paths, player-coach models, and technical program management structures all exist precisely because that clean separation failed under real conditions. Most senior practitioners in mature tech organizations understand that effective leadership at the VP level requires ongoing technical credibility, not just people skills and OKR fluency.

But the underlying instinct — that management means moving away from technical judgment, that “stepping back” is what professional maturity looks like — persists widely. It persists in how we coach high-potential managers. It persists in the unspoken signals organizations send about what “executive presence” requires. And it persists most visibly in the moments when a technically excellent leader gets promoted and is quietly advised to stop doing the thing that made them excellent in the first place.

That pattern was always imperfect. In an AI-driven environment, it has become actively counterproductive.

What makes AI different from previous technology shifts

Every major technology transition produces some version of this debate. Leaders who came up through the transition to cloud had to decide how much infrastructure depth to maintain. Leaders navigating mobile had to decide whether to stay close to the UX implications or delegate that entirely to their teams. In most of those cases, a leader could afford a learning lag of a year or two. The systems matured, the patterns stabilized, and synthesized understanding from secondhand input was eventually “good enough” for strategic decision-making.

AI is different in at least three ways that matter for how leaders should calibrate their proximity to the work.

First, the rate of capability change is genuinely fast relative to enterprise decision cycles. The gap between what a model could do when you last evaluated it and what it can do now — or what it does differently under a new configuration, a new version, or a new prompting approach — can be significant enough to invalidate prior decisions. Leaders who are making platform bets, vendor commitments, or policy calls based on six-month-old firsthand knowledge are, in many cases, operating on outdated assumptions without knowing it.

Second, the failure modes are subtle in ways that earlier technology transitions were not. Infrastructure failures are usually visible. Downtime is measurable. A misconfigured AI tool, by contrast, can fail silently — producing outputs that look plausible, are acted upon, and are only understood as wrong weeks or months later, if at all. Leaders who haven’t used the tools themselves don’t develop the instinct to spot these failure modes. They’re dependent on their teams both to encounter them and to escalate them — a chain that is longer and more fragile than it appears.

Third, the decisions that AI forces are genuinely cross-domain in ways that resist clean delegation. A decision about which AI tools to standardize across an organization looks like a procurement call. It is actually a product decision, a workflow decision, a security decision, a change-management decision, and a talent signal all at once. Untangling those dimensions requires a level of integrated judgment that is hard to develop — or to exercise — from a distance.

The leaders most accountable for AI outcomes — budget, risk, adoption — are often the least exposed to how the tools actually behave. That gap isn’t a leadership style choice. It is a structural liability that compounds over time.

What the cost actually looks like

The cost of technical disconnection at the leadership level rarely announces itself. It doesn’t look like a missed deadline or a failed deployment. It tends to look like a series of small, reasonable decisions that accumulate into a strategic position that no one quite intended.

It looks like a platform standardization decision made on the basis of a vendor demo and a team summary — where the tool that won the evaluation performs well in controlled conditions and consistently underperforms in the edge cases that make up 40 percent of actual work. No one lied. The evaluation was reasonable. But no one in the room had spent enough time in the friction to know what questions to ask.

It looks like a risk posture that was calibrated to a model’s behavior at the time of the security review, then silently drifted as the model was updated, new capabilities were enabled, or adjacent tools were integrated in ways the original review didn’t anticipate. The risk function did its job. The gap is that no one with accountability for the outcome had the firsthand context to notice the drift.

It looks like a change-management approach designed around the assumption that the tools are intuitive — because the leader sponsoring the rollout found them intuitive — and that then runs into significant resistance from practitioners working in contexts where the leader has never actually tried the tools.

These are not failures of intelligence or effort. They are failures of proximity. And in a fast-moving environment, proximity gaps compound faster than they used to.

What “staying technical” actually means at the VP level

Staying hands-on as a senior leader is not an argument for micromanagement. It is not a claim that leaders should be doing individual contributor work, competing with their teams for execution credit, or inserting themselves into decisions that belong at lower levels. The goal is not technical heroics. It is technical proximity — being close enough to the actual behavior of the systems you are accountable for to exercise sound judgment about them.

In practice, for me, that proximity shows up as three distinct behaviors.

The first is using the tools personally, in real workflows, on a regular basis.

Not in a demo environment. Not in a structured evaluation exercise. In actual work — the kind of work where you have a real deadline, a real output you care about, and real consequences if the tool fails. The failure modes that matter in enterprise AI rarely surface in controlled presentations. They surface when you’re trying to produce something real and the tool does something unexpected: hallucinates a confident but wrong answer, degrades in quality when the context window fills, produces output that is technically correct but structured in a way that creates downstream problems. These are the things that practitioners on your team are navigating every day. If you have never encountered them yourself, you cannot fully appreciate the gap between “the tool works” and “the tool works well enough to build on.”

The second is being present — not peripheral — when key tradeoffs are made.

There is a specific kind of meeting that looks, on the agenda, like a vendor review or a cost-optimization discussion, and turns out to be a decision that will shape your organization’s AI posture for the next two years. Leaders who are in the room for the summary but not the deliberation often don’t realize which meeting was which until later. Technical proximity means being engaged enough in the details that you can recognize when a conversation that looks like a tactical call is actually a strategic one — and being present enough to shape it when it matters.

A concrete example from our own work: we have been building an AI capability program segmented by persona — different tools, access levels, and cost structures for different kinds of work. On paper, it reads as a vendor rationalization exercise: consolidate overlapping tools, manage spend, create tiered access. In practice, the decisions required knowing where specific tools fail silently for specific job functions, which personas have workflows that break if you standardize on a lower-capability model, and where the friction cost of a given tool outweighs its efficiency benefit. None of that was visible from a summary. The judgment calls required firsthand context about tool behavior across different use cases — the kind you only develop by actually using them.
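For concreteness, here is a minimal sketch of what that kind of persona segmentation might look like when expressed as configuration. Every persona name, tool, tier, and budget in it is hypothetical, invented for illustration rather than taken from the actual program described above.

```python
# Hypothetical sketch of persona-segmented AI tool access.
# All personas, tools, tiers, and budgets are invented for illustration.
from dataclasses import dataclass


@dataclass
class PersonaPolicy:
    """Approved tools, model tier, and spend ceiling for one kind of work."""
    tools: list[str]          # tools approved for this persona
    model_tier: str           # capability tier the persona's workflows require
    monthly_budget_usd: int   # per-seat spend ceiling
    known_risks: str = ""     # silent-failure modes to watch for


POLICIES: dict[str, PersonaPolicy] = {
    "engineer": PersonaPolicy(
        tools=["code-assistant", "chat"],
        model_tier="frontier",  # a lower tier quietly breaks refactoring workflows
        monthly_budget_usd=120,
        known_risks="Plausible but wrong code; verify against edge-case tests.",
    ),
    "analyst": PersonaPolicy(
        tools=["chat", "spreadsheet-copilot"],
        model_tier="standard",
        monthly_budget_usd=40,
        known_risks="Confident summaries; trace every figure to source data.",
    ),
}


def tools_for(persona: str) -> list[str]:
    """Return the approved tool list for a persona, or an empty list if unknown."""
    policy = POLICIES.get(persona)
    return policy.tools if policy else []


if __name__ == "__main__":
    print(tools_for("engineer"))  # ['code-assistant', 'chat']
```

The point of the sketch is only that every field in it encodes a judgment call, and the values are defensible only if someone with accountability has firsthand context on how each tool actually behaves for that persona.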

The third is earning the right to push back substantively.

When your team flags a risk, identifies a constraint, or pushes back on a direction, your ability to engage with that input — rather than simply accepting it or overriding it — depends on having enough firsthand context to distinguish a genuine technical constraint from a comfort zone, a real risk from an overstated one, a well-reasoned concern from a framing that reflects a team’s prior assumptions. Leaders without technical proximity are forced into one of two bad options: rubber-stamp their team’s judgment on everything technical (which isn’t leadership), or override it without sufficient basis (which is worse). The third option — engaging as a peer, asking the right questions, contributing informed perspective — is only available if you’ve done the work to earn it.

The mindset underneath the behavior

The behavioral commitment to technical proximity requires a particular kind of orientation — the one Satya Nadella articulated when he took over Microsoft: the shift from “know-it-all” to “learn-it-all.” The premise is that curiosity and the willingness to be a beginner, repeatedly and publicly, are more durable leadership advantages than accumulated expertise. Expertise has a shelf life. The disposition to keep developing it does not.

I think that framing is right, but in an AI-driven environment it cannot remain at the level of philosophy or identity statement. It has to show up as behavior — specifically as the behavior of choosing, repeatedly, to stay close to the work even when your role would give you permission to step back from it.

The leaders I have seen navigate AI transformation most effectively share a specific characteristic: they are willing to be visibly uncertain in front of their teams. They try tools in public, ask questions that reveal gaps in their understanding, and treat their own learning as part of the organizational capability-building rather than something to develop privately before presenting a confident face. That posture — call it learning out loud — creates a different kind of organizational permission. It signals that not knowing something yet is not a leadership failure. The failure is in stopping the effort to find out.

The harder question: what are organizations signaling?

The individuals who tend to engage with this kind of content are, by selection, already curious and already oriented toward staying close to the work. The harder problem is not convincing them. It is the organizational context they operate in.

What signals do we send, as organizations, about what good leadership looks like in technical domains? If the implicit message — in how we coach, promote, and develop leaders — is still that management means stepping back from the details, we are producing a generation of leaders who are technically accountable for outcomes they don’t understand well enough to steer. We are rewarding the appearance of strategic thinking while quietly penalizing the hands-on engagement that makes that thinking grounded.

The corrective is partly cultural and partly structural. It looks like senior leaders modeling technical engagement publicly, not treating it as something they do privately. It looks like development programs that build technical literacy as a leadership competency, not as something that phases out after a certain level. It looks like evaluation criteria that include quality of technical judgment alongside financial performance and people metrics.

And it looks like being willing to say, explicitly, that the “manager track means stepping back” advice — while well-intentioned — is producing leaders who are less equipped for the moment we are actually in.

What this moment actually demands

The leaders who will perform best in the next three to five years are not the ones who managed their way to altitude and surveyed the landscape from a safe distance. They are the ones who stayed close enough to the work to know when the map no longer matched the territory — and who had enough firsthand context to do something about it when it didn’t.

That requires a different definition of leadership maturity than the one many of us were handed. Not the ability to operate from abstraction. The ability to move fluidly between abstraction and ground truth — to set direction without losing contact with the reality that direction has to survive.

In the age of AI, leadership is not a step away from the work. It is a sustained, deliberate commitment to understanding it well enough to steer it — even as it keeps changing.
