Leadership decisions are never purely about authority. Every choice has tradeoffs — team dynamics, organizational alignment, timing, and trust.
The human side of engineering at scale — navigating resistance, quantifying hidden costs, and building the conditions where teams make good decisions on their own.
Series · 4 parts
AI adoption fails when leaders treat it as a tool problem instead of a judgment problem — you can't mandate trust, you can't speed your way past context, and you can't automate away the need for thinking. These four pieces show you how to build adoption that actually improves your system instead of just making output faster. The real work isn't getting engineers to use AI. It's building the conditions where they use it right.
There's a version of AI adoption leadership that makes the leader the most exhausting person in every standup. Constant references to what AI could do, unsolicited suggestions to try it on everything, frustration when the team doesn't follow through. That's enthusiasm without strategy. Here's the actual sequence.
Most engineering leaders approach AI adoption the wrong way. They announce it, roll it out, and wait for results. When results don't come, they push harder. That's not a strategy — it's pressure. Here's what actually moved adoption forward.
AI tools accelerate output. That's the point. But acceleration without judgment doesn't produce better software — it produces more software, faster, with the same quality ceiling as the engineer who wrote it. The guardrails didn't become less important when Copilot arrived. They became the only thing standing between adoption and amplified mistakes.
Generic AI tools can read your codebase. What they can't tell you is why a feature exists, which clients it affects, or what breaks if you change it. That missing context is the difference between a code completion tool and a system reasoning tool.