There's a version of AI adoption leadership that makes the leader the most exhausting person in every standup. Constant references to what AI could do. Unsolicited suggestions to try it on everything. Visible frustration when the team doesn't follow through. That's enthusiasm without strategy — and it's one of the fastest ways to make engineers tune out a tool before they've genuinely tried it. Here's the actual sequence.
Key Takeaways
The sequence matters more than the message.
Before advocating for Copilot with my team, I used it myself — long enough to have real examples, not just impressions. I knew which tasks it handled well, where it produced plausible-looking output that needed correction, and where it saved meaningful time versus where it added friction. I found that out on my own time, not theirs.
By the time I brought it up, I wasn't selling a tool I'd read about. I was sharing results I'd produced. That's a different conversation — and engineers can tell the difference immediately. One asks them to trust your enthusiasm. The other gives them something to evaluate.
Credibility precedes advocacy. Without it, you're just adding noise.
Scheduled AI training sessions rarely work. Engineers sit through the demo, nod, go back to their work, and don't change anything. The context is wrong — they're not in the middle of a problem, so the value doesn't land.
What worked was coaching in the moment. When an engineer brought a documentation task they were dreading to a 1:1, I'd redirect: this might be something AI could help with. When someone was trying to orient themselves in an unfamiliar part of the codebase, same suggestion. When the repetitive scaffolding work was piling up before the interesting implementation could start — same.
The problems that responded best weren't the intellectually interesting ones. They were the tedious ones. Boilerplate nobody wanted to write. Documentation nobody wanted to document. Debugging code nobody originally wrote. These were the friction points engineers were already feeling — which meant showing AI addressing them created immediate, testable value rather than a theoretical promise.
When the tool solves a problem you were already stuck on, you remember it. When it's introduced in a workshop on a Tuesday afternoon, you forget it by Wednesday.
Making the tool available isn't the same as making it useful.
A significant part of the early resistance wasn't engineers rejecting Copilot — it was engineers trying it, getting inconsistent results, and concluding it didn't work. What was actually happening was a prompting problem. Vague inputs produced vague outputs. Engineers who didn't give the model enough context got suggestions that missed the mark.
This is coachable — and it's where adoption actually turns. Teaching engineers how to frame a problem clearly, how to provide the context the model needs, and how to iterate when the first result isn't right changes the reliability of the output significantly. Once engineers could consistently get useful results, the skepticism dropped fast.
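To make the coaching concrete, here's a minimal sketch of the framing difference. The task details (a CSV of orders with `order_id`, `sku`, `qty` columns) are hypothetical, invented for illustration — the point is the contrast between a vague prompt and one that spells out input shape and expected return type:

```python
import csv
import io

# A vague prompt comment like "# parse the file" gives the model almost
# nothing to work with, and the suggestion usually misses. Stating the
# input columns and the return type in the docstring — as below —
# constrains the model to the actual task.

def total_qty_per_order(csv_text: str) -> dict[str, int]:
    """Parse a CSV with header (order_id, sku, qty) and return
    the total quantity per order_id."""
    totals: dict[str, int] = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        totals[row["order_id"]] = totals.get(row["order_id"], 0) + int(row["qty"])
    return totals
```

The same iteration habit applies when the first suggestion is wrong: tighten the comment or docstring with the missing constraint and try again, rather than concluding the tool doesn't work.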
Most leaders introduce the tool and step back. The missing step is teaching engineers how to drive it.
Series
AI Adoption in Engineering Teams · Part 1
You Can't Mandate AI Adoption — You Have to Make It the Obvious Choice
Most engineering leaders approach AI adoption the wrong way. They announce it, roll it out, and wait for results. When results don't come they push harder — more mandates, more metrics, more pressure. That's not a strategy. You don't push engineers toward tools they don't trust. You make not using them feel like leaving something on the table.
Some engineers leaned in hard from the start. Others barely touched it. Both are valid outcomes — and trying to close the gap by focusing on the holdouts is usually a poor use of energy.
The engineers who leaned in became force multipliers. Their output improved noticeably. They also became the strongest advocates — not because I asked them to, but because the results were visible and their teammates asked about them. Peer credibility is more durable than manager advocacy. An engineer demonstrating what Copilot did for their code review that morning carries more weight than any all-hands presentation.
Once those engineers were moving, my job shifted. Not pushing adoption — removing friction for the people already going. Making sure they had space to experiment. Surfacing what they were learning to the broader team. Getting out of their way.
The goal was never everyone. It was enough momentum that adoption became self-sustaining.
The instinct when you want adoption is to track it — how many engineers activated the tool, how often they're using it, what the usage metrics look like. That instinct produces the wrong behavior.
Engineers who know adoption is being measured will use the tool to satisfy the metric, not because it's improving their work. Compliance theater is worse than slow adoption — it creates the appearance of progress without the reality, and it erodes trust in the process.
The better question is whether engineer output is improving. Are the tedious tasks moving faster? Are engineers taking on harder problems? Is the quality of what ships getting better? Those questions take longer to answer and can't be pulled from a dashboard. They're also the only ones that tell you whether the adoption actually worked.
The best AI adoption strategy looks less like a rollout and more like a series of individual conversations — each one meeting an engineer where they already are, with a tool that solves a problem they're already feeling.