Most engineering leaders approach AI adoption the wrong way. They announce it, roll it out, and wait for results. When results don't come, they push harder — more mandates, more metrics, more pressure. That's not a strategy. You don't push engineers toward tools they don't trust. You make not using them feel like leaving something on the table.
When we introduced GitHub Copilot, the skepticism wasn't irrational. Engineers didn't trust the output — and early inconsistent results gave them real reasons not to. Suggestions that looked plausible but introduced subtle bugs. Generated functions that worked in isolation but didn't fit the codebase. Results that varied enough that engineers couldn't build a reliable mental model of when to trust it.
Nobody knew where it fit either. The engineers who were skeptical weren't resistant to new tools — they were busy. Without a clear picture of how Copilot fit their daily workflow, the path of least resistance was to leave it installed and mostly ignored.
Mandating adoption in that environment produces compliance theater — engineers who appear to use it, results that don't improve, and a team that associates AI tools with management pressure. The resistance wasn't the problem. The missing information was.
Not every barrier was about the tool. Some engineers didn't trust the output. Others had a different problem: they didn't want to be seen using it.
Senior engineers who'd spent years earning their technical reputation were reluctant to reach for an AI tool — as if using it meant admitting they needed help. There was an unspoken belief that good engineers don't need AI. That relying on it signals weakness.
The result was predictable: engineers who rejected it publicly and adopted it quietly once results became visible. The stigma didn't stop adoption — it just made it slower and less visible than it needed to be.
The reframe that worked: a craftsman doesn't apologize for using better tools. A senior engineer who uses AI to eliminate tedious work isn't less capable — they're making better use of the capability they've built. The judgment about what to build, how it fits the system, whether the output is correct — that's still theirs. AI removed the friction around it, not the expertise behind it.
I normalized it by being open about using it myself. When I mentioned in standup that Copilot had saved me an hour on documentation that morning, it gave others permission to say the same. The stigma lives in silence — it dissolves when the people others respect talk about the tool without apology.
Before I asked anyone else to try it, I used it myself. There's a meaningful difference between "I've been reading about what this can do" and "here's what it did for me this week" — and engineers can tell immediately. One is a pitch. The other is a result.
By the time I started talking about Copilot with the team, I had real examples. Specific tasks where it saved meaningful time. Specific cases where it generated something that needed correction and why. That credibility is what makes advocacy land. Without it, you're asking engineers to take on the experimentation cost based on your enthusiasm. Most won't.
I didn't pitch Copilot as a general productivity improvement. "Be more productive" is abstract, and abstract value propositions don't change behavior.
What worked was finding the work engineers already found tedious and showing Copilot addressing it directly — boilerplate, documentation nobody wanted to write, debugging unfamiliar code. When engineers brought those problems to 1:1s or standups, I'd redirect: this might be something AI could help with. Not as a lecture. As a suggestion in the moment when the problem was live and the value was immediately testable.
Meeting people at their actual frustration is what makes a tool feel like a solution instead of an assignment.
A significant portion of the early inconsistency wasn't a tool problem — it was a prompting problem. Vague prompts produced vague results. Engineers concluded the tool didn't work. The tool worked fine — it just needed to be used differently.
Coaching on prompting changed that: how to frame the problem, how to provide context, how to iterate when the first result wasn't right. Once engineers understood that output quality is partly a function of input quality, results got more consistent and skepticism dropped fast.
This is where adoption actually turned. Not from the tool getting better — from engineers getting better at using it.
Some engineers leaned in immediately. Others barely touched it. That's normal — it's what adoption looks like for every significant tool change.
The engineers who leaned in became noticeably more productive and became the real evangelists. An engineer showing a teammate what Copilot did for their debugging session that morning is worth more than ten all-hands slides. Peer credibility transfers in a way manager advocacy doesn't — it's specific, recent, and came from someone doing the same work.
Once I identified engineers who were genuinely getting value, my job shifted to removing friction for the people already moving — not pushing the ones who weren't.
You can mandate a tool. You can't mandate the judgment to use it well. Make it the obvious choice and get out of the way.