The quiet productivity shift: what changed when our IT team started shipping with Copilot
Six months in. The boring parts got quieter, the interesting parts got louder, and a few things broke in ways I didn't expect.
For most of my career in IT, “productivity” has been a slightly suspicious word. It’s the thing on the quarterly dashboard, the thing the consultants promise, the thing that usually means “we bought another tool.” So when we rolled Copilot out to the whole team six months ago, I was skeptical, in the polite way you have to be when it’s already been announced.
What I didn’t expect was how quiet the shift would be. There was no dramatic before-and-after. The team didn’t suddenly start shipping twice as many tickets. They just started shipping differently.
What actually changed
The clearest change: the boring parts of our work got shorter. Writing a runbook, drafting an incident report, stubbing out a Bicep template — things that used to take 30 quiet minutes now take 10 loud ones, because someone’s typing back to you.
- Ticket triage notes went from roughly 4 lines to 8, and got more useful.
- First-draft PRs showed up 2–3 days earlier.
- The #ask-infra channel got quieter, because people were asking the model first.
A small observation
The biggest gain wasn’t speed — it was friction reduction. People started attempting things they would previously have dropped into a backlog.
A small example
Here’s the kind of thing that used to take me 20 minutes of Googling and now takes about 20 seconds. Nothing fancy — but the compounding effect over a week is real.
```typescript
// Query our runbook index with a hard cap on context size.
// Note: AzureOpenAI lives in the "openai" package (the older
// "@azure/openai" package exports a different client).
import { AzureOpenAI } from "openai";

const client = new AzureOpenAI({ endpoint, apiKey, apiVersion });

async function answer(question: string) {
  // search() and prompt() are our internal helpers: search() hits the
  // runbook index, prompt() turns the hits into a system message.
  const ctx = await search(question, { limit: 5 }); // cap at 5 snippets
  return client.chat.completions.create({
    model: "gpt-4o",
    messages: [
      { role: "system", content: prompt(ctx) },
      { role: "user", content: question },
    ],
  });
}
```
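The `search` and `prompt` helpers are our own and not shown here. As a rough sketch of what the “hard cap on context size” means in practice, here’s a hypothetical prompt builder that trims retrieved snippets to a fixed character budget before anything reaches the model. The names, the budget, and the snippet shape are all illustrative, not our production code:

```typescript
// Hypothetical sketch: build a system prompt from retrieved snippets,
// enforcing a hard character budget so a long runbook can't blow out context.
type Snippet = { title: string; text: string };

const MAX_CONTEXT_CHARS = 4000; // illustrative budget, not a tuned number

function buildPrompt(snippets: Snippet[], budget = MAX_CONTEXT_CHARS): string {
  const parts: string[] = [];
  let used = 0;
  for (const s of snippets) {
    const entry = `## ${s.title}\n${s.text}`;
    if (used + entry.length > budget) break; // hard cap: drop everything past it
    parts.push(entry);
    used += entry.length;
  }
  return [
    "Answer using only the runbook excerpts below.",
    'If the excerpts do not cover the question, say "not in the runbooks".',
    ...parts,
  ].join("\n\n");
}
```

The “say it’s not in the runbooks” instruction matters as much as the cap: it’s the cheap defence against the confident-but-wrong failure mode described below.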
What didn’t work
Not everything was a win. Two categories consistently failed:
Long, messy documents
Feeding the model our 40-page change management policy and asking for a summary produced something that sounded right and was wrong in three specific, dangerous places. We stopped doing that. You should probably stop doing that too.
Anything with organisational context
“Why did we choose that region?” has a real answer and it lives in someone’s head, not in the docs. The model’s guess is always confident and often wrong.
The model is a junior engineer who never tires and never asks clarifying questions. That’s useful. It’s also the whole problem.
— me, in a Teams message, at 11pm
Where we landed
The frame we’ve ended up with is boring and — at least for us — correct: AI is an accelerator for work the team already understands. When we point it at things we don’t understand, it invents a confident version of “understood” and we pay for it later.
That’s not a complaint. It’s a user manual. Six months in, I’d say: give the team the tools, teach them the failure modes, and then get out of their way.