PERSPECTIVE
Why GenAI adoption in professional services is different
A note on scope
The patterns described here draw on my original qualitative research with UK solo and small professional service firms, alongside client work supporting leaders making AI adoption decisions in practice. The research was designed for depth rather than scale and reflects a mid-2025 snapshot in a fast-moving space. I use it here as a sense-check against real practice, not as a claim of representativeness or a prescription for all firms.
Professional services don’t sell output - they sell judgement
In most sectors, technology adoption is evaluated primarily on efficiency, scale, or margin. In professional services, value is tied much more tightly to credibility, professional judgement and the ability to explain how an answer was reached.
That changes everything. When professionals talk about AI risk, they are rarely worried about whether the tool works. They are worried about whether they can stand behind the output, explain it to a client and defend it if challenged later.
This is why GenAI use often remains limited to internal, supportive work: drafting, structuring, sense-checking, or thinking things through. Not because client-facing use is impossible, but because the reputational downside of getting it wrong is asymmetric.
Adoption is constrained by credibility, not cost
One of the more striking findings from my research was that cost is rarely the limiting factor.
Most professionals see the price of GenAI tools as negligible relative to the value they can create. What constrains adoption instead are questions like:
- Would I be comfortable explaining this in front of a client?
- Would this pass the "red face" test if scrutinised?
- Does this sound like me or like a machine?
Those questions don’t appear in most generic technology adoption frameworks, but they dominate real decisions in professional firms.
This also explains why many experienced professionals are cautious about letting AI touch the work of junior team members. They recognise that judgement is built through doing the work, not just reviewing outputs. While they may feel confident using AI themselves, they are more concerned about the long-term effects on skill development if too much thinking is delegated too early.
Peer behaviour matters more than formal policy
In larger organisations, adoption is often driven top-down through policy, tooling, and training. In solo and small professional firms, adoption is far more social and informal. Professionals pay close attention to:
- what peers are doing,
- what clients seem to tolerate,
- and where the unspoken lines are being drawn.
Seeing a trusted peer use AI safely and thoughtfully lowers the perceived risk far more than any formal guidance. Conversely, uncertainty - especially in regulated or standards-driven professions - tends to reinforce caution and incrementalism.
This is one reason adoption varies so sharply between sectors like consulting and more tightly regulated fields such as law, finance, or psychology.
GenAI is often treated as a thinking partner, not a replacement
Another important difference is how GenAI is used. Rather than replacing professional judgement, many experienced practitioners use it as:
- a way to surface blind spots,
- a prompt for alternative perspectives,
- or a means of getting unstuck when thinking feels constrained.
In other words, GenAI is often valued less for automation and more for cognitive support.
This reinforces why adoption is so context-specific. The same tool can feel empowering in one setting and inappropriate in another, depending on what is being delegated and what must remain human.
Why this matters for leaders making AI decisions
The danger for professional service firms is not that they move too slowly. It’s that they import assumptions from other sectors - treating GenAI adoption as a tooling exercise - and underestimate the role of credibility, trust, and professional identity. When that happens, firms either:
- over-engineer controls that stall learning, or
- move too fast and make decisions they later struggle to defend.
A more realistic approach recognises that adoption here is pragmatic, incremental and shaped by judgement rather than enthusiasm. The firms that navigate this well are not the ones with the most AI usage but the ones that can clearly explain where they draw the line, why they draw it there, and how that choice reflects their professional standards.
That clarity matters far more than the tools themselves.
Where this connects to my work
This is the kind of thinking I support when leaders want to move forward with AI without undermining the credibility they’ve spent years building. Not to accelerate adoption at all costs but to help them decide what makes sense in their context, and to do so in a way they can stand behind later.