AI Strategy · Operating Model · Value Realization
Designing AI Centres of Excellence that actually deliver
A practical operating model for enterprise AI CoEs that move from pilot purgatory to durable, measurable value.
The pattern most enterprises hit
Almost every Centre of Excellence I review has the same shape: a small team trying to do governance, platform engineering, and delivery at the same time. Each function has different success criteria, different stakeholders, and different cadences. Forcing all three into one pod is why throughput stalls within twelve months.
A simpler operating model
Three pods, one shared backlog:
- Governance. Owns risk, model cards, evaluation harnesses, and the responsible-AI bar. Reports into a steering committee.
- Platform. A thin, opinionated team that runs the shared LLM gateway, evaluation tooling, and golden paths. Centrally funded.
- Delivery. Embedded squads paid for by the business unit they serve. Use the platform; don’t rebuild it.
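One way to make the split concrete is the contract between the pods: governance owns the risk tiers, the platform pod owns the gateway routing, and delivery squads only ever call the gateway. A minimal sketch of that boundary, with all names (`ROUTES`, `route`, the tier labels) purely illustrative:

```python
from dataclasses import dataclass

# Hypothetical routing table. Governance defines the risk tiers;
# the platform pod maps tiers to approved models and logging policy.
ROUTES = {
    "low":    {"model": "small-fast-model", "logging": "sampled"},
    "medium": {"model": "general-model",    "logging": "full"},
    "high":   {"model": "approved-model",   "logging": "full", "human_review": True},
}

@dataclass
class GatewayRequest:
    use_case: str
    risk_tier: str  # assigned at the value gate, not chosen by the caller

def route(req: GatewayRequest) -> dict:
    """Return the gateway policy for a request; unknown tiers fail closed."""
    if req.risk_tier not in ROUTES:
        raise ValueError(f"unknown risk tier: {req.risk_tier!r}")
    return ROUTES[req.risk_tier]

print(route(GatewayRequest("invoice-triage", "high")))
```

The point of the sketch is the fail-closed default: a delivery squad cannot route around governance by inventing a tier, which is exactly the leverage a thin, centrally funded platform gives you.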
How to measure it
The most useful metric is idea-to-production cycle time for use cases that pass the value gate. Pilot count is a vanity metric. Production cycle time is the only number that correlates with ROI in my client portfolio.
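The metric is cheap to compute if you record two dates per use case: when it passed the value gate and when it reached production. A minimal sketch, with the records and field names invented for illustration:

```python
from datetime import date
from statistics import median

# Illustrative records: (use case, value-gate pass date, production date or None).
use_cases = [
    ("invoice-triage",  date(2024, 1, 10), date(2024, 3, 5)),
    ("support-copilot", date(2024, 2, 1),  date(2024, 4, 20)),
    ("forecasting",     date(2024, 2, 15), None),  # still in pilot: excluded
]

def cycle_times_days(cases):
    """Idea-to-production cycle time in days, only for use cases that shipped."""
    return [(prod - gate).days for _, gate, prod in cases if prod is not None]

times = cycle_times_days(use_cases)
print(f"median cycle time: {median(times)} days over {len(times)} shipped use cases")
```

Note that unshipped pilots drop out of the numerator entirely, which is what stops the metric from collapsing back into pilot counting.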