Content operations
AI doesn’t remove the need for editorial systems. It makes their absence noticeable. And their presence more necessary than ever.
Content teams integrating AI face two failure modes.
Automate broken processes and you scale chaos. Replace editorial judgement with automation and you degrade quality.
AI can serve editorial expertise rather than displace it – enforcing standards, removing administrative friction, flagging drift.
But only if the operations architecture is designed that way from the start.
Not bolted on. Not replacing the humans who understand tone, context, and quality.
This is the diagnostic and blueprint for content teams that want AI to work within their editorial standards, not around them.
For Heads of Content, Content Operations Directors, VPs Content, and Editorial Directors responsible for integrating AI without losing what makes their content good. Also relevant for content agency founders navigating the same challenge on behalf of clients.
Before the automation decision
Three questions your content operations need to answer
Scaling broken processes looks like contradictory brand voice in three markets simultaneously. Degraded editorial quality looks like published content that contradicts your own policy – authored by a system that had no way to know.
- Do you have documented standards for AI to enforce?
- Are your workflows clear enough for automation to sit inside?
- Is there a codified understanding of what ‘good’ means – or does it live in people’s heads?
If the system can’t see it, it can’t enforce it.
The cost of automating content ops without diagnosing where the infrastructure won’t support it isn’t just inefficiency – it’s scaling the speed of the chaos.
Book a 30-Minute Strategy Conversation. No pitch. Honest assessment of whether this diagnostic fits your team’s situation.
The engagement
Three phases. One operational blueprint.
Phase 1
Content Infrastructure Diagnostic™
Context first
What exists and why?
The context lens establishes the factual baseline – what content exists, what it’s for, and how it came to be.
Purpose
The commercial function the content serves: brand, proposition, utility, or transaction. What it is supposed to make someone think, feel, or do.
Provenance
Who owns it. Which business unit, team, or function is accountable for its accuracy and currency.
Process
How it came to exist – team by team, stage by stage. The decisions, workflows, and standards that moved content from brief to publication to maintenance.
Quality second
How is it performing?
That content estate is assessed across three infrastructure layers:
Substance
What is true, what exists, what is said – and whether it is accurate, complete, and consistent across the organisation.
Structure
How meaning is encoded and retrieved – taxonomy, metadata, information architecture, and retrieval design.
Governance
Where ownership and accountability break down. Where workflow and standards exist in policy but not in practice. Where quality cannot be maintained at scale.
What you receive: A mapped picture of your content operations as they actually function – lifecycle stages, ownership, process gaps, and the quality assessment across Substance, Structure, and Governance. The foundation every phase that follows is built on.
Phase 2
Opportunity Mapping
Phase 2 makes one decision explicit: what stays human, what gets automated, and why. Content lifecycle opportunities are mapped against the Phase 1 constraints, with improvements identified, ranked, and categorised across four dimensions:
- Roles & responsibilities – who owns what at each lifecycle stage; where accountability is explicit and where it’s assumed.
- Standards, guidelines & SOPs – what’s documented; what lives in people’s heads; what exists in policy but not in practice.
- Tools – the full ecosystem per lifecycle stage, from spreadsheet to CMS to AI-capable platform; what’s connected and what isn’t.
- Training & capability – skills gaps in human content governance and AI orchestration; where the team can execute and where it can’t yet.
What you receive: An opportunity map showing where automation is viable now, where infrastructure work must come first, and what that work looks like – broken out across roles, standards, tools, and capability.
Phase 3
Content Ops Model Specs
Findings from Phase 2 resolve into a unified set of specifications for your content operations, deliberately dividing lifecycle management between human and AI ownership. AI agent specifications are produced only for departments and contexts where the Phase 1 diagnostic confirms sufficient content infrastructure quality. Where the Content Infrastructure Diagnostic surfaced weaknesses, the recommendation is sustained human governance and targeted infrastructure improvements until that threshold is met.
The output
- Role definitions & accountability matrix
- Workflow specs per lifecycle stage
- Codified standards & editorial policies
- Governance calendar & review cycles
- Agent scope & capability definitions
- Skills specifications
- Business context files
- Automated workflow specs
- Guardrails & escalation logic
What you receive: A complete operational blueprint: role definitions, workflow specifications, codified standards, and AI system specifications for every context where the diagnostic confirms the infrastructure is ready.
Typical engagement: 6–8 weeks from kick-off to architecture delivery.
Your time investment: five to ten stakeholder interviews across content-creating teams, plus access to existing standards documentation and workflow artefacts. I do the analysis; you provide access.
The specification principle
Context engineering, at team scale
AI systems aren’t innately intelligent or unintelligent, reliable or unreliable. They’re only as good as the operating context they work from.
Individual AI users have been doing this for some time: documenting operational scope, specifying agent capabilities, defining guardrails and escalation rules – the context engineering that makes AI systems reliable rather than unpredictable. Phase 3 applies that principle at organisational scale. The same questions – what is the scope, what are the rules, who does what, within what parameters – answered for a content team’s full operations rather than a single workflow.
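To make the shape of such a specification concrete, here is a minimal sketch in Python. Everything in it is illustrative – the agent name, lifecycle stages, actions, and escalation target are assumptions invented for the example, not deliverables from the engagement. The point is the structure: scope, rules, and parameters declared explicitly, with anything outside them routed to a human.

```python
from dataclasses import dataclass


@dataclass
class AgentSpec:
    """Illustrative agent specification: explicit scope, allowed actions,
    and a named human owner for everything that falls outside them."""
    name: str
    scope: set[str]            # lifecycle stages the agent may operate in
    allowed_actions: set[str]  # what it is permitted to do within that scope
    escalate_to: str           # the human role that handles out-of-scope work

    def authorise(self, stage: str, action: str) -> tuple[bool, str]:
        """Return (allowed, owner). Out-of-scope requests escalate."""
        if stage in self.scope and action in self.allowed_actions:
            return True, self.name
        return False, self.escalate_to


# Hypothetical example: a metadata agent confined to tagging and flagging.
spec = AgentSpec(
    name="metadata-agent",
    scope={"publication", "maintenance"},
    allowed_actions={"tag", "flag_for_review"},
    escalate_to="editorial-lead",
)

print(spec.authorise("maintenance", "tag"))      # within spec: the agent acts
print(spec.authorise("maintenance", "rewrite"))  # outside spec: escalates
```

The design choice the sketch illustrates is the guardrail logic: the agent never decides its own boundaries. Anything not explicitly granted escalates by default to a named human role.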
About
More than fifteen years working inside enterprise organisations – Meta, Google, Grundfos, Pret, UK Government Digital Service – watching the same pattern repeat. Significant investment in AI, digital, and content transformation. The content layer that determines what any of it can do is never independently assessed before the money is spent.
Humans used to compensate for broken content systems. They reconciled contradictory information, navigated broken taxonomy, called support when content fell short. AI doesn’t compensate. It executes. It doesn’t resolve ambiguity. It scales it. The ceiling doesn’t disappear. It becomes visible – usually at the worst possible moment.
The diagnostic is tool-agnostic and vendor-independent. Its conclusion is equally likely to be ‘not yet’ as ‘invest now.’ That’s what makes it useful.
No pitch. A conversation about how AI can serve your editorial standards rather than bypass them.