If you’ve been online in the last fortnight, you’ve probably seen ServiceNow’s “Kevin” memo, the fictional 2028 post-mortem about an enterprise where the AI agents won, the governance team was eliminated, and a single AI governance lead named Kevin spent two years filing risk assessments that were auto-resolved before anyone read them.
It’s an intriguing piece, and it names something most leadership teams have been circling for a year without quite saying out loud: the gap between deploying AI and being able to answer for it.
In our implementation work, we don’t see Kevin. We see Steeve.
Steeve knows what AI is running. He has a list. The list is incomplete, and he knows that too. What he cannot do, on any given Tuesday, is answer four questions in the same sitting:
- Which of these systems are subject to the EU AI Act, and at what risk classification?
- Which of them touch customer data, regulated workflows, or material decisions?
- Who approved each one, against what policy, with what review cadence?
- What measurable outcome is each one producing, and against what baseline?
Each of those questions has an owner. None of them has a system. The answers live in spreadsheets, ticket comments, vendor decks, and the institutional memory of three people.
Steeve’s problem is that AI governance is happening in fragments, by hand, in tools that were never designed to talk to each other. When his CIO asks for a single view, he builds one. It takes two weeks. By the time it’s done, three of the underlying numbers have already moved.
This is the actual decision-maker problem in 2026. Not absent governance. Fragmented governance, performed by hand, by people whose primary job is something else.
Why the Gap Is Widening, Not Closing
A few things are true at once this year, and they don’t resolve neatly.
AI investment is accelerating. PwC’s 2026 CEO Survey reports that 56% of CEOs see no measurable financial return from the past year of AI spend, and yet budgets for agentic AI are increasing at most large enterprises through this fiscal year. Boards are pressing for value. Regulators are pressing for documentation. The two pressures are not always pointed in the same direction.
The EU AI Act reaches full enforcement on August 2, 2026. High-risk systems require completed conformity assessments, technical documentation, and EU database registration. Penalties run up to €35 million or 7% of global turnover. Sector-specific obligations from the FCA, FDA, and equivalent bodies layer on top. None of this is news to any leadership team. What is news is how little of the underlying work has been done at organizations that assumed they’d get to it.
The governance that worked in 2024 was a checklist. The governance that works in 2026 has to be a continuous system. That is the actual transition underway. Most enterprises are partway through it and don’t have a clean way to finish.
Four Things Any Enterprise Needs to Answer for Its AI
Strip the marketing layer off, and the requirements aren’t complicated. They’re just hard to deliver consistently across a sprawling estate of models, agents, vendors, and use cases.
A single, current inventory
One place that lists every model, agent, dataset, and prompt in use — internally built, hyperscaler-sourced, or embedded in a SaaS product. Updated by workflow, not by quarterly survey. This sounds basic. Almost no large enterprise has it.
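To make “one place that lists every model, agent, dataset, and prompt” concrete, here is a minimal sketch of what a single inventory record might carry. The class and field names are illustrative assumptions, not ServiceNow’s CMDB schema:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

# Illustrative inventory record. Field names are hypothetical,
# not ServiceNow's CMDB data model.
@dataclass
class AIAsset:
    name: str
    kind: str                      # "model" | "agent" | "dataset" | "prompt"
    source: str                    # "internal" | "hyperscaler" | "embedded-saas"
    owner: str                     # an accountable person, not a team alias
    business_services: list[str] = field(default_factory=list)
    risk_class: Optional[str] = None   # filled at intake, e.g. "high"
    last_reviewed: Optional[date] = None

inventory = [
    AIAsset("invoice-triage-agent", "agent", "internal", "j.doe",
            business_services=["accounts-payable"], risk_class="limited",
            last_reviewed=date(2026, 1, 15)),
    AIAsset("cv-screening-model", "model", "embedded-saas", "hr.lead",
            business_services=["recruitment"]),  # no risk class yet: a gap
]

# A "current" inventory is one you can query, not one you re-survey:
unclassified = [a.name for a in inventory if a.risk_class is None]
print(unclassified)  # ['cv-screening-model']
```

The point of the structure is the last two lines: once the record exists as data, “which assets have no risk classification?” is a query, not a two-week project.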
Risk classification at the point of intake
New AI requests scored against NIST AI RMF and the EU AI Act before they’re built or procured — not after they’re in production. This is the difference between governance as a gate and governance as an audit. The latter is too late.
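The intake gate can be sketched in a few lines. The tiers, rules, and field names below are a simplified illustration of EU AI Act-style classification, not legal guidance and not AICT’s actual scoring logic:

```python
# Hypothetical intake classifier: maps a few request attributes to an
# EU AI Act-style risk tier before anything is built or procured.
# The rule sets here are illustrative, not a legal determination.

PROHIBITED_USES = {"social-scoring", "realtime-biometric-id"}
HIGH_RISK_DOMAINS = {"employment", "credit", "education", "law-enforcement"}

def classify_intake(use_case: str, domain: str,
                    interacts_with_humans: bool) -> str:
    if use_case in PROHIBITED_USES:
        return "prohibited"   # cannot proceed to build
    if domain in HIGH_RISK_DOMAINS:
        return "high"         # conformity assessment required before build
    if interacts_with_humans:
        return "limited"      # transparency obligations apply
    return "minimal"

# Routing happens before development begins: governance as a gate.
tier = classify_intake("cv-screening", "employment", True)
print(tier)  # high
```

A request classified “high” at intake is routed into a conformity workflow before a line of code exists, which is exactly the gate-versus-audit distinction above.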
A visible link between AI initiatives and business outcomes
Every active AI initiative is tied to a measurable business goal — revenue, cost, cycle time, customer experience — with a baseline and a review cadence. This is what turns a portfolio of pilots into a portfolio you can defend at a board meeting.
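A minimal sketch of the initiative-to-outcome link, using hypothetical field names rather than Strategic Portfolio Management’s data model:

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative shape of an initiative tied to a measurable goal.
# Names are hypothetical, not a ServiceNow schema.
@dataclass
class Initiative:
    name: str
    goal_metric: str           # e.g. "avg. claim cycle time (days)"
    baseline: Optional[float]  # measured before build, or the link is hollow
    current: Optional[float]
    review_days: int = 90

def defensible(i: Initiative) -> bool:
    # An initiative you can defend at a board meeting has a baseline
    # and a current number to compare against it.
    return i.baseline is not None and i.current is not None

portfolio = [
    Initiative("claims-summarizer", "avg. claim cycle time (days)", 11.2, 8.9),
    Initiative("chat-pilot", "CSAT", None, None),  # a pilot, not yet a portfolio item
]
print([i.name for i in portfolio if not defensible(i)])  # ['chat-pilot']
```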
Proof of oversight, on demand
A documented, exportable record of who approved what, when, against which policy. The kind of record that a regulator or an auditor can read without an explanation. The kind of record that doesn’t require a two-week project to assemble.

Most enterprises can produce one or two of these on a good day. The work is in producing all four, repeatably, without a person manually stitching it together.
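As an illustration of “exportable, on demand”: the approval record and one-call CSV export below are hypothetical shapes, not a regulator’s required format or AICT’s export API:

```python
import csv
import io

# Hypothetical approval records. The point is that the record is
# structured data, so export is a function call rather than a
# person stitching spreadsheets together. Field names are illustrative.
approvals = [
    {"asset": "invoice-triage-agent", "approved_by": "cio.office",
     "policy": "AI-POL-007", "approved_at": "2026-01-15T10:00:00Z",
     "next_review": "2026-04-15"},
]

def export_csv(records: list[dict]) -> str:
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=records[0].keys())
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()

# One function call, not a two-week project:
print(export_csv(approvals).splitlines()[0])
# asset,approved_by,policy,approved_at,next_review
```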
Where ServiceNow AI Control Tower Fits
ServiceNow AI Control Tower (AICT) is a deliberate attempt to operationalize those four requirements inside one platform. It’s worth being clear about what it is and isn’t.
It is not a model-monitoring tool. It is not a substitute for the governance frameworks themselves. It is the system of record that makes those frameworks executable across an enterprise AI estate: connecting AI assets, the business services they touch, the policies that apply to them, and the outcomes they produce.
Inventory: AI assets are onboarded through a structured intake workflow and cataloged in the CMDB. As of the March 2026 Yokohama release, AICT connects via Service Graph Connectors to AWS Bedrock, Azure AI Foundry, Google Vertex AI, and AI embedded in Salesforce, so a model deployed outside ServiceNow shows up in the same inventory as one deployed inside it. The same release added governance over MCP server access for ServiceNow AI agents, which is a meaningful detail for any organization deploying agentic workflows.
Risk and compliance: NIST AI RMF and EU AI Act controls are embedded into the lifecycle, not bolted on at audit time. The Q1 2026 platform update introduced risk-based classification at intake, which means an AI request can be scored and routed before development begins. Dormant agents are surfaced. Privileged role usage is monitored. Regional data routing policies are enforced.
Strategic alignment: AICT integrates with Strategic Portfolio Management, so AI initiatives sit in the same prioritization framework as every other strategic investment. Each use case carries an outcome and a baseline before it goes into build. This is where most AI portfolios lose the thread, and it’s where AICT does some of its most useful work.
Value visibility: The Value Dashboard consolidates ROI, productivity, cost avoidance, and risk reduction into a single view, drillable from portfolio to individual model. The view is exportable. That last word is the one that matters when a board paper is due on Friday.
None of this is unique in concept. The CMDB-backed architecture is what makes it different in execution. When an AI asset is registered in the CMDB, it inherits its connection to the business services and processes it touches. A risk flag becomes a business decision, not a technical alert. That linkage is what most point solutions and hyperscaler dashboards can’t produce, because they don’t sit on the underlying enterprise service graph.
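A toy sketch of why that linkage matters: given even a minimal dependency map (the structures here are illustrative, not the CMDB’s actual model), a flagged asset can be traced to the business services it touches, which is what turns a technical alert into a business decision:

```python
# Toy service graph: asset -> downstream services it depends on or feeds.
# Illustrative only, not ServiceNow's service graph implementation.
service_graph = {
    "cv-screening-model": ["recruitment"],
    "recruitment": ["hiring-managers", "hr-compliance"],
}

def blast_radius(asset: str, graph: dict[str, list[str]]) -> set[str]:
    # Walk downstream dependencies to find every business service a
    # flagged asset actually touches.
    seen: set[str] = set()
    stack = [asset]
    while stack:
        node = stack.pop()
        for dep in graph.get(node, []):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen

print(sorted(blast_radius("cv-screening-model", service_graph)))
# ['hiring-managers', 'hr-compliance', 'recruitment']
```

A point solution that sees only the model can report “risk flag on cv-screening-model”; a system sitting on the service graph can report “risk flag affecting recruitment and HR compliance,” which is a sentence a board can act on.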
Forrester’s 2026 survey of 500 enterprises deploying AI agents found that 71% lack a formal governance framework for those agents, and 64% plan to increase agent autonomy within twelve months. That gap is where most of the next two years of regulatory and audit risk will land. AICT is one of the few platforms positioned to close it, running on infrastructure many of these organizations already operate.
How iOPEX Maximizes Impact and ROI for Enterprises
AICT is a platform. Like any platform, it produces value in proportion to how well it’s implemented, which is where partner choice starts to matter.
As a leading ServiceNow partner, iOPEX has operated AI governance environments for organizations at the inflection point Steeve represents: AI estates that have outgrown their original governance model, compliance obligations that are arriving faster than internal teams can prepare for, and leadership that needs a credible answer ready before the next audit cycle.
Practically, that means four things:
- Assessment of the current AI estate, including the assets that aren’t on anyone’s list yet, and a governance gap analysis against the obligations specific to your sector.
- AICT design and implementation in your existing ServiceNow environment, with intake workflows, risk frameworks, and review cadences configured for how your organization actually operates.
- Integration with your hyperscaler AI services, embedded SaaS AI, and any in-house models or agents, so the inventory is complete, not just the parts that were easy to connect.
- Executive reporting designed for board, audit, and regulator review, the kind that holds up without an internal explainer attached.
We don’t pitch AI governance as a transformation program. For most of the organizations we work with, it’s a cleanup job followed by discipline. The first part takes a quarter. The second part takes a culture.
Next Step
Can your organization answer the four questions — inventory, risk, alignment, outcomes — in the same sitting, with the same data, exported in a format an auditor would accept?
If yes, you’re further along than most. If no, the gap is unlikely to close on its own, and the August 2026 enforcement date is closer than the next planning cycle.
If a thirty-minute conversation about where you stand and what closing the gap would look like is useful, we’re happy to have one. No deck. No commitment. Just a conversation about where your AI governance is today, and where it should be.