Data Readiness is a Journey and Governance is the Engine

March 17, 2026

Our expert, Marcus Norrgren, explains that as AI systems become probabilistic, agentic, and autonomous, governance can no longer rely on manual processes or periodic checks. It needs to function continuously, in real time, and across the full data and AI estate.

Marcus Norrgren

Portfolio Lead, Data and AI, Sogeti Sweden

There’s a pattern I keep seeing in enterprise environments. Governance exists – on paper. Policies, glossaries, quality thresholds: they’re all documented, when they exist at all. But once new AI capabilities enter the data stack, governance is the first thing that struggles to scale.

Governance has traditionally lived beside engineering workflows, not inside them. It’s been something you audit after the fact – a manual step upheld by individuals who care enough to maintain it. And if they move on, or look away, the process unravels, or simply becomes too slow with too many hoops to jump through.

AI systems are moving into operational workflows, which raises the stakes. Human users, BI systems, and AI agents are all consuming from the same data estate. If they don’t share the same definitions, infrastructure, and interfaces, inconsistency will build fast.

It’s an architecture conversation. Governance has to become part of the engine driving us to stable, sustainable AI systems.

The shared foundation under human intelligence, BI, and AI

Three distinct consumers now rely on the same infrastructure:

  • BI systems need structure. They depend on dimensional models – clear metrics, consistent definitions.
  • AI systems need semantics AND structure. They parse graphs, embeddings, and operate probabilistically and at speed, with tradeoffs.
  • Humans need trust and shared understanding. They want to know where an insight came from, and what it actually means, and also what it means to others.

All three must be aligned, and that’s why ontology is critical. It connects raw data structures to business meaning. It acts as a translator between structured tables, graph representations, and human language. AI governance should ensure that every system operates against the same semantic layer.

So how do we move in that direction?

1. Move governance into engineering

To support AI at scale, governance must shift left and be automated. Instead of acting as an external check, it should be embedded where data is produced and consumed.

Data contracts are one of the most influential ways to achieve this. They define schema, semantics, and quality constraints at the source. They are programmatic and testable.
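As a sketch of what “programmatic and testable” can look like, here is a minimal data contract in Python. The contract name, fields, and thresholds are all illustrative assumptions, not from any particular platform:

```python
from dataclasses import dataclass

# A minimal, testable data contract: schema, semantics, and quality
# constraints declared at the source. All names are illustrative.
@dataclass
class DataContract:
    name: str
    schema: dict              # column -> expected Python type
    required: set             # columns that must be non-null
    max_null_ratio: float = 0.0

    def validate(self, records: list[dict]) -> list[str]:
        """Return a list of violations; an empty list means the batch passes."""
        violations = []
        for i, rec in enumerate(records):
            for col, typ in self.schema.items():
                if col not in rec:
                    violations.append(f"row {i}: missing column '{col}'")
                elif rec[col] is not None and not isinstance(rec[col], typ):
                    violations.append(f"row {i}: '{col}' is not {typ.__name__}")
        for col in self.required:
            nulls = sum(1 for rec in records if rec.get(col) is None)
            if records and nulls / len(records) > self.max_null_ratio:
                violations.append(f"column '{col}' exceeds null threshold")
        return violations

# Hypothetical contract for an 'orders' feed.
orders_contract = DataContract(
    name="orders",
    schema={"order_id": str, "amount": float},
    required={"order_id"},
)
print(orders_contract.validate([{"order_id": "A1", "amount": 9.5}]))  # []
```

Because the contract is code, it can run in CI against sample batches and in production against live ones – the same definition enforced in both places.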

Think of streaming environments where data flows continuously in real time: governance cannot be a batch checkpoint. It has to operate on the wire. Data that meets readiness thresholds should flow to consumers; data that does not should be routed to a controlled queue for review.

The mindset change is key: data readiness is not something we want to certify quarterly. It’s something to calculate continuously.
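One way to picture on-the-wire enforcement is a router that scores each record as it arrives and sends it either to consumers or to a review queue. The scoring function, field names, and the 0.9 threshold below are illustrative assumptions:

```python
from collections import deque

READY_THRESHOLD = 0.9  # illustrative cutoff, tuned per domain

def readiness_score(record: dict) -> float:
    """Toy readiness metric: fraction of expected fields that are present
    and non-null. Real checks would also cover types, ranges, freshness."""
    expected = ("order_id", "amount", "currency")
    present = sum(1 for f in expected if record.get(f) is not None)
    return present / len(expected)

consumers, review_queue = [], deque()

def route(record: dict) -> None:
    # Governance as a streaming step, not a batch checkpoint:
    # readiness is computed per record, continuously.
    if readiness_score(record) >= READY_THRESHOLD:
        consumers.append(record)
    else:
        review_queue.append(record)

route({"order_id": "A1", "amount": 9.5, "currency": "EUR"})   # flows through
route({"order_id": "A2", "amount": None, "currency": "EUR"})  # quarantined
```

The point is not the scoring formula but where it runs: inside the pipeline, on every record, rather than in a quarterly certification exercise.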

Governance belongs inside the data stack

Treating governance as a layer applied after the fact does not scale. Embedding governance into the data stack itself – alongside models, pipelines, APIs, and agents – so that it runs as part of normal system behaviour is what makes control sustainable.


2. Observability: Make value visible

As AI agents generate outputs, we need to understand the context behind them:

  • Which ontology version was in use?
  • Which sources passed contract checks?
  • What state was the model in?

If “lineage” means just logging prompt text, we’re still in documentation mode. That’s brittle and hard to scale. Logging prompts has its uses for training, improvement, and understanding, but not for fast, on-the-wire automated governance.

Instead, lineage and context could be treated as a mathematical anchor – embeddings, graph traversals, version hashes – which scales far better. You could create a point-in-time representation of the decision space: reconstruct logic, measure semantic drift, or set thresholds that trigger human intervention. Governance becomes quantitative, and therefore more observable.
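As a toy illustration of that idea: hash the exact ontology version, contract, and model state into a lineage fingerprint, and measure semantic drift as cosine distance between embeddings. The vectors below are stand-ins, not the output of a real embedding model, and the threshold is invented:

```python
import hashlib
import json
import math

def lineage_hash(ontology_version: str, contract: dict, model_state: str) -> str:
    """Immutable fingerprint of the decision context: which ontology,
    which contract, and which model state produced an output."""
    payload = json.dumps(
        {"ontology": ontology_version, "contract": contract, "model": model_state},
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode()).hexdigest()

def semantic_drift(a: list[float], b: list[float]) -> float:
    """Cosine distance between two embeddings (0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

DRIFT_THRESHOLD = 0.15  # illustrative: beyond this, escalate to a human

baseline = [0.9, 0.1, 0.2]  # stand-in embedding of the approved definition
current = [0.2, 0.9, 0.1]   # stand-in embedding of today's usage
if semantic_drift(baseline, current) > DRIFT_THRESHOLD:
    print("semantic drift exceeds threshold: trigger human review")
```

The hash makes the context reproducible and comparable; the distance makes drift a number you can alert on instead of a narrative you have to read.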

Automation enables human accountability

Automated governance mechanisms handle what does not scale for humans, while thresholds and scoring determine when human oversight is required. This preserves accountability while making governance practical at enterprise scale.

Then consider the human dimension. Manual governance does not scale indefinitely. Repetitive validation and consistency checks are not a good use of human attention, especially at high frequency. Fatigue sets in. But automation can handle predictable enforcement. Humans are better positioned to adjust definitions, interpret ambiguity, and resolve edge cases when the machine asks a question it’s never asked before.
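A minimal sketch of that division of labour: automated checks handle the predictable cases at both ends of the quality spectrum, and only records whose score falls into an ambiguous middle band reach a person. The bands are invented for illustration:

```python
def triage(quality_score: float) -> str:
    """Automated enforcement for clear cases; human attention only for
    the ambiguous middle band. Band boundaries are illustrative."""
    if quality_score >= 0.95:
        return "auto-accept"
    if quality_score < 0.50:
        return "auto-quarantine"
    return "human-review"

for score in (0.99, 0.70, 0.30):
    print(score, triage(score))
```

Widening or narrowing the middle band is an explicit, auditable dial on how much human attention the system consumes.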

And if a human refines a data product, they should see an immediate, tangible return.

3. Make it practical with modular sovereignty

A governance model that depends entirely on one large, general-purpose model is difficult to stabilize over time. But a layered approach would be more durable.

Engineers across disciplines acknowledge a time-honored truth: never break the interface or the system boundary. Lean into that engineering wisdom for the data stack.

At the base – strict, deterministic contracts – rules that must be followed. Above that, specialized models to handle classification, anomaly detection, or narrow decisions. Evaluation mechanisms to flag out-of-bounds responses. Scope boundaries to keep agents focused and contained, even as underlying models evolve.
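The layering might be wired together like this. Every function and label below is a hypothetical stand-in; the point is only that the deterministic base always runs first, and each layer wraps the next behind a stable interface:

```python
class ContractViolation(Exception):
    """Raised by the deterministic base layer: rules that must be followed."""

def deterministic_check(record: dict) -> dict:
    """Base layer: strict contract enforcement, no probabilistic judgement."""
    if not isinstance(record.get("text"), str):
        raise ContractViolation("record must carry a string 'text' field")
    return record

def narrow_classifier(record: dict) -> dict:
    """Middle layer: stand-in for a small, specialized model doing one job."""
    record["label"] = "complaint" if "refund" in record["text"].lower() else "other"
    return record

def evaluate(record: dict) -> dict:
    """Top layer: flags out-of-bounds responses instead of passing them on."""
    if record["label"] not in {"complaint", "other"}:
        record["escalate"] = True
    return record

def pipeline(record: dict) -> dict:
    # Deterministic rigor first, probabilistic layers above it; the
    # interface of each layer stays fixed even as its internals evolve.
    return evaluate(narrow_classifier(deterministic_check(record)))
```

Swapping the classifier for a better model changes nothing for the contract layer below it or the evaluator above it – which is exactly the containment the article argues for.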

Deterministic rigor and probabilistic intelligence must coexist

AI-driven systems require strong deterministic foundations – clear data contracts, strict interfaces, non-negotiable constraints – combined with probabilistic mechanisms that monitor drift, relevance, and risk in dynamic environments.

This modular architecture allows new capabilities to emerge without breaking foundational controls. It’s scalable, adaptable, and most importantly, survivable.

The heart of the matter

The more I think about AI systems operating continuously and dynamically – contextual, always running – I wonder if we can be inspired by natural rhythms and frames to visualize the whole picture. Imagine:

  • Data contracts – the skeleton – structured, programmatic enforcement of schema and types of data
  • Mathematical lineage – the DNA – an immutable record of state and origin
  • Specialized AI – the reflexes – for fast, narrow-scope reactions
  • Human overseer – the consciousness – guiding direction, ethics, and value
  • Governance agents – the heartbeat – to monitor system health in real time, and power it too.

If governance is going to support AI at scale, it has to operate with the same continuity as the systems it governs.

It need not mean redesigning everything at once. Start in one domain. One contract, one streaming pipeline. Make it observable. Make it measurable. Let it run – and expand from there.

Governance could just be the engine you need to keep your data-AI readiness moving.
