Enterprise software has had a remarkable thirty years. We digitized state: what the business owns, who it employs, what it ships. We digitized identity: who’s acting, what role they hold, which systems they can reach. We digitized activity: tasks, workflows, approvals, sequences, escalations. We built systems of record, systems of engagement, and systems of intelligence on top of all three.
One thing was never digitized: the decision as a primitive. Not what happened (that’s a log entry). Not who did it (that’s an identity record). Not the steps that led to it (that’s a workflow). The decision: why it was valid, under which conditions, by what authority, within what scope, and what it actually authorizes.
The gap has always been there. It has been survivable, but it has never been free. Entire industries formed to compensate. Business process outsourcing, shared services, consulting engagements that exist not because the work is complex, but because the systems can’t carry the judgment the work requires. People became the load-bearing wall between software that couldn’t represent commitment and organizations that couldn’t operate without it.
The failure shows up at the commitment boundary: the moment the organization becomes bound. Money moves. Access changes. Terms are set. And no system in the stack can prove whether the decision behind that commitment was well-formed.
For decades, the industry has tried to close this gap. Three tracks of investment, each more sophisticated than the last. Each made progress. None arrived.
“If We Assemble Enough Context, Better Decisions Will Follow”
Data warehouses gave way to data lakes, then data catalogs, then data mesh, then knowledge graphs. Each generation solved a real limitation of the last. More data became accessible. More relationships became queryable. More context became available.
But more context doesn’t produce a decision. It produces a bigger input to a judgment that still has no structure, no closure criteria, and no connection to what it authorizes. The 15% discount is visible in the CRM. But the margin policy that constrained it, the delegation limit that bounded it, the segment classification that enabled it, the strategic exception that justified it: none of that is in the data. You can warehouse it, lake it, catalog it, mesh it, and graph it. The reasoning still isn’t there, because it was never captured as data in the first place.
A parallel track tried a different fix: if the data we have is incoherent, maybe the problem is meaning. Master data management. Enterprise ontologies. Canonical schemas. The semantic web. If “revenue” meant the same thing everywhere, if “customer” had one definition, if “approved” carried a single unambiguous interpretation, then coherence would follow and decisions could be evaluated against a shared truth.
This is correct in theory. It is impossible in practice at enterprise scale.
“Active customer” legitimately means different things to Sales, Support, Finance, and Legal. Not because anyone is wrong, but because they’re answering different questions. “Revenue” changes meaning across accounting standards, jurisdictions, reporting periods, and business contexts. A twenty-year trail of abandoned MDM initiatives, stalled semantic web projects, and enterprise ontology programs that never converged is evidence enough. The approach is intellectually sound and operationally ruinous. You can’t pause the business until everyone agrees on meaning, and even when they do, the definitions drift faster than they can be maintained.
Now a new concept is gaining traction: context graphs. The idea is compelling: capture decision traces from workflows, stitch them across entities and time, and let structure emerge from how work actually happens. Don’t wait for global agreement on meaning. Learn it from observed patterns. Make “why” a queryable asset.
I wrote about this earlier: context graphs are powerful for discovery and navigation, but “capturing the why” is more ambitious than it sounds, because justification doesn’t fall out of traces. The operational “why” (what made the action allowed, under which definitions, under which authority) has to be bound at the moment of commitment, not reconstructed from exhaust.
What’s happened since is revealing. The term has already fractured into at least four competing interpretations: richer metadata on existing records, a data gravity play, a universal integration and lineage layer, and agent execution traces. Meanwhile, the semantic web community is attempting to fold the entire concept into existing standards for knowledge representation.
Each camp is describing something real. Each one is also extending existing primitives and applying them to a gap those primitives were never designed to address. And none of them are producing a decision. They’re producing richer context, better visibility, more traces, more metadata, all valuable for discovery and navigation. But at the commitment boundary, none of them can tell you whether this decision was well-formed or what it actually authorizes.
That’s what happens when you name a gap without identifying the missing primitive. Everyone extends what they already have.
The data track wasn’t the only attempt.
“If We Know Precisely Who’s Acting, We Can Control What They Do”
Role-based access control gave way to attribute-based access, then federated identity, then zero trust architectures, then fine-grained authorization. Each generation got more precise about who. The identity stack is mature, audited, and battle-tested.
But identity answers: who is this?
The question that matters once a system starts committing is different: what can this actor bind the organization to, on whose behalf, under which constraints, at this point in time?
That’s not identity. It’s authority. And the gap between them is where enterprises get hurt.
A procurement officer’s SSO session doesn’t know about the delegation expiry. A VP’s role membership doesn’t encode the board resolution that changed their signing limit. An acting head’s access group doesn’t track the backfill date. Identity is comparatively stable. Authority is contextual, temporal, and revocable.
Most systems collapse this into a simple assumption: if someone can invoke the system, they’re authorized to do whatever they’re asking. That works for access. It fails for commitment. I wrote about this distinction: identity answers access; authority answers commitment. They’re not the same question, and the gap between them is where accountability collapses.
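The distinction can be made concrete. The sketch below is purely illustrative (the names `AuthorityGrant`, `can_commit`, and all field values are hypothetical, not from any real system): an authority grant is scoped, bounded, and time-limited, unlike a role membership, which only answers who someone is.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical sketch: authority is contextual, temporal, and revocable,
# while identity (role, group, SSO session) is comparatively stable.
@dataclass
class AuthorityGrant:
    principal: str        # who holds the authority
    granted_by: str       # on whose behalf (the delegation source)
    scope: str            # what kind of commitment it covers
    limit: float          # the maximum amount it can bind
    expires_at: datetime  # delegations lapse; access groups don't

def can_commit(grant: AuthorityGrant, actor: str, scope: str,
               amount: float, at: datetime) -> bool:
    """Not 'can this actor invoke the system' but
    'can this actor bind the organization to this, right now'."""
    return (
        grant.principal == actor
        and grant.scope == scope
        and amount <= grant.limit
        and at < grant.expires_at  # the check identity systems never make
    )

grant = AuthorityGrant(
    principal="procurement_officer_7",
    granted_by="vp_finance",
    scope="purchase_order",
    limit=50_000.0,
    expires_at=datetime(2025, 6, 30, tzinfo=timezone.utc),
)

# Same actor, same role, same SSO session; only the date differs.
before = can_commit(grant, "procurement_officer_7", "purchase_order",
                    40_000.0, datetime(2025, 6, 1, tzinfo=timezone.utc))
after = can_commit(grant, "procurement_officer_7", "purchase_order",
                   40_000.0, datetime(2025, 7, 1, tzinfo=timezone.utc))
print(before, after)  # True False
```

The access check passes on both dates; the commitment check does not. That gap is invisible to an identity stack, however fine-grained.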
A third track tried to solve the problem through automation.
“If We Automate the Workflow, Governance Will Follow”
Business process management gave way to case management, then robotic process automation, then workflow orchestration, then observability platforms, then agent frameworks. Each generation handled more complexity. Each one automated more of the sequence.
None of them governed the judgment at the commitment boundary.
Consider what a workflow actually captures. A request enters the system. It routes through three approvers. Each one clicks “approve.” The workflow completes. Every step is green. Every box is ticked.
Now ask: what question did each approver actually close? What did they verify? What would have made them reject it? Were they checking the same thing, or three different things? Did the second approver review the first approver’s reasoning, or just see that the first step was marked complete? Did the third approver have the delegation authority to approve this specific commitment at this specific dollar amount under the policy in force at that moment?
The workflow can’t answer any of this. It knows a sequence of steps completed. It doesn’t know whether the judgment at each step was sound, or even what the judgment was.
Tasks compress judgment into vague verbs. “Reviewed.” “Approved.” “Handled.” The task completes. The bar stays implicit. The distinction matters: a closed task is not a closed question. An approval click is not a decision record. Activity is not justification.
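The contrast between a completed step and a closed question can be put side by side. A minimal sketch with hypothetical field names (neither record is drawn from a real product):

```python
# What a workflow typically stores: activity, not justification.
workflow_step = {
    "step": "approval_2",
    "actor": "j.doe",
    "action": "approved",  # a vague verb; the bar stays implicit
    "timestamp": "2025-03-04T10:12:00Z",
}

# What a closed question would store: the criteria the approver actually
# checked, the evidence checked against them, and what would have
# flipped the answer.
closed_question = {
    "question": "Is the 15% discount within delegated margin policy?",
    "criteria": [
        "deal margin >= 22% after discount",
        "customer classified in strategic segment",
        "approver delegation covers discounts up to 20%",
    ],
    "evidence": {"deal_margin": 0.24, "segment": "strategic"},
    "authority": "sales_vp_delegation_2024_11",
    "would_reject_if": "margin below 22% or delegation expired",
    "answer": "yes",
    "closed_at": "2025-03-04T10:12:00Z",
}

# The first record can only say that the step happened.
# The second can be re-evaluated when conditions drift.
assert "criteria" not in workflow_step
assert closed_question["answer"] == "yes"
```

Three approvers emitting the first kind of record can all be green while checking three different things, or nothing at all. Three approvers emitting the second kind cannot.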
Robotic process automation made the same bet at a different layer. If we can automate what humans do repeatedly, the governance is inherited from the process it mimics. But RPA automated the motions without capturing the reasoning. The robot clicks the same buttons faster. It doesn’t know why the human clicked them, or whether the conditions that made those clicks valid still hold.
Observability tried to close the gap from the other direction: if we can see everything (metrics, traces, logs, dashboards) we can govern what’s happening. But visibility is not validity. You can observe every step of a workflow and still not know whether the decision at the commitment boundary was well-formed. Observability tells you what happened. It doesn’t tell you whether it should have been allowed.
Agent frameworks are the latest iteration. Agents sitting in execution paths, reasoning across systems, emitting traces of their work. This is the context graph thesis made operational: the agent acts, the trace persists, structure emerges. But an agent completing a workflow is still activity, not decision. The trace tells you what the agent did. It doesn’t tell you what made it legitimate. And traces emitted by agents inherit every limitation of the workflow exhaust that came before them: correlation without causation, patterns without authority, activity without closure.
The Pattern
Three tracks. Decades of investment. Billions of dollars in each. And the gap is exactly where it was: at the moment of commitment, no system in the stack can tell you whether this decision was well-formed, whether it’s consistent with how the organization decided before, or what it actually authorizes.
The data track assembled context without producing judgment. The identity track refined access without establishing authority. The process track automated activity without governing commitment.
Each track improved something real. None of them built the thing that was actually missing.
Most systems capture artifacts of a decision: an approval click, a log entry, a timestamp, a status change. They do not encode the decision procedure that made it valid. The artifact records that something happened. The primitive would encode why it was justified and what must remain true for it to still apply.
Judgment doesn’t compound in enterprises. Not because people don’t try to document decisions, or because systems don’t capture data, or because workflows aren’t sophisticated. It doesn’t compound because the decision itself, as a structured record (what was decided, why it was valid, under which conditions, by what authority), was never a primitive in the infrastructure.
Precedent can’t transfer when the conditions were never captured. Institutional knowledge disappears when people leave, not because they failed to document, but because the thing they carried was never representable in the system. Every decision starts from scratch because there’s no structure that connects this judgment to the last one, and no way to check whether the prior reasoning still applies.
The primitive was never built.
Three decades of enterprise software produced extraordinary infrastructure for state, identity, and activity. It produced no primitive for judgment. Every attempted fix (more data, better schemas, global ontologies, tighter access, smarter workflows, richer traces) iterated on the three primitives that existed. None of them introduced the one that didn’t.
The decision needs to become a first-class primitive in enterprise software. Not a trace to be searched. Not an annotation on other data. Not a workflow step. Not a status field.
A decision is not a step in a workflow. It is a question that has been closed against explicit criteria: what evidence was required, what rules were in force, who had authority, what would have changed the answer. And it is only valid relative to the definitions, limits, and authority regime in force at that moment. Without closure, you have activity. Without time, you have an assertion that decays silently into fiction as conditions drift.
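What such a primitive might minimally contain can be sketched in a few lines. Everything here is an assumption for illustration (the `Decision` shape, the `still_applies` check, the field values are all hypothetical): the point is that closure criteria are recorded as data, so validity can be re-checked as conditions drift rather than decaying silently.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical sketch of a decision as a first-class record: a question
# closed against explicit criteria, valid only relative to the
# conditions in force when it was closed.
@dataclass
class Decision:
    question: str
    criteria: dict[str, object]  # what had to be true at closure
    authority: str               # who could close it
    closed_at: datetime          # when the closure was bound
    authorizes: str              # what the closure actually permits

def still_applies(decision: Decision, current: dict[str, object]) -> bool:
    """A decision is time-bound: it holds only while every recorded
    criterion still matches the conditions currently in force."""
    return all(current.get(k) == v for k, v in decision.criteria.items())

d = Decision(
    question="May we extend a 15% discount to this account?",
    criteria={"margin_floor_pct": 22, "segment": "strategic"},
    authority="delegation:sales_vp:2024-11",
    closed_at=datetime(2025, 3, 4, tzinfo=timezone.utc),
    authorizes="discount up to 15% on the renewal quote",
)

# Conditions at decision time, then after the margin policy drifts.
print(still_applies(d, {"margin_floor_pct": 22, "segment": "strategic"}))  # True
print(still_applies(d, {"margin_floor_pct": 25, "segment": "strategic"}))  # False
```

A log entry or approval click could never support the second call: the artifact records that something happened, while the primitive carries enough structure to say whether it still holds.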
This isn’t a rules engine. Rules engines encode logic in advance. A decision encodes what actually applied: the evidence, the authority, the time, the bounds of what would have changed the answer, including the exceptions that rules engines can’t pre-model.
And this isn’t better documentation. Documentation describes reasoning after the fact. A decision primitive structurally requires reasoning before commitment.
That’s the primitive nobody built. Not richer context. Not better metadata. Not smarter traces. A structured, closeable, time-bound record of judgment.
Until it exists, judgment can’t compound. It can only be re-litigated.