
We Did This Last Time

Why organizations can't reuse judgment

Jayson Winchester · 12 January 2026

The promise sounds reasonable: give AI access to your documents, emails, tickets, and data, and it will learn how your organization actually works. Train on the artifacts, and the model will understand the rules.

It won’t.

Not because the technology is immature, but because the artifacts don’t contain what you think they contain. They record what happened. They don’t record what made it valid. And that gap, between outcome and the conditions that justified the outcome, is where organizational judgment lives.

The Scavenger Hunt

Someone says, “We did this last time,” and the room treats it like a shortcut.

Then the hunt begins. Someone opens Slack search. Someone says, “Check the Jira ticket.” A link dump appears. The room stalls.

Artifacts are plentiful. Closure is missing. Everyone can find traces of what happened. Nobody can state what question was actually closed.

So the room faces a fork: re-litigate from scratch, or replay the prior outcome and hope the context hasn’t changed. Neither is reuse. Both burn judgment you already paid for.

Last March, a sales rep approved a 15% discount for a mid-market customer. The CRM shows “Approved discount: 15%.” Six months later, a similar deal comes up. Someone finds the prior record. “We did 15% for a deal like this before.”

But what made 15% valid in March? The rep had delegation authority up to 18%. The margin model required 42% gross margin, and this deal cleared it. The customer was classified as “Growth Segment,” which allowed deeper discounts for land-and-expand. A strategic account exception had been granted by the regional VP because the customer was a reference candidate.

None of that is in the CRM field. What’s there is exhaust: the fact that a discount happened. Not the conditions that made it appropriate.

Now someone wants to reuse that precedent. But the margin model changed in Q2. Delegation limits were tightened after the reorg. The customer segment definitions were updated. The strategic exception expired.

Either the new rep copies the 15% number because “we did this last time,” and gives away margin the organization didn’t intend to give. Or someone realizes the context might have changed, and the team re-litigates from scratch, burning coordination hours to answer a question that was already answered, just never captured.

That’s not a documentation failure. It’s a missing primitive.

“The Model Will Learn Our Rules”

We have years of decisions embedded in our systems. Emails, tickets, documents, approvals. A sufficiently powerful model, trained on that corpus, will extract the patterns. It will learn what “approved” means, what counts as an exception, how we actually operate.

This sounds plausible until you look at what’s actually in the training data.

The CRM field says “Approved discount: 15%.” It doesn’t say that 15% was the maximum allowable under the margin policy in force at the time, that the approver had delegation up to 18%, that a strategic exception applied, or that the customer segment classification triggered a different rule set. The model sees the outcome. It doesn’t see the warrant.

Train on enough examples and the model will learn correlations: deals of this size in this region tend to get discounts in this range. But correlation isn’t validity. The model can’t distinguish between “this discount was correctly approved under the rules” and “this discount happened and nobody caught the error.” Both look the same in the artifact.

Worse, meaning drifts. “Approved” meant one thing when the delegation policy was written. It means something different after three reorgs. “Strategic account” had a definition in 2022. It has a different definition now. The model trains on artifacts that use the same words with shifting referents. It learns a weighted average of contradictory meanings.

This isn’t a training data problem you can fix with better labeling. The information was never there. The artifacts record that a decision happened. They don’t record what made it a good decision, or what would have made it a different decision, or what must remain true for the decision to still apply.

You can’t learn judgment from exhaust.

“We Already Capture This”

We have documentation. Wikis. Runbooks. Policy libraries. Knowledge management systems designed to preserve institutional understanding.

But look at what those systems actually contain.

Policies describe what should happen in general. They don’t record what question was closed in this case, under what interpretation, with what evidence. A pricing policy says “discounts above 10% require VP approval.” It doesn’t tell you whether the VP who approved this particular discount had the delegation authority at that moment, whether the margin threshold was met, or whether an exception applied and why.

Documentation decays. The wiki page was accurate when written. The policy changed. The page didn’t. Now you have documentation that looks authoritative but describes rules that are no longer in force. The more comprehensive your documentation, the harder it becomes to know which parts still apply.

Runbooks describe process, not judgment. They tell you the steps: “Check eligibility. Verify documentation. Submit for approval.” They don’t tell you what “eligible” meant for this decision, what counted as sufficient documentation, or what the approver actually verified.

Knowledge management systems optimize for retrieval, not validity. They help you find the artifact. They can’t tell you whether the artifact still applies. You retrieve the prior decision, then spend an hour figuring out if the context has changed enough to invalidate it.

Documentation doesn’t carry conditions. It’s a story you have to re-interpret every time, not a closed question you can verify.

”We Need to Unify Our Data”

The problem is fragmentation. Once we integrate our systems, decisions will become coherent.

Integration projects assume that if we unify facts, meaning will follow. Get the CRM, ERP, and approval system talking to each other, and we’ll have a complete picture.

But the hard part isn’t facts. It’s what the facts meant at the time.

Take three words that appear in every enterprise system: approved, eligible, active. Each seems simple. None is stable.

“Approved” depends on delegation, purpose, and time. Who approved it? Did they have authority? Under which version of the policy? For what purpose? The same approval checkbox means different things depending on answers that aren’t in the checkbox.

“Eligible” shifts with policy. A customer was eligible under the rules in force when the determination was made. Are they still eligible? The field doesn’t update when the policy changes. It still says “eligible,” but the conditions that made it true may no longer hold.

“Active” drifts with systems and incentives. An account is active. Active for billing? Active for support entitlement? Active in the sense that someone logged in this quarter? The word is reused across systems with different operational meanings.

Integration can unify the fields. It can’t unify the meanings, because the meanings were never pinned down at decision time.

There’s a sharper problem hiding inside “approved.” A system of record can tell you who clicked the button. It cannot prove they had the mandate to bind the organization at that moment. Identity is not authority. Login is not standing. When that distinction isn’t machine-legible, “approved” becomes a story we tell after the fact.

“Integrate everything first” becomes a permanent project because you’re trying to precompute coherence for unknown future questions. The meanings shift. The project never converges.

”It’s a Discipline Problem”

If people documented their reasoning, if they followed the process, we wouldn’t have this problem.

Half right. Discipline matters. But the problem is structural, not behavioral.

Organizations don’t have a primitive for closed questions.

They have tasks, which compress judgment into vague verbs. “Reviewed.” “Approved.” “Handled.” The task completes. The bar stays implicit. You know something got done. You don’t know what question it answered or what would have changed the answer.

They have workflows, which orchestrate steps without capturing closure. The approval workflow routes a request through three reviewers. Each clicks “approve.” But clicking isn’t closing. What question did each reviewer answer? What did they verify? What would have made them reject it?

They have artifacts, which record outcomes without conditions. The contract is signed. The ticket is closed. The field is updated. But artifacts don’t carry the reasoning that made the outcome valid, or the conditions under which it would need to be revisited.

When the primitive is missing, discipline can’t save you. You can train people to write longer notes, add more context, document their reasoning. But without a structure that distinguishes “question closed” from “task completed,” documentation becomes prose. Prose requires interpretation. Interpretation drifts.

The 15% discount again: even if the rep wrote a note explaining the reasoning, that note lives in a comment field somewhere. Six months later, someone would have to find the note, read it, figure out which parts still apply, and determine whether conditions have changed. That’s story reconstruction, not condition verification.

The discipline you need isn’t “document more.” It’s “close questions in a way that carries conditions forward.”

What Actually Compounds

Memory helps you reconstruct what happened. Decisions let you reuse judgment without reconstruction.

When someone says “we did this last time” and you have only memory, you get a scavenger hunt. Find the artifacts, reconstruct the context, infer what applied. That’s expensive in a five-person team. It becomes unsustainable as the organization scales.

When someone says “we did this last time” and you have a closed question, the path is different. You don’t reconstruct. You verify. Are the conditions still met? Has the policy changed? Is the delegation still valid? If the answer-changing facts haven’t changed, you reuse the judgment. If they have, you know exactly where to focus.

That’s the difference between story recall and condition check. Stories require re-interpretation every time. Conditions can be verified.

A decision, properly closed, records three things: what question was answered, what would have changed the answer, and what must still be true for the answer to apply. With those three things, precedent compounds. Without them, every instance is novel.

The 15% discount becomes reusable when you can point to: “The question was whether to approve a 15% discount for a Growth Segment customer at 44% gross margin. The answer would have been different if margin fell below 42%, if the rep’s delegation limit was below 15%, or if the customer wasn’t classified as Growth Segment. For this to apply to a new deal, those conditions must still hold under current policy.”

That’s not a longer note. It’s a different structure.
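The three-part structure can be sketched in code. This is a hypothetical illustration, not a description of any real system: the ClosedQuestion class, the field names, and the threshold values are all invented for the example.

```python
from dataclasses import dataclass


@dataclass
class ClosedQuestion:
    """A decision record that carries its reuse conditions forward."""
    question: str
    answer: str
    # Named predicates over current facts. Every predicate must hold
    # for the prior answer to be reusable without re-litigation.
    reuse_conditions: dict

    def check(self, current_facts: dict) -> dict:
        """Evaluate each condition against today's facts."""
        return {name: predicate(current_facts)
                for name, predicate in self.reuse_conditions.items()}


# The March discount, closed with explicit conditions (illustrative values).
discount = ClosedQuestion(
    question="Approve a 15% discount for a Growth Segment customer?",
    answer="Yes: 15% approved",
    reuse_conditions={
        "margin_above_threshold": lambda f: f["gross_margin"] >= f["margin_threshold"],
        "delegation_sufficient": lambda f: f["delegation_limit"] >= 0.15,
        "segment_eligible": lambda f: f["segment"] == "Growth",
    },
)

# Six months later: the margin model and delegation limits have changed.
current = {
    "gross_margin": 0.40,
    "margin_threshold": 0.42,   # raised in Q2
    "delegation_limit": 0.12,   # tightened after the reorg
    "segment": "Growth",
}

result = discount.check(current)
# Failing conditions point at exactly what must be re-decided.
stale = [name for name, holds in result.items() if not holds]
```

The point of the sketch is the shape, not the code: reuse becomes a predicate check over named conditions rather than a re-reading of prose, and a failed check names the exact condition that changed.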

Why Now

Organizations have always struggled to reuse judgment. What changes the math is automated systems entering the loop.

When decisions were human-paced, the inability to reuse judgment was expensive but survivable. You burned coordination hours on re-litigation. Experienced people carried implicit knowledge. The organization muddled through.

Without explicit closure, automation produces inconsistency at speed: different answers to the same question, replayed debates that were already settled, re-litigation that compounds rather than resolves.

Worse, you get answers nobody can defend later. When a decision is implicit, buried in a task that completed, inferred from patterns in prior behavior, there’s no record of what made it valid. If conditions change, no one knows. If someone challenges the outcome, no one can point to the bar it met.

The 15% discount, approved automatically because “similar deals got similar discounts,” becomes a liability when the margin model changes and the system keeps approving discounts that no longer meet the threshold. The system learned the pattern. It didn’t learn the conditions. When the conditions change, the pattern keeps firing.

The moment output turns into a real-world commitment, closure has to be explicit. Not to slow things down, but to make speed sustainable.

The Practical Shift

You don’t need to model the whole business. You don’t need an enterprise-wide initiative.

Start with five questions.

Every organization has recurring decisions that consume disproportionate coordination effort: pricing exceptions, approval escalations, vendor assessments, contract deviations. These decisions get made repeatedly, but each instance is treated as novel because no one captured what made the prior answer valid.

Pick the five that burn the most hours. For each, get explicit about three things: the question you’re actually closing, the facts that would flip the outcome, and what must remain true for the answer to be reusable.

For the pricing exception: “Can we offer 15% on this deal?” becomes “Does this deal meet the conditions for a 15% discount under current policy?” The conditions: margin above threshold, rep delegation sufficient, customer segment eligible, no conflicting exceptions. The answer-changing facts: margin calculation, delegation limit, segment classification. The reuse conditions: policy version, delegation authority, segment definitions.

That’s not documentation. It’s leverage. Each closed question becomes precedent that saves the coordination cost you would have spent on re-litigation.

Then “we did this last time” stops being a scavenger hunt. It becomes a condition check.


The goal isn’t perfect documentation. The goal is that “done” means a question was closed in a way the organization can stand behind and reuse.

Next time someone says “we did this last time,” you can point to the closed question and its conditions. You can verify whether the prior answer applies or identify exactly where circumstances have changed. Either way, you’re not starting over.

Precedent only compounds when it carries its conditions.

Stop trying to train on everything. Start closing questions you can actually reuse.

Get Started

See it on a real decision.

A Decision Infrastructure Blueprint Sprint takes one commitment boundary from your organization and produces the complete decision architecture. In weeks, not months.

Start a conversation