
The Decision Latency Tax
Most executives believe they should be able to answer in an hour. In practice, almost none of them can. The question travels through three or four systems, lands in a BI ticket, gets queued behind dozens of similar requests, and eventually returns as a deck two weeks later, with caveats about which definition of revenue the analyst chose to use.
Decision latency produces three categories of cost: direct, opportunity, and strategic. The first you can see in your data team’s capacity. The second you can see in your competitive position. The third you usually only see in retrospect, often in an exit interview.
Second, there’s fragmentation of metric definitions. Finance defines revenue one way. Sales defines it another. Marketing defines it a third. Each definition is correct within its own context, and each function has good reasons for the definition it uses. Across the enterprise, however, the definitions diverge, and the divergence is rarely reconciled. According to research from Strategy and Sapio, 99 percent of enterprise leaders cite consistent metric definitions as an ongoing challenge. That number is not a punchline. It is the most reliable predictor of why your executive meetings spend their first 30 minutes arguing about which version of the number is correct, before any actual decision happens. That argument is decision latency in pure form.
Third, and most importantly, there’s fragmentation of business meaning. This is the deepest layer and the one almost no one is measuring. Your organization runs on concepts: customer, contract, supplier, asset, route, batch, claim, incident. Without a shared, machine-readable representation of what those concepts mean and how they relate, every cross-system question requires manual reconciliation by a human who carries the institutional knowledge in their head. When that human goes on holiday, the answer takes a week longer. When that human leaves the company, the answer becomes structurally harder to produce. Most decision latency is created at this layer, and it stays invisible to executives because it has no obvious owner. Finance owns its definitions. Sales owns its definitions. No one owns the integration of meaning across the enterprise.
That gap, the time between a question being asked and a defensible answer being acted on, is decision latency. In most large enterprises, it is the single largest unmeasured operational cost on the books. You measure cycle time, cash conversion, and time-to-market obsessively. You almost certainly do not measure how long it takes your organization to answer its own questions.
Why decision latency is the operational metric you’re not measuring
Decision latency sounds abstract until you put a clock on it. Once you do, it becomes the most useful operational metric most organizations have ever ignored.
At the same time, stop treating fragmentation as a tooling problem. Adding another dashboard, another lake, or another pipeline rarely resolves fragmentation of meaning. More often, it defers the cost by another budget cycle. The right intervention sits above the data layer, not parallel to it. If your team is proposing a fifth analytics tool to fix what the previous four did not, the proposal is almost certainly addressing the wrong layer.
Over the next five years, the enterprises that compete most effectively will not be the ones with the most data. They won’t even be the ones with the fastest data. They’ll be the ones with the lowest decision latency, because they have done the work to share meaning at the foundation rather than bolting faster plumbing on top.
So why does decision latency stay invisible? Because it never appears on a P&L statement. Instead, it hides as opportunity cost, BI ticket backlog, idle teams waiting for direction, and quietly missed market windows. Each individual delay feels reasonable when you look at it in isolation. The cost only becomes obvious in aggregate, and the aggregate is rarely measured. As a result, executives who would never tolerate a 12-week cycle time on a manufacturing line happily tolerate a 12-week cycle time on a strategic question, because no one has labelled it as such.
Reducing decision latency is not a technology project. It’s a measurement discipline that drives a set of architectural choices, and most of the value comes from the first step.
If your organization was planning to ignore decision latency for another decade, the next wave of AI will not let it. The technology doesn’t eliminate the tax. Instead, it relocates it, amplifies it, and forces it into the open.
The structural causes: why fragmented data quietly imposes the tax
Finally, hold AI initiatives to a latency standard. Any AI investment your organization approves should commit to a measurable reduction in decision latency for a defined set of business questions, alongside whatever accuracy or productivity targets it carries. If a pilot cannot articulate this, it’s unlikely to scale beyond the demo. That reframing matters. It moves AI out of the category of feature deployment and into the category of decision infrastructure investment, which is where it has always belonged.
Strategic costs are the ones that rarely get attributed correctly. Trust erosion in the data function is the most insidious. When dashboards consistently disagree, executives stop using them and revert to instinct or to whoever argues most persuasively in the room. That’s not a technology failure. It’s a meaning failure. Talent attrition follows the same pattern. The best analysts leave when most of their work is reconciliation rather than insight, and they tend to be replaced by less senior people who cannot push back on the underlying problem. Cumulative competitive drag completes the picture. Faster competitors compound learning loops while slower ones compound process, and over a five-year horizon, no amount of capital closes that gap.
Without a shared meaning layer, AI surfaces every inconsistency you have been living with for the past decade, and it surfaces them at the speed your customers, regulators, and competitors now expect you to operate. The tax does not disappear under AI pressure. It just becomes harder to ignore.
First, there is fragmentation of systems. Every large enterprise carries dozens of them. CRM, ERP, MES, PLM, finance, service, custom-built applications, and now a growing layer of AI tooling. This is the well-understood layer, and most organizations have already attempted to address it with warehouses, lakes, integration platforms, and pipelines. Solving it is necessary, but be clear-eyed about what you’re actually solving. Connecting the pipes does not produce shared meaning. It just makes more disconnected meanings reachable from a single query interface.
To begin with, AI inherits every inconsistency in your data. Large language models and agents reflect the data they are given. Therefore, if “customer” is defined four different ways across four systems, AI will produce four different answers, often with the same level of confidence. Where your senior analysts previously caught and silently reconciled inconsistencies through expertise, AI exposes those inconsistencies in a single user-facing interface, at machine throughput. Strategy and Sapio describe AI as effectively stress-testing data foundations, surfacing gaps that were previously hidden in separate dashboards. The gaps were always there. AI just makes them visible to everyone at once.
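The effect is easy to reproduce. The sketch below is purely illustrative: four hypothetical systems apply four plausible rules for “customer” to the same five records and return four different counts, each computed with equal confidence.

```python
# Four systems, four working definitions of "customer" over the same records.
# The records and the rules are invented for illustration.
records = [
    {"id": 1, "signed_contract": True,  "paid": True,  "active_90d": True},
    {"id": 2, "signed_contract": True,  "paid": False, "active_90d": False},
    {"id": 3, "signed_contract": False, "paid": True,  "active_90d": False},
    {"id": 4, "signed_contract": True,  "paid": True,  "active_90d": False},
    {"id": 5, "signed_contract": True,  "paid": False, "active_90d": False},
]

definitions = {
    "crm":     lambda r: r["signed_contract"],                # anyone with a contract
    "finance": lambda r: r["paid"],                           # anyone who has paid
    "product": lambda r: r["active_90d"],                     # anyone recently active
    "sales":   lambda r: r["signed_contract"] and r["paid"],  # closed and collected
}

# Each system answers "how many customers do we have?" with full confidence.
counts = {system: sum(1 for r in records if rule(r))
          for system, rule in definitions.items()}
print(counts)  # {'crm': 4, 'finance': 3, 'product': 1, 'sales': 2}
```

Each rule is defensible in its own context, which is exactly why an AI interface querying all four sources returns four contradictory answers without flagging a conflict.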
What the tax actually costs
Most discussions of fragmentation stop at the system level. That stops short of the real problem. Fragmentation actually operates at three layers, and the highest layer is where most of the latency tax accumulates.
So treat decision latency as the first-class operational KPI it has always been. Measure it. Hold your data and AI investments accountable to it. The companies winning the AI era will not be the ones with the smartest models. They will be the ones whose business already understands itself.
The financial stakes are not subtle. Bain & Company’s research on decision effectiveness, drawing on a survey of nearly 800 companies, found a 95 percent correlation between decision effectiveness and financial performance. Top-quintile companies generate average total shareholder returns roughly 6 percentage points higher than their peers. Bain’s framework treats speed, quality, yield, and effort as separate variables, but every one of them is degraded by the same upstream condition: an organization that cannot answer its own questions quickly enough to act with confidence.
The metric itself is straightforward. It captures the elapsed time from a business question being raised to a defensible answer being acted on. That clock includes everything: routing the request, pulling data from multiple systems, reconciling competing definitions, validating the result, escalating where needed, and then absorbing any rework caused by inconsistent inputs. It’s not the same as report turnaround. Report turnaround measures how fast a known query gets refreshed. Decision latency measures how fast your organization can answer something it hasn’t asked before.
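As a sketch of what putting a clock on the metric might look like, the following Python fragment tracks one question from being raised to being acted on and breaks the elapsed time down by stage. All stage names and timestamps are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class DecisionQuestion:
    """One business question, tracked end-to-end.

    Stage names are illustrative; use whatever stages your process has.
    """
    raised: datetime
    stages: dict[str, datetime]  # stage name -> completion timestamp
    acted_on: datetime

    def latency(self) -> timedelta:
        # Decision latency is the full elapsed time, not report turnaround.
        return self.acted_on - self.raised

    def stage_durations(self) -> dict[str, timedelta]:
        # Break the clock down by stage to see where the time accumulates.
        durations = {}
        previous = self.raised
        for stage, finished in sorted(self.stages.items(), key=lambda kv: kv[1]):
            durations[stage] = finished - previous
            previous = finished
        durations["act"] = self.acted_on - previous
        return durations

q = DecisionQuestion(
    raised=datetime(2024, 1, 1),
    stages={
        "routed": datetime(2024, 1, 3),
        "data_pulled": datetime(2024, 1, 10),
        "reconciled": datetime(2024, 1, 24),  # reconciling definitions dominates
        "validated": datetime(2024, 1, 26),
    },
    acted_on=datetime(2024, 1, 29),
)
print(q.latency().days)                        # 28
print(q.stage_durations()["reconciled"].days)  # 14
```

In this invented example, half the 28-day clock is spent reconciling competing definitions, which is where the breakdown usually points.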
Why AI is about to make the tax visible, at machine speed
Ask your leadership team a simple question. Where, exactly, are you losing the most revenue across the customer lifecycle right now, and why?
Furthermore, the tax compounds during growth. M&A activity is one of the largest accelerants. Each acquired company arrives with its own systems, its own metric definitions, and its own working understanding of what its core concepts mean. The unified-view problem multiplies, and the latency tax expands with the estate. Functional silos accelerate the same dynamic from the inside, because each function builds dashboards and definitions to move faster within its own boundary, which makes any cross-functional question slower. As a result, organizations that grow without addressing this layer end up paying compounding interest on an unfunded liability they have never put on the balance sheet.
The most consequential shift happens when agents enter the picture. When an autonomous agent reads from and writes to multiple enterprise systems, inconsistencies do not just produce wrong answers. They produce wrong actions, executed at scale, before anyone notices. The cost moves from “we missed a window” to “we just executed the wrong move across thousands of accounts.” That is why decision latency cannot be solved by faster pipelines or smarter models alone. It requires reconciling what your business actually means, before AI scales across the estate.
The direct costs show up first in analyst capacity. Quest’s research found that roughly 38 percent of total data team effort goes to non-reusable, one-off work, the kind of bespoke request that delivers value once and then evaporates. That is your senior analytical talent spending nearly two days a week reconciling definitions instead of finding patterns. On top of that, industry benchmarks suggest that data integration alone consumes around 27 percent of BI project timelines, and only 39 percent of organizations report high confidence in the quality of insights their BI investments deliver. Therefore, even when the work gets done, the trust does not follow.
How leaders should respond: making decision latency a first-class metric
Once you can see the metric, treat shared meaning as infrastructure rather than as a project. Definitions of your core concepts, what counts as a customer, what counts as a contract, what counts as a unit shipped, must be governed at the enterprise level and made consumable by every dashboard, every workflow, and every AI interface that needs them. This is architectural work, not analytics work. It belongs alongside identity, security, and observability in your platform thinking.
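One minimal way to make definitions machine-readable is a governed registry that every consumer resolves against. The sketch below is illustrative only; in practice this role is played by a semantic layer or metric store, and the field names here are assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MetricDefinition:
    """A governed, machine-readable definition of one enterprise concept.

    Fields are illustrative; a real semantic layer would carry more structure.
    """
    name: str
    owner: str          # the accountable function, not the consuming one
    expression: str     # the single agreed computation, e.g. SQL
    grain: str          # the level the metric is defined at

REGISTRY: dict[str, MetricDefinition] = {}

def register(defn: MetricDefinition) -> None:
    # Enterprise-level governance: one definition per concept, no silent forks.
    if defn.name in REGISTRY:
        raise ValueError(
            f"'{defn.name}' is already governed by {REGISTRY[defn.name].owner}"
        )
    REGISTRY[defn.name] = defn

register(MetricDefinition(
    name="revenue",
    owner="finance",
    expression="SUM(invoice_amount) FILTER (WHERE status = 'recognized')",
    grain="month",
))

# Every dashboard, workflow, and AI interface resolves the same definition.
print(REGISTRY["revenue"].expression)
```

The design point is the rejection of duplicates: a second function cannot quietly register its own “revenue,” which is the behavior that lets definitions diverge in the first place.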
The numbers behind this gap are sobering. According to research by Quest, the typical enterprise data delivery project takes more than four months to deliver first insights and consumes over 800 person-hours of effort. Roughly 70 percent of projects then require significant rework, often extending timelines by another 30 percent. Meanwhile, McKinsey reports that only 37 percent of executives believe their organizations consistently make decisions both quickly and well. So the data takes months to arrive, the answers it produces are inconsistent, and the people receiving those answers do not trust the cadence.
Opportunity costs are larger and harder to attribute. When a decision is deferred until the data is ready, the decision window often closes before the data arrives. Capital allocation suffers. Pricing actions land late. Supplier renegotiations miss their leverage point. Regulatory responses arrive after the regulator has already formed its view. Meanwhile, McKinsey’s recent research on top-performing CIOs shows that the highest-performing organizations have shifted from optimizing for efficiency to optimizing for velocity, with decisions moving from a monthly cadence to a daily one. If your competitors operate in days while you operate in months, the delta is not a productivity gap. It’s a strategic gap, and it widens every quarter.
In this article, I unpack what the tax actually costs, why it persists, and what leadership teams should do to reduce it.
That blind spot is becoming expensive. Fragmented data, fragmented metric definitions, and fragmented business meaning are quietly imposing a tax on enterprise competitiveness, and AI is about to make the cost visible at machine speed, whether your organization is ready for the visibility or not.
Speed is now strategy, but only on a foundation of meaning
Once you start measuring decision latency, you can’t unsee it. And once you see it, it changes which problems you treat as urgent.
Next, the bottleneck shifts from analyst capacity to trust. Self-service interfaces and conversational analytics collapse the time-to-query from hours to seconds. So far, so good. The catch is that without resolution at the meaning layer, the question gets answered faster, and the answer gets trusted less. An executive who used to wait two weeks for an answer and trust it now waits two minutes for an answer and second-guesses it. The clock improves on paper while confidence collapses in practice.
Start by measuring it. Pick three to five recurring strategic questions your executive team needs answered every quarter. For example, where margin is leaking, which customers are at risk of churn, which suppliers carry concentrated exposure, or which product lines are underperforming forecast. Then time how long it takes to produce a defensible answer today, end-to-end, including reconciliation, validation, and any rework. Publish the number internally. The act of measuring is, by itself, most of the value. It converts an invisible cost into a visible KPI and gives the data function a clear performance target.
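The published KPI can be as simple as a median number of days per question. A minimal sketch, with invented questions and invented measurements:

```python
from statistics import median

# Hypothetical measurements: end-to-end days from question raised to
# defensible answer acted on, collected over one quarter per question.
measurements = {
    "where is margin leaking": [34, 41, 29],
    "which customers are at churn risk": [18, 22, 25],
    "which suppliers carry concentrated exposure": [55, 48, 60],
}

def latency_kpi(samples: dict[str, list[int]]) -> dict[str, float]:
    # Median end-to-end days per question: the number you publish internally.
    return {question: median(days) for question, days in samples.items()}

kpi = latency_kpi(measurements)
print(kpi["where is margin leaking"])  # 34
```

The median keeps one heroic fast answer or one pathological outlier from flattering or distorting the headline number.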
By Julius Hollmann