The Metrics Layer: Why Your Business Logic Doesn't Belong in Your BI Tool
Two dashboards. Same metric. Different numbers. Both built by people who knew what they were doing. The disagreement isn't a bug anyone can point to — it's the predictable result of encoding business logic in the wrong place.
This scenario is so common it has stopped surprising data teams. Someone builds a revenue dashboard in Looker. Someone else builds a revenue dashboard in Tableau. A third person pulls revenue from the data warehouse directly in a Metabase query. At the end of the quarter, all three numbers are different. The finance team has a fourth number from their own model. A meeting gets called. Ninety minutes later, the group has identified why the numbers differ — different period boundaries, different inclusion criteria for refunds, different handling of multi-currency transactions — and agreed on which one is "right." Three months later, the same meeting happens again.
The root cause is not the tools. Each tool is doing exactly what it was asked to do. The problem is that the definition of the metric was never written down in one place. It exists implicitly in the SQL of each dashboard, in the head of whoever built it, interpreted differently each time.
Where Business Logic Goes to Get Lost
BI tools are excellent at visualisation. They are not designed to be the canonical home for business logic. When revenue is defined by a calculated field in a Tableau workbook, that definition is only visible to people who open that workbook and inspect the field. It can't be tested. It can't be versioned. It can't be referenced by other tools. It changes silently when someone edits it. And it exists in parallel with a subtly different definition in the Looker explore built by someone else six months later who didn't know the Tableau workbook existed.
Business logic that lives in the BI layer is fragile by default. It gets re-implemented whenever someone builds a new report, and every re-implementation is a new opportunity for divergence.
What a Metrics Layer Actually Does
A metrics layer is a centralised location where metrics are defined once — in code, with explicit SQL logic, filters, and dimensional breakdowns — and every downstream tool consumes those definitions rather than implementing their own. The BI tool becomes a presentation layer. It asks the metrics layer what revenue is for a given time window and segment; the metrics layer computes it according to its single authoritative definition and returns the number.
The implementations vary. The dbt Semantic Layer (which superseded the earlier dbt Metrics package) embeds metric definitions alongside the transformation models that produce the underlying data. Cube acts as a headless semantic layer that sits between the warehouse and any number of BI tools. LookML is Looker's proprietary implementation of the same concept. What they share is the core property: one definition, consumed consistently everywhere.
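To make "one definition, consumed consistently everywhere" concrete, here is a minimal sketch of the idea in Python. This is not the syntax of dbt, Cube, or LookML — the `Metric` class, table names, and filters are all illustrative — but it shows the core property: the SQL logic, filters, and grain handling live in a single definition, and every consumer renders its query from that definition rather than writing its own.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Metric:
    """One authoritative metric definition (illustrative, not a real tool's API)."""
    name: str
    expression: str            # the aggregation, defined exactly once
    source_table: str
    filters: tuple = ()        # inclusion/exclusion rules live here, once

    def to_sql(self, grain: str = "day", dimensions: tuple = ()) -> str:
        """Render the single authoritative query for this metric."""
        select = [f"date_trunc('{grain}', occurred_at) AS period",
                  *dimensions,
                  f"{self.expression} AS {self.name}"]
        where = " AND ".join(self.filters) or "TRUE"
        group = ", ".join(str(i) for i in range(1, 2 + len(dimensions)))
        return (f"SELECT {', '.join(select)}\n"
                f"FROM {self.source_table}\n"
                f"WHERE {where}\n"
                f"GROUP BY {group}")

# Defined once. Looker, Tableau, and Metabase would all consume this
# same object, so refund handling and period boundaries cannot diverge.
revenue = Metric(
    name="revenue",
    expression="SUM(amount_usd)",      # multi-currency resolved upstream
    source_table="analytics.orders",
    filters=("status = 'completed'", "is_refunded = FALSE"),
)

print(revenue.to_sql(grain="month", dimensions=("region",)))
```

Two dashboards asking for monthly revenue by region both call `revenue.to_sql(grain="month", dimensions=("region",))` and, by construction, receive identical SQL — which is the whole point.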
The consistency benefit is obvious. Less obvious are the downstream benefits that compound over time.
Metric definitions in a metrics layer are versioned alongside the rest of the codebase. When someone changes how revenue is calculated — say, to exclude a new category of internal transactions — that change has a commit, a timestamp, a message, and a code review. You can see exactly when the definition changed and why. You can answer questions from auditors or executives about why the number shifted in a particular period.
Metric definitions can also be tested. You can write assertions that active users are always greater than zero, that revenue never goes negative at the daily grain, that churn rate stays within a plausible range. These tests run as part of the pipeline and catch regressions before they reach a dashboard. Testing business logic inside a Tableau workbook is not really possible.
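A sketch of what such assertions look like in practice, assuming a pipeline step that runs after the daily metric build and before dashboards refresh. The function and column names are hypothetical; real implementations would typically use dbt tests, Great Expectations, or a similar framework, but the shape of the checks is the same.

```python
def check_metric_sanity(daily_rows):
    """Run plausibility assertions over one day-per-row of computed metrics.

    Returns a list of failure messages; an orchestrator would fail the
    pipeline run (and block dashboard refresh) if the list is non-empty.
    """
    failures = []
    for row in daily_rows:
        if row["active_users"] <= 0:
            failures.append(f"{row['day']}: active_users not positive")
        if row["revenue"] < 0:
            failures.append(f"{row['day']}: negative revenue at daily grain")
        if not (0.0 <= row["churn_rate"] <= 0.25):  # plausible-range bound, illustrative
            failures.append(f"{row['day']}: churn_rate outside plausible range")
    return failures

# Example input: the second day would trip two checks and halt the run.
rows = [
    {"day": "2024-03-01", "active_users": 1820, "revenue": 42150.0, "churn_rate": 0.021},
    {"day": "2024-03-02", "active_users": 0, "revenue": -310.0, "churn_rate": 0.019},
]
for failure in check_metric_sanity(rows):
    print(failure)
```

Because the metric is computed in one place, this check only has to exist in one place too — which is precisely what a calculated field buried in a workbook cannot offer.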
Onboarding is also materially different. A new data analyst joining a team with a metrics layer can read the metric definitions and understand exactly what each number means — what's included, what's excluded, how edge cases are handled. On a team without one, the same analyst reverse-engineers dashboard SQL and asks colleagues until they've assembled a mental model that may or may not be accurate.
The Organisational Problem the Technology Exposes
Here is where implementations stall: a metrics layer requires agreement on metric definitions before you can write them down. And in most organisations, there is no agreement. Finance defines revenue one way. Product defines it another way because they care about different things. Sales has a third definition tied to their commission structure. Each definition is legitimate within its context. None of them is wrong.
This is not a technical problem. No tool solves it. Cube and dbt Semantic Layer and LookML are all inert without a prior conversation — sometimes a difficult one — about what the organisation actually means when it says "revenue." That conversation requires someone with the authority to make decisions, the domain knowledge to understand the tradeoffs, and the political capital to get finance and product and sales to agree.
Metrics disagreements are almost never a symptom of bad technology choices. They're a symptom of missing ownership. Nobody is accountable for what "active user" means across the organisation, so everyone defines it in a way that serves their immediate purpose. A metrics layer makes ownership explicit — the definition is written down, it has an owner, changes to it require a deliberate decision. That accountability is often the real value, more than any technical property of the tool itself.
Where to Start
The practical starting point is not to instrument every metric simultaneously. Pick the five or ten numbers that cause the most disagreement — the ones that generate the most reconciliation meetings, the ones that finance and product fight over, the ones that senior leaders ask about and get different answers. Define those in a metrics layer first. Get the organisation to agree on the definitions as part of that process. Build the habit of "the number comes from here" before expanding scope.
The goal is not to have a comprehensive metrics catalogue. The goal is to eliminate the category of problem where two people look at the same business question and get different answers because they encoded different assumptions in different tools. That problem has a solution. It requires investment both in the technology and in the organisational alignment to decide what things actually mean — and then to hold that line.
Written by ATHING
We design and build data infrastructure, automation pipelines, and AI systems for organisations that need them to work.
Talk to Us