How to Actually Prove the ROI of Your Data & AI Projects
What one Fortune 50 data leader did differently (and how you can copy their approach)
Here's what a senior data leader I've been working with told me recently:
"I've been at [Fortune 50 company] for 20 years, and I have NEVER seen anyone follow up on whether we actually achieved the savings from our initial business case. Never."
Damn. She said the quiet part out loud.
Twenty years. Hundreds of millions in supposed "savings" from data and technology investments. Zero follow-up.
(This came up because she's part of my data product management course, where we go over methods to estimate ROI both upfront and after the fact.)
She's not alone. I'd estimate 90% of data and transformation initiatives never measure their actual ROI against their business case. We promise millions in value, get the budget approved, build the thing... and move on to the next shiny project.
But this leader didn't just have this realisation and accept it as "how things work." She's started changing it. I'll come back to exactly how she's transforming her approach later. But first, let's look at why this problem has become so widespread - and why it matters more than ever.
The scale of the problem
Think about this for a moment. According to Statista, global spending on digital transformation reached $2.5 trillion in 2024, projected to reach $3.9 trillion by 2027. AI spending alone hit $244 billion in 2025 and is forecast to reach $827 billion by 2030.
But here's the thing - we don't even know which initiatives are failing, because nobody's measuring.
Trillions invested… and $??? returned.
I've seen this sort of disaster play out across all sorts of industries - in my own work, in my clients', and across all the data leaders I meet in the data product management meetups I organise.
The two most common patterns: (1) dashboard factories and (2) analytics assembly lines. Let’s expand on both before proceeding:
Pattern 1: The Dashboard Factory
Teams build BI dashboards, stakeholders seem happy (or at least don't complain much), but there's almost zero understanding of whether those dashboards actually improve decision-making.
These teams operate like service desks - they receive requests and deliver against them, with performance measured by metrics like time to completion and governed by SLAs. They track login frequencies and maybe time spent on each page, but they have no idea if anyone is actually making different decisions because of what they see.
I've seen companies spend millions on dashboard platforms while the BI team couldn't name a single business outcome that had demonstrably materialised because of their work.
Pattern 2: The Analytics Assembly Line
A data science team builds an exciting prototype that shows real promise. Then it gets thrown over the wall to an engineering team to industrialise. Then that team hands it off to a "global business services" (GBS) team (usually offshore) to run and maintain - because the focus shifts from maximising business value to minimising IT costs.
The GBS folks running the analytics product become totally disconnected from the business problem it was meant to solve, left with little more than technical documentation and no real domain knowledge. There's rarely a proper product roadmap, and when there is, it's usually based on the original scoping from months or years earlier - not feedback from actual users. To make things worse, end users are usually "shielded" from speaking to the GBS team.
(In better scenarios, there's a more structured handover from builder to maintainer - but the structural disconnect persists regardless.)
Why this happens everywhere (and it's not just laziness)
First reason: It's genuinely uncomfortable.
What if the savings didn't materialise? What if your model was wrong? What if the technology worked perfectly, but the business process changes never happened?
I've spoken to data leaders who privately admitted their biggest data & AI projects "technically worked" but delivered less than 10% of the promised business value. Nobody challenged them on it, so they claimed victory and moved on to the next project. Better to be known for delivering projects than for delivering disappointing results.
Second: It's institutionally hard.
The person who built the business case has often moved to a new role by the time measurement should happen. The assumptions have changed. The baseline shifted. Market conditions evolved.
But here's the real killer: Measurement & Evaluation is never part of the original scope. By the time you should check whether you delivered on your promises, there's no budget, no time, and definitely no appetite for "going backwards."
I've seen this pattern so many times:
Q1: "We need to build this AI system to save $5M annually"
Q4: "The system is live and working perfectly"
Q1 next year: "What $5M? We're focused on the new customer experience initiative now"
Third: Nobody asks for it.
Your CFO approved the budget based on promised returns, but they're not following up either. They've got 50 other initiatives to worry about. The board sees that you "deployed AI" or "became data-driven" - mission accomplished, right?
This creates a perverse incentive structure where success is measured by deployment, not by business outcomes.
Why this matters more than ever
The last 10-15 years were forgiving. Cheap money, growth-at-all-costs mentality, and "digital transformation" budgets that seemed unlimited. In that environment, you could get away with fuzzy ROI measurement.
But we're firmly back in "the CFO runs the show" territory:
Rising costs of capital mean every dollar spent needs to generate measurable returns. The days of betting on "strategic positioning" or "future optionality" are largely over.
AI efficiency plays are everywhere, but they're double-edged. Yes, AI can reduce costs - but only if you can measure and optimise those reductions. Otherwise, you're just adding AI complexity on top of existing inefficiencies.
Shareholder pressure for profitable growth means no more buying growth with unprofitable initiatives. Every major tech company has done layoffs in the past 18 months, even while reporting record revenues.
The signs are everywhere: Layoffs even when companies are making record profits. Pressure to "do more with less" and "use AI to replace expensive processes." Greater scrutiny over every budget line item.
If you can't prove your data initiatives deliver measurable business value, your budget is at risk.
By the way, I wrote more about this last year.
What the smart money is doing differently
Back to that data leader I mentioned at the start. Here's what she did differently, and why it's working:
She picked ONE initiative - the simplest, most measurable one in her portfolio. Not the sexiest AI project, not the most technically challenging. The one with the clearest path to measurement.
She built a specific business case: 2-5% savings on a 9-figure-a-year spend (meaning $millions in projected savings). Not "significant savings" or "efficiency improvements" - specific percentages with clear baselines.
But here's the crucial part: Before starting the project, she got the transformation team to commit to tracking actual vs. promised savings, and added Measurement & Evaluation as one of the project's workstreams from day one.
Not just "we'll save money." But "we'll save 2-5% and here's exactly how we'll measure it in Q1 vs our baseline."
If you don’t do that last part - committing to measurement before the work starts - chances are you won’t do it later. You’ll be told you need to focus on the next initiative, that the stakeholders are now too busy, and so on. And even if you do eventually measure the value, it’ll read too much like grading your own homework. Things look very different when you’re grading against a rubric that was established before the work began.
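To make "specific business case" concrete, here's the back-of-the-envelope maths. The spend figure below is illustrative, not her actual number - a 9-figure annual spend simply means $100M or more:

```python
# Illustrative numbers only - not the actual figures from this initiative
annual_spend = 150_000_000          # any 9-figure annual spend is >= $100M
low, high = 0.02, 0.05              # the promised 2-5% savings range

print(f"Promised savings: ${annual_spend * low:,.0f} - ${annual_spend * high:,.0f} per year")
# -> Promised savings: $3,000,000 - $7,500,000 per year
```

That range - not "significant savings" - is the number you commit to measuring against each quarter.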
When data quality becomes negotiation power
Without getting into specifics, her initiative had to do with supply chain optimisation - so the data came from suppliers as well as internal sources. By linking business value directly to suppliers' data quality, the negotiation team now has commercial leverage.
Here’s what makes this setup a brilliant win-win scenario:
If suppliers improve their data quality: Better optimisation algorithms → indirect cost savings through efficiency gains → suppliers get rewarded with continued business (and better commercial terms)
If suppliers don't improve: The commercial team levies data quality penalties → direct cost savings through reduced shipping costs → suppliers get motivated to fix their data
Suddenly, data quality becomes a commercial negotiation point, not just an IT complaint that everyone ignores.
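Here's a minimal sketch of how that reward/penalty logic could be encoded commercially. The SLA threshold, penalty rate, and function are hypothetical - a way to picture the mechanism, not her actual contract terms:

```python
def supplier_dq_adjustment(dq_score: float, invoice_amount: float,
                           sla_threshold: float = 0.95,
                           penalty_rate: float = 0.02) -> float:
    """Hypothetical commercial adjustment for one supplier invoice.

    dq_score: share of the supplier's records passing the agreed quality checks (0-1).
    Meeting the SLA means no penalty (and the supplier is rewarded with continued
    business); missing it triggers a penalty that scales with the shortfall.
    """
    if dq_score >= sla_threshold:
        return 0.0
    shortfall = sla_threshold - dq_score
    return -invoice_amount * penalty_rate * (shortfall / sla_threshold)

# e.g. a supplier at 0.85 data quality on a $1M invoice
print(supplier_dq_adjustment(0.85, 1_000_000))  # -> roughly -2105, i.e. a ~$2.1k penalty
```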
The result? For the first time in her 20-year tenure at the company, she's running a data initiative where:
ROI is reported on after the fact, not just estimated upfront to get the business case approved
Success is measured in dollars saved, not models deployed or deadlines met
Business stakeholders are the most invested in data quality (!!!)
A framework you can steal
Here's the methodology that's working for her (and others I'm seeing succeed):
1. Start stupidly simple
Pick the most boring, measurable initiative in your portfolio. The one where:
Success can be measured in dollars, not "engagement" or "adoption"
The baseline is clear and historical data exists
Business stakeholders already care about the outcome
The data pipeline is relatively straightforward
The number of stakeholders is manageable (ideally this will be one ‘customer’ - not fifteen country leads and seven user personas)
2. Build measurement into the business case
Don't just promise savings - work together with your stakeholders to specify:
The metric(s) by which success will be defined
Expected improvements on those metrics
Timeline for measurement (quarterly, not annual)
Who will be responsible for tracking
What data sources will be used for verification
How you'll separate correlation from causation*
*This one is tricky. Early on, you can often skip it and just do a simple pre-/post-analysis, as long as you know there aren’t many confounding factors (sketched below).
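For reference, that pre-/post-analysis can be as simple as comparing average monthly spend before and after go-live against the promised range. A minimal sketch - the column names and figures are made up, and it assumes you have a clean monthly series on the agreed baseline:

```python
import pandas as pd

# Hypothetical monthly spend in $M, six months before and six months after go-live
df = pd.DataFrame({
    "month": pd.date_range("2024-01-01", periods=12, freq="MS"),
    "spend": [10.2, 10.4, 10.1, 10.3, 10.5, 10.2,   # pre-go-live baseline period
              9.9, 9.8, 10.0, 9.7, 9.8, 9.6],       # post-go-live period
})
df["period"] = ["pre"] * 6 + ["post"] * 6

baseline = df.loc[df["period"] == "pre", "spend"].mean()
actual = df.loc[df["period"] == "post", "spend"].mean()
savings_pct = (baseline - actual) / baseline

print(f"Baseline: ${baseline:.2f}M/mo, post: ${actual:.2f}M/mo, savings: {savings_pct:.1%}")
# Compare that percentage against the 2-5% promised in the business case
```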
3. Make data quality a commercial issue, not a technical one
Poor data quality kills data initiatives. But, despite what a lot of vendors will tell you, it’s not the root cause. Sure, your model sucks because the data sucks. Garbage in, garbage out, and all that.
But the data usually sucks because fixing it is not an organisational priority - not because of inherent technical challenges.
So, if DQ is a challenge (current or anticipated), you need to find ways to tie data quality/compliance to business relationships:
Supplier contracts with data quality SLAs
Customer experience metrics tied to data accuracy
Internal team KPIs linked to data usage
Budget allocations dependent on measurable outcomes
You probably won’t get a fully committed SLA from day 1, and that’s fine. For now, start tying technical debt and data debt to commercial metrics.
I’ll be writing an article on this point soon because it deserves a lot more space, but for now, just keep in mind that the best way to prioritise your tech & data debt is to communicate its importance in terms of the business impact it’s hindering / enabling.
4. Track continuously, report regularly
Match your tracking to the business rhythm. Continuous tracking for high-frequency operations, weekly tracking for weekly decisions. Report to stakeholders regularly - quarterly usually strikes the right balance between engagement and overwhelm. Annual reporting is useless, as it gives you no time to course-correct.
Note: Reporting shouldn’t just be about passing on metric updates. There’s often gold in sharing quotes from end users or customers, or concrete examples of the impact being delivered.
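One way to keep that rhythm honest is to build every update from the same few ingredients: the promised number, the measured number, and the qualitative evidence. A minimal sketch - all names and figures here are illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class QuarterlyValueUpdate:
    quarter: str
    promised_savings: float                 # from the original business case
    measured_savings: float                 # actual, vs. the agreed baseline
    user_quotes: list[str] = field(default_factory=list)
    impact_examples: list[str] = field(default_factory=list)

    def attainment(self) -> float:
        """Share of the promised savings actually delivered so far."""
        return self.measured_savings / self.promised_savings

update = QuarterlyValueUpdate(
    quarter="Q1",
    promised_savings=1_250_000,   # quarterly slice of the annual promise
    measured_savings=600_000,
    user_quotes=["We renegotiated two supplier contracts using the new data."],
    impact_examples=["Shipping cost penalties recovered: $120k"],
)
print(f"{update.quarter}: {update.attainment():.0%} of promised savings delivered")
# -> Q1: 48% of promised savings delivered
```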
5. Celebrate small wins publicly
When you hit a milestone - even 0.5% savings in month 3 - make sure people know. Success breeds success, and visible wins make future projects easier to approve.
Always remember: however aware you think your stakeholders are of the value you’re delivering for them, the real answer is “less aware than you think”.
And internally, it’s a real motivator for your team to see how their work connects to business outcomes - and it helps them become even greater champions of customer- and value-centricity.
It all compounds 📈
Here's what happens when you nail the measurement on one project:
Credibility compounds. Your next business case gets approved faster because stakeholders trust your numbers.
Learning compounds. You and your team understand what actually drives business outcomes vs. what just looks impressive in demos.
Relationships compound. Business stakeholders become your advocates instead of seeing data projects as "IT stuff they have to tolerate".
Budgets compound. CFOs give bigger budgets to teams that can prove ROI.
I’ve seen this play out countless times - in my own work, in my clients’ work, and among the folks I’ve learned from over the years.
During my course, I use the below diagram to explain this idea further: As you start delivering value (and proving it), trust with your stakeholders grows. As your stakeholders trust you more, you’re looped into decisions earlier on - such as selecting which initiatives your team should work on.
Your next steps
The lesson isn't revolutionary, but it's urgent: Start small. Pick one project. Measure the actual ROI. Then do it again.
Don't wait for the "perfect" AI initiative or the most strategic transformation project. Pick the most boring, measurable win you can find.
Because in 2025, being able to prove business value isn't a nice-to-have capability for data leaders.
It's table stakes.
What's your experience with measuring data initiative ROI? Have you seen similar patterns in your organisation? Hit reply and let me know - I read every response.
P.S. If you're working on getting better at proving ROI from data initiatives, you might be interested in my course on exactly this topic. But honestly, just start measuring something. The course can wait.
In other news
📺 I’m starting a YouTube channel!
This one’s been a long time coming.
You might have noticed that while I post on LinkedIn many times a week, this newsletter is a lot quieter. I’ve got so many drafts, but can’t seem to get over my perfectionism for long-form writing in the way I can for shorter-form posts.
So I want to give YouTube a try and see if I find it easier to turn drafts into videos than I do articles. Plus, video lends itself to having guests over much more easily… 👀
You can subscribe here - nothing’s uploaded yet, but watch this space!
🇬🇧 Speed data-ing in London
We tried something new this month: Speed networking for data product managers. Kudos to Luca for proposing the format AND running the event on the day 🙌
🇬🇧🇪🇸🇫🇷 More Data PM meetups coming up
🇬🇧 London: 23 September - the night before BigDataLDN!
🇪🇸 Barcelona: Two events in September! Regular meetup on the 9th, and a special event about going freelance/independent on the 18th
🇫🇷 Paris: No date set yet, but you can sign up to get invited when the next event is announced here
Do you live somewhere with a non-zero number of fellow Data PMs, but without a DPM meetup? I want to help you change that! See the article below:
By the way, I’ve already had folks reach out interested in starting local chapters in Dublin, Wroclaw, New York, and Melbourne. If you’re in one of those cities and want to help share the load, let me know! Hosting is much easier when you don’t need to attend 100% of meetups.