Have you ever spent months building a business-facing tool powered by machine learning or some other advanced analytics, only to later find out that nobody is using it?
I’m not talking about ML POCs that were built more to show off the technology than solve a business problem, or which would cost more to operate and maintain than the benefit they’d bring. I mean when you’ve ticked pesky boxes like:
“Is the problem we’re looking to solve valuable enough?”
“Do we have the right data?”
“Is our model good/accurate enough?”
“Do we have the right business/domain expertise guiding us?”
“Have we engaged the business stakeholder whose team will use our tool?”
And yet, after some digging, you find out that folks aren’t using your tool - a tool which you were sure would meaningfully improve key KPIs¹, and for which you had built up a business case that leaders from tech, data, and business all validated.
A lot of data & product folks think of product validation as being about feasibility (can we build the thing), viability (does it make commercial sense to build it), and value (are we solving a valuable enough problem).
Here’s what that looks like in Venn form:
What’s missing from that view is usability (I know I call it out as part of the blue circle, so technically it’s not missing - but you probably didn’t notice it). Usability is one of the four big risks, as Marty Cagan calls them in his excellent book Inspired.
In short, you need to make sure your product is usable if you want it to get used. It sounds really basic, but it’s often totally overlooked.
Examples of data products with usability challenges:
Accessing the data requires (very basic) SQL knowledge, but the end user doesn’t know any SQL and finds the idea of learning it intimidating and/or something they don’t have time for
The dashboard is hard to use without browsing a pre-recorded training video and/or separate documentation, and the users either don’t remember the training they went through, or never bothered to go through it
Accessing the insights requires logging into a separate tool that’s not part of the user’s regular workflow, so it’s easy for them to forget to do it
The dashboard was built with desktop screens in mind, but the users are out in the field, so they only have smartphones and small tablets at their disposal. The dashboard doesn’t load everything in that view, and examining charts is impossible
The model and dashboard refresh every Sunday at 11pm, which means they’re out of date halfway through the week. Users spotted the out-of-date figures and no longer trust the dashboard or the model that sits behind it, because they were used to seeing live numbers in their previous (much less smart/powerful) tool.
The dashboard doesn’t let users drill down by {date/product/customer/whatever matters to them}. When the dashboard was being designed, it was the manager of the manager of those users who was part of the project as the business stakeholder / person giving feedback, and they were only looking at figures at a much less granular level, so they were happy with what you made.
The interactive webapp and ML model are perfectly aligned to users’ workflow, device type, technical understanding, and everything else, but none of the users were involved during the development and testing process. Instead, one day they were told “this is the new tool, stop using Excel now”, and it left a very sour feeling. Who made this tool? What makes them think they know better? Who do they think they are, coming here telling us how to do our job? What do you mean they’ve automated some of my tasks? For many months after go-live, users have been finding excuses for why they aren’t using the tool, and many aren’t even responding to you with feedback - they think you’re out to automate their jobs and get them fired.
I can keep going with examples, but I think you get it. In our effort to do things quickly and without disrupting the business (or because we’ve been given 100 promises by business leaders that we are building the right thing for their org), we end up building something that isn’t right, or that for other reasons ends up not being used. It’s why change management and user experience (UX) design are so important in data transformations.
I’ll write about change management another time - this post is to recommend one of my favourite resources for applying UX design thinking to data work.
The Experiencing Data podcast
Experiencing Data is Brian O'Neill's podcast where he interviews data product leaders on how teams are integrating product-oriented methodologies and UX design to ensure their data-driven applications will get used in the last mile. It's been a fantastic learning resource for me, both for learning about new approaches and ideas and (perhaps more importantly) for affirming when I've been on the right track in my own thinking.
I’ve been listening to the podcast for 2+ years now, and recommend it to pretty much every data professional I meet who’s interested in the non-technical aspects of data work.
Listening to Experiencing Data was also what drove me to dust off the old Design Thinking skills I'd picked up when I first started working in consulting and product management, and spend 10 weeks on a UX Design course at Brainstation (I wrote about my learning highlights here). On the podcast, we talked a bit about the course, how it's been valuable on the job, and how I wish I'd learned some of these things sooner in my career.
One thing I always appreciate when someone recommends a podcast is being pointed to a few standout episodes to go check out - it gets too daunting otherwise. So here's mine for Experiencing Data (with the caveat that I still have loads of episodes left to go through, some recent and some old):
Manav Misra on how he established a data product approach at Regions Bank, creating new roles (including his CDAO role)
Emilie Schario (who wrote one of the older data product thought pieces I've come across) on running data teams as product teams
Eugenio Zuccarelli on helping paediatric cardiac surgeons make better decisions using machine learning. Super cool use case and really highlights the need for taking SMEs and users on the journey with you, not just handing them a 'finished' product
Sebastian Klapdor on how he implemented a data product team at Vista, which now employs 35 data product managers (nearly 1% of the company's workforce!)
Jon Cooke for a variety of golden nuggets on how to generate business value quickly and with a small team through data products
Peter Everill on carrying out product discovery to drive user adoption and business value at Sainsbury's
Nadiem von Heydebrand on treating data products a bit like how fund managers think of investment portfolios
Osian Jones on running a data platform team as a product team, especially re: coming up with the right ways to collect feedback, understand adoption, and quantify your impact when you're in a central/platform team (i.e. usually multiple steps removed from the end user impact you're enabling)
A year ago, I was also a guest on the podcast (episode #130). I can’t believe it’s only been a year - I listen back to what I’m saying and who I was at the time and it feels like so much longer ago. Lots has changed both personally and professionally, though I still stand by everything I said on the episode 😄
Besides the podcast, Brian has a number of other resources on his website, including articles, a newsletter, a course, and the Data Product Leadership community (which I am part of).
// in other news
Busy period for in-person DPM meetups:
London: Next meetup is on 9th Dec, and you can sign up for future meetups here
Montreal: Sign up for future meetups here
Barcelona: Next meetup is today (20 Nov), and you can sign up for future meetups here
Paris: Next meetup is on 3rd Dec, and you can sign up for future meetups here
Cambridge/Boston: It’s not explicitly branded as a DPM meetup, but seeing the invite list of the Boston low-key data happy hour, it’s not not a DPM meetup 😉
Dublin: Meetup isn’t launched yet, but my colleague Sagar is collecting interest via this form.
[Your city here?]: Hit me up if you’re thinking of starting a DPM meetup - I’d love to help! Arielle and I will be putting together some resources from our experience hosting 15+ DPM meetups over the past couple years to help others do the same 😊
DPM podcast: I recently recorded some episodes I’m super excited about, and can’t wait for them to get released. If you’d like to come along as a guest, fill in this form!
Books I’m reading:
DPM-related: I’ve been loving re-reading Team Topologies. So much of the reason why technical teams do or don’t deliver value comes down to org structure, ways of working, and preserving (or not) domain knowledge. Team Topologies does an excellent job articulating the antipatterns involved in most tech orgs, and lays out a set of principles for overcoming them.
Nonfiction, but not DPM-related: Good Work by Paul Millerd is about working out what ‘good work’ means to you. Along with Paul’s first book, The Pathless Path, it’s a book I can’t shut up about, and if we’ve met recently I have almost certainly gone on a 20-minute ramble and/or gifted you a copy of the book.
Fiction: I love science fiction, and on a serious note I think it helps product managers think more creatively and be more visionary about the future. It’s also just really good fun. Over the past couple of months, I’ve listened to approx. 238 hours (!) of J.S. Morin’s series Galaxy Outlaws, Astral Prime, and Mercy for Hire. It’s so good, and there’s so much of it (I just started the 4th series, so another ~60 hours of quality listening). And I would especially recommend it in audiobook format. Super fun.
¹ Look, I know the ‘K’ in KPI stands for ‘key’, but the term KPI is used so liberally that I really do think we need to clarify when we’re talking about metrics that are actually key. Plus, there’s something called RAS syndrome - the reason why we use phrases like “ATM machine”, “PIN number”, “HIV virus”, and “LCD display”.