When Björn Ulvaeus of Abba met with media for a rare personal interview the other day, he described how the band turned down the chance to tour America at the height of their fame, disappointing hundreds of thousands of fans and pissing off Abba’s legendary manager Stikkan Anderson. It really seemed like the wrong move. Looking back, however, Björn feels it was one of the best decisions they ever made.
Not touring America freed up time in the recording studio, where Abba churned out classics that still bring joy to millions of fans, long after everyone has forgotten which bands went on big tours some forty years ago.
I thought about that the other day when I heard Richard Eckles talk about how Amplitude uses something called North Star Metrics, a framework that helps in strategic decision making.
There seems to be a growing concern in the product community that we’re flying by the seat of our pants while other disciplines—from marketing to sales and HR—are all driven by metrics.
That’s probably a valid feeling, but there’s a reason for it: product success is hard to measure. Traditional KPIs are often lagging, meaning they tell you what has already happened. How can we arrive at leading indicators, so that we can base product design on more than gut feeling?
(Jake Knapp had a similar motivation when coming up with the design sprint methodology, which he describes in his book Sprint as “taking the hard part out of learning the hard way”.)
The idea of shifting focus from lagging to leading indicators is not new, of course. It’s largely the point of the Objectives and Key Results framework, invented in the 1950s by Peter Drucker (who called it MBO), refined over decades at Intel under Andy Grove, made famous by Google, and now permeating startup culture worldwide.
Just like “agile”, OKR offers useful concepts. But as anyone who has played around with either of these methodologies will have discovered, they can be quite tricky to translate into something that can be lived and breathed.
That’s why I liked how Richard Eckles gives us concrete examples of real-world implementation. The way they look at metric-driven decision making at Amplitude, a metric has to:
A) be based on measures of actual customer value,
B) be aligned with the product strategy and
C) give leading indicators of future success.
Eckles then gives us a framework within the framework, so to speak. He says that in order to decide which metric should be your north star, you first have to know what game you’re in. He offers three options:
- The attention game, as played by Spotify
- The transaction game, as played by Amazon
- The productivity game, as played, incidentally, by Amplitude
To understand the true nature of a leading indicator, it’s interesting to note that Spotify focuses on the number of songs listened to by premium subscribers, and Amazon’s north star metric is the number of orders placed by Prime customers. See? Nothing to do with turnover or profit margins. Those would be lagging indicators, whereas if your most avid fans start behaving differently, it’s a sign that the ground is about to shift under your feet.
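To make the distinction concrete, here’s a minimal sketch of the difference between the two kinds of metric. The event data and field names are entirely hypothetical (this is not a real schema or anything Amazon actually runs), but it shows how an Amazon-style north star, orders placed by Prime customers, is a behavioural count, while revenue is a lagging total:

```python
# Hypothetical order log; field names are illustrative only.
events = [
    {"customer": "a", "prime": True,  "order_total": 25.0},
    {"customer": "b", "prime": False, "order_total": 40.0},
    {"customer": "a", "prime": True,  "order_total": 15.0},
    {"customer": "c", "prime": True,  "order_total": 60.0},
]

# Leading indicator (north star): behaviour of your most engaged customers.
prime_orders = sum(1 for e in events if e["prime"])

# Lagging indicator: revenue only tells you what has already happened.
revenue = sum(e["order_total"] for e in events)

print(prime_orders)  # 3
print(revenue)       # 140.0
```

If `prime_orders` starts dipping while `revenue` still looks healthy, that’s exactly the early warning the north star framing is meant to surface.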
I like the focus on what provides actual value to your users (as opposed to the kind of vanity metrics that make you feel good when you report them to the board of directors). The hard part, of course, is that in real life you still have to balance different customer values.
In the case of Abba’s would-be American tour, it really came down to the fact that band members were in the process of starting families and didn’t want to stay away from their kids. In the long run, that ‘metric’ probably turned out to provide maximum value for the highest number of ‘customers’, including the US fans who didn’t get to see their favourite band live.