I'm doing some work for a client around Net Promoter Score at the moment. The score is run by a company called Satmetrix.
Net Promoter Score (NPS) is a bit of a twist on recommendation/referral scores: it subtracts the proportion of people who aren't likely to promote/refer your brand (detractors) from the proportion who are (promoters). The result is a single score that fluctuates as both ends of the referral spectrum change.
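To make that concrete, here's a quick sketch of the standard calculation (on the usual 0-10 'likelihood to recommend' scale, where 9-10 are promoters and 0-6 are detractors; the survey data here is made up):

```python
def nps(scores):
    """Net Promoter Score from 0-10 'likelihood to recommend' ratings.

    Promoters score 9-10, detractors 0-6; passives (7-8) count only
    toward the total, diluting both ends.
    """
    n = len(scores)
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / n

# 4 promoters, 3 passives, 3 detractors out of 10 responses
print(nps([10, 9, 9, 10, 8, 7, 7, 5, 3, 6]))  # 10.0
```

Because the score nets one tail against the other, it moves whenever either end shifts, even if the average stays put.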
It's not a bad idea. Makes lots of intuitive sense. And in general, I'm a fan of calculated metrics like this as they are more sensitive to variance than straight averages.
Satmetrix released a white paper to explain the nuances of how NPS is calculated.
I loved everything about NPS until I read the white paper. Way to destroy an easy, intuitive and relatively obvious concept with some not-so-easy and less obvious logic and statistics.
There are two main things I didn't like about the way they presented the score and how it was calculated.
Firstly, this is a graphic of the inputs into their model:
(Sorry for the low resolution, I can't seem to get it much higher)
This is a pretty standard regression setup. On the left you have a bunch of things you think affect the things on the right. The things on the right (Behavior and Referral) are what you are trying to 'predict'.
The red flag: why have Referral on the right? Especially when on the left you have 'Likelihood to Recommend'? Essentially, you are attempting to predict referral behavior with intent to refer. Hmmm, I wonder if those might be related? Of course they are. You don't need a model to tell you that.
Why not just try to predict purchase behavior? That is, after all, what you want to affect through good customer service.
The only reason to keep referral on the right is to make the result stronger. And sure enough, they get some high correlations - 80% of the time, Likelihood to Recommend was the strongest correlate with 'Behavior'.
But it's not really 'Behavior' - it's 'Behavior/Referral' and you are using things you know will affect referral ('will you refer me') to prove your point. It's like using rainfall to predict water levels and then being amazed when it does.
Ok, so there are some issues with the model. I could easily let that slide; after all, they have years of in-market data to prove the usefulness of NPS.
Again, it took me a couple of looks to believe my eyes. They are claiming an almost 90% correlation between NPS and five-year revenue growth based on what are essentially three outliers.
The thing with linear correlations like this is that if the points aren't weighted in some way, one (or a few) outlying points can push the correlation up (or down) significantly. In this case, take out Southwest and Alaska and the relationship takes a dive.
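You can see how fragile this is with a toy example (made-up numbers, not the actual Satmetrix data): a flat cluster of points plus two outliers is enough to manufacture a near-perfect Pearson correlation.

```python
def pearson(x, y):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Six pretend companies with essentially no NPS/growth relationship...
x = [1, 2, 3, 4, 5, 6]          # pretend NPS
y = [3, 1, 4, 2, 3, 2]          # pretend 5-year revenue growth
print(round(pearson(x, y), 2))  # -0.05: basically nothing

# ...add two Southwest-style outliers, and suddenly it's "almost 90%"
print(round(pearson(x + [20, 25], y + [20, 25]), 2))  # 0.98
```

Unweighted, the two extreme points dominate both the covariance and the spread, so they alone decide the headline number.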
Besides, should the relationship even be linear? Surely a brand like Southwest should benefit MORE from a higher NPS, as recommendations propagate virally: more recommendations mean more trial, which means more recommendations.
Considering the complexity of what makes up five-year revenue growth, I would be AMAZED if you could eyeball a relationship like the one above and actually SEE the effect of customer referrals. It's in there somewhere, but it's not likely visible.
So, while I like the idea, the evidence leaves a lot to be desired.
Just came across an article that does some validation research on NPS. Not surprisingly, it couldn't replicate the original NPS results. And this quote from NPS inventor Frederick Reichheld is interesting:
"All we did was quantify this common sense in a way that made sense to business leaders—the target audience for my book. These practical leaders have little interest in advanced statistical methods. Frankly, we see little value in continued debate about cause versus correlation, timeframes, or statistical methods."

If I were an executive at a company that got sold on this score, I'd find that a little condescending!