ISSUE #16 - APRIL 19, 2023
NPS is flawed by nature as a product metric. In this post, I'm showcasing why it's not a good idea for product leaders to base important product decisions on it.
A few weeks ago, I had a coffee with the product director of an eCommerce SaaS company. At some point during the discussion, he brought up that their NPS was at 10 at the time and that one of the annual targets of the product team was to get this number to at least 25.
Now, let's go a few years further back in time. When I first heard about NPS as a metric, I thought it was a very interesting concept. The whole narrative of promoters and a self-sustained growth machine sounds very attractive and easy to grasp, especially if - like any product person that gives a damn - you want to see your user count grow. Soon enough, I realized that NPS is significantly flawed as a metric when it's tracked by a product team and used as a KPI for significant decisions.
Returning to the discussion from the beginning of the post, I started wondering why they emphasized NPS so much and believed it to be so valuable that it ended up as a goal for the product team. So, I asked to see a small sample of their NPS responses to see for myself what they were seeing. And this actually verified my opinion about the flawed nature of NPS.
Still, NPS is a quite popular metric among product leaders and product teams. In a LinkedIn poll I ran a couple of weeks ago, at least 4 out of 10 product leaders said they are tracking NPS in their product teams and making important product decisions based on this score.
In this article, I want to showcase why I believe this is not a good idea and that product leaders should look into other metrics with more substance to make product decisions.
Net Promoter Score was invented in 2003 by Fred Reichheld as a simple way to measure customer loyalty that could be applied across various industries. It's calculated based on the customer's answer to a single question: "On a scale of 0 to 10, how likely are you to recommend [company] to a friend or colleague?"
Then, you take the percentage of people who responded with anything between 0 and 6 (the detractors) and subtract it from the percentage of those who responded with 9 or 10 (the promoters). Those who answered 7 or 8 (the passives) count toward the total but don't otherwise affect the score. And there you go: you have your NPS, which can range from -100 to 100. Simply put, the higher this score, the better for the company. But people also argue that when you get many people so happy that they fall into the promoters category, you will likely create "a self-sustained growth machine," since they will recommend you to their friends or colleagues.
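To make the arithmetic concrete, here's a minimal sketch of the calculation in Python. The function name and the sample scores are mine, purely for illustration:

```python
def nps(scores):
    """Compute the Net Promoter Score from a list of 0-10 survey answers."""
    total = len(scores)
    promoters = sum(1 for s in scores if s >= 9)   # answered 9 or 10
    detractors = sum(1 for s in scores if s <= 6)  # answered 0 to 6
    # Passives (7 or 8) count toward the total but otherwise cancel out.
    return 100 * (promoters - detractors) / total

# Example: 4 promoters, 3 passives, 3 detractors out of 10 responses
# -> 40% promoters - 30% detractors = NPS of 10
print(nps([10, 9, 9, 10, 8, 7, 8, 3, 5, 6]))  # 10.0
```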
So, before we jump into the main flaws, let's start with the obvious. NPS was not invented as a product metric but as one intended to measure loyalty. It could potentially be of value for teams that are specifically working on product growth, but it's far from being a metric of general use. But let's see why else I consider NPS to be flawed.
The "one question to rule them all" rationale seems quite questionable to me. In some contexts, this question (how likely are you to recommend [company] to a friend or colleague?) can make sense. Mostly in B2C contexts. However, in most B2B contexts, not so much. Many respondents of NPS surveys run by B2B products also agree with that view. Specifically, in the sample of responses that I talked about at the beginning of this post, there were quite a few cases with answers from 6 or below (detractors), who left a comment along the lines of "why should I recommend a software for work to a friend or colleague?" So, suppose that you were the product manager of that product, and you saw those responses. What would you make out of them? Would you trust them? Is this person really a detractor or just someone confused because of an out-of-context question they were asked?
Also, even if you actually want to measure customer loyalty, why should all companies and industries use the same way of measuring it? Each product and its business has its own characteristics. As product people, we should be able to measure loyalty or organic growth more accurately. For instance, by its very nature, the NPS question is essentially asking about an intention to refer. So, you could launch a referral program and, instead of asking people to answer the NPS question, address the people you know are actually neutrals or detractors (i.e., those who haven't referred someone) and ask them what prevents them from doing so. The feedback you collect this way will be far more accurate, and most importantly, you will be far more confident that you are asking the right people.
The above is just an example. I am not proposing that this is how we should measure loyalty or get feedback from customers who do not belong to the "promoters" segment. But I suggest that each product team should think more deeply about the best way to measure customer happiness and success based on its unique traits.
When people are asked to rate something on a scale, as they are in the case of NPS, their responses are not directly comparable. This happens for a simple reason: if I rate a product or service 7/10 and another person does the same, that doesn't mean we are equally happy. For example, one of us might be extremely satisfied but a frugal rater, while the other could be dissatisfied but a generous rater or a people pleaser.
Getting back to the sample of NPS responses we were talking about, I ran an analysis focusing only on the detractors' responses. Out of 128 of those, I could confidently claim that at least 35% were not actual detractors. They were either neutrals or promoters who were just frugal raters. How do I know? From the comments in their responses, which all looked something like the following: "Amazing," "best tool I ever used," and "happy so far." So, what do you think? Are those people really detractors? And most importantly, would you really rely on a metric to make crucial decisions when 35% of the responses from unhappy customers (11% of total responses) are clearly inaccurate or untrue?
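As a side note, a rough first pass of this kind of sanity check can even be automated. The sketch below is a naive keyword scan I'm including purely for illustration - the response format and keyword list are my own assumptions, and actually reading the flagged comments remains the decisive step:

```python
# Naive first pass: flag detractor responses whose comments sound positive.
# The response structure and keyword list are illustrative assumptions.
POSITIVE_HINTS = ("amazing", "best tool", "happy", "love", "great")

def suspicious_detractors(responses):
    """Return detractor responses (score <= 6) with positive-sounding comments."""
    flagged = []
    for score, comment in responses:
        if score <= 6 and any(hint in comment.lower() for hint in POSITIVE_HINTS):
            flagged.append((score, comment))
    return flagged

responses = [(3, "Amazing"), (6, "happy so far"), (2, "too buggy"), (9, "great")]
print(suspicious_detractors(responses))  # [(3, 'Amazing'), (6, 'happy so far')]
```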
Another critical aspect to consider is that NPS responses can be affected by many factors irrelevant to the product per se. The question used to measure NPS is so general that a response could encapsulate the customer's perspective on any area, from the product itself to customer support or their experience with the sales team. A good or a bad response could be connected with anything. So, as a product team, you are tracking a metric that could encapsulate aspects far beyond your influence.
To be more concrete, getting back to the example I used before, several answers from detractors were complaints about customer support. In particular, people claimed support occasionally took too long to respond or to fully resolve their issues. In this case, as a product team, you are optimizing based on a metric skewed by a different team's efforts.
Building on the previous argument, NPS doesn't give you specific information about what's going wrong or what you are doing right. Consequently, it is not an actionable piece of feedback. In the sample I came across, at least 27% of all the responses were not specific. They either entirely lacked a comment (the majority of them) or, even if they had one, the comment was too generic: something like "buggy," "not good enough," or "needs improvement." Yes, this can give you a sense of the user's sentiment about your product. Still, it's primarily a distraction from meaningful feedback and insights collection efforts, such as user interviews.
I have often encountered cases where it's hard to directly connect a feature with a measure of its success. So, it's not uncommon to answer the question "how do we know if we're successful or not" with "we'll measure the NPS of people that used that feature." Of course, this approach has absolutely no rational grounds; it's just lazy. When shipping something, you need a clear metric that shows whether you're successful, and you need to be able to connect this metric with value to the user. In such cases, choosing to just measure NPS is only a shortcut that prevents you from thinking hard about how a feature would add value to a user and how you can measure that.
So, that was my honest (and probably harsh) criticism of NPS. While it's a quite popular metric, and a high score makes management, investors, and even teams happy, it cannot be used as a measure of success for a product. If you do use it, do so with caution, and even when using insights from NPS surveys as input for product decisions, make sure to inform those decisions with further, proper research with users and customers.