"Customer Service" should be called "Customer Elation"
We are inundated these days with never-ending pressure to deliver higher and higher levels of customer service, no matter what business or sector we are in. We are all chasing the holy grail of the "Net Promoter Score" (NPS), wondering what it will take to get a customer to answer the question "Would you refer us to your friends and family?" with a 9 or a 10. I'm sure you've been asked this question multiple times.
NPS is a wonderful benchmarking tool, and it allows a company to compare its customer satisfaction to that of any other company, inside or outside its own industry. If you want to compare yourself to an airline, a tech company, a hotel chain, or a restaurant group, the data is available. It's a remarkable tool for looking at customer service. However, when the measurement is viewed in a vacuum, treated as the sole arbiter of customer service levels, and often followed somewhat slavishly, I feel the tool loses some of its meaning.
History and Calculation
The NPS is a simple calculation. If respondents give you a 9 or 10 on the 0-10 scale, they are considered Promoters. If they give your firm a 7 or an 8, they are considered Passives, and are neutral. If they give your company a 6 or lower, they are considered Detractors. In the calculation, the percentage of respondents who are Detractors is subtracted from the percentage who are Promoters, with the Passives ignored; this yields your Net Promoter Score, which can range from -100 to +100. A score above zero is considered good, while a score over 50 is considered excellent.
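To make the arithmetic concrete, here is a minimal sketch in Python of that calculation, assuming the standard 0-10 response scale; the function name and the sample scores are illustrative, not drawn from any real survey.

```python
def net_promoter_score(responses):
    """Compute NPS from a list of 0-10 survey responses.

    Promoters score 9-10, Passives 7-8, Detractors 0-6.
    The score is the percentage of Promoters minus the
    percentage of Detractors, ranging from -100 to +100.
    """
    if not responses:
        raise ValueError("need at least one response")
    promoters = sum(1 for r in responses if r >= 9)
    detractors = sum(1 for r in responses if r <= 6)
    return 100.0 * (promoters - detractors) / len(responses)

# Hypothetical survey: 5 Promoters, 3 Passives, 2 Detractors out of 10
scores = [10, 9, 9, 10, 9, 8, 7, 7, 5, 3]
print(net_promoter_score(scores))  # 50% Promoters - 20% Detractors = 30.0
```

Note that the three Passives count toward the total number of responses but neither add to nor subtract from the score.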
Originally described by Frederick Reichheld in a 2003 Harvard Business Review article, this benchmark has, in 12 short years, become one of the most sought-after measurements. Part of that rise was the early adoption of the scoring system by Apple, Inc., which has achieved enviable scores on this scale - so much so that many companies wish to duplicate its success. Simply running into a friend gushing about their new iPhone should tell you everything you need to know about what a Promoter looks like.
Blessing and a Curse
One of the benefits of the NPS is that it is a weighted score: it is much harder to win a customer as a Promoter (2 of the 11 possible responses) than to lose one as a Detractor (7 of the 11). That makes it a much stronger indicator than a binary Yes or No response, even though it is somewhat counter-intuitive to rate the "referral question" on a 0-10 scale. Making it harder to win a Promoter means, in theory, that the company needs to work harder on training and on delivering core customer service values to its customers. This has an interesting corollary when driven down to front-line staff, who are measured on the NPS the company receives, and often on their own individual ratings. Further, companies that are serious about increasing their NPS focus very hard on how to turn a Passive customer into a Promoter. This has led to some fairly interesting strategies when driven down to front-line sales personnel and customer service agents.
I'm fairly certain that you have been on the phone with, or in a retail location of, some company within the last few years and been told that you will be receiving a survey - at which point the customer service rep or salesperson immediately asks you to rate their service as a "10". I've even seen a letter from an auto dealership asking that, if you would not rate their service as a 10, you contact the dealership manager to discuss it before completing the survey. This sort of gaming of the system is naturally going to occur when such heavy focus is placed on a single scoring mechanism.
Are we asking the right question?
I know full well that the folks at Harvard are much smarter than I am. However, my question is this: is the NPS the right scoring mechanism at all? When we complete surveys, it is fortunately very rare that the referral question is the only one asked (although sometimes it is), which allows other data to be gathered. But with the near-fetishistic obsession with NPS, do companies look very hard at those other survey results? And should this be the only arbiter of high-quality customer service - namely, how someone will answer a single question on a survey, given how poor survey response rates are in the first place?
All of this was brought to mind recently by a customer service interaction I had with a local company in B.C.'s Lower Mainland. Cypress Mountain, a fantastic local ski area where I've chosen to spend my money for several seasons, had an abysmal season for snow last year - so much so that I didn't even bother to attend once. However, I had purchased their discount card in advance - admittedly a very inexpensive item in the grand scheme of things, at about $70 - which provides both a one-day lift ticket and a discount on any other lift tickets I purchase through the year. I called their shop and was completely elated to learn that the card I purchased last year would be valid for the entire season this year. That's how I still feel about that interaction - completely elated.
I will not likely be sent a survey by Cypress Mountain, because I didn't have to identify myself, and I will not be sought out to rate their service. However, I've told at least 20 people about this in person, and I've posted about this outstanding response all over social media - letting many potential and past customers know the strong feelings I have because of a customer service gesture necessitated by something completely out of the company's control. I've talked about how simple the conversation was - it took about 30 seconds - and the true happiness that this little gesture has given me towards the company. How much more is that worth than my survey results? And really - shouldn't customer elation be our goal, rather than a number?
We will still be measured on customer service, sometimes slavishly. But I think I have a sterling new example of outstanding customer service to share the next time we discuss this in my organization.