"Experience Investigators" crane logo
Eager to get started? Call us at 312-676-1315.

The Spectrum of Data-Driven CX (And Why There’s No Magic Metric)

Every team at an organization must prove how their work contributes to the bottom line — and customer experience is no exception. We know CX teams can deliver significant business advantages:

  • Satisfied customers are much more likely to purchase more
  • Companies that provide personalized experiences grow revenue 40% faster
  • Businesses achieve a 2.3x higher customer lifetime value by prioritizing CX

Now comes the tricky part: How can we connect day-to-day activities and specific investments to those undeniable business benefits?

CX teams use a variety of metrics to guide their efforts, drive improvements, and measure ROI. Artificial intelligence (AI) is also changing the game and making time-to-insights faster and more efficient.

But we see teams fall into an all-too-common trap when they don’t focus on why they’re collecting these metrics. It’s easy to focus so much on gathering data or finding the perfect metric… we end up spending more time measuring than actually executing our ideas. And that’s a problem.

We want to dispel the belief that CX teams need perfect data to move forward. Let’s look at the most common CX metrics to understand how they help, and explore why an abundance of data can sometimes become an innovation blocker.

3 Metrics CX Teams Use to Measure Customer Feedback

There are a lot of customer experience metrics teams can track, but we see three most commonly used: Net promoter score (NPS), customer satisfaction score (CSAT), and customer effort score (CES).

Few experienced professionals dare to venture off from these tried-and-true metrics. This is often because there is a lack of resources — managing one or two metrics is all one person or team can handle. Other times, it’s simply a symptom of the embarrassment of riches; too many choices can be overwhelming, and we end up not making any moves.

Each of these metrics can identify underlying experience concerns (we’ll get to those next). Think of them like circles on a Venn diagram — some metrics address specific areas, many overlap with each other, and none are more important than the others.

And while metrics are great for measurement, it’s important to see these for what they are. Metrics like these help measure overall sentiment and help identify areas to improve for the customer. But they are part of a bigger view that needs to include operational data, usage rates, behavioral analytics, and more. Customers are nuanced. Leveraging their feedback needs to be, too.

For this article, we’re diving into these three metrics so you have ideas on the best ways to use them. Just promise us you won’t rely on a magic metric as the answer, ok?

Net Promoter Score

What is it? NPS measures how likely a customer is to recommend a brand to someone else (friends, family, colleagues).

How do you measure it? Ask customers “How likely are you to recommend us to a friend or colleague?” on a scale of 0 to 10. A 0 means highly unlikely, and a 10 means extremely likely. People who select 9 or 10 are considered Promoters, those who select 7 or 8 are Passives, and those who select 6 or lower are Detractors.

How do you interpret the results? Calculate your NPS by subtracting the percentage of Detractors (scored 6 or lower) from the percentage of Promoters (scored 9 or 10). An NPS rating above 0 is considered good, above 50 is considered excellent, and anything above 75 is considered world-class.
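The subtraction above can be sketched in a few lines of Python. The survey responses here are hypothetical, purely to illustrate the arithmetic:

```python
# Illustrative NPS calculation from hypothetical 0-10 survey responses.
def nps(scores):
    promoters = sum(1 for s in scores if s >= 9)   # scored 9 or 10
    detractors = sum(1 for s in scores if s <= 6)  # scored 6 or lower
    # Passives (7-8) count toward the total but neither add nor subtract.
    return round(100 * (promoters - detractors) / len(scores))

responses = [10, 9, 9, 8, 7, 6, 5, 10, 3, 9]
# 5 Promoters, 2 Passives, 3 Detractors out of 10 responses
print(nps(responses))  # 20 -> above 0, so "good"
```

Note that Passives still appear in the denominator, which is why a flood of 7s and 8s pulls your score toward zero even without a single Detractor.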

What are its pros? NPS is quite popular with executives because it’s easy to understand and communicate. It’s also a great indicator of how customers feel about the overall brand and relationship.

What are its cons? Many organizations don’t execute NPS in a way that gains the best results. Even the creator of NPS, Fred Reichheld, feels organizations have misinterpreted the framework and diminished its value by linking NPS to things like employee bonuses. He now believes organizations should measure earned growth instead.

Customer Satisfaction Score

What is it? CSAT measures how satisfied a customer is with a specific product, service, or interaction, or the company as a whole.

How do you measure it? Ask customers “How would you rate your overall satisfaction?” with your company, its products, services, and interactions. It’s most common to use a five-point scale with these options: very unsatisfied, unsatisfied, neutral, satisfied, very satisfied.

How do you interpret the results? Companies can calculate CSAT as an average of the 1-5 responses, or as the percentage of 4-5 responses (which we recommend). The final score is represented as a percentage. Teams should typically aim for 80% or higher.
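Here is the percentage-of-satisfied approach as a short Python sketch, again with hypothetical responses:

```python
# Illustrative CSAT calculation on a 1-5 scale, counting only 4s and 5s
# ("satisfied" and "very satisfied") toward the score.
def csat(scores):
    satisfied = sum(1 for s in scores if s >= 4)
    return round(100 * satisfied / len(scores))

responses = [5, 4, 4, 3, 5, 2, 4, 5, 4, 1]
print(csat(responses))  # 70 -> below the typical 80% target
```

Counting only top-box responses is stricter than averaging, which is part of why we recommend it: a pile of lukewarm 3s can't quietly inflate the score.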

What are its pros? Customers are fairly familiar with CSAT questionnaires, so it’s easy to understand and implement. CSAT can be a touchpoint metric, too, meaning it helps measure specific parts of the brand experience (as does your CES).

What are its cons? “Satisfied” is a fairly low bar that may give organizations a false sense of security, because it doesn’t necessarily lead to loyalty. Also, CSAT questions are often not standardized across organizations, making it difficult to compare results.

Customer Effort Score

What is it? CES measures how much effort was involved for your customer during a specific interaction.

How do you measure it? Ask customers to agree or disagree with the statement “[Your company name] made it easy for me to handle my issue.” CES surveys can also include an open-ended follow-up question asking for feedback on the response. The respondent can choose from seven answer choices ranging from strongly disagree (score 1) to strongly agree (score 7).

How do you interpret the results? Calculate your CES by finding the average of all responses. Generally speaking, an average CES higher than five is considered good. But your industry and history should also factor into how you interpret these results.
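The averaging step looks like this in Python, using made-up responses on the 1-7 agreement scale:

```python
# Illustrative CES calculation: the mean of 1-7 agreement scores,
# where 1 = strongly disagree and 7 = strongly agree.
def ces(scores):
    return round(sum(scores) / len(scores), 1)

responses = [7, 6, 5, 6, 7, 4, 6, 3]
print(ces(responses))  # 5.5 -> above 5, generally considered good
```

Because CES is a simple mean, a handful of very low scores can drag it down sharply, so it's worth reading the open-ended follow-ups from those low scorers rather than just watching the average.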

What are its pros? CES helps improve customer service and other routine interactions. It’s a touchpoint metric and can address specific roadblocks in the customer journey.

What are its cons? Effort scores are fairly limited in scope and don’t provide an abundance of information to improve the entire customer experience.

Complementary CX Metrics

Each of the above CX metrics reflects deeper elements of the customer experience across the journey. When assessing how to improve those scores, teams will often investigate several KPIs and operational metrics like the following:

  • Average handling time: How long does it take for customer service agents to resolve customer issues?
  • Average purchase value: What is the average dollar amount spent by customers?
  • Customer churn rate: What percentage of customers stopped purchasing from a company during a specific time period?
  • Customer lifetime value: How much do customers spend across their entire time as a customer?
  • Customer retention rate: What percentage of customers continue to do business with the company over time?
  • First call resolution: What percentage of customer issues are resolved on the first call?

Each of the above metrics can lead teams down a rabbit hole to identify issues, propose solutions, and measure improvements.

The Trap of Data-Driven CX

This brings us to the big question: Which metric(s) should CX teams prioritize for the best result? (Please, let there be a magic solution!)

The truth is all of them are useful. But none of them are essential to move your program forward. Let me explain.

Only a portion of your customers will receive a survey. And only a portion of those who receive a survey will give you feedback. This small sample of respondents may provide useful insights, but it’s not an accurate representation of your customer base.

Further, each metric provides a glimpse at just one area of the customer experience. For teams of just one or two CX team members, it could easily take an entire year to assess each metric and the areas that impact it — and by then, new problems have likely arisen somewhere else in the journey!

If teams focus entirely on improving NPS or CSAT, they’ll inevitably miss out on other major opportunities to make the experience better for customers.

It’s understandable to want data to make calculated decisions, but it’s easy to get stuck while waiting for data to come in — or spend far too much time analyzing your data before it’s time to collect more.



All CX Progress is Valuable

CX teams should prioritize making gradual improvements to the experience. CX metrics can provide a general indicator of progress, but we often have, and can often trust, a gut feeling of where to focus next.

Of course, we need to track some metrics to demonstrate the value of customer experience efforts. But don’t let data collection and analysis overrun your job. Constantly seek out areas you can improve, and keep listening to customers to identify the most pressing areas to focus on.

Use artificial intelligence to your advantage. It can greatly accelerate your measurement and free up your time to focus on more important things. Just be sure to ask why you’re tracking what you track, and have a clear purpose for your AI use.

As long as your efforts are aligned with your Customer Experience Mission, you’ll continue to make meaningful improvements.

Metrics matter for measurement. But progress is more than reporting on a metric. Focus on the actions you’ll take based on what your metrics tell you. (And keep going! Metrics like these don’t change overnight.)
