What’s Your Customer Effort Score (CES)?
Customer service metrics are like puzzle pieces.
A customer satisfaction survey (CSAT) surfaces opinions about individual interactions with your support team. Net promoter score (NPS) reveals customer sentiment—and satisfaction—around your company and product on the whole.
Both metrics are valuable, but they don’t form the full picture. They may even get too much credit.
According to Gartner, the greatest predictor of customer loyalty isn’t customer satisfaction or net promoter score—it’s ease of experience.
To measure it, savvy organizations add a third and lesser-known survey methodology to the mix: customer effort score (CES).
In this article, we’ll cover what it is, when to use it, how to measure it, and most importantly, how to leverage survey results to give customers what they really want—an experience so easy they can’t help but stick around.
What is Customer Effort Score (CES)?
Customer effort score is a single-item metric that helps organizations identify friction points in the customer experience.
To do so, a CES survey typically asks customers to rank how easy (or difficult) it was to achieve a resolution—whether it’s getting a question answered, an issue fixed, a product purchased, or a request fulfilled.
Unlike net promoter score, which poses a much bigger question (How likely are you to recommend us to a friend?), customer effort surveys shine a light on specific service activities that make customers churn.
According to Harvard Business Review, some of these high-effort activities include:
- Customers switching channels to get their problem resolved
- Customers repeating or re-validating their information or issue
- Customers receiving generic service
- Customers getting transferred to different agents
Beyond customer service, CES surveys can also help a company understand its customers’ interactions with various products:
- How does the UI help (or hurt) new feature adoption?
- Where do customers become frustrated or confused?
No matter how a brand approaches CES, the overarching goal remains the same: to identify and improve upon rough spots in the customer experience.
Armed with contextual, real-time feedback, organizations that measure CES build an actionable understanding of customer interactions and leverage these insights to make better product and service decisions—the kind that drive loyalty and customer retention.
When to use Customer Effort Score
By definition, customer effort score is a transactional metric. To understand how hard (or easy) an experience is for customers, they must first have the experience.
This is where CES differs dramatically from NPS: Where the latter seeks to understand a customer’s overall impression of your brand, a CES survey asks about a specific event instigated by the customer.
There are numerous customer touchpoints where triggering a CES survey makes sense. Here are a few worthy of consideration:
1. After an interaction with customer service
Like customer satisfaction surveys (CSAT), the most common use-case for CES is immediately following a resolved interaction with customer support. Unlike CSAT, CES surveys don’t focus on customer impressions of agent competency. Instead, a CES survey zeroes in on the process of resolution by asking customers to evaluate how much effort it took to achieve.
Whether your team supports customers across multiple channels or focuses mostly on phones, purpose-built tools like NiceReply make it easy to collect CES feedback by automatically sending surveys to customers after each support interaction.
2. After a self-service interaction with your company’s website
Most modern customers want the option to help themselves, but only when self-service is done right. To remain effective, any business offering self-service support options must collect customer feedback (at least periodically) to make sure those options aren't hurting customer satisfaction more than helping it.
This includes knowledge base articles, but it also extends to chatbots and any other customer service automation that stands between customers and a human.
3. After an interaction with your product or company website that led to a conversion
As CES gains more attention, a growing number of companies are leveraging the methodology to better understand the massive role product plays in the customer experience. What they want to know is pretty simple: How easy is it for customers to use (or transact with) our products?
Depending on what matters most to your business, your team may want to trigger a CES survey after someone signs up for a trial or demo, upgrades their subscription, or makes a purchase.
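If your product already emits events for these moments, wiring up the trigger can be a few lines of glue code. Here's a minimal sketch in Python; the event names and the send_ces_survey helper are hypothetical stand-ins for whatever your survey tool or webhook actually exposes.

```python
# A minimal sketch of event-triggered CES surveys. The event names and
# the send_ces_survey helper are hypothetical placeholders; swap in
# your own survey provider's API.

TRIGGER_EVENTS = {"trial_signup", "subscription_upgrade", "purchase_completed"}

def send_ces_survey(email: str, event: str) -> None:
    # Placeholder: call your survey provider here.
    print(f"Sending CES survey to {email} after {event}")

def handle_event(event: str, customer_email: str) -> None:
    # Only survey the conversion moments your team cares about.
    if event in TRIGGER_EVENTS:
        send_ces_survey(customer_email, event)

handle_event("trial_signup", "jane@example.com")
```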
How to measure Customer Effort Score
Reducing customer effort starts with intelligent survey design. Organizations that gain meaningful, actionable insights from CES begin by crafting an effective CES survey.
At a minimum, this means the survey is:
- Automated by event- or service-based triggers
- Easy to understand and act on
- Optimized for mobile
Step 1: Build an effective CES survey
Design your question(s)
It all starts with the ask. The most effective surveys are short, simple, and difficult to misinterpret. Whether the survey asks a question or offers a direct statement, straightforward language and easy-to-intuit visual cues help avoid steering customers toward an ideal response. A few examples:
- How easy was it for you to solve your problem today?
- Aircall made it easy to solve my issue.
- How easy was it for you to sign up for our trial?
Choose your scale(s)
Likert scale: Also known as the “Agree/Disagree” continuum, this 7-point scale asks customers to rank how much they agree with statements like “The company’s website made it easy for me to make a purchase.” Responses may include associated numbers, colors, or both.
To calculate your CES using a Likert scale
You can either take the average of all scores, or divide the number of responses in the 5-7 range by the total number of responses (then multiply by 100 to express it as a percentage).
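Both approaches fit in a short Python sketch; the sample responses below are made up for illustration.

```python
# Two common ways to score a 7-point Likert CES survey: a simple mean,
# or the share of "easy" (5-7) answers expressed as a percentage.
responses = [7, 6, 5, 3, 7, 2, 6, 7, 4, 5]  # illustrative sample data

average_ces = sum(responses) / len(responses)

positive = sum(1 for r in responses if 5 <= r <= 7)
percent_positive = positive / len(responses) * 100

print(f"Average CES: {average_ces:.2f}")        # 5.20
print(f"Scoring 5-7: {percent_positive:.0f}%")  # 70%
```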
Numbered scale: Numbered scales (1-5, 1-7, 1-10) typically use questions to assess customer effort. They may also assign colors or statements—such as *Extremely easy* or *Very difficult*—to each numeric value to make it clear for customers how to appropriately respond to questions like “On a scale of X, how easy was it to get your issue resolved?”
To calculate your CES on a 1-5 scale
Take the average of all responses.
To calculate your CES on a 1-7 scale
Take the average of all responses, or…
Create buckets of scores based on response ranges like 1-3, 4-5, and 6-7.
To calculate your CES on a 1-10 scale
Take the average of all responses, or…
Create buckets of scores based on response ranges like 1-4, 5-7, and 8-10.
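Here's a minimal Python sketch of the bucketing approach on a 1-7 scale, using illustrative data; adjust the cutoffs for 1-5 or 1-10 scales.

```python
# Bucket 1-7 responses into the ranges suggested above (1-3, 4-5, 6-7).
from collections import Counter

responses = [7, 6, 5, 3, 7, 2, 6, 1, 4, 5]  # illustrative sample data

def bucket(score: int) -> str:
    if score <= 3:
        return "difficult (1-3)"
    if score <= 5:
        return "neutral (4-5)"
    return "easy (6-7)"

counts = Counter(bucket(r) for r in responses)
for label in ("difficult (1-3)", "neutral (4-5)", "easy (6-7)"):
    print(f"{label}: {counts[label]} of {len(responses)}")
```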
Emoticon scale: Also known as Easy, Difficult, Neither, this method doesn’t use numbers or extra dimensions. Instead, it offers customers three options: a frown, a neutral expression, or a smile. Due to its simplicity, this scale is well-suited to “simple” interactions with your product, knowledge base, or company website, such as “Did this article solve your problem?”
To calculate your CES with an emoticon scale
Many people subtract negative responses from positive responses, ignoring neutral responses altogether.
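One way to compute that net score, as a short sketch with illustrative responses:

```python
# Net emoticon score: subtract frowns from smiles, ignore neutrals.
responses = ["smile", "smile", "neutral", "frown", "smile", "frown"]

positive = responses.count("smile")
negative = responses.count("frown")

print(f"Net score: {positive - negative} ({positive} positive, {negative} negative)")
```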
Step 2: Get the most from survey collection
- Don’t limit yourself to one scale. You may find that the Likert method works best for evaluating interactions with customer service agents, while numbered or emoticon scales provide better product and UX insights.
- Leverage follow-up questions. Many customer survey tools allow users to trigger open-ended follow-up questions based on specific conditions. This usually means asking customers who rate their experience as difficult to briefly describe why—but it could also mean asking customers what made the interaction easy. In either case, this feedback is gold.
- Experiment with data analysis. When CES was introduced in 2010, its creators proposed a 1-5 scale where the score was derived by calculating the average (i.e., dividing the sum of all individual scores by the number of customers who responded). Since then, new approaches to survey calculation and analysis have emerged (including an NPS methodology) that may be more effective.
- Put negative responses into categories. Collecting survey scores is important, but what organizations really need is help identifying trends. You’ll stand a better chance if your team buckets survey responses—not just in terms of positive or negative, but also in terms of an issue’s source. For example, is this a process, product, or service issue? A minimal sketch of this bucketing follows the list.
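Here's a minimal sketch of that source-based bucketing; the categories and sample data are illustrative.

```python
# Tag negative responses by source so trends surface in aggregate.
from collections import Counter

negative_responses = [
    {"score": 2, "category": "process"},
    {"score": 1, "category": "product"},
    {"score": 3, "category": "process"},
    {"score": 2, "category": "service"},
]

by_source = Counter(r["category"] for r in negative_responses)
print(by_source.most_common())  # [('process', 2), ('product', 1), ('service', 1)]
```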
Understanding your Customer Effort Score
Calculating your CES is the first step. Step two—understanding what those scores imply and iterating on the insights—will benefit from thinking ahead.
We spoke with Siteminder’s VP of Customer Experience, Chris Ryan, about his approach to extracting the most meaningful information from customer effort score surveys. Here’s what Chris recommends:
- Map the customer journey: The customer experience is the sum of all interactions an individual has with a business. When you understand it well enough, you can also map the relationship between the value delivered to the customer and the effort asked of them. You may overlook some touchpoints on your first pass, but whatever you can identify will inform what you measure.
- Link customer touchpoints to their source(s): Tying CES surveys back to individual teams and unique categories (such as process, product, or service) will make it easier to understand weaknesses at an organizational level. It will also enable you to bring the data to the right people once it’s time to devise and implement solutions.
- Don’t rely on averages: If you calculate your CES based on the classic 1-5 average method, you won’t glean much about the distribution of scores—you’ll just have a relatively meaningless mean. To get around this—and to get more actionable information—Chris recommends applying an NPS methodology to calculating your customer effort score. Doing so will enable your team to focus on the “detractor” segment—the people having the most difficulty—without wasting time on meaningless averages.
“Applying NPS methodology to CES has shifted how we see the results. When it’s leveraged as an average for reports, it never gets much notice since most companies that measure this way have similar scores.
When looking at NPS methodology where 1-4 is negative, 5 and 6 are passive, and 7 is a promoter, 5 and 6 are where you generally want 80 percent of your experience, so I am not as focused on that area.
But I do want to know if I have a considerable amount of people in the 1-4 bucket as opposed to the 7. If it sways to the negative, we have a problem that needs flagging—and it gets more executive buy-in when presented in a format they understand.”
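In code, the segmentation Chris describes might look like the following sketch (sample data is illustrative):

```python
# NPS-style segmentation of 1-7 CES responses, per the breakdown above:
# 1-4 negative (detractors), 5-6 passive, 7 promoters.
responses = [7, 6, 5, 3, 7, 2, 6, 1, 5, 5]  # illustrative sample data
total = len(responses)

detractors = sum(1 for r in responses if r <= 4)
passives = sum(1 for r in responses if r in (5, 6))
promoters = sum(1 for r in responses if r == 7)

print(f"Detractors (1-4): {detractors / total:.0%}")
print(f"Passives (5-6): {passives / total:.0%}")
print(f"Promoters (7): {promoters / total:.0%}")
```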
Putting it together
The secret to customer loyalty may be simpler than we think. When customers interact with your brand, is it an ordeal or a walk in the park?
With all the opportunities for friction in the world today, companies that make life easier stand to gain repeat customers and a positive brand reputation.