9 Practical Methods for Measuring Service Quality

image of emoji in different stages, reflecting differing service quality



We like to measure stuff. How long we can hold our breath, our weight before and after a workout, the IQ of our kids…

But some things aren’t so easy to measure. Like service quality. Yet there are high pay-offs for going through the effort.

Measuring service quality allows you to spot areas for improvement, assess and compare the performance of team members, set clear targets to aim for, and improve your customer satisfaction.

You can’t manage what you can’t measure.

Peter Drucker

Here are nine practical techniques and metrics for measuring your service quality.

1

SERVQUAL

This is the most common method for measuring the subjective elements of service quality. Through a survey, you ask your customers to rate the delivered service compared to their expectations.

Its questions cover what SERVQUAL claims are the five elements of service quality: RATER.


  • Reliability.

    The ability to deliver the promised service in a consistent and accurate manner.

  • Assurance.

    The knowledge level and politeness of the employees, and the extent to which they create trust and confidence.

  • Tangibles.

    The appearance of, for example, the building, website, equipment and employees.

  • Empathy.

    To what extent the employees care and give individual attention.

  • Responsiveness.

    How willing the employees are to offer a speedy service.

Here is an example of a SERVQUAL questionnaire.
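To make the gap idea concrete, here is a minimal sketch in Python of how a SERVQUAL-style gap score could be computed. The ratings, the 1–7 scale and the per-dimension aggregation are illustrative assumptions, not part of the official instrument:

    # Minimal SERVQUAL gap-score sketch. SERVQUAL rates each statement twice:
    # once for expectations, once for perceptions. The gap is perception minus
    # expectation; a negative gap means you fall short of expectations.
    # All ratings below are hypothetical, on an assumed 1-7 scale.

    RATER = ["reliability", "assurance", "tangibles", "empathy", "responsiveness"]

    expectations = {"reliability": 6, "assurance": 6, "tangibles": 5,
                    "empathy": 7, "responsiveness": 6}
    perceptions = {"reliability": 5, "assurance": 6, "tangibles": 5,
                   "empathy": 5, "responsiveness": 7}

    gaps = {dim: perceptions[dim] - expectations[dim] for dim in RATER}

    # Print dimensions from largest shortfall to largest surplus.
    for dim, gap in sorted(gaps.items(), key=lambda kv: kv[1]):
        print(f"{dim:>15}: {gap:+d}")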

2

Post-service ratings

This is the practice of asking customers to rate the service right after it’s been delivered. This is our favorite approach, because the memory of the service is still fresh and undiluted.

With our live chat solution, for example, you can set the chat window to display a service rating box once it closes. The customers make their rating, perhaps share some explanatory feedback, and close the chat.

image of example conversation and rating box

Something similar is done with helpdesk systems like Help Scout, where you can rate the service response from your email inbox.

image of help scout's rating system

It’s also done in phone support, although here the experience is a bit trickier. Either the service rep asks whether you’re satisfied with their service performance, or you’re asked to stay on the line to complete an automated survey. The former distorts the results, as one tends to be polite and agreeable; the latter is simply annoying.

As a general rule: the easier you make it for your customers to leave instant feedback, the better your results will be. Difficulty skews your results to include only your happiest and most annoyed customers. Effortlessness, like the one-click rating after a chat support session, ensures that you also capture the majority in between.

illustration of earth

Different scales can be used for the post-service rating. Many make use of a number rating from 1–10. There’s a possible issue here, however, as cultures differ in how they rate their experiences.

People from individualistic cultures, for example, tend to choose the extreme ends of the scale much more often than those from collectivistic cultures. In line with stereotypes, Americans are more likely to rate a service as “amazing” or “terrible,” while the Japanese will hardly ever go beyond “fine” or “not so good.” It’s important to be aware of this when you have an international audience.

Simpler scales are more robust to cultural differences and more suited for capturing service quality. Customers don’t generally make a sophisticated assessment of service quality.


“Was it a 7 or an 8…? Well… I did get my answer quickly… On the other hand, the service agent did sound a bit hurried…”

This type of judgement is unrealistic. Customers are more inclined to rate it as “Fine,” “Great!” or “Crap!”

image of smiley rating system

That’s why at Userlike we make use of a 5-star system in our live chat rating, why Help Scout makes use of three options (great – okay – not good), and why the US government makes use of four smileys (angry – disappointed – fine – great). Easy-peasy.

3

Follow-up surveys

With this method, you ask your customers to rate your service quality through an email survey – for example via Google Forms. It has advantages and disadvantages compared to the post-service rating.

cartoon of jet

One advantage is that it gives your customer the time and space for more detailed responses. You could send a SERVQUAL-type survey, with multiple questions instead of one – something that’d be terribly annoying in a post-service rating.

It also provides a more holistic overview of your service. Instead of a case-by-case assessment, the follow-up survey measures your customers’ overall opinion of your service.

It’s also a useful technique if you don’t have post-service ratings in place yet and want a quick overview of the state of your service quality. You could send out a survey to your entire customer base, for example.

But there are downsides as well. Such as the fact that the average person’s inbox already looks more like a jungle than a French garden. Nobody’s waiting for more emails – especially those that don’t promise any benefit for the recipient.

With a follow-up survey, the service experience will also be less fresh in mind. Your customers might have forgotten about the experience entirely, or they could confuse it with another experience.

And finally, since such a follow-up survey constitutes more effort, you will mostly receive responses from your most positive and negative customers – filtering out everyone in between.

Here is a good example of a follow-up survey.

4

In-app surveys

With an in-app survey, the questions are asked while the visitor is on the website or in the app, instead of after the service or via email. It can be one simple question – e.g. “How would you rate our service?” – or it could be a couple of questions.

image of saasy's rating system

Convenience is the main advantage. The downside is that it’s not so targeted. People are likely to respond based on their entire experience, rather than specifically on the basis of your service quality.

SurveyMonkey offers some great tools for implementing something like this on your website. Also check out Hotjar’s guide on website feedback.


5

Mystery shopping

This is a popular technique used by retail stores, hotels, and restaurants, but it works for any type of service, digital ones included. It consists of hiring an “undercover customer” to test your service quality – or putting on a fake moustache and doing it yourself, of course.

The undercover agent then assesses the service based on a number of criteria, for example those provided by SERVQUAL. This offers more insights than simply observing how your employees work. Which will probably be outstanding — as long as their boss is around.

6

Documentation analysis

With this qualitative approach, you read through your written service records or listen to your recorded ones. Those doing this quality assurance check whether the support agents took the right actions. They can then process this into constructive feedback, or follow up with the customer for damage control if necessary.

You’ll definitely want to go through the transcripts of low-rated service deliveries, but it can also be interesting to read through the documentation of service agents that always rank high. What are they doing better than the rest? It can be small things that separate a good from a great service delivery, such as the proper use of emoji in chat support.

The effort involved with doing this type of analysis largely depends on the customer channel. Live chat and email support offer instant documentation, and especially with the former it’s easy to pick out the outliers.

image of all conversations in Userlike
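If you export your transcripts with their post-service ratings attached, picking out those outliers can be as simple as the sketch below. The data layout is hypothetical; most tools let you filter by rating directly:

    # Hypothetical transcript export: each entry is a conversation with
    # the agent's name and the customer's post-service rating (1-5 stars).
    transcripts = [
        {"agent": "Anna", "rating": 5, "text": "..."},
        {"agent": "Ben", "rating": 1, "text": "..."},
        {"agent": "Anna", "rating": 2, "text": "..."},
        {"agent": "Carla", "rating": 5, "text": "..."},
    ]

    # Low-rated conversations: candidates for feedback and damage control.
    low_rated = [t for t in transcripts if t["rating"] <= 2]

    # Top-rated conversations: what are these agents doing better than the rest?
    top_rated = [t for t in transcripts if t["rating"] >= 5]

    for t in low_rated:
        print(f"Review {t['agent']}'s conversation (rated {t['rating']})")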

With phone support, however, it requires an annoying voice at the start of the call:

“This call may be monitored and recorded for quality assurance.”

What’s more, the person doing the analysis has to listen through the conversations, which is time-consuming.

7

Customer effort score (CES)

This metric was proposed in an influential Harvard Business Review article. In it, the authors argue that instead of delighting our customers, we should make it as easy as possible for them to have their problems solved. That’s what they found to have the biggest positive impact on the customer experience, and what they propose measuring.

cartoon of R2D2

Don’t ask:

“How satisfied are you with this service?”

Its answer could be distorted by many factors, such as politeness. Ask:

“How much effort did it take you to have your question(s) answered?”

The lower the CES, the better. CEB found that 96% of customers with high effort scores were less loyal in the future, compared to only 9% of those with low effort scores.
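Aggregating CES responses is straightforward. Here is a minimal sketch, assuming a 1–7 effort scale where 1 means “very little effort” (the scale, the threshold and the sample data are assumptions for illustration):

    # Hypothetical CES responses on an assumed 1-7 scale
    # (1 = very little effort, 7 = a lot of effort).
    responses = [2, 1, 3, 6, 2, 7, 1, 2, 5, 2]

    average_ces = sum(responses) / len(responses)

    # Share of high-effort experiences (the 5+ threshold is an assumption);
    # per CEB's finding, these are the customers most at risk of defecting.
    high_effort_share = sum(1 for r in responses if r >= 5) / len(responses)

    print(f"Average CES: {average_ces:.1f} (lower is better)")
    print(f"High-effort share: {high_effort_share:.0%}")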

8

First contact resolution ratio

First contact resolution takes place when a customer reaches out to support with a question or issue, and they receive a resolution in that first session. So no follow-up emails, call-backs, etc.

It’s a metric worth highlighting due to its direct positive effect on customer satisfaction. A Touchpoint study by CX Act found that

“…customers who receive a first contact resolution are nearly twice as likely to buy again from a brand and four times more likely to spread positive word of mouth about it.”

To calculate this metric, divide the number of issues that were resolved through a single response by the number that required more responses.
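In code, that calculation could look like the sketch below (the ticket data is hypothetical). Note that some teams instead divide first-contact resolutions by the total number of issues, which yields an FCR percentage; both are shown:

    # Number of responses each issue needed before it was resolved
    # (hypothetical data; 1 = resolved on first contact).
    responses_per_issue = [1, 1, 3, 1, 2, 1, 4, 1, 1, 2]

    resolved_first = sum(1 for n in responses_per_issue if n == 1)
    needed_more = sum(1 for n in responses_per_issue if n > 1)

    # Ratio as described above: single-response vs. multi-response issues.
    fcr_ratio = resolved_first / needed_more

    # Common alternative: share of all issues resolved on first contact.
    fcr_rate = resolved_first / len(responses_per_issue)

    print(f"FCR ratio: {fcr_ratio:.2f}")
    print(f"FCR rate: {fcr_rate:.0%}")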

Here, too, the customer channel has a big influence. Email is a notoriously bad channel for first contact resolution, because it lacks the opportunity for the quick back-and-forth that is often necessary to clarify the customer’s issue. For this, you need live channels like phone and website chat.

It’s mostly for this reason that at Userlike we’ve implemented an option to escalate from a customer chat to a call. Website chat is the best channel to start a conversation due to its low-barrier nature. But when the topic becomes complex, or you notice that you have a warm lead, you can easily send an invitation for a call. If the customer accepts, the call opens directly in their browser.

9

Leading metrics analysis

The first contact resolution ratio is an example of a metrics analysis approach to measuring service quality.

cartoon of telescope

SERVQUAL, CES and the different types of surveys mentioned above focus on the outcome or the goal, i.e. the subjective experience of the customer. But there is also great value in focusing on the inputs, i.e. the elements that make for a quality service delivery.

In tracking terms, these input indicators are called leading metrics, while the outcome indicators are lagging metrics. Measuring the outcome of your service delivery is necessary to know where you stand, but input metrics can tell you where to go.

The below metrics are great as a basis for setting the targets of your service team. Customer satisfaction is elusive and dependent on many factors outside of one’s control. The following input metrics focus your team on the areas they can control; a short calculation sketch follows the list.


  • First response time.

    This metric tracks how quickly a customer receives a response to their inquiry. This doesn’t mean their issue is solved, but it’s the first sign of life – notifying them that they’ve been heard.

  • Response time.

    This is the average time between responses across a ticket. Say your email ticket was resolved with four responses, with respective response times of 10, 20, 5, and 7 minutes. Your response time is then 10.5 minutes.

  • Replies per ticket.

    This shows how many replies your service team needs on average to close a ticket. It’s a measure of efficiency and customer effort.

  • Backlog inflow/outflow.

    This is the number of cases submitted compared to the number of cases closed. An inflow that consistently exceeds the outflow indicates that you’ll have to expand your service team.

  • Customer success ratio.

    Good service doesn’t mean your customers always find what they want. But keeping track of the number who found what they were looking for versus those that didn’t can show whether your customers have the right idea about your offerings.

  • “Handovers” per issue.

    This tracks how many different service reps are involved per issue. Customers hate handovers, especially in phone support, where they have to repeat their issue each time. Harvard Business Review identified it as one of the four most common service complaints.


  • Things gone wrong.

    The number of complaints/failures per customer inquiry. It helps you identify products, departments or service agents that need some “fixing.”

  • Instant service/queueing ratio.

    Nobody likes to wait. Instant service is the best service. This metric keeps track of the ratio of customers that were served instantly versus those that had to wait. The higher the ratio, the better your service.

  • Average queueing waiting time.

    The average time that queued customers have to wait to be served.

  • Queueing hang-ups.

    How many customers quit the queueing process. These count as lost service opportunities.

  • Problem resolution time.

    The average time before an issue is resolved.

  • Minutes spent per call.

    This can give you insight into who your most efficient operators are.
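Here is the calculation sketch promised above: a few of these leading metrics computed from raw ticket data. The data structure is a hypothetical stand-in; most service tools track these for you automatically:

    from statistics import mean

    # Per ticket: minutes until each agent response; the first entry is the
    # first response time. (Hypothetical data; the 10/20/5/7 ticket mirrors
    # the response-time example above.)
    tickets = [
        [10, 20, 5, 7],
        [3],
        [15, 30],
    ]

    first_response_time = mean(t[0] for t in tickets)
    response_time = mean(m for t in tickets for m in t)
    replies_per_ticket = mean(len(t) for t in tickets)

    print(f"First response time: {first_response_time:.1f} min")
    print(f"Response time: {response_time:.1f} min")
    print(f"Replies per ticket: {replies_per_ticket:.1f}")

    # Backlog inflow/outflow: above 1.0 means the backlog is growing.
    submitted, closed = 120, 110
    print(f"Backlog inflow/outflow: {submitted / closed:.2f}")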

Look here for more service metrics.

Some of these measures are also financial, such as the minutes spent per call and the number of handovers. You can use them to calculate your cost per service contact. Winning the award for the world’s best service won’t get you anywhere if it eats up all your profits.
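As a back-of-the-envelope sketch (all numbers are hypothetical, and the assumption that every involved rep spends the full call duration on the contact is a simplification):

    # Rough cost per service contact, from the two financial measures above.
    minutes_per_call = 6.5
    reps_involved_per_issue = 1.3      # 1.0 means no handovers
    cost_per_agent_minute = 0.75       # loaded cost: salary plus overhead

    cost_per_contact = minutes_per_call * reps_involved_per_issue * cost_per_agent_minute
    print(f"Cost per service contact: {cost_per_contact:.2f}")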

Most service tools keep track of such metrics automatically. Within Userlike, we offer analytics dashboards that show the most important KPIs for chat support.

image of missed opportunities in Userlike dashboard

One final word of caution: measurements make for powerful incentives. That can be a great thing when it points your team in the right direction. But there is also a danger of the measurement becoming the goal, instead of a reflection of the goal.

When a measure becomes a target, it ceases to be a good measure.

Marilyn Strathern

As customers, we’ve all gone through service experiences in which the agents were over-incentivized by efficiency metrics, seemingly eager to close the ticket as soon as possible.

This is obviously a losing strategy, as it leads to more questions and effort down the road. To get it right, be careful about what you measure, and balance output goals (e.g. customer satisfaction) with input goals (e.g. first response time).

The easiest way to improve your service quality (metrics)

One of the easiest ways to improve the perceived quality of your customer service is by being more deliberate about the channels through which you offer it.

Most companies still conduct the bulk of their customer interactions through email and telephone, even though it’s obvious that their customers have moved on to a new channel: messaging. Whether it’s WhatsApp, Messenger, SMS or Signal, everyone is messaging all day long.

We’ve built Userlike to be the software solution for bringing your customer communication to the messaging era. When your customers are on your website, Userlike’s Website Messenger allows you to support them while they’re at their most valuable. When they’re not on your website, Userlike’s messaging channels (WhatsApp, Messenger, etc.) allow your customers to still reach you with a few taps of the thumb.

If you’d like to give this new way of customer communication a go, sign up for a free 14-day trial today.