Understanding Customer Experience
Anyone who has signed up recently for cell phone service has faced a stern test in trying to figure out the cost of carry-forward minutes versus free calls within a network and how it compares with the cost of such services as push-to-talk, roaming, and messaging. Many, too, have fallen for a rebate offer only to discover that the form they must fill out rivals a home mortgage application in its detail. And then there are automated telephone systems, in which harried consumers navigate a mazelike menu in search of a real-life human being. So little confidence do consumers have in these electronic surrogates that a few weeks after the Web site www.gethuman.com showed how to reach a live person quickly at ten major consumer sites, instructions for more than 400 additional companies had poured in.
An excess of features, baited rebates, and a paucity of the personal touch are all evidence of indifference to what should be a company’s first concern: the quality of customers’ experiences. In the first example, the carrier offered a jumble of phone services in part to discourage comparison shopping and thus price wars. In the second, the company offered a hard-to-obtain rebate to stimulate a purchase. And in the third, the goal was to slash staffing costs, despite soothing claims of 24-hour self-service availability. Unfortunately, such cunning makes for customer experiences that engender regret and then the determination to do business elsewhere.
Customer experience encompasses every aspect of a company’s offering—the quality of customer care, of course, but also advertising, packaging, product and service features, ease of use, and reliability. Yet few of the people responsible for those things have given sustained thought to how their separate decisions shape customer experience. To the extent they do think about it, they all have different ideas of what customer experience means, and no one more senior oversees everyone’s efforts.
Within product businesses, for example, product development defers to marketing when it comes to customer experience issues, and both usually focus on features and specifications. Operations concerns itself mainly with quality, timeliness, and cost. And customer service personnel tend to concentrate on the unfolding transaction but not its connection to those preceding or following it. Even then, much service is rote: Otherwise, why would service reps ask, as they so often do, “Is there anything else I can help you with?” when they haven’t even dealt with the original reason for the call or visit?
Some companies don’t understand why they should worry about customer experience. Others collect and quantify data on it but don’t circulate the findings. Still others do the measuring and distributing but fail to make anyone responsible for putting the information to use. The extent of the problem has been documented in Bain & Company’s recent survey of the customers of 362 companies. Only 8% of them described their experience as “superior,” yet 80% of the companies surveyed believe that the experience they have been providing is indeed superior. With such a disparity, prospects for improvement are small. But the need is urgent: Consumers have a greater number of choices today than ever before, more complex choices, and more channels through which to pursue them. In such an environment, simple, integrated solutions to problems—not fragmented, burdensome ones—will win the allegiance of the time-pressed consumer. (For more on making the buying process simpler, see James P. Womack and Daniel T. Jones, “Lean Consumption,” HBR March 2005.) Moreover, in markets that are increasingly global, it is dangerous to assume that a given offering, communication, or other contact will affect faraway consumers the same way it does those at home.
Although few companies have zeroed in on customer experience, many have been trying to measure customer satisfaction and have plenty of data as a result. The problem is that measuring customer satisfaction does not tell anyone how to achieve it. Customer satisfaction is essentially the culmination of a series of customer experiences or, one could say, the net result of the good ones minus the bad ones. It occurs when the gap between customers’ expectations and their subsequent experiences has been closed. To understand how to achieve satisfaction, a company must deconstruct it into its component experiences. Because a great many customer experiences aren’t the direct consequence of the brand’s messages or the company’s actual offerings, a company’s reexamination of its initiatives and choices will not suffice. The customers themselves—that is, the full range and unvarnished reality of their prior experiences, and then the expectations, warm or harsh, those have conjured up—must be monitored and probed.
Such attention to customers requires a closed-loop process in which every function worries about delivering a good experience, and senior management ensures that the offering keeps all those parochial conceptions in balance and thus linked to the bottom line. This article will describe how to create such a process, composed of three kinds of customer monitoring: past patterns, present patterns, and potential patterns. (These patterns can also be referred to by the frequency with which they are measured: persistent, periodic, and pulsed.) By understanding the different purposes and different owners of these three techniques—and how they work together (not contentiously)—a company can turn pipe dreams of customer focus into a real business system.
What Customer Experience Is
Customer experience is the internal and subjective response customers have to any direct or indirect contact with a company. Direct contact generally occurs in the course of purchase, use, and service and is usually initiated by the customer. Indirect contact most often involves unplanned encounters with representations of a company’s products, services, or brands and takes the form of word-of-mouth recommendations or criticisms, advertising, news reports, reviews, and so forth. Such an encounter could occur when Google’s whimsical holiday logos pop up on the site’s home page at the inception of a search, or it could be the distinctive “potato, potato” sound of a Harley-Davidson motorcycle’s exhaust system. It might just be an e-mail from one customer to another.
The secret to a good experience isn’t the multiplicity of features on offer. Microsoft Windows, which is rich in features, may provide what a corporate IT director considers a positive experience, but many home users prefer Apple’s Macintosh operating system, which offers fewer features and configuration options. A customer’s experience with an Apple device begins well before the purchaser turns it on—in the case of the iPod, perhaps with the dancing silhouettes in the TV advertisements. The origami-like (and recyclable) packaging enfolds the iPod as though it were a Fabergé egg made for a czar. A small sticker, “Designed in California, Made in China,” communicates the message that Apple is firmly in charge but also interested in keeping costs down. Even Windows users appreciate the device’s intuitive, Mac-like feel and find that downloading tracks from iTunes is easier than buying a CD on Amazon. Every Apple product is designed with the overarching purpose of making the time one spends with Apple an enjoyable experience.
A successful brand shapes customers’ experiences by embedding the fundamental value proposition in offerings’ every feature. For BMW, “the Ultimate Driving Machine” is much more than a slogan; it informs the company’s manufacturing and design choices. In 2000, Mercedes-Benz introduced a system that automatically controls the distance between a Mercedes and the car in front. BMW would not consider developing such a feature unless it amplified rather than diminished the driving experience.
Service quality and scope matter, too, but mostly when the core offering is itself a service. For example, the tracking and shipping support FedEx provides on the Internet and by phone is as important to customers as its fundamental value proposition—on-time delivery.
In their concern with logistics—how something is provided, not just what is provided—business-to-business companies take after consumer-service companies. For both, the goal is to provide a positive experience to the end user. A B2B supplier does that first by understanding where in its direct customers’ value chain it can make a meaningful contribution, and then when and how. Those are different undertakings from capturing and parsing a given human being’s internal, ineffable experience. A business’s “experience,” one might say, is its manner of functioning, and a B2B company helps its business customers serve their customers by solving their business problems, just as an effective business-to-consumer company fulfills the personal needs of its customers. In a B2B context, a good experience is not a thrilling one but one that is trouble-free and hence reassuring to those in charge.
Thus, a supplier satisfies the purchasing department of its business customer by providing a balance of costs and benefits; it satisfies operations by offering products or services that are easy to use; and it satisfies a customer’s executives by expanding capacity at the same rate as the customer and in general evolving alongside it. Accordingly, sales and marketing do not necessarily monopolize points of contact with customers: Operations people at the first company deal directly with their counterparts at the second, and so forth. The functional nature of the relationship—indeed, the fact that it is a true relationship—creates a pervasive awareness of experience issues and priorities.
Whether it is a business or a consumer being studied, data about its experiences are collected at “touch points”: instances of direct contact either with the product or service itself or with representations of it by the company or some third party. We use the term “customer corridor” to portray the series of touch points that a customer experiences. What constitutes a meaningful touch point changes over the course of a customer’s life. For a young family with limited time and resources, a brief encounter with an insurance broker or financial planner may be adequate. The same sort of experience wouldn’t satisfy a senior with lots of time and a substantial asset base.
Not all touch points are of equivalent value. Service interactions matter more when the core offering is a service. Touch points that advance the customer to a subsequent and more valuable interaction, such as Amazon’s straightforward 1-Click ordering, matter even more. Companies need to map the corridor of touch points and watch for snarls. At each touch point, the gap between customer expectations and experience spells the difference between customer delight and something less.
People’s expectations are set in part by their previous experiences with a company’s offerings. Customers instinctively compare each new experience, positive or otherwise, with their previous ones and judge it accordingly. Expectations can also be shaped by market conditions, the competition, and the customer’s personal situation. Even when it is the company’s own brand that establishes expectations, the customer can be set up for disappointment. For example, Dell transformed buying computers over the Internet from a risky to a reliable experience. When it extended that set of procedures to the selection and purchase of expensive plasma HDTV sets, however, it disappointed. Dell did an effective job of creating positive customer expectations, but they turned out to be better fulfilled by the in-person sales force at Best Buy.
Ideally, good design makes both the most routine and the weightiest customer experiences—checking a price, getting a question answered, or placing a multimillion-dollar order—pleasant and efficient. However, even when dissatisfaction or wariness arises, artful control of consumer experience can overcome it.
In its development of a new AIDS drug, Gilead Sciences provides a good example of how a failure to understand the experience and expectation component of a consumer segment’s dissatisfaction can turn into a failure to reach that segment. Upon releasing the new medication, which had demonstrated advantages over existing ones, Gilead noticed that while sales to patients new to therapy were robust, sales to patients already undergoing treatment were growing far more slowly than expected. For HIV/AIDS patients, switching medications, Gilead discovered, is very different from choosing an alternative cold remedy. Switching requires ending a trusted relationship in the hope of reaching an uncertain improvement level. The company also learned that HIV-positive patients are far more interested in the potential adverse effects of a new drug than in its supposedly superior efficacy. With this new understanding, Gilead decided to emphasize in its marketing the new drug’s lower incidence of serious side effects. It also segmented the patients’ physicians by their willingness to prescribe a different medication from the ones they knew. Once Gilead made it easier for patients to switch drugs, the market share of the company’s main competitor dropped 33%.
Why the Neglect?
CEOs may not actively deny the significance of customer experience or, for that matter, the tools used to collect, quantify, and analyze it, but many don’t adequately appreciate what those tools can reveal. Three forces in the main conspire to preserve this gap.
Too much money already lavished on CRM.
Having spent millions of dollars on customer relationship management software, many CEOs consider their problem to be not a lack of customer information but a superfluity of it. Before investing more time and money, executives justifiably want to know how customer experience data are different and what their value is.
To put it starkly, the difference is that CRM captures what a company knows about a particular customer—his or her history of service requests, product returns, and inquiries, among other things—whereas customer experience data capture customers’ subjective thoughts about a particular company. CRM tracks customer actions after the fact; CEM (customer experience management) captures customers’ immediate responses to their encounters with the company. Employees accustomed to reading the marketing department’s dry analyses of CRM point-of-sale data easily grasp the distinction upon hearing a frustrated customer’s very words. (For a detailed account of the difference between the two approaches, see the exhibit “CEM Versus CRM.”)
CEM Versus CRM
Customer experience management (CEM) and customer relationship management (CRM) differ in their subject matter, timing, monitoring, audience, and purpose.
WHAT
- CEM: Captures and distributes what a customer thinks about a company
- CRM: Captures and distributes what a company knows about a customer

WHEN
- CEM: At points of customer interaction: “touch points”
- CRM: After there is a record of a customer interaction

HOW MONITORED
- CEM: Surveys, targeted studies, observational studies, “voice of customer” research
- CRM: Point-of-sale data, market research, Web site click-through, automated tracking of sales

WHO USES THE INFORMATION
- CEM: Business or functional leaders, in order to create fulfillable expectations and better experiences with products and services
- CRM: Customer-facing groups such as sales, marketing, field service, and customer service, in order to drive more efficient and effective execution

RELEVANCE TO FUTURE PERFORMANCE
- CEM: Leading: locates places to add offerings in the gaps between expectations and experience
- CRM: Lagging: drives cross selling by bundling products in demand with ones that aren’t
Moreover, many CEOs don’t sufficiently appreciate the distinction between customer satisfaction, which they believe they have heavily documented, and customer experience, which always demands further investigation.
Lack of attunement to customers’ needs.
Leaders who rose through customer-facing functions, such as Cisco Systems CEO John Chambers, are more likely to act with reference to customer experience than those who have not. When competing new technologies are difficult to choose among, Cisco defers its choice until key customers have registered their reactions. Because the company knows there will be a market for the choice it finally makes, it can afford to commit itself later than its competitors.
In contrast, executives who rose through finance, engineering, or manufacturing often regard managing customer experience as the responsibility of sales, marketing, or customer service.
Fear of what the data may reveal.
It’s easy to say one’s business is customer-driven when there are no data to prove otherwise. Once data start flowing, the bogeymen come out of the closet. Can we afford to do what customers are asking for? How do we choose between conflicting preferences? Can we accept what customers say they are experiencing without first telling them what they should be experiencing? Corporate leaders who would never tolerate a large gap between forecasted and actual revenues prefer to look the other way when company and customer assessments diverge, as they do in the Bain survey.
Executives also hesitate to act on findings because experience data are more ambiguous than customers’ actions—the orders they place, for instance. However, statistical analysis has developed to the point where it can dependably quantify both the relative importance of each touch point and the experience it provided. It can also isolate key transactions, accounts, regions, customer segments, and so forth, and then parse the resulting data. About ten years ago, companies started collecting experience information electronically. Now they can instantly combine it with data collected from CRM systems and other customer databases, conduct analyses of both individual and aggregate responses in real time, and then automatically route and track issues needing resolution.
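To make the point concrete, here is a minimal sketch, in Python, of the kind of analysis described above: regressing an overall satisfaction rating on per-touch-point experience scores to estimate each touch point’s relative importance. The touch points, responses, and weights are hypothetical, not drawn from any company discussed in this article.

```python
import numpy as np

rng = np.random.default_rng(0)
touch_points = ["ordering", "installation", "billing", "support_call"]

# Hypothetical survey data: one row per customer, a 1-10 score per touch point.
X = rng.integers(1, 11, size=(500, len(touch_points))).astype(float)

# Simulated overall satisfaction: a weighted blend of touch-point scores plus noise.
true_weights = np.array([0.15, 0.40, 0.10, 0.35])
overall = X @ true_weights + rng.normal(0, 0.5, size=500)

# Ordinary least squares with an intercept column: the fitted coefficients
# estimate each touch point's relative importance to overall satisfaction.
A = np.column_stack([X, np.ones(len(X))])
coef, *_ = np.linalg.lstsq(A, overall, rcond=None)

for name, weight in sorted(zip(touch_points, coef[:-1]), key=lambda p: -p[1]):
    print(f"{name:>14}: {weight:+.3f}")
```

An analysis along these lines can be rerun per region, account, or customer segment, which is how the isolation of key transactions and segments mentioned above is typically achieved.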
Squishier are observation studies and verbatim comments, which for that reason don’t get the attention they deserve. Approached, however, with the requisite empathy and insight, they can be in their own way more revealing than concrete findings. For one thing, even consumers sharply aware of a product’s or brand’s deficiencies can’t quite picture what might replace it. That’s why Henry Ford said that if he asked his customers before building his first car how he could better meet their transportation needs, they would have said simply, “Give us faster horses.” Properly understood, the currents beneath the surface that direct the flow of customer experience data will indicate the shape of the next major transformation.
All Hands on Board
Many organizations place responsibility for collecting and assessing customer experience data within a single, IT-supported customer-facing group. Doing so accomplishes at least three things: It saves money; it protects customers from redundant and annoying solicitations; and it permits direct comparison of customers on the basis of their location, choice of product, or some other criterion.
But it is a mistake to assign to customer-facing groups overall accountability for the design, delivery, and creation of a superior customer experience, thereby excusing those more distant from the customer from understanding it.
In contrast to this common pattern, Palm drew on customer experience to make the Treo one of its most successful products ever. A combination of cell phone and Palm Pilot, the original Treo used the same built-in rechargeable battery as the Palm organizers. When used as a cell phone, the device consumed far more power than it did when used as an organizer. So customers who were heavy users of the cell phone feature found that their Treos were often losing power—and often at an inconvenient distance from their rechargers. Complaints about this problem began showing up in Palm’s customer-service transaction surveys. But the customer service department could offer the Treo’s unhappy owners only minor power-saving tips.
Dissatisfied with the status quo, customer service vice president Dan Gilbert, showing unusual initiative, distributed the experience data his department had collected to product development, which went to work on the problem. The next-generation Treo came with a battery that users could replace. In 2005, sales were 71% higher than the previous year.
Typically, however, a vigorous reaction to intelligence gathered on customer experience requires general management to orchestrate a response to customer problems. Intuit learned that when it tried to address the trouble customers were having installing a new release of TurboTax. The solution turned out to be cross-functional, but no one who had been asked to deal with it was senior enough to “own” the entire installation process.
Obtaining the Right Information
There are three patterns of customer experience information, each with its own pace and level of data collection. (For a detailed breakdown of the three patterns, see the exhibit “Tracking Customer Experience: Persistent, Periodic, Pulsed.”)
Tracking Customer Experience: Persistent, Periodic, Pulsed
Companies can monitor various patterns of interaction with customers to gain a better understanding of the customer experience they are providing. Depending on the precise information a company is seeking, it may choose to analyze past patterns, present patterns, potential patterns, or a combination. Each pattern requires a distinct method of generating and analyzing data and will yield different types of insights.
Past patterns

Purpose: Capture a recent experience.
- Intended to improve transactional experiences
- Track experience goals and trends
- Assess impact of new initiatives
- Identify emerging issues

Examples:
- Post-installation or customer service follow-up
- New-product-purchase follow-up

Owner: Central group or functions

Data collection frequency and scope: Persistent
- Electronic surveys linked to high-volume transactions or an ongoing feedback system
- Automatically triggered by the completion of a transaction
- Focused, short-cycle, timed data collection
- Feedback volunteered by users in online forums

Collection and analysis methodology:
- Web-based, in-person, or phone surveys
- User forums and blogs

Discussion and action forums:
- Analyzed within functions, central survey groups, or both
- Cross-functional issues directed to general managers
- Strategic analysis and actions directed by general managers

Present patterns

Purpose: Track current relationships and experience issues with an eye toward identifying future opportunities.
- Keep a consistent yet deeper watch on the state of the relationship and other factors
- Look forward as well as backward
- Used with more-critical populations and issues

Examples:
- Biannual account reviews
- “Follow them home” user studies

Owner: Central group, business units, or functions

Data collection frequency and scope: Periodic
- Quarterly account reviews
- Relationship studies
- User experience studies
- User-group polling

Collection and analysis methodology:
- Web-based surveys preceded by in-person preparation
- Direct contact in person or by phone
- Moderated user forums
- Focus groups and other regularly scheduled formats

Discussion and action forums:
- Initial analysis by sponsoring group
- Broader trends and issues forwarded to general managers’ strategic and operating forums
- Deeper analysis of emerging issues at the corporate, business unit, or local level

Potential patterns

Purpose: Target inquiries to unveil and test future opportunities.

Examples:
- Ethnographic design studies
- Special-purpose market studies
- Focus groups

Owner: General management or functions

Data collection frequency and scope: Pulsed
- One-off, special-purpose driven
- Interim readings of trends

Collection and analysis methodology:
- Driven by specific customers or unique problems
- Very focused
- Incorporates existing knowledge of the customer relationship

Discussion and action forums:
- Centered within sponsoring group, with coordination by and support from central group
When companies monitor transactions occurring in large numbers and completed by individual customers, they are looking at past patterns. Enterprise Rent-A-Car is supposed to ask every driver returning one of its vehicles, “Would you rent from Enterprise again?” Any new service a France Telecom customer receives is followed by a brief questionnaire on the quality of his or her experience. As these two examples demonstrate, each attempt to determine the quality of the experience directly follows the experience itself. So companies receive by this method an uninterrupted, or “persistent,” flow of information, which they then analyze and communicate internally. Although surveys are the tool used most often for gathering data on past patterns, customers are sometimes approached through online forums and blogs. Companies are mostly guided by assertions that win customers’ strong agreement, but sometimes customers’ failure to react strongly to some feature or service can be just as telling. For this reason, the employees evaluating results must be attuned to areas of customer experience that a survey or other tool does not directly address.
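The mechanics of such a persistent flow are simple to illustrate. The sketch below shows one way a transaction-completion event might automatically trigger a one-question survey; the event hook, survey texts, and delivery function are illustrative stand-ins, not Enterprise’s or France Telecom’s actual systems.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Transaction:
    customer_email: str
    kind: str                 # e.g., "rental_return", "service_activation"
    completed_at: datetime

# One short question per transaction type, sent right after the experience.
SURVEYS = {
    "rental_return": "Would you rent from us again?",
    "service_activation": "How was the quality of your installation experience?",
}

def send_survey(to: str, question: str) -> None:
    print(f"[survey -> {to}] {question}")  # stand-in for a real mail or SMS gateway

def on_transaction_completed(tx: Transaction) -> None:
    """Fire the matching survey the moment a transaction closes."""
    question = SURVEYS.get(tx.kind)
    if question:
        send_survey(tx.customer_email, question)

on_transaction_completed(
    Transaction("driver@example.com", "rental_return", datetime.now()))
```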
Analyses of present patterns are not simply evaluations of the meaning and success of a recent encounter. They envision a continuing relationship with the customer. Consequently, questions may extend to the customer’s awareness of alternative suppliers, new features the customer might desire, and what it sees as challenges to its competitiveness. Given the broad scope of the inquiry, this type of monitoring shouldn’t be triggered solely by a customer-initiated transaction. Instead, information on a company’s key products and services should be gathered at scheduled intervals, or “periodically.” Hewlett-Packard and the consulting firm BearingPoint, for example, approach every key customer annually. By initiating contact with different customers at different times throughout the year, BearingPoint has created an almost persistent data flow that does not depend on the completion of a given transaction, while permitting comparisons among customers on a range of issues. BearingPoint learned in this fashion that the best practices it had established in one vertical-market group had not migrated to other groups.
Present patterns are collected through surveys or face-to-face interviews, studies tailored to the subject, or some combination thereof. It helps to prepare customers for the inquiry by telling them the purpose of the survey, how they will hear about the findings, and what role they might play in addressing them. Accordingly, Hewlett-Packard rewards its account managers on survey-participation rates as well as results.
Potential patterns are uncovered by probing for opportunities, which often emerge from interpretation of customer data as well as observation of customer behavior. Like the study Gilead conducted, such probes are outgrowths of strategies usually involving the targeting of particular customer segments and are therefore unscheduled, or “pulsed.” The findings are often used to inform the product development process.
Most companies apply a single summary metric to data on past and present patterns. The customer experience metric Net Promoter, for example, registers customers’ experiences in aggregate—that is, their positive ones minus their negative ones. Intuit’s founder, Scott Cook, uses Net Promoter scores for goal setting and engaging the organization’s attention, though he recognizes that a rising or falling score doesn’t begin to reveal what is driving the trend.
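For readers unfamiliar with the metric, the standard Net Promoter calculation asks customers how likely they are, on a scale of 0 to 10, to recommend the company. Those answering 9 or 10 count as promoters, those answering 0 through 6 as detractors, and the score is the percentage of promoters minus the percentage of detractors. A minimal sketch:

```python
def net_promoter_score(ratings: list[int]) -> float:
    """Standard Net Promoter arithmetic: %promoters (9-10) minus %detractors (0-6)."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

print(net_promoter_score([10, 9, 10, 8, 7, 6, 3]))  # 3 promoters, 2 detractors -> ~14.3
```

The single number is useful for goal setting precisely because it is so compressed, which is also why, as Cook notes, it cannot by itself explain a trend.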
As relationships with customers deepen, companies tend to collect data with greater frequency. The patterns that emerge suggest further areas of inquiry. For example, present-relationship studies may indicate that on-site service experience is wanting. After improvements are made, it’s common to use a transaction survey following each service call to assess progress. A subsequent, more comprehensive survey may show good experience with service response time but low overall ratings, triggering a special study to identify customers’ priorities among a range of service experience factors.
Low cost and ease of modification make surveys the overwhelming favorite for measuring past and present patterns. E-mail–based surveys are superior to paper-based ones because they can be more easily shared; they allow rapid distribution; they give the surveyor the flexibility to extend or abbreviate the questioning according to the wishes of the respondent or the substance of the response; they minimize delays in analyzing the results; and they lead to quick action, such as a referral to a general manager should scores fall below a predetermined level. E-mail surveys can also be more easily tailored. For example, the surveys Marvin Windows and Doors sends to its distributors are different from those sent to architects who buy its products.
A well-designed survey is not simply one that elicits the desired information. It must itself avoid becoming an unfortunate aspect of the customer experience. Hence, it shouldn’t be onerous for the taker or deny him the chance to communicate the special nature of his experience. One way of keeping surveys mercifully brief is to avoid asking about matters like recent purchases that the company already has a record of. Nor should they be triggered by the transactions of regular customers such as purchasing agents. Such customers are, after all, among those a business can least afford to annoy. By the same token, corporate sanctions imposed on dealers who receive low scores shouldn’t be so harsh that retailers try to discourage customers from responding by offering to fix any problem on the spot. The individual customer may be placated, but widespread resort to this practice keeps general management from obtaining a broad picture of systemic problems.
Surveys do have their limitations, and focus groups, user-group forums, blogs, and marketing and observational studies can yield insights that surveys cannot. (For more on listening to users, see Dorothy Leonard and Jeffrey Rayport, “Spark Innovation Through Empathic Design,” HBR November–December 1997.) Intuit, for example, is a leader in “follow them home” studies. Company representatives visit customers where they live or work and observe how they use Intuit products such as QuickBooks. It was from watching the smallest businesses struggle with QuickBooks Pro that the company recognized a need for a product like QuickBooks Simple Start. These tools lend themselves to the measurement of present and potential patterns, for they entail more time, preparation, and expense than transaction-based surveys.
Acting on Experience Information
Let’s take a look at a company we’ll call HiTouch—which is actually a composite of companies—as it struggled to create a system for managing customer experience. HiTouch, a business-to-business global financial services provider, received a shocking wake-up call when a top customer shifted half its business to an archrival. HiTouch executives had just completed a quarterly account review classifying the relationship with this account as “superior.” The stunned executives wondered what they could have missed.
From their efforts to salvage the account, HiTouch executives learned enough to initiate a companywide effort to improve the experience of all other major accounts. After conducting a mini-audit of existing customer-experience programs, responsible parties, and results, the company discovered that its vertical-market groups hardly went further than tracking leads and analyzing buying patterns. Most employees assumed customer experience was the job of marketing or sales. The company’s only CEM metric came from a mailed annual customer satisfaction survey whose wording hadn’t changed in three years.
HiTouch engaged consultants to help with the initiative. Rather than spending a lot of time establishing formal customer experience goals or a detailed plan, the consultants argued for a “fast prototype” relationship survey of top customers. HiTouch’s leaders identified the touch points they knew had disappointed their most important customers. Preventing further customer defections, they realized, would require customer experience goals for every stage of the value chain. These had to serve every vertical market’s financial objectives while being compatible with the company’s branding.
As the issues piled up, it became clear that the effort needed an executive leader, a budget, and dedicated resources. HiTouch’s top sales executive, having become a believer in the process, stepped up. To ensure a good response rate, he asked sales account executives to prep customers receiving the survey. A few showed a predistribution draft to customers so that they could help refine issue selection and tone. Of the various questions settled on, two key ones were “How important to your purchasing decision was HiTouch’s brand and the service promise it seemed to make?” and “Do you believe HiTouch delivers the experience promised by its marketing and sales force?” The pilot survey included a summary metric that permitted HiTouch to compare responses by location, service platform, and vertical market.
The sales executive noticed that meetings about the pilot survey, in which salespeople fed customer experience information back to the customers themselves, differed from the typical sales call by shifting the dialogue away from the individual transaction and toward relationship development. They also provided an excellent opportunity to introduce to the customers HiTouch’s nonsales employees who were in a position to fix customer problems as they arose. In this fashion, salespeople began to view their jobs less as a functional responsibility than as an organizational process.
Data from the survey began to flow within 24 hours of distribution. Many of the customers’ verbatim comments were blunt. Some executives became defensive and tried to explain away what the data were saying rather than understand the concerns behind them. Some never quit demanding yet one more data point. Others strained to launch company responses before fully understanding what was being said.
With 60% of the responses in, it became clear which experiences were critical to overall satisfaction. However, they were different in each vertical market, with few exceptions. For each, summary scores were compared with customer revenue. On that basis, finance placed every customer in one of four quadrants.
- Model customers: good summary scores; good revenue.
- Growth customers: good summary scores; higher potential revenue. Candidates for cross selling and upselling.
- At-Risk customers: low scores; good revenue. Demanding decisive intervention.
- Dangling customers: low scores; low revenue. To be rescued or abandoned.
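A classification of this sort is easy to automate once finance settles on cutoffs. The sketch below is illustrative only; the score and revenue thresholds are hypothetical assumptions, since HiTouch is itself a composite.

```python
def classify_customer(summary_score: float, revenue: float,
                      score_cut: float = 7.0, revenue_cut: float = 1_000_000) -> str:
    """Place a customer in one of the four quadrants described above.

    The cutoffs are hypothetical; each company would set its own.
    """
    good_score = summary_score >= score_cut
    good_revenue = revenue >= revenue_cut
    if good_score and good_revenue:
        return "Model"
    if good_score:
        return "Growth"     # good scores, revenue below potential: cross-sell, upsell
    if good_revenue:
        return "At-Risk"    # low scores but good revenue: intervene decisively
    return "Dangling"       # low scores, low revenue: rescue or abandon

print(classify_customer(8.5, 2_500_000))  # -> Model
print(classify_customer(8.5, 300_000))    # -> Growth
print(classify_customer(4.0, 2_500_000))  # -> At-Risk
```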
Auspiciously, the Growth segment had three times as many customers as any of the others. But on further examination it emerged that some of those customers didn’t buy as much as those in other quadrants. In fact, one of the largest remaining customers was squarely in the At-Risk quadrant.
The results of the initial survey coincided with the start of the strategic-planning cycle. By the following quarter, every vertical-market team, having shown some customers the findings and described what the team planned to do about them, was ready to send out transaction surveys of customers’ experiences with service installation and repair. Every team had also set experience goals for itself and scheduled relationship surveys.
A year later, current experience data had replaced ill-informed opinion at HiTouch. At monthly operations meetings, vertical-market general managers reviewed key customer experience issues, and actions taken, before reviewing financials. A rolling summary of relationship issues unearthed by customer surveys kicked off quarterly executive strategy discussions. Defections within each vertical-market group dropped by an average of 16%.
Not everything worked as hoped. The company set up an executive dashboard to keep track of installation experience issues, but the disclosure of high-volume transaction information so upset the managers responsible that they never got around to resolving the underlying issues. The dashboard was pulled in favor of automatic triggers that channeled problems to specialists or general managers, who began to make good progress in solving them. Increased analyst staffing and simplified reporting helped the general managers identify new opportunities, an area they had been neglecting.
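Those automatic triggers might work along the following lines; the thresholds and recipients here are illustrative assumptions rather than HiTouch’s actual rules.

```python
def route_response(score: float, issue: str) -> str:
    """Send each low-scoring survey response to someone who can act on it."""
    if score < 4.0:
        return f"general manager: systemic review of '{issue}'"
    if score < 7.0:
        return f"specialist: follow up on '{issue}'"
    return "no action: log for trend analysis"

for score, issue in [(2.5, "installation delay"),
                     (5.0, "billing confusion"),
                     (9.0, "repair visit")]:
    print(route_response(score, issue))
```

Routing each problem to a named owner, rather than displaying it on a dashboard for all to see, is what turned the raw complaint data into resolutions.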
The Employee Experience
Customer experience does not improve until it becomes a top priority and a company’s work processes, systems, and structure change to reflect that. When employees observe senior managers persistently demanding experience information and using it to make tough decisions, their own decisions are conditioned by that awareness.
Not long after breaking every software-industry growth record, Siebel Systems (now part of Oracle) saw its satisfaction ratings begin to drop. An adopter of customer experience management, the company had gathered data revealing that customers found a large disparity between actual and expected costs of ownership of Siebel 6, a sales-force automation tool based on a client-server architecture. The proposed solution, a shift to a Web-based architecture in Siebel 7, would require forgoing the development of other major features—and the revenues they generated—for two years. Yet Siebel’s leadership went ahead with the shift anyway. Satisfaction soon returned to its formerly lofty levels, and employees took heart as management placed experience ahead of revenues.
Once persuaded of the importance of experience, every function has a role to play.
Marketing has to capture the tastes and standards of every one of its targeted market segments, circulate that knowledge within the company, and then tailor all consumer communications accordingly.
Service operations must ensure that processes, skills, and practices are attuned to every touch point. (Past-pattern surveys are good for tracking high-volume touch points such as call centers.)
Product development should do more than specify needed features. It should also design experiences after observing how customers use products and services, learning why they use offerings as they do, and figuring out how existing products might be frustrating them. Ideally, product developers will identify customer behavior that runs counter to a company’s expectations and uncover needs that haven’t been identified.
Information technology that can collect, analyze, and distribute CEM data, integrate the information with that generated by CRM, and monitor progress must be in place. As the data flow stabilizes, the form of presentation and its degree of detail should be keyed to whichever internal audience the data are meant for. A level of detail that is appropriate for an analyst, for example, can easily overwhelm a line manager. CEM is a play within a play, so to speak; just as customers must have a good experience, employees need to have a good experience digesting information about themselves.
Human resources should put together a communications and training strategy that conveys the economic rationale for CEM and paints a picture of how it will alter work and decision-making processes. Since the front line determines the bulk of customer experience, it would be a good idea to study those employees’ individual capabilities, work processes, and attitudes. As for performance management, of course customer experience results should affect compensation. But as we have learned in recent years, incentives that are too powerful are more likely to distort behavior than channel it productively.
Account teams must progress from annual surveys to detailed touch-point analysis, then translate present patterns of customer experience and issues gleaned from recent transactions into action plans that are shared with customers. Not every significant implication is readily apparent. Leaders need to press the data to precipitate customers’ concealed longings.
. . .
Customer dissatisfaction is widespread and, because of customers’ empowerment, increasingly dangerous. Although companies know a lot about customers’ buying habits, incomes, and other characteristics used to classify them, they know little about the thoughts, emotions, and states of mind that customers’ interactions with products, services, and brands induce. Yet unless companies know about these subjective experiences and the role every function plays in shaping them, customer satisfaction is more a slogan than an attainable goal.
A version of this article appeared in the February 2007 issue of Harvard Business Review.