Measuring customer experience to boost the bottom line
Greater measurement can help deliver a focused customer strategy that will increase profits, says Mark Gentry, Research Manager at McCallum Layton. He examines the need for customer experience programmes to start delivering real value for clients.
Throughout the world, and across all sectors of the economy, many major organisations have embraced the vision of putting the customer at the centre of their decision-making processes.
Happy customers, the theory goes, will reward you with: repeat business; greater opportunities for cross and up-selling; word of mouth recommendations (and other forms of verbal support – increasingly important in the age of social networking); and lower cost to service.
By focusing their organisations on meeting the needs of the people who buy or use their products or services, these organisations expect to achieve greater business success. Once this has been set as a business goal, however, the question arises of how to measure whether it is being achieved.
Many organisations are now committed to measuring performance based on the quality of the experience they deliver to customers. Alongside internal performance monitoring and the use of mystery shopping to take “dipstick” measures of service quality, direct feedback from customers is the ultimate measure of success.
Gathering direct feedback from customers, however, can be costly. In times of severe budgetary pressure it is more vital than ever to ensure that these measurement programmes deliver a return on the substantial investment they require.
These programmes can only deliver value for money if they identify the areas of the customer relationship that matter most to customers, establish clearly whether customer expectations are being met, and give clear direction on what it is that customers want. This information can then be used to set clear objectives and detailed action plans for process improvements and for training customer-facing employees.
FOCUS ON BOTH STRATEGIC AND TACTICAL LEVEL MEASUREMENT
To provide a complete picture of the customer relationship, the measurement programme needs to provide strategic-level information, focusing on the relationship in its entirety, and tactical-level information, focusing on the detailed aspects of the relationship that drive the overall perceptions among customers.
1. Strategic-level information:
a. Relationship Monitoring: provide a regular “healthcheck” on the level of positive behaviour, feelings and opinions among the customer base as a whole
b. Priority Setting: identify which “touchpoints” have the greatest ability to influence the customers’ perceptions of the relationship, both in terms of their “reach” within the customer base and the level of impact that they have for individual customers
This information needs to be used at a senior level in the business, with key points and the actions planned to address them communicated throughout the business. This sets the overall agenda for action, and should be used to identify where service needs to improve and who to task with achieving it.
2. Tactical-level diagnostics
a. Performance monitoring: provide on-going measurement of customers’ opinions of service performance in those “touchpoints” and “moments of truth” in the customer lifecycle that are shown to have greatest impact at a strategic level
b. Understand customer needs: generate “rich” qualitative insight into what good service looks like to customers, to provide a model to work to
c. Targeted measurement: compare performance between specific customer groups, channels, teams etc. to identify where action is most urgently needed
This information needs to be delivered to the people who can act on it, along with clear action plans for enhancing the experience and, potentially, targets to motivate them to achieve improvements.
A best practice model for research that delivers on these levels would look something like the following:
| Strategic | Tactical |
| --- | --- |
| Sample representatively from the whole customer base | Sample from customers who are exposed to the service experience in question |
| Take a "snapshot" of the customer base at a particular moment in time – includes customers who have interacted with you and those who have not | Measure on-going performance among those customers affected – often event-driven, e.g. calling customer services, new customers |
| Robust and representative sample sizes overall for accurate tracking wave on wave | Robust sample sizes for experiences, events, channels, teams etc. |
| Strategic-level "soft" measures – overall perceptions of service, brand perceptions, loyalty/commitment. Also measures the incidence and perceptions of specific "touchpoints" at an overall level, with a few key diagnostic measures but not much detail | Focuses on the specific "touchpoints" that have greatest impact on customers. "Hard" measures, e.g. time taken, as well as subjective evaluations |
| Use this data to establish the level of priority that should be attached to different "touchpoints" within the customer relationship, enabling you to identify which areas to focus on in more depth and to concentrate action on – this is best done through statistical analysis rather than asking customers what they think is most important | Potentially establishes an order of priority for service delivery within a specific touchpoint, to give further diagnostic guidance |
| Track at intervals in which you would realistically expect to see some change – perhaps quarterly, or even bi-annually. Overall perceptions are lagging indicators and tend to move quite slowly in response to changing perceptions of service. The exception might be a major corporate event of which customers are aware, e.g. mergers/acquisitions, downsizing, sell-offs, rebranding or adverse publicity; a timely wave of strategic-level research would help to gauge the impact of such an event | Tracked frequently, even continuously, depending on the number of customers experiencing each touchpoint. Touchpoint perceptions are leading indicators of performance, and tend to respond quite quickly to changes in perceived performance |
| Fairly long interviews – potentially 15 minutes | Short interviews, focusing on specific experiences in depth |
| Relatively static, summary reporting – KPIs, dashboard metrics, for management level | Dynamic reporting – allows users to compare results for specific areas and specific time periods |
| Some drill-down comparative reporting, down to regional/sub-group level, but not robust at branch level | Reporting in detail to local managers, with summaries for senior management along with action plans. Targets for monitoring can be set and built into the reporting |
| Telephone interviews continue to be the main methodology used – response rates remain much higher than for self-completion approaches, ensuring that customers with low involvement are more likely to contribute, and often phone numbers are the only reliable contact information available for all customers. Telephone interviews are also more suitable for managed sampling, to ensure a representative sample overall and minimum base sizes for specific groups | Telephone interviews are also widely used for event-driven research, particularly where the interaction was over the telephone, e.g. call centre performance monitoring |
| Some businesses have been able to move towards online interviews for relationship studies, where they are able to contact the majority of their customer base via electronic means and where the bias inherent in excluding those without access to these channels is not significant. If a critical mass of customers can be reached this way, then online research becomes a more cost-effective methodology | If the interaction is online, or if customers' email addresses are captured, it may be appropriate to consider an online survey, though take into account that response rates will be lower and sample sizes may, therefore, be limited. With low response rates, there is also a danger of over-representing "extreme" cases, where customers were very unhappy with their service and use the research as a tool to complain |
| A mixture of telephone and online is possible; however, this is typically a relatively costly exercise, as set-up costs are duplicated for both methodologies. There are also issues with combining and comparing data derived from different methodologies, as customers tend to respond to verbal questions slightly differently than they do when self-completing on screen | |
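As a rough guide to what "robust" means in practice, the margin-of-error arithmetic behind sample size decisions can be sketched in a few lines. The following is a minimal illustration in Python, assuming simple random sampling and a 95% confidence level; the incidence and margin figures are illustrative assumptions, not recommendations.

```python
# A minimal sketch of the sample-size arithmetic behind "robust" tracking
# waves, assuming simple random sampling and a 95% confidence level.
# The figures used (50% incidence, +/-3% margin) are illustrative only.
import math

Z_95 = 1.96  # z-score for a 95% confidence level

def margin_of_error(sample_size: int, proportion: float = 0.5) -> float:
    """Worst-case margin of error for a proportion estimate."""
    return Z_95 * math.sqrt(proportion * (1 - proportion) / sample_size)

def required_sample(margin: float, proportion: float = 0.5) -> int:
    """Sample size needed to achieve a given margin of error."""
    return math.ceil(proportion * (1 - proportion) * (Z_95 / margin) ** 2)

print(f"n=400 -> +/-{margin_of_error(400):.1%}")   # roughly +/-4.9%
print(f"+/-3% -> n={required_sample(0.03)}")       # roughly 1,068 interviews
```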
BENCHMARK AGAINST COMPETITORS WITH CARE
• Select competitors that provide valid and useful comparisons – there is no point spending money on comparing your performance with companies that are fundamentally different or where you cannot identify why differences appear
• Although attractive, it can be very costly to reach customers of high-quality "niche" competitors in sufficient volume to give you robust data
• Take into account the difference in the profile of competitors' customers when analysing results – there may be a natural tendency for your results to look worse on paper if you have a more demanding customer base
• Take into account the different ways that competitors deliver service to their customers – your customers may have a substantially different set of experiences dealing with you than they would have dealing with a competitor, and some comparisons may not be comparing like with like
DESIGN YOUR CUSTOMER SAMPLING PROCESS APPROPRIATELY
As outlined above, it is important to ensure that the research is being conducted with the right sample size and frequency for each level of measurement.
Tracking strategic-level indicators on a monthly basis is unlikely to show much change over short periods of time, which can lead to the perception that nothing can be done to influence them. Tactical surveys, on the other hand, must be conducted frequently enough, and in sufficient volume, to be sensitive to fluctuations in service levels.
Sometimes it is also appropriate to switch certain elements of the tracking “on” or “off” depending on company developments and new initiatives – e.g. if a new system is rolled out, it is appropriate to measure experiences prior to the implementation and then again, once it is up and running. By separating the tactical tracking from the strategic measurement, there is greater flexibility to do this.
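When judging whether a tactical tracker is sensitive enough to detect wave-on-wave movement, a simple two-proportion significance test is a useful check. Below is a minimal sketch, assuming two independent waves and a score expressed as the proportion of customers rating the service favourably; the wave results are hypothetical.

```python
# A minimal sketch of a two-proportion z-test for wave-on-wave movement,
# assuming two independent samples; the wave results are hypothetical.
import math

def wave_change_z(p1: float, n1: int, p2: float, n2: int) -> float:
    """z-statistic for the difference between two wave proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

# 78% rated service "good" in wave 1 (n=300), 83% in wave 2 (n=300)
z = wave_change_z(0.78, 300, 0.83, 300)
verdict = "significant" if abs(z) > 1.96 else "not significant"
print(f"z = {z:.2f} ({verdict} at the 95% level)")
# A 5-point move on n=300 per wave is, perhaps surprisingly, not significant
```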
PLAN THE IMPLEMENTATION
An ideal model for devising and conducting a customer experience measurement research programme would consist of the following elements:
• Qualitative research with customers to establish their overall views on service, identify “moments of truth” and their likely behaviour in response to good or bad service
• Stakeholder interviews to gather the views of key internal personnel
• Benchmark quantitative research at strategic level
• Key Driver analysis to establish priorities – evaluate which experiences have the greatest positive/negative impact on customer loyalty (a minimal sketch of this analysis follows this list)
• Establish event-driven tracking
• Tracking waves of strategic research
• Conduct further qualitative research into specific events if needed to give greater depth of insight into customer experiences
• Conduct local workshops with staff responsible for delivering key customer experiences
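The Key Driver analysis step above relies on deriving importance statistically rather than asking customers what matters to them. Below is a minimal sketch of one common approach, correlating each touchpoint rating with an overall loyalty score; the touchpoint names, scales and data are hypothetical, and in practice regression-based techniques on real survey data would typically be used.

```python
# A minimal sketch of derived-importance key driver analysis: correlate
# each touchpoint rating with overall loyalty rather than asking customers
# what matters. The touchpoints, scales and data are all hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 500  # hypothetical respondents, ratings on a 1-10 scale

touchpoints = {
    "call_centre": rng.integers(1, 11, n),
    "billing":     rng.integers(1, 11, n),
    "onboarding":  rng.integers(1, 11, n),
}
# Hypothetical loyalty score, driven mostly by the call-centre experience
loyalty = (0.6 * touchpoints["call_centre"]
           + 0.2 * touchpoints["billing"]
           + rng.normal(0, 1.5, n))

# Derived importance = correlation with loyalty, ranked high to low
importance = {
    name: np.corrcoef(scores, loyalty)[0, 1]
    for name, scores in touchpoints.items()
}
for name, r in sorted(importance.items(), key=lambda kv: -kv[1]):
    print(f"{name:12s} r = {r:.2f}")
```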
ESTABLISH BUY-IN TO USING THE DATA
It is also crucial for any customer experience measurement programme to establish buy-in within your organisation before and during its implementation. At the outset, it is advisable to involve key stakeholders by consulting them about the key issues they think the company faces in dealing with customers, and also about the information they want and how they want to use it. One-on-one discussions with these key stakeholders are an important stage in designing a customer experience measurement programme.
On a practical level, responsibility for the implementation of the research should be clearly allocated to a specific department and individual(s), usually within market research or customer insight functions.
However, this should be supported with the involvement of a “steering group”, consisting of colleagues from a range of business areas that will be using the information. This could include customer services, CRM/ databases, billing and other teams.
Finally, reporting and usage of the data are key to maximising the return on the investment.
MAXIMISE USE OF THE INFORMATION
Information from the customer experience programme should be delivered to various levels within your organisation, in an appropriate format for the users at each level.
We continue to find that presentations to senior management have an important part to play in establishing buy-in to the research, but we also believe that detailed information should be delivered to the users at the “coal-face”, enabling them to take action at a micro level and view the results. The development of web-based reporting tools has made the latter much more accessible and cost-effective.
• SENIOR MANAGEMENT – face-to-face presentations, dashboard metrics
• CHANNEL/TEAM MANAGEMENT – presentations, specific metrics, sub-level comparisons
• TACTICAL LEVEL (TEAM, etc) – access to tracking data on the areas they are responsible for, targets, action plans, customer comments, and "red flag" customer issues highlighted by the research (illustrated in the sketch after this list)
• BRANCH LEVEL – rolling data on good/poor performing branches to monitor maintenance and improvement activity
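To illustrate the "red flag" reporting mentioned in the tactical-level bullet, the sketch below filters survey records for cases needing urgent follow-up. The record structure, field names and threshold are illustrative assumptions rather than part of any specific programme.

```python
# A minimal sketch of "red flag" filtering for tactical-level reporting.
# Record fields, the score threshold and the follow-up rule are all
# illustrative assumptions, not part of the article's methodology.
from dataclasses import dataclass

@dataclass
class SurveyResponse:
    customer_id: str
    team: str
    satisfaction: int          # 1-10 rating
    wants_callback: bool
    comment: str

def red_flags(responses, threshold=4):
    """Return responses needing urgent follow-up by the responsible team."""
    return [r for r in responses
            if r.satisfaction <= threshold or r.wants_callback]

responses = [
    SurveyResponse("C001", "billing", 2, False, "Overcharged twice"),
    SurveyResponse("C002", "call_centre", 9, False, "Very helpful"),
    SurveyResponse("C003", "call_centre", 7, True, "Please call me back"),
]
for r in red_flags(responses):
    print(f"[{r.team}] {r.customer_id}: {r.comment}")
```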
SUMMARY
Customer experience measurement has the potential to be a vital management tool if implemented effectively. By following the principles we have outlined above, and carefully evaluating how they fit with the context of your business, we believe that you can devise a measurement programme that delivers genuine returns on investment.