Section 4

Evaluating Service Delivery

Customer Relationship Management


Evaluating Service Delivery

Call Center Customer Relationship Management Study Guide • Version 2 • Copyrighted to ICMI, Inc., 2002

Contents

Measuring Customer Satisfaction

1. Customer Satisfaction Measurement Principles .............................................................1

2. Sources of Customer Satisfaction Data .........................................................................8

3. Survey Methodologies..................................................................................................11

4. Sampling and Analysis Principles .................................................................................17

Contributors to Customer Satisfaction

5. Identifying Contributors to Customer Satisfaction .....................................................22

6. Isolating Root Causes of Dissatisfaction.......................................................................25

7. Measuring Accessibility Across Channels .....................................................................30

8. Interpreting Customer Feedback [Strategic]................................................................36

9. Leveraging Customer Feedback [Strategic]..................................................................40

10. Barriers to Serving Customers Effectively...................................................................45

Exercises............................................................................................................................51

Reference Bibliography .....................................................................................................54

Introduction to Evaluating Service Delivery



1. Customer Satisfaction Measurement Principles

Key Points

• Determine what is most important to the customer for all channels of contact through an initial baseline survey.

• The key elements of an effective ongoing customer satisfaction tracking system are:

  • A random sample
  • A short, three-minute questionnaire
  • Measurement across all channels of contact
  • Timely surveying
  • Measurable, achievable, relevant, controllable (MARC)
  • Periodic, management-oriented reporting

Explanation

The first step in measuring customer satisfaction with the service delivery process (i.e., the call center) is to determine what is most important to the customer for all channels of contact (e.g., phone call, email, correspondence). This is accomplished using a baseline survey that provides a comprehensive look at the service delivery process, customer expectations, satisfaction and resulting changes in loyalty.

A baseline survey is longer than an ongoing tracking survey, but should still not be more than four pages (if a written survey instrument) or 15 minutes (if a phone survey). The baseline survey will also provide a baseline measure or benchmark that can be used as a comparison point going forward. A baseline survey should be conducted at the beginning of the process to develop a customer satisfaction measurement program and then every two to three years to validate any changes.

The content of a baseline survey should include questions related to:

• Reason(s) for contact

• The contact experience (e.g., length of time on hold, number of transfers, time until final action, number of contacts to get resolution)

• Customer satisfaction:


• Overall, with the contact experience

• With the personnel (e.g., professionalism, knowledge, etc.)

• With the response (e.g., timeliness, clarity, etc.)

• Overall, with the organization, products and services

• Customer loyalty:

• Willingness to repurchase or continue to use

• Willingness to recommend the organization's products or services

• Demographics (e.g., age, gender, products and services used, etc.)

The baseline data should be analyzed to help set customer-driven standards, to determine a baseline measure of customer satisfaction with the contact experience and to determine the key drivers of customer satisfaction and loyalty. (See The Value of Customer Satisfaction and Loyalty, Section 3.) In addition, the baseline data analyzed by reason for contact will help the call center manager refine response guidelines and processes.

For example, in the following table, data from a baseline survey indicates that while the call center achieves an overall 89 percent loyalty rate with an average of 1.9 contacts per issue, contacts regarding backorder status achieved a loyalty rate of only 67 percent with an average of 3.3 contacts per issue. On closer investigation, it is apparent that contacts involving Web, email and phone have the highest number of contacts and lowest loyalty rates. With this type of actionable information, the call center manager can examine the processes and response guidelines in place to handle backorder status contacts via the Web and email, and objectives can be set to decrease the number of contacts to resolve an issue and increase customer loyalty.
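A minimal sketch of this kind of analysis, assuming survey responses have been collected into simple records (the field names and sample values here are illustrative, not from the study guide):

```python
from collections import defaultdict

# Hypothetical survey records: issue, channels used, whether the customer
# says they definitely intend to repurchase, and contacts needed to resolve.
responses = [
    {"issue": "Backorder Status", "channels": "Web/Phone", "loyal": False, "contacts": 5},
    {"issue": "Backorder Status", "channels": "Web/Phone", "loyal": True,  "contacts": 4},
    {"issue": "Shipment Status",  "channels": "Web",       "loyal": True,  "contacts": 1},
    {"issue": "Shipment Status",  "channels": "Web",       "loyal": True,  "contacts": 1},
]

def summarize(records):
    """Group by (issue, channels) and compute loyalty rate and average contacts."""
    groups = defaultdict(list)
    for r in records:
        groups[(r["issue"], r["channels"])].append(r)
    summary = {}
    for key, rows in groups.items():
        loyalty = 100.0 * sum(r["loyal"] for r in rows) / len(rows)
        avg_contacts = sum(r["contacts"] for r in rows) / len(rows)
        summary[key] = (round(loyalty), round(avg_contacts, 1))
    return summary

for (issue, channels), (loyalty, contacts) in summarize(responses).items():
    print(f"{issue} via {channels}: {loyalty}% loyal, {contacts} contacts/issue")
```

Run against a full export of tracking-survey responses, the same grouping produces the loyalty-by-issue-and-channel breakdown the paragraph describes.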




Ongoing Customer Satisfaction Tracking

The next step in effective customer satisfaction measurement is to design a tracking system based on what is most important to the customer (i.e., key drivers of customer satisfaction and loyalty). The key elements of an effective customer satisfaction tracking system are:

• A random sample: The sample should be drawn from an automated database of customers who have recently (in the last two to four weeks) contacted the call center. (See Sampling and Analysis Principles, this section.)

• A short, three-minute questionnaire: This questionnaire should cover the key drivers of customer satisfaction with the contact (usually no more than 10 questions).

Example of Analysis of Loyalty and Number of Calls By Issue and Channels Used

Issue/Channels Used        % Definitely Intend to Repurchase   Total Number of Contacts
Shipment Status                          91%                            1.5
  Phone                                  89%                            1.4
  Email/Phone                            82%                            2.1
  Web ✔                                  95%                            1.1
Product Return                           87%                            1.9
  Phone                                  91%                            1.2
  Email ✔                                90%                            1.4
  Web/Phone                              78%                            2.4
  Email/Phone                            86%                            2.1
Backorder Status                         67%                            3.3
  Phone ✔                                80%                            1.3
  Email                                  64%                            3.1
  Web/Phone                              57%                            4.3
  Email/Phone                            66%                            4.8
Overall contact center                   89%                            1.9

✔ = Ideal Channel
Source: TARP industry-specific research, 2001

• Measurement across all channels of contact: The tracking system should measure customer satisfaction with all contact channels (i.e., phone, email, postal mail, Web). While methodologies might vary for each contact channel, the questions asked, scales and reports should be similar to facilitate an integrated view of the call center.

• Timely surveying: It is best to survey the customer within two to four weeks after they have contacted the call center. That is soon enough that the customer will remember the contact experience, but long enough that they are likely to have received anything that the call center promised to send out.

• Measurable, achievable, relevant, controllable (MARC): Surveys should be constructed to provide data on items that can be measured objectively, are possible for the center to achieve, are relevant to the customer's experience with the center and are under the center's control. Since the final result of surveys should be improvements to service, survey content must reflect this objective.

• Periodic, management-oriented reporting: The responses should be summarized on a monthly or quarterly basis to provide an ongoing readout of caller satisfaction. The reports should be easy to read and actionable (i.e., the call center manager can immediately see what action could impact the scores).

One such useful report is illustrated in the Market-at-Risk Estimate table later in this section. This report shows the market-at-risk estimate by problem experienced. It shows both the problem frequency and the percent of customers who will likely be lost as a result. The report is not complicated to read, but presents the data in a way that allows management to prioritize which improvements will be most valuable to customer retention.
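The market-at-risk arithmetic behind such a report can be sketched as follows. The method is simply the product of three percentages; the example figures are of the kind shown in the table (45 percent of customers experienced a problem, 27 percent of those cite an incorrect bill, 52.6 percent of that group say they will not repurchase):

```python
def customers_at_risk(overall_problem_pct, problem_freq_pct, wont_repurchase_pct):
    """Percent of the whole customer base at risk from one problem type:
    overall % experiencing any problem x this problem's share of those problems
    x % of that group unlikely to repurchase."""
    return round(
        (overall_problem_pct / 100)
        * (problem_freq_pct / 100)
        * (wont_repurchase_pct / 100)
        * 100,
        1,
    )

print(customers_at_risk(45, 27, 52.6))  # → 6.4 (% of all customers potentially lost)
```

Ranking problem types by this figure, rather than by raw frequency, is what lets management prioritize the fixes most valuable to retention.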


Survey Classifications

There are four common ways to classify surveys. The specific classification depends on the survey's objectives.

• Timeframe/frequency: One-shot surveys, referred to as snapshot surveys, are intended to measure objectives at a single point in time. Longitudinal surveys gather data over a period of time, usually to detect trends.

• Breadth of survey: Surveys can either gather data from a cross-section of customers across all segments (or the full range of possible issues across the customer lifecycle) or they can be "pinpoint" surveys that look at only one segment or set of issues.

• Customer vs. market: Surveys can be conducted with existing customers only or can include a cross-section of the marketplace. A marketplace survey allows for the comparison of your organization's customers with those of other organizations.

• Customer experience vs. customer expectations: Customer experience surveys are very relevant to customers and usually get a good response rate. Surveys measuring customer expectations, needs and wants usually require incentives unless the respondents are loyal customers who want to tell you what else they want. The problem with customer expectation surveys is that respondents usually say they want everything until faced with the prospect of actually paying for them. Therefore, these survey results are more speculative.

Market-at-Risk Estimate

Problem experienced (45%) × Problem frequency¹ × % will not repurchase² = % customers potentially lost

Problem experienced                       % specific problem   % customers not likely/   % customers
                                          frequency            willing to repurchase     at risk
Bill was incorrect                              27                   52.6                   6.4
Accessories not delivered on time               23                    7.7                   0.8
Dropped phone calls                             21                   70.0                   6.6
Signal coverage                                 20                   66.7                   6.0
Ease of understanding message recall            19                   50.0                   4.3
Bill hard to understand                         16                  100.0                   7.2
Cost of mobile phone too high                   16                   12.5                   0.9
Original contract not clearly explained         16                   46.2                   3.3
Poor customer service in store                  15                   88.9                   6.0
Replace phone service inadequate                15                   33.3                   2.2
Hard to access customer services                14                   87.5                   5.5
Timeliness of service response                  14                   60.0                   3.8

¹ Based on multiple problem selection
² Based on definitely/probably will not and might/might not repurchase

Source: TARP, as used in the article "Winning Customer Satisfaction and Loyalty in Today's Multi-Channel Environment," published in the International Journal of Call Centre Management


Key Tenets of Measuring Customer Satisfaction

Three key tenets must be considered when evaluating the way you measure customer satisfaction and your call center's effectiveness:

1. The devil is in the details.

2. Be careful what you ask for...you just might get it.

3. It’s what they pay, not what they say.

The Devil Is in the Details

Pointing out the importance of understanding the details behind the numbers sounds like an insult to seasoned customer service professionals. Nevertheless, it is surprising how many customer satisfaction surveys are virtually meaningless and how the survey results are often abused.

Consider the overall rating of 4.2 out of 5. This most likely would be interpreted as an excellent result, but the rating may be hiding a set of underlying problems. You may have one group of customers elated with your service, while another group is ready to defect to the competition. Just as you would not staff for peak periods based on simple monthly averages, you must understand the distribution of the customer satisfaction ratings in order to interpret them correctly.

One of our retail banking clients provides a prime example of the dangers of resting on the laurels of high customer satisfaction ratings. The bank was puzzled by its poor "share of wallet" and stagnant growth in products and services sales. Yet the average rating on its customer satisfaction surveys was consistently above 4.5. It turned out that the majority of its survey respondents were low-revenue, single-product, demand deposit account customers. Thus the rating did not reflect the fact that its precious large depositors with multiple bank products were defecting at an alarming rate.

To ensure valid analysis of overall customer satisfaction, your survey must report results that reflect the necessary level of detail across key customer segments and access channels. Analyze customer satisfaction survey results in terms of distributions rather than averages. Educate the executives in your organization so that they understand this level of detail. Overzealous executives in search of good news may get themselves into trouble with generalized customer satisfaction ratings ... and we all know which way "trouble" rolls.
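The point about distributions versus averages can be illustrated with two hypothetical sets of ratings that share the same 4.2 mean but tell very different stories:

```python
from statistics import mean

polarized = [5, 5, 5, 5, 1]   # most customers elated, one ready to defect
uniform   = [4, 4, 4, 5, 4]   # everyone reasonably satisfied

# Both score the "excellent" 4.2 average.
assert mean(polarized) == mean(uniform) == 4.2

def distribution(ratings):
    """Share of responses at each rating point, 1-5."""
    return {score: ratings.count(score) / len(ratings) for score in range(1, 6)}

print(distribution(polarized))  # 20% of respondents gave the lowest rating
print(distribution(uniform))    # no detractors at all
```

Reporting the full distribution (and breaking it out by customer segment) is what exposes the at-risk group that the single average conceals.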

Be Careful What You Ask for...

In many organizations, the "customer satisfaction" number becomes the overriding objective for the call center. Individual and group rewards are often tied to customer satisfaction numbers. Call center managers then find ways to encourage performance that will ensure a good score on the customer satisfaction survey. In some cases this behavior is quite effective. But in other situations, this performance may not be related to corporate performance. If completely satisfied customers are low-revenue customers and those who are dissatisfied or just satisfied are high-revenue customers, the call center may meet or exceed its customer satisfaction rating objective, but it has not done its job in servicing the customers critical to corporate performance.

This item was developed by TARP. Contents copyrighted to TARP, 2002.

It’s What They Pay, Not What They Say

Numerous studies verify that the majority of customers who purchase products and services from a competitor give high scores on customer satisfaction surveys just before defecting. Customer satisfaction surveys represent static, point-in-time information, which in many cases does not adequately represent the continuum of the customer relationship.

Customer satisfaction should be measured primarily by customer retention, revenue and profits. Initial purchases, repurchases, additional purchases and loyalty are the most accurate measures of how happy your customers are with the service your call center provides. This does not apply only to sales-oriented call centers. Customer service centers have a significant impact on revenue protection and customer retention. Without agents at these call centers providing friendly and accurate information, customers are likely to check out the competition.

Steps to Help You Quantify and Secure Your Center’s Success

1. Ensure that the customer satisfaction survey maps to your customer segmentation and identifies satisfaction levels by customer type. Make sure there is a common understanding of these customer segments and their value across the organization. Finally, link call center rewards with those customer satisfaction measurements that are most related to corporate performance.

2. Enlist the help of individuals from your finance and marketing departments to build customer "lifetime value" measurement data, processes and tools. In many organizations, the concept of customer lifetime value is understood but not measured.

3. Build reports around retention, revenue and profitability contribution. Measure and reward call center staff at all levels based on this information. This will create a lasting tie between your call center and corporate performance. The strategic value of investment in call center technologies, training, staff or facilities will become much clearer.

4. Use technology, such as computer-telephony integration (CTI), to maximize the intelligence found in customer profiles, marketing databases and data warehouses for differential/preferential routing and treatment decisions, as well as persistent data. The leading CTI packages have the ability to track and record "cradle-to-grave" call handling activities and merge data from business applications. This provides a source of information on which to build reports pertaining to retention, revenue and profitability contribution.

Satisfying loyal customers should be a strategic mandate for any call center. Success should be measured in terms of retention, revenue and profitability. Customer satisfaction scores can be one index of success ... but only if you are getting scores for the right game.

Excerpt from "The Customer Satisfaction Game: What Does a 4.2 out of 5 Really Mean?" by Bruce Calhoon, Call Center Management Review, November 1998.

2. Sources of Customer Satisfaction Data

Key Points

• Call centers typically gather customer satisfaction data from four sources:

  1. Accessibility data
  2. Internal quality data
  3. External outcome data from customer satisfaction surveys
  4. Combined data collection

• Although there are many customer touch points besides the call center, the call center is often the best place to integrate and analyze customer satisfaction data since it typically generates the most data.

Explanation

Call centers typically gather customer satisfaction data from four sources. The first three are universally available methods of data collection, while the fourth is a newer approach that requires robust technology applications. Each method has its strengths and weaknesses, and the best approach often considers data from all of these sources.

1. Accessibility Data

Accessibility is only one factor impacting the customer's satisfaction with the way the call is handled. Due to the variety of factors affecting caller tolerance (e.g., time available, degree of motivation, level of expectations, etc.), it is difficult to determine customer satisfaction based solely on accessibility data. That said, many surveys have shown the importance of accessibility to customer satisfaction, and its impact should not be underestimated.

Strengths: The data is easily available, and root causes of problems are fairly easy to identify.

Weaknesses: The data is only a partial guide to satisfaction. If the transaction is critical to the customer (e.g., finding out why a health claim was denied), the customer will wait as long as necessary without abandoning, but will be dissatisfied.


2. Internal Quality Data

Call quality monitoring data is initially used to coach individual agents; however, the same data can and should be used at three other levels for measuring satisfaction. First, the individual customer's satisfaction can be indicated by the tone of voice and comments made by the caller. Second, the person monitoring can identify types of issues where the customer is not satisfied even though the agent followed all the procedures. Third, the results of all monitoring data can be combined to identify systematic issues where customers are generally dissatisfied.

Strengths: The major strength of this data is that it is timely, if routinely analyzed. Further, it is inherently available from monitoring and coaching efforts.

Weaknesses: The primary weakness is that the data is somewhat subjective and requires ongoing training to be interpreted accurately (e.g., without specific guidelines, one supervisor may interpret a quality problem where another would not).

3. External Outcome Data from Customer Surveys

One of the most reliable sources of customer satisfaction data is gathering customer feedback via surveying. The information is typically more reliable than internal sources because the customer directly indicates their level of satisfaction. (See Survey Methodologies, this section.)

Strengths: Survey data tends to be the most valid in terms of identifying the actual level of customer satisfaction with the contact experience.

Weaknesses: The data must be collected from the customer at some level of extra time and cost.

4. Combined Data Collection

There is a final approach to data collection and analysis that has become more available in recent years. Combined data collection is mass internal call data collection and analysis that combines digital voice recording with all content data logged by the agent or input by the customer via CTI. The approach uses voice recognition to do content analysis through key words, e.g., "I have called you three times" or "I am very unhappy." It combines the data collected through voice recognition with coding by the agents (e.g., reason for call and type of customer), number of contacts and duration of the calls to identify key trends and opportunities.
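A stripped-down sketch of the keyword side of this approach. Real systems apply speech recognition to the recorded audio first; here the transcripts are assumed to already be text, and the phrase list is illustrative:

```python
# Hypothetical dissatisfaction phrases to flag in call transcripts.
DISSATISFACTION_PHRASES = [
    "i have called you three times",
    "i am very unhappy",
    "cancel my account",
]

def flag_transcript(transcript):
    """Return the dissatisfaction phrases found in one call transcript."""
    text = transcript.lower()
    return [phrase for phrase in DISSATISFACTION_PHRASES if phrase in text]

call = "Hello, I have called you three times about this order and I am very unhappy."
print(flag_transcript(call))
```

In a combined-data system, these flags would then be joined with the agent's call coding, contact counts and call durations to surface the trends the paragraph describes.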

Strengths: Integrated information can lead to a better understanding of cause and effect, and provides a more complete understanding of the entire customer contact experience.

Weaknesses: Integrated systems reports require relatively advanced technologies, which are not available to all call centers. Further, when available, these systems must be programmed to provide desired information.

Besides the call center, customer satisfaction data comes from many sources in the organization. For example, these touch points may include data from field sales offices, the Web and brick-and-mortar retail stores. However, the call center typically generates the most customer satisfaction data. As a result, it is often the best place to integrate the data to determine what actions are necessary for the organization to improve satisfaction levels. (See Interpreting Customer Feedback, this section.)

This item was developed by TARP. Contents copyrighted to TARP, 2002.


3. Survey Methodologies

Key Points

• There are four commonly used methodologies for surveying customer satisfaction with the way the contact was handled by the call center:

  • IVR surveying
  • Mail surveying
  • Email and Web surveying
  • Phone surveying

• Surveys that will be most useful to managers will:

  • Provide valid information; i.e., representative of the actual body of customers
  • Be simple to implement and administer
  • Use an effective rating scale
  • Be cheap enough to use at least quarterly, if not monthly
  • Allow analysis by type of contact
  • Result in reports that are timely, action-oriented and easily understood by call center management and supervisors

Explanation

There are many different reasons to conduct customer satisfaction surveys. Some of the most common include:

• Measure overall customer satisfaction with the contact experience to maintain a high and consistent level of customer satisfaction and loyalty.

• Use customer satisfaction data to refine call center processes and response guidelines. For example, one call center noticed that customers were scoring it lower than average on contacts involving fulfillment of brochures that involved another department. The data helped the call center work with the other department to streamline the process for brochure fulfillment and increase customer satisfaction.

• Use customer satisfaction data as part of performance measurement for groups of agents.

• Use customer satisfaction data as part of performance measurement of individual agents.


Basic Steps to Creating a Survey

There are seven steps to creating a sound customer survey:

1. Develop objectives: What's the reason for the survey? What needs to be learned? How will the information be used?

2. Gather quantitative and qualitative requirements: What needs to be measured from a quantitative standpoint? What kinds of qualitative input are desired (e.g., specific customer comments or explanations)?

3. Determine the format of the survey: Through which channels will the survey be delivered (e.g., IVR, email, phone, etc.)? How many and what types of questions will be asked? How will the answers be captured?

4. Determine sample size: What is the expected response rate? What is the desired level of confidence in results? How many customers need to be surveyed? (See Sampling and Analysis Principles, this section.)

5. Develop survey questions: This involves developing the specific questions that will provide for necessary customer input.

6. Test the survey: This involves distributing the survey to a small sample of customers to ensure that questions are clear and responses are addressing the issues as desired.

7. Execute the survey: This involves distributing the survey to customers, collecting input, tabulating data and distributing final results to appropriate parties in the organization. This final step also involves creating any action plans and followup necessitated by the survey.
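Step 4 is often answered with the standard sample-size formula for estimating a proportion. This formula is general statistics practice, not a method prescribed by this guide, and the 25 percent response rate below is an assumed figure:

```python
import math

def sample_size(confidence_z=1.96, margin=0.05, p=0.5):
    """Completed responses needed for a proportion estimate:
    n = z^2 * p * (1 - p) / e^2, where z is the confidence-level z-score,
    e the margin of error, and p = 0.5 the most conservative assumption."""
    return math.ceil(confidence_z**2 * p * (1 - p) / margin**2)

needed = sample_size()              # responses for ±5% margin at 95% confidence
to_send = math.ceil(needed / 0.25)  # surveys to distribute at an assumed 25% response rate
print(needed, to_send)              # → 385 1540
```

The expected response rate thus scales the number of surveys sent, while the confidence level and margin of error determine the number of completed responses required.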

Survey Methodology

There are four commonly used methodologies for surveying customer satisfaction with the way the contact was handled by the call center.

IVR Surveying

Interactive voice response (IVR) surveying allows customers to respond to survey questions using their telephone keypads or through speech recognition. It is recommended that no more than six questions be asked since customers typically tire of the survey beyond that point.

There are three ways that an IVR survey can be administered in a call center environment.


1. Agents can offer the survey before or after the transaction is complete and transfer the caller to the IVR.

2. The customer is asked by the system to pre-commit to the survey prior to being connected to an agent.

3. The customer is given an invitation to be a quality inspector by the organization and is asked to call an 800 number to answer questions after contacts with the call center.

Strengths: This method is very low-cost and has a reasonably good response rate, often between 20 and 30 percent. It is especially useful when a customer calls a call center multiple times, which would make it difficult for the customer to remember the specific details of a particular call when asked in a survey several days later. Also, the data from an IVR survey is typically available in a very timely manner.

Weaknesses: There can be severe bias in the data if the agent offers the survey after the transaction is complete. Agents may not offer the survey if they know the customer is unhappy and may simply note that the customer declined to take it when, in fact, it was never offered. This bias is eliminated in the other two IVR approaches. All three approaches, however, will only measure the customer's satisfaction with the immediate interaction and not the impact of followup actions (e.g., sending something to the customer, fixing an account problem). This followup action can have a large impact on customer satisfaction and loyalty. Most customers will not tolerate long IVR surveys, which limits the amount of information that can be collected about the call.

Mail Surveying

In this methodology, a sample of customers is selected and sent a survey through the mail. The most effective mail surveys are only one page in length with no more than five to eight questions. A postage-paid business reply envelope should also be provided to increase response rates.

Strengths: Typically, a very thoughtful reply is received because the customer fills out the survey at his or her convenience. Comments can also be easily collected. The cost is much lower than telephone surveys, and the survey can usually be generated very easily using customer identification information after the call. Identification information can also be scanned for analysis. Mail surveys usually have two key biases that tend to cancel each other out, plus or minus 2 to 3 percent. The customers who love you and who hate you tend to respond more often than customers who really don't care or who are ambivalent about your organization. Also, more frequent customers are more likely to respond than inactive customers.

Weaknesses: A primary weakness is the delay in receiving the results of the survey, which can be up to four to six weeks. However, if the survey data is used in conjunction with call quality monitoring, the delay is not as important, since the monitoring provides the short-term feedback and the survey identifies areas in which the agent and the call center as a whole are failing to systematically satisfy the customer. Another weakness is the need to manually input data unless scanning technology is available.

Email and Web Surveying

Email and the Web provide many benefits as survey delivery methods. They are usually low-cost, generate responses very quickly and facilitate easy data collection and analysis. There are three general approaches to email and Web surveying:

1. Email invitation: The customer is sent an email that invites him or her to click on an embedded URL, which takes the customer to a Web-based survey.

2. Email dispatch: The customer is sent an email survey, either embedded in the body of the invitation email or as an attachment to it.

3. Web intercept: A pop-up survey appears during a customer’s interaction with a Web site to determine his or her satisfaction with the experience while on the site. This is used most often for gathering feedback on the Web experience and not as much for ascertaining satisfaction with a recent email or phone contact.

Strengths: Using email or the Web for surveying generally achieves a 5 to 10 percent higher response rate than mail. The responses are more rapid, usually within 36-48 hours. Also, the email invitation approach allows the data to be automatically entered into the host database, so the process is fast and efficient. The Web intercept also usually allows the data to be immediately coded and reported. These advantages are not available for the email dispatch approach. Email and Web surveys also tend to generate rich verbatim replies to questions such as “Why were you dissatisfied?” Finally, email and Web surveys have the flexibility to determine what question will be presented next in the survey based on the respondent’s previous answers.


Weaknesses: The key problems are the lack of email addresses for many customers and the fact that email addresses change rather frequently. The key bias can come from the fact that only certain market segments use the Web frequently, so the data will only come from customers who are comfortable using the Web.

Phone Surveying

With this methodology, customers are called either by the organization or an outside contractor in an interval after the contact with the call center. The followup survey usually requires about seven names and phone numbers for each survey completed. This results in a response rate of 15-20 percent of the sample.

Strengths: The major advantage of phone surveys is that they can be completed quickly, often making the data available within a week of the original contact with the call center. The phone survey can also probe the reasons for the customer’s feelings. Finally, the agent has the flexibility to determine what question will be presented next in the survey based on the respondent’s previous answers.

Weaknesses: The major disadvantage of phone surveys is that there is usually a significant (e.g., 10-15 percent) positive bias in the results when compared to a mail or email survey. The reason for this bias is that the customer wishes to get off the phone, and the fastest way to complete the survey is to avoid expressing dissatisfaction. Additionally, the surveys are invasive and can be perceived negatively by customers. Phone surveys are also usually twice as expensive as mail, email or Web surveys.

Other Considerations for Surveys

Developing effective surveys takes practice and an understanding of the complexities of conducting surveys. Before administering a customer satisfaction survey, identify best practice methodologies and find out what principles have worked for others. In particular, you should be sure the survey:

• Provides valid information, that is, the results are representative of the actual body of customers. In many call centers, only very satisfied or very dissatisfied contacts are logged, resulting in an incomplete sample and skewed data.

• Is simple to implement and administer. If the methodology is too much of a burden, it will not be used effectively and consistently.


• Uses an effective rating scale. The most common rating scale is 1-5, with 5 being the highest rating. This scale allows customers to indicate “middle of the road” service. The rating scale should always include descriptions of what each number represents (e.g., 1 is very dissatisfied, 2 is somewhat dissatisfied, etc.).

• Is cost-effective enough to be used on at least a quarterly basis.

• Allows analysis by type of contact. Unless the data provides satisfaction by type of contact and by identity of agent, it cannot be used to make improvements.

• Results in reports that are timely, action-oriented and easily understood by call center management and supervisors. Overly complex research reports may not be practical for ongoing call center management.
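The rating scale principle above can be made concrete with a labeled 1-5 scale. This is a minimal sketch; the text spells out only the descriptions for 1 and 2, so the wording for 3 through 5 is an assumed, typical completion:

```python
# The guide spells out only the labels for ratings 1 and 2; the wording
# for 3 through 5 below is an assumed, typical completion of the scale.
RATING_SCALE = {
    1: "very dissatisfied",
    2: "somewhat dissatisfied",
    3: "neither satisfied nor dissatisfied",  # assumed label
    4: "somewhat satisfied",                  # assumed label
    5: "very satisfied",                      # assumed label
}

def describe(rating):
    """Return the description a survey should display next to each number."""
    return f"{rating} = {RATING_SCALE[rating]}"

for n in sorted(RATING_SCALE):
    print(describe(n))
```

Printing the full scale alongside each survey question helps ensure customers interpret the numbers consistently.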

This item was developed by TARP. Contents copyrighted to TARP, 2002.


4. Sampling and Analysis Principles

Key Points

• The sampling plan is dependent on the objectives of the measurement.

• Specific principles to keep in mind when sampling contacts for customer satisfaction surveys include:

• Use a random sample
• Sample by contact channel
• Segment by unit of analysis

• Two key reports that can effectively display actionable data for performance measurement include:

• Ranking report
• Diagnostic report

Explanation

Specific principles to keep in mind when sampling contacts for customer satisfaction surveys include:

• Use a random sample: In general, the sample of customers to be surveyed should be pulled randomly from a database of customers who have recently contacted the call center. To ensure that the survey results are representative of the total population of contactors, it is important that the database contain all customer contacts or a representative sample. The “randomness” of the sample will allow the results to be representative of the total population of contactors.

• Sample by contact channel: When measuring overall customer satisfaction, it is best to pull a random sample of contactors by contact channel. For example, select x number of telephone contacts, x number of letter/fax contacts, and x number of emails received during a given timeframe. Sampling by contact channel ensures customer satisfaction survey results represent customers regardless of the contact channel they chose to use.

• Segment by unit of analysis: In addition to sampling by contact channel, the sample may need to be segmented by the unit of analysis. The following table shows simplified sampling requirements as they relate to objectives of the measurement system. For instance, if the main objective is to measure overall customer satisfaction with the call center, then the call center may pull a random sample of 500 contacts per contact channel. On the other hand, if the primary objective is to use customer satisfaction data to refine processes by contact type, the call center should pull a random sample of 100 contacts per type of contact. In reality, of course, there can be multiple objectives for measuring customer satisfaction, so the sample may need to accomplish several objectives at once. Note: The sample size is determined by a variety of factors, as the following discussion will explain.
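The random-sample and sample-by-channel principles above can be sketched in a few lines. This is an illustrative sketch, not a prescribed implementation; the contact records, the channel volumes and the 500-per-channel target (the "overall satisfaction" objective) are hypothetical:

```python
import random

# Hypothetical contact records pulled from a month of contact history.
# Channel volumes (9,000 phone / 800 email / 200 fax-letter) are invented.
contacts = (
    [{"id": i, "channel": "phone"} for i in range(9000)]
    + [{"id": i, "channel": "email"} for i in range(9000, 9800)]
    + [{"id": i, "channel": "fax/letter"} for i in range(9800, 10000)]
)

def pull_sample(contacts, per_channel, seed=2002):
    """Pull a simple random sample of recent contacts for each channel."""
    rng = random.Random(seed)  # fixed seed so the pull is reproducible
    by_channel = {}
    for contact in contacts:
        by_channel.setdefault(contact["channel"], []).append(contact)
    sample = {}
    for channel, records in by_channel.items():
        # Never ask for more records than the channel actually received.
        sample[channel] = rng.sample(records, min(per_channel, len(records)))
    return sample

sample = pull_sample(contacts, per_channel=500)
for channel in sorted(sample):
    print(channel, len(sample[channel]))
```

Because the fax/letter channel received only 200 contacts, its pull is capped at the channel volume; low-volume channels may need several months of pooled contacts to reach a usable sample.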

Determining Sample Size

The topic of determining the appropriate sample size for the survey population involves a variety of statistical techniques that are beyond the scope of this study guide. However, there are several variables involved in these techniques that are important to understand. They include:

• Confidence interval: The confidence interval indicates how close the survey results of the sample are to the actual results you would get if you surveyed the entire population. The confidence interval is expressed as a plus or minus deviation. For example, if the confidence interval is plus or minus 5 and 24 percent of the sample population select “yes” as their answer to a question, then somewhere between 19 percent (24 - 5) and 29 percent (24 + 5) of the actual population would select “yes” if you were to survey the entire population.

• Confidence level: This percentage indicates how certain you can be that the actual population falls within the confidence interval. For the example above, a 97 percent confidence level would indicate that you are 97 percent sure that 19 to 29 percent of the actual population would indeed select “yes” if you were to survey the entire population. Most researchers use a 95 percent confidence level.

Objective                                   Pull sample by:               Approximate size of sample
                                                                          to pull each month*

Measure overall customer satisfaction       Channel of contact            500/contact channel
by method of contact                        (e.g., phone, email,
                                            correspondence/fax)

Measure by type of contact (e.g., reason    By major type of contact      100/type of contact
for contact or type of response) to
refine call center processes

Performance measurement team by team        Team (group of agents)        100/team

Performance measurement by agent            Agent                         30/agent

* Assumes sample is pulled monthly and reports are produced quarterly, a 20 percent response rate and managerial significance. Actual number to be pulled would change with different frequencies of sample pull and reporting and desired level of confidence.

There are several sample size calculators available online. One such calculator, provided by Creative Research Systems, can be found at http://www.surveysystem.com/sscalc.htm.

In general, the larger the sample, the more reliable the results; however, there is always tension between the reliability of results and the cost of surveying a larger sample.
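The interplay of these variables can be illustrated with the standard normal-approximation formula for estimating a proportion, which is the same calculation online calculators typically perform. A sketch (the 10,000-contact population is hypothetical):

```python
import math

def sample_size(population, z=1.96, margin=0.05, p=0.5):
    """Sample size needed to estimate a proportion in a finite population.

    z:      z-score for the confidence level (1.96 for 95 percent)
    margin: confidence interval as a +/- fraction (0.05 = plus or minus 5%)
    p:      assumed answer proportion; 0.5 is the most conservative choice
    """
    n = (z ** 2) * p * (1 - p) / (margin ** 2)  # infinite-population size
    n_adj = n / (1 + (n - 1) / population)      # finite-population correction
    return math.ceil(n_adj)

# e.g., 10,000 monthly contactors, 95 percent confidence, +/-5 percent interval
print(sample_size(10000))  # → 370 completed surveys
```

Note that the result is completed surveys, so the number of contacts pulled must be grossed up by the expected response rate (e.g., 370 completions at a 20 percent response rate means pulling roughly 1,850 names).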

Analysis and Reporting

Responses to customer satisfaction surveys should be analyzed on a regular basis to determine trends and isolate reasons for change in customer satisfaction levels. Reporting and analysis information is useless unless the call center takes appropriate action based on the results. (See Identifying Contributors to Customer Satisfaction and Isolating Root Causes of Dissatisfaction, this section.)

If customer satisfaction data is used for performance measurement, it is important to provide managers and supervisors with actionable reports that can be reviewed in a few minutes. There are two key reports that can effectively display actionable data: a ranking report and a diagnostic report.

• A ranking report orders the teams and agents from high to low based on average customer satisfaction scores.

• The diagnostic report provides the average customer satisfaction with each attribute of the call for each team and agent.

The tables below show examples of these types of reports. Note: Customer satisfaction index is abbreviated as CSI.


Ranking Report

ABC Company Call Center Ranking                         November, 2001

Rank   Agent   3-month CSI   Current month CSI   12-month CSI
  1      J          90              93                 88
  2      A          87              87                 86
  3      F          85              90                 82
  4      B          80              77                 89
  5      C          76              77                 76
  6      G          69              75                 60
  7      H          61              65                 47
  8      D          59              58                 60
  9      I          48              39                 52
 10      E          39              29                 57

Diagnostic Report

ABC Company                  Agent E                    November, 2001

                                        3-month   Current month   12-month

CSI                        Agent E         39           29            57
                           Company         68           73            71

Timeliness of response     Agent E         61           71            62
                           Company         62           70            64

Professionalism of staff   Agent E         39           44            40
                           Company         70           75            72

Clarity of response        Agent E         48           38            53
                           Company         57           62            58

In the ranking report, Agent E received the lowest customer satisfaction index for the three-month and current-month time periods. A look at Agent E’s diagnostic report reveals that the greatest difference from the company average occurs in professionalism and clarity of response. A supervisor could use these reports in conjunction with other internal metrics (e.g., call quality monitoring and supervisory observations) to effectively coach Agent E to improve his/her performance.
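Both reports are straightforward aggregations of raw survey responses. A minimal sketch, using a hypothetical handful of responses scored on a 0-100 index (a real report would draw on a month or more of data):

```python
from collections import defaultdict

def ranking_report(responses):
    """Order agents from high to low by average customer satisfaction index."""
    scores = defaultdict(list)
    for r in responses:
        scores[r["agent"]].append(r["csi"])
    averages = {agent: sum(s) / len(s) for agent, s in scores.items()}
    return sorted(averages.items(), key=lambda item: item[1], reverse=True)

def diagnostic_report(responses, agent, attributes):
    """Compare one agent's average attribute scores to the company average."""
    report = {}
    for attr in attributes:
        agent_scores = [r[attr] for r in responses if r["agent"] == agent]
        all_scores = [r[attr] for r in responses]
        report[attr] = {
            "agent": sum(agent_scores) / len(agent_scores),
            "company": sum(all_scores) / len(all_scores),
        }
    return report

# Hypothetical one-month slice of survey responses:
responses = [
    {"agent": "J", "csi": 93, "professionalism": 95},
    {"agent": "A", "csi": 87, "professionalism": 80},
    {"agent": "E", "csi": 29, "professionalism": 44},
]
print(ranking_report(responses))
print(diagnostic_report(responses, "E", ["professionalism"]))
```

Keeping the two views separate mirrors how a supervisor uses them: the ranking report flags who needs attention, and the diagnostic report suggests what to coach.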

This item was developed by TARP. Contents copyrighted to TARP, 2002.


5. Identifying Contributors to Customer Satisfaction

Key Points

• The customer satisfaction data should be analyzed, using various statistical techniques, to understand the key drivers of customer satisfaction.

• Key drivers of customer satisfaction within the call center’s control include:

• Accessibility (a customer dissatisfier)
• Resolution on first contact
• Follow through on promised action
• Knowledge of the agent

Explanation

The basic formula for maximizing customer satisfaction and loyalty is to do the job right the first time (i.e., limit the number of problems customers experience across the life cycle of the customer) and effectively manage customer contacts, including the feedback loop to product and service improvement.


[Figure: Formula for Maximizing Customer Satisfaction and Brand Loyalty. Source: TARP. Doing the job right the first time + effective customer contact management (respond to individual customers, identify sources of dissatisfaction, conduct root cause analysis, feedback on prevention, leading to improved product and service quality) = maximum customer satisfaction and loyalty. Customers will: buy again, buy more, tell others to buy, and buy your other products and services.]

Page 27: Evaluating Service Delivery - ICMI · Evaluating Service Delivery • Overall, with the contact experience • With the personnel (e.g., professionalism, knowledge, etc.) • With

Evaluating Service Delivery

“Doing the job right the first time” requires effective product and service design and development, manufacturing processes, product and service delivery, and sales practices. This will ensure that, in most cases, customer needs and expectations are met.

No product or service completely meets customer needs and expectations 100 percent of the time. Customers do experience problems and they do have questions. Thus, an organization needs to have an effective system in place to handle these problems and questions when they arise.

In addition, data gathered from customers with problems/questions is used to improve the quality of goods and services by identifying and correcting the root causes of customer problems and questions.

In short, the call center plays a role in all three components of the formula for maximizing customer satisfaction and brand loyalty. (See The Call Center’s Role in Customer Relationship Management, Section 3.)

Call Center Contributors to Customer Satisfaction

Using various statistical techniques, such as correlation and regression, the call center manager can isolate those parts of the call center process (e.g., accessibility, response timeliness, knowledge of agent, follow-through on promised action, etc.) that have the greatest impact on customer satisfaction and loyalty. The performance in these critical areas can be assessed against customer expectations and tracked to help determine overall customer satisfaction.
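As a simple illustration of the correlation technique, the sketch below computes the Pearson correlation between each attribute rating and overall satisfaction on a hypothetical set of 1-5 survey responses; attributes with the largest coefficients are candidate key drivers (regression on real data would be the fuller analysis):

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length rating series."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Hypothetical 1-5 ratings from eight survey respondents:
overall       = [5, 2, 4, 1, 3, 5, 2, 4]   # overall satisfaction
first_contact = [5, 1, 4, 2, 3, 5, 2, 5]   # resolved on first contact?
speed_answer  = [4, 3, 4, 3, 3, 5, 4, 4]   # satisfaction with speed of answer

drivers = {
    "resolution on first contact": pearson(first_contact, overall),
    "speed of answer": pearson(speed_answer, overall),
}
# Rank attributes by correlation strength; the strongest are candidate drivers.
for name, r in sorted(drivers.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name}: r = {r:+.2f}")
```

In this invented data, resolution on first contact tracks overall satisfaction more closely than speed of answer, consistent with the findings described below.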

Customer satisfaction data collected from all sources should be gathered in such a way that will allow managers to determine the typical causes of dissatisfaction. This determination is not an exact science, and many customers will be dissatisfied for multiple reasons. (See Isolating Root Causes of Dissatisfaction, this section.)

Key drivers of customer satisfaction within the call center’s control include:

• Accessibility: TARP has found that accessibility, as measured by service level and response time, can be a significant customer dissatisfier if it is outside the customer’s expectations. However, as long as the call center provides a service level within the customer’s expectations, answering the call faster doesn’t significantly contribute to higher customer satisfaction.

• Resolution on first contact: On the other hand, TARP has found in hundreds of studies that “resolution on first contact” is nearly always a key driver of customer satisfaction. Increasing the percentage of contacts that are resolved to the customer’s satisfaction on the first call will increase customer satisfaction.

• Follow through on promised action: A close second to “resolution on first contact” as a key driver of customer satisfaction is “follow through on promised action in the expected time frame.” Agents need to know that an overly optimistic promise to a customer can backfire if the organization is not able to meet that promise. It is better to be conservative with promises, such as shipping times, product replacement time frames, and order processing times, and delight customers with even better actual results.

• Knowledge of the agent: For certain types of call centers, “knowledge of the agent” can also be a key driver of satisfaction. In some industries, contact with an agent is the service provided to the customer. For example, customers rely on the knowledge and experience of agents in technical support centers and information desks to receive their primary deliverable – knowledge. Customers can also experience frustration if they reference marketing promotions or products, for example, which agents are unaware of. It is hard to satisfy customers who are more knowledgeable than the agent who is attempting to help them.

A good source for examining potential contributors to customer satisfaction is customer expectations. (See Identifying and Quantifying Customer Expectations, Section 3.) The goal is to understand the key drivers of customer satisfaction and dissatisfaction and manage to those expectations.

This item was developed by TARP. Contents copyrighted to TARP, 2002.


6. Isolating Root Causes of Dissatisfaction

Key Points

• Multiple sources of data are necessary to identify and confirm causes of dissatisfaction.

• A number of quality analysis and improvement tools are useful in understanding processes and locating the root causes of problems, including:

• Checklists
• Flow charts
• Cause-and-effect diagrams
• Scatter diagrams
• Pareto charts
• Control charts
• Benchmarking

• For the analysis to be actionable, it must identify the specific events that must be prevented, the causes of these events and the changes needed to prevent them, and the cost to the organization in revenue and extra cost for each month that action is not taken.

Explanation

In addition to examining key drivers of customer satisfaction, the call center is in a position of leverage in terms of identifying and isolating root causes of customer dissatisfaction. Determining the root cause is always the first step in addressing customer dissatisfaction. Root causes of dissatisfaction can occur either in the call center or outside the call center. Generally, the causes of dissatisfaction with the call center can be narrowed into the following areas:

• Actions of employees

• Processes within the call center

• Processes that link the call center to other parts of the organization

• Incorrect customer expectations

Causes of dissatisfaction that occur outside the call center can come from a much broader range of possibilities. Examples include:


Page 30: Evaluating Service Delivery - ICMI · Evaluating Service Delivery • Overall, with the contact experience • With the personnel (e.g., professionalism, knowledge, etc.) • With

Evaluating Service Delivery

• Marketing promises that don’t match with product or service capabilities

• Problems with products or services (e.g., malfunctions)

• Shipping delays or inaccurate order fulfillment

• Unclear or inconsistent policies and procedures

Customer satisfaction data should be analyzed by type of contact, time of day and day of week. In addition to customer satisfaction data, other sources of data (e.g., operational data, call monitoring data) and interviews/brainstorming exercises with agents and supervisors can help identify and confirm causes of dissatisfaction and potential actions.

Isolating Root Causes

As illustrated by the figure, a call center itself is a highly interrelated system of causes. And it is just one part of an organizationwide system of causes.

The central focus of the process can be any key performance indicator or virtually any other measure or objective. Since just about everything is interrelated, the causes of performance problems are often difficult to isolate and measure. There is little use exhorting agents (or individuals in other positions) to improve quality and fix causes of dissatisfaction without also making improvements to the system of causes (the processes).

[Figure: The Call Center Process. A diagram of the interrelated system of causes behind any result or objective, including: agent (knowledge, communications skills, aptitudes and abilities, training, empowerment, after-call work, adherence to schedule, health, motivation, pacing); customer (expectations, communication skills, language ability, mood, knowledge, access preference, lifestyle); contact type and information; staffing (forecast, schedules, work rules, absenteeism, part-time ratio, overflow groups, group configuration); ACD system (load, communications speed, CPU speed, programming, CTI, routing/distribution rules); network/lines/trunking (quantity, grouping, distance from exchange, analog/digital, testing); VRU (branching, scripting, speech recognition/DTMF, response time, work offloading); Web/e-contacts (Internet speed/bandwidth, degree of integration, security, service available, customer configuration, connection speed, CPU capacity, user capabilities); information systems/CRM and desktop tools; management reports; environment (lighting, noise, chairs, screens, appearance); market forces and advertising.]

Without the appropriate methodology and tools, identifying the root causes of problems in a call center is a significant challenge. Consider a recurring problem, such as providing incomplete information to callers. Maybe the cause is insufficient information in the database. Or a need for more training. Or maybe a lack of coordination with marketing. Or carelessness. Or agent stress from a chronically high occupancy rate. Or a combination of any of these factors coupled with many other things. To make improvements and leverage opportunities, you need to have a systematic approach for improving quality, such as the one illustrated in the following graphic.

Further, there are a number of quality analysis and improvement tools that are useful in understanding processes and locating the root causes of problems, for example:

• Checklists: Lists of process steps; e.g., key procedures.

• Nominal group technique: This weighted ranking technique is effective for determining priorities. Team members individually rank issues by importance, and the issues receiving the highest votes are worked on first.

• Flow charts: Used to analyze and standardize procedures, identify root causes of problems and plan new processes; e.g., system programming or steps to handle a contact.

• Cause-and-effect diagrams: Illustrate the relationships between causes and a specific effect you want to study; e.g., repeat calls.


[Figure: A Process for Quality Improvement. Select key performance indicators → identify improvement opportunities → identify all possible causes → agree on actual causes → decide which causes to remedy first → develop a plan of action → implement the solution → track results.]


• Scatter diagrams: Assess the strength of the relationship between two variables; e.g., average handling time versus customer satisfaction.

• Pareto charts: Bar charts that rank events in order of importance or frequency; e.g., sources of customer dissatisfaction by type.

• Control charts: Provide information on process variation; e.g., sales ratios or contact quality scores.

• Benchmarking: The process of measuring your products, services and procedures against those of other organizations.
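A Pareto analysis from the list above, for example, takes only a few lines once dissatisfaction sources have been categorized. The complaint categories and counts below are hypothetical:

```python
from collections import Counter

# Hypothetical categorized dissatisfaction sources from one month of surveys:
complaints = (
    ["long hold time"] * 45
    + ["not resolved on first call"] * 30
    + ["agent lacked knowledge"] * 15
    + ["promised followup missed"] * 7
    + ["other"] * 3
)

def pareto(events):
    """Rank event types by frequency with a running cumulative share."""
    counts = Counter(events).most_common()
    total = sum(n for _, n in counts)
    rows, cumulative = [], 0
    for cause, n in counts:
        cumulative += n
        rows.append((cause, n, round(100 * cumulative / total, 1)))
    return rows

for cause, n, cum_pct in pareto(complaints):
    print(f"{cause:28s} {n:3d}  {cum_pct:5.1f}%")
```

The cumulative column makes the classic pattern visible: a few causes typically account for most of the dissatisfaction, which helps decide which root causes to remedy first.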

For the analysis to be actionable, it must identify the specific events that must be prevented, the causes of these events and the changes needed to prevent them, and the cost to the organization in revenue and extra cost for each month that action is not taken. Further, the current level of the problem should be articulated along with the data sources that confirm the stated level. Finally, the analysis should provide the expected impact of the proposed actions along with the cost of those actions.

(Quality improvement tools and processes are covered in detail in ICMI’s Call Center Operations Management Study Guide.)

Case Studies

The following case studies illustrate how customer satisfaction data, in concert with other operational data, can be used to isolate the root cause of customer dissatisfaction.

Case Study One: Customer satisfaction data for a packaged goods manufacturer’s customer relations call center showed a sudden drop in customer satisfaction in one month, especially satisfaction with responsiveness and followup (two specific attributes the call center tracked). Analysis by type of contact and type of response showed that the largest drop in customer satisfaction was for contacts requiring a followup mailing of a catalog/promotional item from another department within the organization. Review of contact data also showed an increase in calls from customers who had requested and not received certain promotional items in a timely manner. Further investigation showed that, in fact, the other department had fallen behind in its delivery times for catalogs/promotional items due to an increase in customer requests and the inability to justify more staff to cover the increased volume. By using the customer satisfaction data, translated into impact on customer loyalty, revenue and costs to the organization, the call center manager was able to help the fulfillment department justify increased staff and restore fulfillment times to an acceptable level.

Case Study Two: A financial services call center found that customer satisfaction was significantly lower for evening/night calls than for daytime calls. Further analysis found that there was little difference in the type of contacts, type of response or accessibility of the call center during these times. The data also showed customer satisfaction with agent knowledge, clarity and professionalism to be particularly low for the evening/night time periods. Upon further investigation, the call center manager found that the evening/night shifts were staffed entirely by new agents with limited supervision or assistance from senior agents. Working with the agents and supervisors, the call center worked out a system for more experienced agents to work on a rotating basis with small groups of new agents in the evening hours, thus increasing their knowledge and confidence.

This item was developed by TARP and ICMI. Contents copyrighted to TARP and/or ICMI, Inc., 2002.


7. Measuring Accessibility Across Channels

Key Points

• Service level and response time are the best measures of accessibility for channels that require agent interaction.

• Other measures of accessibility include:
  • Abandoned calls
  • Blocked calls
  • Average speed of answer
  • Downtime
  • Number and type of error messages generated
  • Percentage complete in IVR
  • Number of contacts continued in another channel

• Each channel’s accessibility needs to be evaluated both on its own and in context with all of the organization’s other customer access channels.

Explanation

The best measures of accessibility for customer access channels in a call center are service level and response time. One or the other of these measures should be in place for every channel that requires agent interaction, and fully automated channels should have their own specific measures. Each channel’s accessibility results should be evaluated together with customer satisfaction measures for that channel in order to assess the effectiveness of service delivery across the call center.

Definition and Importance of Accessibility

Accessibility of the customer to the organization includes two components:

1. Ease of locating service channels (e.g., how easy is it for customers to find 800 numbers, or how easily can they locate the self-service knowledge base on the Web): This first component is often the direct responsibility of another department, such as marketing, but call center managers should represent the customer by providing feedback on how easy it is for customers to locate service channel contact information.



The organization may also decide to make some service channels more accessible to customers than others. The call center manager can provide valuable insight into the ramifications of these decisions. For example, requiring all customers to make initial contact with the organization via email may drive those customers who are uncomfortable with this medium to your competitors.

2. The time that elapses between the customer’s attempt to contact the organization and the organization’s response: This second component is the direct responsibility of the call center for each contact channel it handles. Determining how accessible the call center is by each channel should be based squarely on customer expectations. This aspect of accessibility is embodied in two key measures: service level and response time.

Service Level and Response Time

Service level and response time objectives tie the resources you need to the results you want to achieve. They measure how well you are getting the transactions “in the door” and to agents so that you can get on with the business at hand. Service level and response time are stable, concrete targets that can be used for planning and budgeting.

Service level is defined specifically as: “X percent of contacts answered in Y seconds”; e.g., 90 percent answered in 20 seconds.

Response time is the equivalent of service level for transactions that don’t have to be handled the moment they arrive. Response time is defined as “100 percent of contacts handled within n days/hours/minutes”; e.g., all email will be answered within 120 minutes or all faxes will be responded to within 24 hours.
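The service level definition above can be sketched in code. This is a minimal illustration, not ICMI’s formula: the function name and the sample delays are ours, and a real calculation must also decide how abandoned calls are counted.

```python
# Minimal sketch of "X percent of contacts answered in Y seconds".
# Assumption: only answered contacts are counted; abandoned calls are ignored.
def service_level(answer_delays_sec, threshold_sec=20):
    """Fraction of answered contacts answered within threshold_sec."""
    within = sum(1 for d in answer_delays_sec if d <= threshold_sec)
    return within / len(answer_delays_sec)

# Ten answered calls and their queue delays in seconds (hypothetical):
delays = [5, 12, 18, 25, 40, 8, 15, 19, 21, 10]
pct = service_level(delays)   # 0.7, i.e., "70 percent answered in 20 seconds"
```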

The table below illustrates the measure that is most likely to be appropriate for each customer access channel.

It is likely that some channels appropriate for response time objectives today will migrate to the service level column as technology, competition and customer expectations evolve. Whenever the objective for response time reaches one hour or less, service level calculations are required to accurately determine the staff needed to handle the work.


There is no industry standard service level. What is appropriate for one organization’s customers may not be appropriate for another’s. When choosing a service level objective, it is important to consider caller tolerance levels, the organization’s strategic objectives, labor costs, network costs and the value of a call. (For more information on service level and response time, see ICMI’s Call Center Operations Management Study Guide.)

Impact of Differences in Accessibility Across Channels

Service level must be in “parity” across contact channels. Being in parity in this context doesn’t necessarily mean being equal; e.g., it doesn’t mean you respond to email as fast as you respond to a phone call. Rather, it means you are operating within customer expectations across contact channels. The customer who expects a reply to an email within a few hours but doesn’t get it may pick up the phone. Now you’ve got two contacts going, which degrades both productivity and quality. Similarly, if a customer ends up in an endless phone queue, they may send an email, try alternative numbers or work through a different set of IVR options.

What are your customers’ expectations? Ask them, and observe their behavior. Also, tell them what to expect. Email response management systems can automatically send replies acknowledging that messages have been received and providing information on when to expect a response. Your Web site, customer literature and other sources of information can also help establish expectations, as well as provide information or services that may obviate the contact in the first place.

Access Channel             Use Service Level   Use Response Time
Inbound calls                      X
Outbound calls                                         X
Email                                                  X
Text chat                          X
Web “call me back now”             X
Web “call me back later”                               X
Web call through                   X
Fax                                                    X
Postal mail                                            X


In the end, service level and response time must be viewed in the context of a much larger objective: customer loyalty. Neither guarantees a loyalty-building customer experience, but they must not be minimized, either. They are enablers.

Additional Measures of Accessibility

In addition to service level and response time, the following measures provide additional information about accessibility across customer access channels.

• Abandoned calls: Abandoned calls is the measure of callers who hang up after waiting at least some time in the queue. However, abandonment can be a misleading measure of accessibility. Numerous factors go into how motivated a caller is to wait, and callers have individual levels of tolerance, which may be different each time they contact the center depending on why they are calling and how busy they happen to be that day. These factors are out of a call center manager’s direct control, making abandonment unreasonable to use as a basis for planning activities.

• Blocked calls: Blocked calls, or busy signals, can be purposefully generated by the center when a certain volume of calls has been reached, or can be the result of insufficient agent or trunking capacity to handle the call load. How many callers experienced busy signals does not tell you anything about how accessible the center was to those callers who were able to get through. Therefore, blocked calls do not give a complete picture of accessibility. (For more information on abandoned and blocked calls, see ICMI’s Call Center Operations Management Study Guide.)

• Average speed of answer (ASA): The average delay of all calls, including those that receive an immediate answer. ASA is based on the same data as service level. However, ASA can be misleading and does not reflect the experience of a typical caller. Since ASA is an average, most calls get answered more quickly than ASA and some wait far longer. Very few customers have the “average” experience as expressed by ASA. (For more information on average speed of answer, see ICMI’s Call Center Operations Management Study Guide.)
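A quick sketch, with hypothetical delays of our own, shows why ASA misleads: a few long waits pull the average up, so most callers actually wait less than the ASA suggests.

```python
# Hypothetical queue delays (seconds) for ten answered calls.
delays = [2, 3, 4, 5, 6, 8, 10, 15, 60, 120]

# ASA is a simple mean over all answered calls.
asa = sum(delays) / len(delays)                 # 23.3 seconds

# Share of calls answered faster than the "average" speed of answer:
share_faster = sum(1 for d in delays if d < asa) / len(delays)   # 0.8
```

Here 80 percent of the calls were answered faster than the 23.3-second ASA, so the "average" describes almost no one's actual experience.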

• Downtime: The time that a system is unavailable. This is the most fundamental measure of accessibility. For example, inbound telephone systems are expected to be operational 99.999 percent of the time, commonly referred to as “five nines.” Computer systems have not historically achieved this level of reliability, but are getting close.

• Number and type of error messages generated: Automated systems generate error reports to detect and help solve technical problems. Some of these reports can provide insight about the ways customers are using the system, and may be interpreted as measures of accessibility.

• Percentage complete in IVR: This is the number of users who exited the IVR application after completing their transaction divided by total IVR users. This has long been a measure of IVR accessibility. The measure fails to provide a complete picture, however, if it is used simplistically. Your customer access or segmentation strategy may call for human intervention, in which case the percentage complete in IVR will not be an accurate measure of success. As many have noted, it is hard to upsell or cross-sell products and services using an IVR system.
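The percentage-complete calculation above can be sketched as follows; the user counts are hypothetical, not from the text.

```python
# Percentage complete in IVR: completed transactions over total IVR users.
ivr_users = 4_000            # total users entering the IVR application (hypothetical)
completed = 2_600            # users who exited after completing their transaction
pct_complete_in_ivr = completed / ivr_users    # 0.65, i.e., 65 percent complete
```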

• Number of contacts continued in another channel: Tracking how often customers contact you via one channel and then move to another channel can provide valuable insight regarding your overall customer accessibility. Channel hopping may be a sign that the first channel is not as accessible as it should be, so customers are abandoning it in favor of another. For example, a Web site that offers insufficient information may be the starting point for customers, but to get the information they need, a phone call or text chat may be required.

It bears repeating that accessibility measures should always be considered in context with other performance results and qualitative information. Except for downtime and error messages, results from these measures cannot be interpreted in a vacuum.

Monitoring Accessibility in Real Time

Call center managers must put the appropriate people and processes in place to proactively monitor system performance in real time. This involves monitoring trunks to ensure all are live, monitoring IVR ports for availability, checking that the Web site is up and running, making sure applications like PC banking are functioning for customers, and making sure calls can connect to agents properly. Failure to monitor key systems in a dynamic environment can quickly and unnecessarily lead to customer dissatisfaction.

Accessibility also involves ensuring that key processes are operating effectively on an ongoing, real-time basis. These include:

• Content accuracy and provisioning (See Planning and Management Processes, Section 7.)

• Information accuracy and dissemination (See Desktop Tools and Workflow, Section 7.)

• System response time (The dynamics of system response time, call handling time, service level and other contributors to real-time performance are covered in detail in ICMI’s Call Center Operations Management Study Guide.)

All of these processes are components of an integrated whole: ensuring customers access the information and service they want, when they want it, through the channel they choose.

Evolution of Accessibility

As call centers develop multiple channels of customer access, an almost organic cross-pollination effect is taking place: When unique characteristics of these channels find favor with customers, demand grows to develop ways to apply those characteristics to other channels, as well. For example:

• The Web browsing experience is being adapted for both wireless phones (via tiny screens) and regular phones (through IVR applications that both “read” Web pages aloud and recognize voice commands).

• The technology underlying Web sites that offer excellent self-service options is being leveraged for call center agents to use in responding to phone inquiries.

• The helpfulness of live agents enriches the Web-browsing contact channel when “call me” or “call through” buttons are provided.

Continued, even accelerating, change appears to be the rule for customer expectations about multichannel call center accessibility.

Consequently, call center managers will need to continually monitor, track and evaluate customer satisfaction or risk being left behind.


8. Interpreting Customer Feedback [Strategic]

Key Points

• The call center is one touch point within the overall context of the customer experience. The data derived from the call center is a predictable portion of the total input, but there are also other sources, such as sales, Web and retail contact areas.

• The call center can be the primary focal point for collecting data on both overall customer satisfaction and satisfaction of the customer who contacts the organization; in addition, the call center can be the focal point for identifying opportunities to improve overall satisfaction.

• The solution to the challenges of multiple complaint rates, across multiple channels, across a range of time, is to develop a map of customer-contact behavior by type of issue.

• The best way to provoke action based on customer feedback is to convert the number of instances into revenue lost.

Explanation

The call center is one touch point within the overall context of the customer experience. The data derived from the call center is a predictable portion of the total input, but there are also other sources, such as sales, Web and retail contact areas.

Call center data is often more timely than survey data or information received from field offices or sales reps; e.g., the customer tends to call or email at the time a problem occurs. The only touch point nearly as timely as the call center is Web information.

Leading organizations make use of the following sources of customer data:

• Survey data

• Employee input

• Customer complaints from the field and the call center



• Call quality monitoring recordings

• Customer advisory panels

Integrating Multiple Sources of Data

Data from multiple touch points can be integrated to create a single picture of quality. However, it is important to understand customer behavior to accurately integrate customer feedback data.

The following chart, which is based on a farm equipment manufacturer, illustrates the challenge of integrating customer feedback and quality data. The manufacturer and retailer both inspect the product prior to sale. Then the farmer buys the tractor and uses it in the field. When the tractor breaks down, the first call is from the farmer to the help desk. Then a technician may call the factory to ask for assistance in repair. Then a parts order comes in. Then a factory rep reports multiple failures. Finally, warranty claims and surveys are received. All of these data sources describe the same set of events. However, the survey arrives many weeks after the original complaint call and parts order.

The solution to the challenges of multiple complaint rates, across multiple channels, across a range of time, is to develop a map of customer contact behavior by type of issue. The map below was developed by surveying a random sample of customers and asking to whom they complain by type of problem. This map addresses where people complain upon encountering a rude gate agent in an airport.

[Chart: Data Sources for Product Failure, Farm Equipment Manufacturer (Source: TARP). Before failure: line inspection, factory audit, retailer assembly. Failure event: call to help desk, parts order, field tech request. After failure: field report, warranty claim, satisfaction survey. Types of failures: assembly operator error, design, mismarked, poor manufacturing process.]


For each touch point, you can estimate a rough ratio of the number of instances that exist in the marketplace for each problem reported at that touch point. You can then extrapolate the problems to the total instances in the market, as shown below. Finally, an overall estimate of the number of instances of the particular problem is determined by averaging the estimates, also shown in the following chart:

A satisfaction data system is only as effective as the degree to which it provokes action; reports that cause no action are a waste of time. The best way to provoke action is to convert the number of instances into revenue lost.

The next-to-last step is to convert the estimate of the number of customers affected into revenue impact.

Percent of Customers Complaining to Different Touch Points
(100 airline customers encountering a rude gate agent)

• 80% don’t complain
• 20% complain:
• 2% to flight attendant
• 0.8% to consumer affairs/customer relations**
• 7% to supervisor on site**
• 0.5% to executive by letter**
• 0.2% to executive by email**
• 1% to frequent flyer 800#**
• 4% to reservations 800#
• 1% to airline Web site
• 3.5% other

** For these channels, the consumer may have first complained elsewhere and then escalated their complaint to this channel.

Source: TARP industry-specific research and consulting

Touch Point              Customer Reports   Ratio of Instances to   Estimated # Instances
                                            Reported Problems
Frequent Flyer 800#      6                  100                     600
Employee Input System    20                 20                      400
Reservations             14                 25                      350
Executive Complaint      2                  500                     1,000
Consumer Affairs         4                  120                     480
Survey                   0.5%               100,000                 500

Best Estimate (Average): 555 instances in a month
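The extrapolation can be sketched as follows: reports at each touch point are scaled by the instances-to-reports ratio, and the per-touch-point estimates are averaged for a best estimate. The dictionary layout is our illustration; the figures are from the chart above.

```python
# Touch point: (customer reports, ratio of market instances to reports).
# For the survey row, 0.5% of a 100,000-customer base reported the problem.
touch_points = {
    "Frequent Flyer 800#":   (6, 100),
    "Employee Input System": (20, 20),
    "Reservations":          (14, 25),
    "Executive Complaint":   (2, 500),
    "Consumer Affairs":      (4, 120),
    "Survey":                (0.005, 100_000),
}

# Extrapolate each touch point's reports to total market instances.
estimates = {tp: reports * ratio for tp, (reports, ratio) in touch_points.items()}

# Average the independent estimates for the overall best estimate.
best_estimate = sum(estimates.values()) / len(estimates)   # ~555 instances/month
```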


This conversion of the number of customers affected into revenue impact is shown in the following chart:

In our example, we will assume that 25 percent of customers who experience this problem will translate into lost business and that the value of a customer is $1,000. Multiply the number of customers encountering the problem in the last month by the damage to loyalty (25 percent) and by the annual revenue value of the customer to estimate the revenue impact of the issue per month. TARP has found that revenue impact per month is the most effective timeframe for presentation of this data.

This item was developed by TARP. Contents copyrighted to TARP, 2002.

Call Center Customer Relationship Management Study Guide • Version 2 • Copyrighted to ICMI, Inc., 2002 39

# Customers in Month   Damage to Loyalty   Value of Customer   Monthly Revenue Impact
555                 ×  .25              ×  $1,000           =  $138,750
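The arithmetic in the chart reduces to one multiplication; the variable names below are ours.

```python
# Monthly revenue impact = instances x damage-to-loyalty rate x customer value.
instances_per_month = 555
damage_to_loyalty = 0.25     # share of affected customers expected to defect
annual_value = 1_000         # annual revenue value of a customer, in dollars
monthly_revenue_impact = instances_per_month * damage_to_loyalty * annual_value
# = $138,750 per month
```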


9. Leveraging Customer Feedback [Strategic]

Key Points

• Understanding the linkage between customer satisfaction and customer behavior will enable call center managers to:

• Tie investments in the call center to business objectives
• Set priorities for improvement
• Establish customer-driven standards

Explanation

While the general linkage between customer satisfaction and customer behavior has been demonstrated by research (see The Value of Customer Satisfaction and Loyalty, Section 3), there is often a corporate current of “we’re different” or “our customers don’t behave that way.” Therefore, it is important for each call center manager to measure the specific relationship of customer satisfaction and customer behavior for his or her organization.

Understanding the linkage between customer satisfaction and customer behavior will enable call center managers to:

• Tie investments in the call center to business objectives

• Set priorities for improvement

• Establish customer-driven standards

Tie Investments in the Call Center to Business Objectives

All businesses have limited resources and senior management is constantly striving to determine the best way to use those resources. For instance, should the organization invest more in advertising or in the call center? With data that shows both the customer satisfaction level and resulting customer behavior of callers, the call center manager can demonstrate the ROI (return on investment) of the call center or of changes to the call center (e.g., increase staffing, increase training).

One effective model to calculate the ROI of a service delivery system is TARP’s Market Damage Model™. The following graphic shows example data collected from a customer satisfaction/loyalty survey. In this example, out of 500,000 customers in a given time frame, 70 percent experience some type of question or problem, with half contacting the organization for resolution. Of those who contact, 40 percent are satisfied, 35 percent are mollified (i.e., not completely satisfied), and 25 percent are dissatisfied. Of those who are satisfied, 95 percent will definitely remain loyal (i.e., purchase the product/service again), resulting in 66,500 loyal customers due to contact-handling. Following through on the math for customers who are mollified and dissatisfied, the total number of customers who are loyal after contact-handling is 125,563.

However, to show the impact of the contact-handling capabilities of the call center, the number of customers whose loyalty was not affected by the call center should be subtracted from the total number of loyal customers. These unaffected loyal customers fall into three categories:

• They did not have a problem (132,000).

• They had a problem, but did not complain (96,250).

• They would still be loyal despite their dissatisfaction (the 30 percent [52,500] of dissatisfied customers who would be loyal regardless). In a scenario without contact-handling, it can be assumed that all 50 percent of the customers who would have complained would be dissatisfied. However, based on experience, 30 percent of those dissatisfied customers would still remain loyal.

So, the total number of loyal customers unaffected by the contact-handling service of the call center would be 280,750.

The result is the net number of loyal customers due to contact-handling (73,063). If the average customer is worth $25, the value of the contact center would be $1,826,575. If the cost of running this call center is $1 million, the ROI is 83 percent (i.e., $1.83 million minus $1 million divided by $1 million).
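The arithmetic of this worked example can be sketched as follows. This is our illustration of the numbers above, not TARP’s actual model; exact fractions keep the rounding consistent with the figures in the text.

```python
from fractions import Fraction as F

customers = 500_000
# 70% experience a problem; half of those contact the organization.
contacts = customers * F(70, 100) * F(50, 100)      # 175,000 contacts

# (share of contacts, share who "definitely will repurchase")
outcomes = {
    "satisfied":    (F(40, 100), F(95, 100)),
    "mollified":    (F(35, 100), F(75, 100)),
    "dissatisfied": (F(25, 100), F(30, 100)),
}
loyal_after = sum(contacts * share * loyalty
                  for share, loyalty in outcomes.values())
loyal_after = int(loyal_after + F(1, 2))            # 125,563 (rounded half up)

# Subtract the 30% of would-be dissatisfied complainers who would have
# stayed loyal even with no contact handling at all.
loyal_anyway = int(contacts * F(30, 100))           # 52,500
net_loyal = loyal_after - loyal_anyway              # 73,063

value = net_loyal * 25                              # $1,826,575 at $25/customer
roi = (value - 1_000_000) / 1_000_000               # ~0.83, i.e., 83 percent
```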



Set Priorities for Improvements

Customer satisfaction and loyalty data can help the call center manager set priorities for improvements. There are three main ways to increase the bottom-line impact of the call center.

• The call center can identify root causes of customer problems and unnecessary questions, and thus help the organization decrease costs related to resolving customer problems.

• The organization can promote the call center (e.g., through advertising), thus increasing the number of contacts to the call center and the opportunity to increase customer satisfaction and loyalty. This, of course, will also mean an increase in the costs associated with the call center to meet the additional contacts.

• The call center can capture information useful to marketing, product development and other areas of the organization.

The table below illustrates sensitivity analysis for three potential initiatives for the call center. Initiative A is a training program that is expected to significantly increase caller satisfaction. Initiative B is a software program that will allow better logging and analysis of customer reasons for contact and thus decrease customer problem experiences. Initiative C is a comprehensive program to decrease customer problems and increase customer satisfaction.

[Chart: Example Data Collected from a Customer Satisfaction/Loyalty Survey]

500,000 customers:
• 30% no problem experience; 88% definitely will repurchase = 132,000 loyal customers
• 70% problem experience:
• 50% did not complain; 55% definitely will repurchase = 96,250 loyal customers
• 50% complained:
• 40% satisfied; 95% definitely will repurchase = 66,500 loyal customers
• 35% mollified; 75% definitely will repurchase = 45,938 loyal customers
• 25% dissatisfied; 30% definitely will repurchase = 13,125 loyal customers

Total Loyal Customers = 353,813

Minus customers who would have been loyal anyway, even if no contact handling and all customers with a question/complaint went away dissatisfied (i.e., 500,000 x 70% x 50% x 30% = 52,500), plus other loyal customers who would not contact (228,250) = 280,750

Equals Net Loyal Customers due to Contact Handling = 73,063


Establish Customer-Driven Standards

The third way that customer satisfaction and loyalty data can be used by the call center manager is to set customer-driven standards. The data can be used to help determine what most impacts customer satisfaction and loyalty. For example, in the chart below, there is a significant drop in customer satisfaction (and thus loyalty) when email is not handled within four hours. Therefore, the call center should set the standard for responding to emails at within four hours.

                                   Baseline   Initiative A   Initiative B   Initiative C
Percent Problem Experience         70%        70%            65%            65%
Percent Complaining                50%        50%            50%            50%
Percent Satisfied                  40%        45%            40%            45%
Net Loyal Customers gained
due to Contact Handling            73,063     78,750         73,063         73,125
Change due to Contact Handling
Improvement Initiative                        5,687                         2,031
Increase in Loyal Customers due
to decreased Problem Experience                              9,406          9,406
Value of Loyal Customers gained
due to Improvement Initiative                 $142,175       $235,150       $285,925
Cost of Improvement Initiative                $50,000        $150,000       $200,000
ROI of Initiative                             184%           135%           43%
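The ROI rows in the table follow the same pattern as the earlier example: the value of an initiative is the loyal customers it adds times the $25 customer value, compared against its cost. This sketch (function name ours) reproduces the calculation for initiatives A and C:

```python
# Value per loyal customer, from the worked example above.
VALUE_PER_CUSTOMER = 25

def initiative_roi(added_loyal_customers, cost):
    """ROI = (value of added loyal customers - cost) / cost."""
    value = added_loyal_customers * VALUE_PER_CUSTOMER
    return (value - cost) / cost

# Initiative A adds 5,687 loyal customers for $50,000.
roi_a = initiative_roi(5_687, 50_000)             # ~1.84, i.e., 184%
# Initiative C adds 2,031 + 9,406 loyal customers for $200,000.
roi_c = initiative_roi(2_031 + 9_406, 200_000)    # ~0.43, i.e., 43%
```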



This item was developed by TARP. Contents copyrighted to TARP, 2002.


[Chart: Percent of Satisfied Customers at Various Email Response Times (Source: TARP). Percent completely satisfied with action taken, by length of time until response: less than 1 hour, 89%; 1-4 hours, 81%; 5-23 hours, 69%; 1-2 days, 64%; 3-7 days, 47%; more than 7 days, 15%.]
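The standard-setting logic can be sketched as picking the longest response time that still meets a target satisfaction level. This is our illustration; the 80 percent target is an assumption, not from the text.

```python
# Survey buckets: (response-time bucket, % completely satisfied).
buckets = [("<1 hour", 89), ("1-4 hours", 81), ("5-23 hours", 69),
           ("1-2 days", 64), ("3-7 days", 47), (">7 days", 15)]

TARGET = 80            # assumed minimum acceptable satisfaction level

standard = None
for label, pct in buckets:
    if pct >= TARGET:
        standard = label   # last bucket still at or above the target
    else:
        break
# standard == "1-4 hours", so the objective becomes "respond within 4 hours".
```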


10. Barriers to Serving Customers Effectively

Key Points

• Key barriers to serving customers effectively include:
• Lack of connected vision
• Lack of a supporting strategy
• Lack of investment in building skills, knowledge and leadership
• Lack of enabling technologies
• Lack of supporting operational plans and processes
• Lack of required investments
• Lack of processes for ongoing innovation

Explanation

Today’s customers demand user-friendly, self-service processes and systems, as well as the means to reach capable and knowledgeable call center agents, when and if they need them. Delivering on these expectations takes more than luck. It requires a solid plan, for today and beyond.

Unfortunately, for many organizations, transitioning systems and processes to meet the needs and expectations of today’s fast-paced, multichannel environment has been less than successful. Pick up almost any business, sales or technical publication today and it is easy to see that barriers to serving customers effectively continue to plague many organizations.

Some of the key barriers include:

• Lack of connected vision

• Lack of a supporting strategy

• Lack of investment in building skills, knowledge and leadership

• Lack of enabling technologies

• Lack of supporting operational plans and processes

• Lack of required investments

• Lack of processes for ongoing innovation

Lack of connected vision: Overall, your vision is your organization’s purpose or reason for being, both today and in the future. A business’s “identity” is a function of how faithfully it pursues its vision. The vision can help guide everyone from senior management to frontline employees when they are faced with decisions. Every decision, from whether to buy new technology, hire more agents, make an account adjustment or add a new product or service, can be “tested” against the vision to decide what action to take.

Without a clear vision, decisions may achieve objectives, but they will not be aligned with the reason for being. For example, a call center may decide to buy new technology that will reduce handle time and automate transactions. But if its mission is to “get closer to our customers by providing a human touch,” this technology may not be the right priority.

Disconnects between what the organization states as its mission and the strategies it deploys for its people, processes, technologies and resources will ultimately lead to confusion among both customers and employees. The fallout will be retention and loyalty issues, because you have stated what your purpose is and you have failed to fulfill it. On the other hand, a clear mission that is supported by top management, well-communicated, understood and fulfilled can lead to loyal customers and employees.

Lack of a supporting strategy: If the vision is where the organization is and where it is going, then the strategy is how it is going to get there. The development of a customer access strategy is essential to being able to fulfill the call center’s vision. This strategy must establish clear responsibilities, include the necessary resources and provide a foundation for operational changes and improvements. The strategy must be aligned with the call center’s vision. (See Components of an Organizationwide Customer Relationship Management Strategy, Section 5.)

Lack of investment in building skills, knowledge and leadership: The rise of the Internet and the latest technological developments may seem to have changed everything about the way organizations do business. However, the reality is that in today’s competitive world it is not just about the excellence of the product. It is increasingly about the excellence of the service delivery, of relationship building and branding. And these require people – dedicated, committed and knowledgeable individuals. Organizations that lose sight of this will quickly find themselves with dissatisfied customers, or worse, no customers.

Organizations must understand the value of call center agents. These positions can no longer be perceived as low paying and entry-level if the call center expects to create and deliver customer value. Organizations must assess the


impact and demands they are placing on agents and provide the necessary tools and training. Without this, significant barriers to serving customers will occur because agents will be ill-equipped to:

• Deal with the increasing level of customer sophistication and knowledge

• Adjust to rapid changes in products, services and technologies

• Operate in a time-sensitive, rapid-paced, multimedia environment

(See Aligning Hiring and Training Initiatives, Section 7.)

Lack of enabling technologies: While technology can provide support, streamline complex processes and make it easier for both your employees and your customers to interact, all too often, organizations create barriers to serving their customers through one or more of the following:

• Viewing technology as a “quick fix.”

• Not establishing a customer access strategy first. Lack of an access strategy means that the technology selected will not be focused on customer segments, contact types, process improvements and so on.

• Not getting buy-in from the top.

• Trying to fix management problems with technology.

• Not identifying and addressing the challenges that new technologies create. New technologies must be viewed in light of the impact they will have on existing people, processes and customers. For example, agent-assisted browsing is great for customers; however, agents will need to understand the basics of browsing technology and be ready for associated technical questions.

(See Customer Relationship Management Technologies, Section 7.)

Lack of supporting operational plans and processes: Organizations that fail to implement key operating plans and processes that support their strategy will create any number of customer service barriers. While there are literally hundreds of processes in a call center, they can generally be grouped into five main categories:

• Resource planning and management

• Content provisioning

• Reporting/communication

• Organization structure and design


• Quality improvement

(See Planning and Management Processes, Section 7.)

Lack of required investments: Building the business case for funding requires a look beyond satisfying the expectations of customers today. It requires a look into the future, as well. Organizations that continue to view budgets for the call center in terms of plus/minus last year’s budget will find they may be compromising their ability to serve their customers.

Lack of processes for ongoing innovation: While it is true that the basic premise of customer expectations has not changed much, the definition of those expectations has. For example, customers have always expected us to be accessible – yesterday that meant by phone or mail; today it means by email, web, chat, video, etc., and it often means 24x7, as well. What this implies is that organizations must have processes that allow them to capture and analyze data and turn it into the knowledge they need to create services around these evolving customer expectations. This, in turn, will mean revisiting all aspects of the strategic planning process, from vision to skills, training, technology, investments and processes. (See Identifying and Quantifying Customer Expectations, Section 3.)
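As a minimal sketch of what such an analysis process might look like, the example below averages survey scores per channel per quarter and flags channels where satisfaction is slipping. All channel names and scores are hypothetical, invented purely for illustration; they are not from this study guide.

```python
from collections import defaultdict

# Hypothetical survey records: (channel, quarter, satisfaction score on a
# 1-5 scale). These values are illustrative only.
records = [
    ("phone", "Q1", 4.4), ("phone", "Q2", 4.5),
    ("email", "Q1", 3.2), ("email", "Q2", 3.0),
    ("chat",  "Q1", 4.0), ("chat",  "Q2", 4.3),
]

def satisfaction_by_channel(records):
    """Average satisfaction score for each (channel, quarter) pair."""
    totals = defaultdict(lambda: [0.0, 0])
    for channel, quarter, score in records:
        totals[(channel, quarter)][0] += score
        totals[(channel, quarter)][1] += 1
    return {key: total / count for key, (total, count) in totals.items()}

averages = satisfaction_by_channel(records)

# Channels whose average fell from Q1 to Q2 may signal an evolving
# customer expectation the center is no longer meeting.
declining = sorted(ch for ch in {c for c, _, _ in records}
                   if averages[(ch, "Q2")] < averages[(ch, "Q1")])
```

In practice, the records would come from the center’s own survey tools, and any declining channel would feed back into the strategic planning steps described above.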

By eliminating the barriers to serving customers effectively, you are likely to experience increased customer satisfaction, lower costs over the long term, more satisfied employees and increased profitability. When customers are served effectively and efficiently, it benefits everyone involved.

This item was developed by Rose Polchin, Certified Associate of ICMI and President of Rose Polchin Consulting and Training.


Evaluating Service Delivery

Exercises

Customer Satisfaction Measurement Principles

1. True or false

______A baseline survey is longer than an ongoing tracking survey.

______Baseline data should be analyzed to help set customer-driven standards, to determine a baseline measure of customer satisfaction with the contact experience and to determine the key drivers of customer satisfaction and loyalty.

______For ongoing tracking surveys, it is best to survey the customer within 1 week after they have contacted the call center.

2. What are four criteria for determining survey content?

M _________________

A __________________

R __________________

C__________________

3. Match the following terms with their definitions. You will use each definition only once.

______Cross-sectional surveys

______Longitudinal surveys

______Marketplace survey

______Pinpoint surveys

______Snapshot surveys


a. Allows for the comparison of your organization’s customers with those of other organizations.

b. Gather data from a sample of customers across all segments or across the full range of possible issues across the customer lifecycle.

c. Gather data over a period of time, usually to detect trends.

d. Intended to measure objectives at a single point in time.

e. Look at only one segment or set of issues.


Sources of Customer Satisfaction Data

4. Select the most appropriate answer to each question.

Which source of customer satisfaction data is inherently the most subjective?

a. Accessibility data

b. Combined data collection

c. External outcome data from customer surveys

d. Internal quality data

A weakness of which source of customer satisfaction data is that it requires relatively advanced technologies?

a. Accessibility data

b. Combined data collection

c. External outcome data from customer surveys

d. Internal quality data


Survey Methodologies

5. Select the most appropriate answer to each question.

Which of the following survey methods does NOT provide feedback in a timely manner?

a. Email and Web surveying

b. IVR surveying

c. Mail surveying

d. Phone surveying

Which of the following survey methods has two biases that tend to cancel each other out?

a. Email and Web surveying

b. IVR surveying

c. Mail surveying

d. Phone surveying

Which of the following survey methods enable verbatim comments to be captured easily?

a. IVR and Phone surveying

b. Phone and Mail surveying

c. Email, Web and Mail surveying

d. Email, Web, Mail and Phone surveying

Which of the following survey methods is generally the most expensive?

a. Email and Web surveying

b. IVR surveying

c. Mail surveying

d. Phone surveying


Sampling and Analysis Principles

6. Complete the following statement regarding sampling.

When determining sample size, there is always the tension between _________________ of _________________ and the _________________ of surveying a larger sample.

Identifying Contributors to Customer Satisfaction

7. Fill in the blanks with the appropriate phrase to complete the formula.

The basic formula for maximizing customer satisfaction and loyalty is

________________________________plus

________________________________including

________________________________

Isolating Root Causes of Dissatisfaction

8. Match the following terms with their definitions. You will use each definition only once.

______Benchmarking

______Cause and effect diagrams

______Checklists

______Control charts

______Flow charts

______Pareto charts

______Scatter diagrams


a. Assess the strength of the relationship between two variables.

b. Bar charts that rank events in order of importance or frequency.

c. Illustrate the relationships between causes and a specific effect you want to study.

d. Lists of process steps.

e. Provide information on process variation.

f. The process of measuring your products, services and procedures against other organizations.

g. Used to analyze and standardize procedures, identify root causes of problems and plan new processes.


Measuring Accessibility Across Channels

9. Fill in the blanks with the appropriate word or phrase to complete the statements about call center accessibility.

_________________ is defined specifically as “X percent of contacts answered in Y seconds.”

_________________ is defined as “100 percent of contacts handled within n days/hours/minutes.”

_________________ is the measure of callers that hang up after waiting for at least some time in the queue.

How many callers experienced _________________ does not tell you anything about how accessible the center was to those callers who were able to get through.

Very few customers have the “average” experience as expressed by _________________.

_________________ is the time that a system is unavailable.

Answers to these exercises are in Section 10.

Note: These exercises are intended to help you retain the material learned. While not the exact questions found on the CIAC Certification assessment, the material in this study guide fully addresses the content on which you will be assessed. For a formal practice test, please contact the CIAC directly by visiting www.ciac-cert.org.


Reference Bibliography

Related Articles from Call Center Management Review (See Section 9)

Challenges of Creating a Customer Relationship Feedback System

Key Components of a Multi-Channel Customer Satisfaction Survey Strategy

Using the Internet to Measure and Enhance Customer Satisfaction

The Impact of Service Delivery on Customer Satisfaction

For Further Study

Books/Studies

Cleveland, Brad and Julia Mayben. Call Center Management on Fast Forward: Succeeding in Today’s Dynamic Inbound Environment. Call Center Press, 1999.

ICSA/TARP Benchmarking Study of Electronic Customer Service. TARP White Paper, www.tarp.com, March 2001.

Articles

Calhoon, Bruce. “The Customer Satisfaction Game: What Does a 4.2 out of 5 Really Mean?” Call Center Management Review, November 1998.

Grimm, Cynthia J. and Marlene Yanovsky. “Winning Customer Satisfaction and Loyalty in Today’s Multi-Channel Environment.” International Journal of Call Centre Management, June/July 2001.

“Market Damage Model Overview.” TARP White Paper, www.tarp.com, 1988,revised 2001.

Seminars

Effective Leadership and Strategy for Senior Call Center Managers public seminar, presented by Incoming Calls Management Institute.

54 Call Center Customer Relationship Management Study Guide • Version 2 • Copyrighted to ICMI, Inc., 2002