AN “OUTSIDE-IN” APPROACH TO DETERMINING CUSTOMER-DRIVEN PRIORITIES FOR IMPROVEMENT AND INNOVATION

VOLUME 5 • 2004 ISSUE 1

There are many quality and service elements that contribute to overall customer satisfaction and loyalty. With these elements come numerous product, service, and other performance issues, as well as internal processes and activities, that require continuous improvement and innovation in order to meet customer needs and requirements. Assuming that an organization has limited people, financial, and other resources to invest in improvement/innovation, the crux is: How can priorities best be set? How can an organization determine in which areas to invest limited resources? How can it identify the quality and service elements which, if addressed effectively by improvement/innovation efforts, will lead to the greatest gains in customer satisfaction and loyalty, financial and market performance, and overall organizational effectiveness?

This paper describes an “outside-in” approach to defining customer-driven priorities for improvement and innovation. Founded on the principles of the Malcolm Baldrige National Quality Award, Kaplan and Norton’s (1996) “Balanced Scorecard,” and Heskett et al.’s (1997) “Service-Profit Chain” models, this approach emphasizes performance management based on alignment of all elements of an organization’s value chain. The goal is to ensure that “upstream” processes and activities can be managed in a way that leads to desired “downstream” business results.

The paper will show how an “outside-in” approach leads to the development of priorities for organizational improvement and innovation that drive business results. Advantages of an “outside-in” approach relative to other strategies also will be discussed.

DEFINING IMPROVEMENT PRIORITIES: IMPORTANCE-PERFORMANCE ANALYSIS

A sound performance management system incorporates a number of analytical and performance review mechanisms in order to set priorities for improvement, innovation, and resource allocation. The many questions that must be addressed during the process of determining where and how to invest organizational resources include:

• In which product and service areas does the organization have the greatest need to improve?

• How and to what extent will alternative product and service quality improvements impact key indicators such as customer retention, revenues, and/or market share?

• What are the cost/revenue implications of allocating resources among alternative improvement initiatives?

• How easy or difficult will it be for the organization to accomplish alternative action plans and improvement initiatives?

For more than two decades, numerous organizations have utilized an approach generally known as importance-performance analysis to address the first of the preceding questions, and to define customer-driven priorities for improvement and innovation. Introduced by Martilla and James (1977), this approach, also known as quadrant analysis, focuses on pinpointing those quality and service elements that: (a) are most important to customers and/or are likely to make the strongest contribution to overall customer satisfaction and loyalty; and (b) are in need of improvement because customers’ evaluations of the company’s performance on these elements are relatively unfavorable (i.e., customers are dissatisfied and/or perceive that the company’s performance is in need of improvement).¹

In a classic importance-performance or quadrant analysis, data regarding customer perceptions about alternative product and service elements (typically gathered via surveys) are examined. Typically, the following procedures are followed:

• Respondents (consumers or customers) rate each element or attribute on: (a) an importance scale (e.g., not important = 1, extremely important = 10), and (b) a performance scale (e.g., poor = 1, excellent = 10).

• Summary scores (e.g., means, percentages, etc.) are computed for each element, for both importance and performance dimensions, and these scores are then plotted in a scatter diagram or xy graph for all product and service elements.

• Some criterion is used to split each axis in order to establish low and high levels of importance, and low and high levels of performance, respectively. This yields four categories or quadrants into which the various product and service elements are placed, and from which priorities for improvement are derived.
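As a minimal sketch of the procedure above, the following snippet computes all-attribute averages for each axis and assigns attributes to quadrants. The attribute names and ratings are invented for illustration; the quadrant labels follow the template discussed in this paper.

```python
from statistics import mean

# Hypothetical survey summaries: per-attribute (mean importance,
# mean performance) ratings on 1-10 scales. Illustrative values only.
ratings = {
    "Response Time":      (9.1, 5.2),
    "Billing Accuracy":   (8.4, 8.8),
    "Hours of Operation": (4.2, 4.0),
    "Office Decor":       (3.1, 8.9),
}

def quadrant_analysis(ratings):
    """Split each axis at the all-attribute average and assign quadrants."""
    imp_cut = mean(i for i, _ in ratings.values())
    perf_cut = mean(p for _, p in ratings.values())
    labels = {
        (True, True):   "Keep Up the Good Work",
        (True, False):  "Priorities for Improvement",
        (False, False): "Lowest Priority",
        (False, True):  "Possible Overkill",
    }
    return {attr: labels[(i >= imp_cut, p >= perf_cut)]
            for attr, (i, p) in ratings.items()}

print(quadrant_analysis(ratings))
```

The two cutting points here use the all-attribute average, one of several splitting criteria discussed later in the paper.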


As Oliver (1997, p. 36) observes, the four quadrants are given various names, but essentially result in the following interpretations, illustrated by the template shown in Figure 1 (see below):

1. High importance, high performance. These elements or attributes are assumed to be key drivers of customer satisfaction/preference, and management’s job is to ensure that the organization continues to deliver/perform well in these areas.

2. High importance, low performance. These elements or attributes, also assumed to be key drivers of customer satisfaction/preference, should be viewed as critical performance shortfalls, and management’s job is to ensure that adequate resources are invested in improving performance in these areas. These areas are priorities for improvement.

3. Low importance, low performance. These elements or attributes are assumed to be relatively unimportant, such that poor performance should not be given a great deal of priority or attention by management.

4. Low importance, high performance. These elements or attributes, also assumed to be relatively unimportant, should be viewed as areas of performance “overkill,” and management may want to redirect resources from these elements to high-priority areas in need of improved performance.

Figure 1: Template for a Quadrant Analysis. [Importance (lower to higher) is plotted against Performance (lower to higher), with each axis split at the all-attribute average; the four quadrants are labeled “Priorities for Improvement” (high importance, low performance), “Keep Up the Good Work” (high importance, high performance), “Lowest Priority” (low importance, low performance), and “Possible Overkill” (low importance, high performance).]

Vavra (1997) articulates the logic underlying use of importance-performance or quadrant analysis to identify customer-driven priorities for improvement and innovation. He states that “if the organization is truly listening to its customers, then attributes ought to be delivered in proportion to their importance…(thus) attributes lying in the lower left or upper right are perceived to be supplied in proportion to their importance, and the requisite action for attributes in these two quadrants is to maintain their current levels of delivery…(while) attributes in the upper left-hand quadrant are perceived as being underdelivered…and signal an opportunity for improvement” (pp. 311-312).

There are a number of methods for determining which quality and service elements are most important to customers,² and even more procedures by which performance strengths and areas for improvement are defined. Regardless, the method used to split the performance axes is a key issue, because it ultimately determines whether a given element is viewed as being important or not, as well as whether it is defined as being a performance strength or weakness. Unfortunately, and as Oliver (1997) has noted, “guidelines for these dichotomous splits are murky” (p. 36). In fact, one of the most frequent criticisms of quadrant analysis is the somewhat arbitrary manner in which the importance and performance axes are split. While this issue is relevant to both dimensions, the remainder of this paper will focus on the performance dimension, for it is this aspect of quadrant analysis to which an “outside-in” approach has the most to offer.

COMMON APPROACHES TO EVALUATING PERFORMANCE IN QUADRANT ANALYSIS

Most often, analysts use one of two methods to split the performance axis in a quadrant analysis:

1. A distribution-based approach: The distribution of importance and performance scores, regardless of their location or magnitude, determines the split.

2. A performance comparison approach: For the performance dimension, the split is determined based on how one firm’s performance scores compare to some normative or competitive benchmark.

In the case of the distribution-based method, one of the following is used as the cutting-point:

• The mid-point on the scale is used (e.g., a “3” on a 1-5 rating scale): Attributes or elements having means above the mid-point are placed on the high end of the importance or performance axis, and those having means below the mid-point are placed on the low end.

• The average of mean importance and performance ratings across all attributes is used: Attributes or elements having means above the all-attribute average are placed on the high end of the importance or performance axis, and those having means below this average are placed at the low end.

• The median of mean importance and performance ratings across all attributes is used: Attributes or elements having means above the all-attribute median are placed on the high end of the importance or performance axis, and those having means below the median are placed at the low end (ensuring, of course, that each quadrant will contain twenty-five percent of the attributes).

When a performance comparison is used as the basis for the cutting point, the analysis centers on:

• A comparison of how a brand or firm’s perceived performance compares to that of a key competitor, or

• How a brand or firm’s perceived performance compares to that of some normative or “world-class” benchmark.

Most descriptions of importance-performance analysis use some variation of either the distribution-based or performance comparison approach (Martilla and James, 1977; Sethna, 1982; Hawes and Rao, 1985; Ortinau et al., 1989; Lowenstein, 1995; Oliver, 1997; Vavra, 1997), and either approach can be quite useful in the identification and prioritization of customer-driven issues for improvement. However, each of the preceding approaches has some important limitations, which are described below.

LIMITATIONS OF A DISTRIBUTION-BASED APPROACH

Consider the hypothetical financial services results presented in the two quadrant maps shown in Figures 2.1 and 2.2 (see below). In the case of both maps, the all-attribute average is used to split the performance axis, and to distinguish attributes as either performance strengths or areas for improvement.

Clearly, in the case of the results presented in Figure 2.1, service elements identified as areas for improvement fall below the performance average for all attributes, and the scores for these elements are relatively unfavorable (i.e., means ranging from “1” to “2.5” on a 5-point scale). It is not hard to convince managers that such scores suggest the need for improved performance.

In contrast, the results presented in Figure 2.2 do not as clearly demonstrate the need for improvement. After all, while the ratings for some service elements fall below the all-attribute average, even the lowest-rated element falls somewhere between “very good” and “excellent” on the 5-point scale. Managers could reasonably ask the question, “Is the need for improvement clearly indicated, or are these results an artifact of the analysis?”

This illustrates a key limitation of a distribution-based approach: The results are guaranteed to produce both attribute strengths and areas for improvement because, by definition, some elements must fall below the average (or median, trimmed mean, etc.), and others above it.³ Whether improved performance truly is needed, or feasibly can be achieved (given the current level of performance), is not necessarily reflected.

PERFORMANCE COMPARISONS

An alternative strategy involves the use of performance comparisons as the basis for defining performance strengths and areas for improvement. Of the multiple types of comparisons identified by Brandt (1998), competitive and benchmark comparisons most often are used in conjunction with importance-performance analysis.

Whereas a traditional quadrant analysis looks at an organization’s perceived performance in isolation from the category or marketplace, a competitive quadrant analysis incorporates information on how that organization compares to one or more key competitors. In effect, the un/favorableness of the comparison is the standard by which performance in the area of customer service/satisfaction is evaluated, and strengths or areas for improvement are defined.
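The distribution-based cutting points described earlier (scale mid-point, all-attribute mean, all-attribute median) can be contrasted in a short sketch. The attribute names and scores below are illustrative, chosen so that every rating is favorable, echoing the situation in Figure 2.2.

```python
from statistics import mean, median

# Hypothetical mean performance ratings on a 1-5 scale for eight
# attributes; every score is "very good" or better.
perf = {"Simple Paperwork": 4.1, "Flexible Payment Plans": 4.3,
        "Flexible Business Hours": 4.4, "Convenient Office Locations": 4.5,
        "Accuracy of Product Information": 4.6, "Ease of Scheduling": 4.7,
        "Ability to Answer Questions": 4.8, "Product Information Access": 4.9}

def low_side(perf, cut):
    """Attributes falling on the low end of the performance axis."""
    return sorted(a for a, p in perf.items() if p < cut)

scale_midpoint = 3.0            # the "3" on a 1-5 rating scale
all_attr_mean = mean(perf.values())
all_attr_median = median(perf.values())

print(low_side(perf, scale_midpoint))   # nothing flagged
print(low_side(perf, all_attr_mean))    # several flagged by construction
```

The mid-point criterion flags nothing here, while the mean and median criteria guarantee that some attributes fall on the low side regardless of how favorable the scores are, which is precisely the artifact the distribution-based approach is criticized for.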

Figure 2.1: Quadrant Analysis of Financial Services Attributes: Performance Scores Range From Poor to Excellent. [Importance (lower to higher) is plotted against mean performance ratings spanning the full 5-point scale (1.0 = Poor, 2.0 = Fair, 3.0 = Good, 4.0 = Very Good, 5.0 = Excellent), with the performance axis split at average performance. Attributes plotted: Accuracy of Product Information, Simple Paperwork, Flexible Payment Plans, Flexible Business Hours, Convenient Office Locations, Ability to Obtain Product Information, Ease of Scheduling an Appointment, and Ability of Consultant to Answer Questions; those below the average fall out as “Areas for Improvement.”]

Figure 2.2: Quadrant Analysis of Financial Services Attributes: All Performance Scores Are Relatively Favorable. [The same attributes are plotted, but mean performance ratings range only from 4.0 (Very Good) to 5.0 (Excellent); attributes below the all-attribute average still fall out as “Areas for Improvement.”]


Figure 3 (see below) presents hypothetical results of a competitive importance-performance analysis of customer data regarding technical support services. In this instance, customers were asked to rate the importance of different service and support issues, and then were asked to evaluate two different providers in relation to these issues. The position of an organization on the performance axis is determined by assessing the significance of the difference between its perceived performance and that of its competitor on each issue.⁴ Unlike the distribution-based approach to quadrant analysis, the competitive comparison approach produces six (rather than four) categories, three of which are most critical:

1. Competitive Strengths — important service areas in which the organization’s perceived performance is significantly better than the competitor’s.

2. Priorities for Improvement — important service areas in which the organization’s perceived performance is significantly worse than the competitor’s.

3. Pre-emption Opportunities — important service areas in which there is no significant difference in perceived performance between the organization and its competitor.

A performance comparison approach has at least two advantages over the distribution-based approach: (1) It provides a clearer and seemingly less arbitrary basis for defining priorities for improvement (i.e., a significant competitive disadvantage on an important quality or service issue); and (2) It takes organizational, industry, and/or “state-of-the-art” capabilities (based on current or recent performance data) into account in defining such targets.

Performance comparisons also suffer from a couple of shortcomings. First, an organization might be perceived as being better than its competition in an industry where no provider is perceived as being particularly good, and this could result in overlooking an attribute in need of improvement due to a “false sense of confidence” stemming from one’s competitive advantage. Second, as with the distribution-based method, achieving improved performance in an area targeted for improvement may or may not lead to desired business results, because the relationship between a given level of attribute performance and the desired business results has not been established.

The problems and limitations discussed above suggest the need to explore alternative approaches, particularly those that enable managers to define performance targets in a manner that is more likely to yield desired business results.

AN “OUTSIDE-IN” APPROACH

Brandt (1998) proposes and illustrates an “outside-in” approach to defining performance targets for measures of customer service and satisfaction. This same outside-in approach provides an alternative to distribution-based and performance comparison-based methods of determining priorities for quality and service improvement and innovation.

An outside-in approach attempts to define priorities for improvement/innovation by addressing the following questions:

1. What financial or market performance outcomes are critical to the organization’s success?

2. Which specific quality/service elements are most critical to the organization’s success or failure in achieving its financial/market performance goals?

3. With regard to these critical quality and service elements, how well must a company perform (i.e., what should be the performance target) in order to meet its financial/market performance goals?

4. Based on results of customer satisfaction and quality measurements, does the company’s performance on each critical element meet or fail to meet the necessary performance targets?

Using an outside-in approach, the process of setting performance targets moves from outcomes to performance drivers. This process, along with an understanding of the strength and functional form of the relationship between each pair of elements in the chain, enables management to define performance targets: Knowledge of how a “downstream” element is impacted by the “upstream” element immediately preceding it furnishes the basis for defining performance targets for the latter.

With respect to determining priorities for improvement and innovation, an outside-in approach utilizes the logic and most of the procedures used in a more conventional importance-performance analysis, with one very important exception: In an outside-in approach, the basis for evaluating performance is whether targets are achieved or not. The rationale for this difference in procedures is simple: If performance targets for an organization’s most critical quality and service elements have been set in a way (and at a level) that is intended to maximize the chances of achieving downstream financial and market results, then it is essential that performance deficiencies be reduced or eliminated in connection with any of these critical elements or “key drivers” of business results.

At this point, an actual case illustration should provide the reader with a better sense of how an outside-in approach to defining priorities for improvement and innovation works.

Figure 3: Competitive Importance-Performance Analysis of Technical Support Services: Three Types of Important Issues Are Identified. [Importance (lower to higher) is plotted against performance relative to the competitor (significantly worse, at parity, significantly better), yielding the regions “Priorities for Improvement,” “Pre-emption Opportunities,” and “Competitive Strengths.” Issues plotted include: Efficiency of Service Call Handling, Frequency of Communication, Response Time, Total Time to Resolve Problem, Technician’s Knowledge of My Site’s Needs, Effectiveness of Problem Escalation, Effectiveness of Customer Training, Timeliness of Invoicing for Services, and Quality of Replacement Parts.]


A CASE ILLUSTRATION

A leading provider of telecommunications services recently implemented a set of service performance targets for selected elements of customer service. These targets, derived from the value chain illustrated in Figure 4 (see below), sought to achieve profit goals by building repeat business through customer retention. As shown in Figure 4, a number of service elements were determined to be key drivers of customer retention. The company sought to define performance targets for these service elements which, if achieved, would give the company a 70% or better probability of customer retention.

How well, in the eyes of the customer, must the organization perform in each of these key service areas? The answer depends on the service element being considered. For example, Figure 5.1 (see below) illustrates the probability of retaining a customer depending on how s/he rated the knowledgeability of the customer service representative with whom s/he had recently interacted.⁵ Note that a customer who gives a “5” rating still has better than a 70% chance of being retained during the next six months. Thus, while not optimal, a “5” rating on knowledge need not be viewed as negatively as might be dictated by conventional wisdom regarding how to interpret 10-point scale ratings, and/or as might be dictated by a distribution-based approach to defining performance strengths and weaknesses.

In contrast, consider the results shown in Figure 5.2 (see below). These results illustrate the probability of retaining the customer depending on how s/he rated the helpfulness of the service representative with whom s/he had recently interacted. Note that a customer who gives a “5” rating now has less than a 30% chance of being retained. Clearly, a “5” rating on helpfulness should not be interpreted or treated the same as a “5” rating on knowledge.
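A rough sketch of how retention curves of this kind translate into performance targets follows. The logistic coefficients are invented for illustration (the case reports the curves, not the fitted model), chosen so that a “5” rating clears the 70% goal on one attribute and falls far short on the other.

```python
import math

def retention_prob(rating, b0, b1):
    """Logistic model: P(retained) = 1 / (1 + exp(-(b0 + b1*rating)))."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * rating)))

def min_rating_for_target(target, b0, b1, scale=range(1, 11)):
    """Lowest rating on a 10-point scale whose predicted retention
    probability meets the target (assumes b1 > 0, so the curve rises)."""
    for r in scale:
        if retention_prob(r, b0, b1) >= target:
            return r
    return None

# Illustrative coefficients only: a shallow curve ("knowledge"-like,
# where even a "5" clears 70%) and a steep one ("helpfulness"-like,
# where a "5" yields well under 30%).
knowledge = (-0.4, 0.45)
helpfulness = (-5.0, 0.75)

for name, (b0, b1) in [("knowledge", knowledge), ("helpfulness", helpfulness)]:
    print(name, "target rating:", min_rating_for_target(0.70, b0, b1))
```

The same rating thus maps to very different performance targets depending on the shape of each attribute’s retention curve, which is the point of the contrast between Figures 5.1 and 5.2.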

Figure 4: Value Chain for Customer Service Transactions in a Telecommunications Organization. [Critical customer service elements and metrics — the service representative’s Availability, Knowledge, Listening Skills, Courteousness, Helpfulness, and Follow-Through — drive the key market outcome (Customer Satisfaction and Retention), which in turn drives the critical financial result (Profitability).]

Figure 5.1: Relationship Between Service Representative’s Perceived Knowledge and the Probability of Customer Retention. [Retention probability rises with the 10-point knowledge rating; labeled points on the curve include 54%, 72%, 87%, 93%, and 98%.]

Figure 5.2: Relationship Between Service Representative’s Perceived Helpfulness and the Probability of Customer Retention. [Retention probability rises more steeply with the 10-point helpfulness rating; labeled points on the curve include 29%, 53%, 78%, 90%, and 97%.]


These results demonstrate one of the difficulties of interpreting data from performance rating scales: The degree to which a given rating reflects a relatively favorable or unfavorable customer evaluation varies across attributes. Put simply, on a 10-point scale, sometimes a performance rating of “7” is good, while other times it’s not nearly good enough. Neither a distribution-based nor a performance comparison approach would necessarily take this into account.

In any event, having defined performance targets for each service element in the manner illustrated above, an outside-in approach to determining priorities for improvement and innovation proceeds by: (a) establishing the relative importance of each service element; (b) evaluating current or recent performance data relative to targets; and (c) integrating the results of steps (a) and (b) to determine the priority issues.

The quadrant chart shown in Figure 6 (see below) illustrates results for the telecommunications customer service elements described above. Because the mean performance scores on availability and accessibility fall short of performance targets designed to achieve the company’s customer retention goals, and because they significantly impact customer satisfaction and retention, these two elements were defined as top priorities for improvement and innovation. Given finite human, financial, and other resources to invest, and given the impact of these two elements on customer satisfaction and retention, the organization chose to focus its improvement efforts in the areas of customer service representatives’ availability and accessibility.

It should be noted that availability and accessibility were not flagged as priorities for improvement because their performance scores were the lowest of all service elements: In fact, there were other service elements having performance scores that were even lower (as well as some that were higher).

The basis for evaluating performance was whether or not performance targets were achieved. This is critical, because a distribution-based approach would have yielded a very different set of conclusions, driven strictly by how the performance score in each area compares to the overall average across all elements, regardless of whether the level of performance was sufficient to produce the desired effects on customer satisfaction and retention. As was discussed earlier, such an approach may or may not lead to improvements that ultimately drive desired business outcomes.
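The outside-in classification step can be sketched as follows. Importance weights, ratings, and per-attribute targets are all hypothetical, arranged so that an element with a lower score than the flagged priorities still passes its own target, which is exactly the pattern described above.

```python
# Outside-in variant of quadrant analysis: the performance axis is
# split per attribute by whether its target (set from the downstream
# retention model) is met, not by the distribution of scores.
elements = {
    #                (importance, mean rating, target rating) -- illustrative
    "Availability":  (0.90, 6.8, 8),
    "Accessibility": (0.85, 7.4, 8),
    "Knowledge":     (0.80, 6.1, 5),   # lowest score here, yet target met
    "Courteousness": (0.40, 9.0, 7),
}

def outside_in_priorities(elements, high_importance=0.60):
    """Important elements whose mean ratings fall short of their targets."""
    return sorted(name for name, (imp, score, target) in elements.items()
                  if imp >= high_importance and score < target)

print(outside_in_priorities(elements))
```

Note that "Knowledge" is rated lower than "Availability" but is not flagged, because its target (derived from a shallower retention curve) is already met.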

Figure 6: Outside-In Approach to Quadrant Analysis of Telecommunications Customer Service Attributes. [Importance (lower to higher) is plotted against whether each element’s performance target was met. Availability and Accessibility fall in the “Priorities for Improvement” region (performance targets not met); Listening Skills, Helpfulness, Follow-Through, Knowledge, and Courteousness are plotted with performance targets met.]


FOOTNOTES

1. Oliver (1997) notes that importance-performance analysis may actually have its origins in the work of Myers and Alpert (1968), although they examined importance and perceived brand differences, and not performance, per se.

2. An alternative to using stated measures of importance is to derive these scores via statistical analysis. For example, one can compute the correlation between overall satisfaction and attribute performance ratings, and then use this correlation as the attribute importance score. Attributes having higher correlations are viewed as being more important than those having lower correlations, because overall satisfaction ratings are more likely to increase with improved performance on the former than on the latter. A variety of other regression- and trade-off-based procedures are available to derive attribute importance. This derived approach often is used instead of stated importance, and on other occasions both stated and derived measures of importance are examined.
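As a minimal illustration of this derived approach, the snippet below uses the Pearson correlation between attribute ratings and overall satisfaction as the importance score. The respondent-level data are invented for the example: one attribute tracks overall satisfaction closely, the other only loosely.

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation, used here as a derived importance score."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Illustrative respondent-level ratings on 10-point scales.
overall   = [9, 7, 8, 4, 6, 3, 9, 5]
wait_time = [9, 6, 8, 3, 6, 2, 9, 4]   # tracks overall closely
decor     = [6, 5, 7, 6, 5, 6, 7, 5]   # tracks overall only loosely

derived = {"wait_time": pearson(wait_time, overall),
           "decor": pearson(decor, overall)}
print(derived)
```

Under this scheme "wait_time" receives the higher derived importance score, since improving it is more likely to move overall satisfaction.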

3 Problems also arise from attempting to interpret the "distance" of an element from the cross hairs of the map. Two or more elements may be clustered extremely closely, but their respective locations vis-à-vis the cutting points may place them in entirely separate quadrants. This can lead to confusion and/or controversy regarding the quadrant analysis results.

4 Typically, performance comparisons are made using methods of statistical inference (e.g., significance testing) to evaluate differences among competitors. Since data obtained from a sample of customers, transactions, etc. typically are used, the objective of significance testing is to rule out the risk that differences in the scores of two or more organizations merely reflect random variation due to sampling error: If the computed probability is equal to or lower than a criterion level selected in advance of the test (typically between 1 and 10%), then the difference is "statistically significant." This leads to a conclusion that the difference reflects or is caused by something more systematic (in effect, a "true" competitive advantage or disadvantage).
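One common form of the test this footnote describes can be sketched as a two-sample comparison of mean scores. The sketch below is illustrative only: the ratings are hypothetical, and for simplicity it compares a Welch t statistic against the large-sample 5% critical value (±1.96) rather than computing an exact p-value.

```python
# Hypothetical sketch of a significance test on two organizations' mean
# satisfaction scores: is the observed gap larger than sampling error alone
# would plausibly produce? All ratings below are invented.
from statistics import mean, variance

org_a = [8, 7, 9, 8, 9, 7, 8, 9, 8, 9]  # sample of Org A's customer ratings
org_b = [6, 7, 5, 6, 7, 6, 5, 6, 7, 6]  # sample of Org B's customer ratings

def welch_t(x: list, y: list) -> float:
    """Welch's t statistic for two independent samples with unequal variances."""
    standard_error_sq = variance(x) / len(x) + variance(y) / len(y)
    return (mean(x) - mean(y)) / standard_error_sq ** 0.5

t = welch_t(org_a, org_b)
# Reject "the difference is just sampling error" at roughly the 5% level
significant = abs(t) > 1.96
print(f"t = {t:.2f}, statistically significant: {significant}")
```

A significant result would support concluding that Org A holds a "true" competitive advantage on this score rather than a sampling artifact.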

5 A logistic regression procedure was used in this analysis: Survey data were collected from a sample of customers shortly after they had contacted the company's customer service center. Each customer was asked to rate the performance of the service representative with whom s/he interacted on several service dimensions. Subsequently, the account records of each surveyed customer were monitored over a 6-month period in order to track whether the customer continued his/her long distance account, or switched to a competitor. This approach enabled the organization to determine the probability, on average, of retaining a customer depending on how s/he had rated the service rep's performance.
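The output side of such a logistic regression can be sketched as follows. This is a minimal illustration, not the paper's actual model: the intercept and slope coefficients are hypothetical stand-ins for fitted values.

```python
# Hypothetical sketch of logistic-regression output: estimated retention
# probability as a logistic function of a customer's rating of the service
# representative. The coefficients below are invented, not fitted values.
import math

INTERCEPT = -2.0   # hypothetical fitted intercept
SLOPE = 0.45       # hypothetical fitted weight per rating point (1-10 scale)

def retention_probability(service_rating: float) -> float:
    """Estimated probability the customer keeps the account, given the rating."""
    z = INTERCEPT + SLOPE * service_rating
    return 1.0 / (1.0 + math.exp(-z))

for rating in (2, 5, 8, 10):
    print(f"rating {rating:>2}: P(retain) = {retention_probability(rating):.2f}")
```

The logistic form guarantees a probability between 0 and 1 that rises monotonically with the rating, which is what lets the organization translate service-quality scores into expected retention.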

REFERENCES

Brandt, D. R. (1998). "An 'Outside-In' Approach to Defining Performance Targets for Measures of Customer Service and Satisfaction," in J. A. Edosomwan (ed.), Customer Satisfaction Management Frontiers II. Fairfax, Virginia: Quality University Press; pp. 7.1-7.11.

Hawes, J. M. and Rao, C. P. (1985). "Using Importance-Performance Analysis to Develop Health Care Marketing Strategies," Journal of Health Care Marketing, 5; pp. 19-25.

Heskett, J. L., Sasser, Jr., W. E., and Schlesinger, L. A. (1997). The Service Profit Chain: How Leading Companies Link Profit and Growth to Loyalty, Satisfaction, and Value. New York: Free Press.

Kaplan, R. S. and Norton, D. P. (1996). The Balanced Scorecard. Cambridge, Massachusetts: Harvard Business Press.

Lowenstein, M. W. (1995). Customer Retention: An Integrated Process for Keeping Your Best Customers. Milwaukee: ASQC Quality Press.

Martilla, J. A. and James, J. C. (1977). "Importance-Performance Analysis," Journal of Marketing, 41; pp. 77-79.

Myers, J. H. and Alpert, M. I. (1968). "Determinant Buying Attitudes: Meaning and Measurement," Journal of Marketing, 32; pp. 13-20.

Oliver, R. L. (1997). Satisfaction: A Behavioral Perspective on the Consumer. Boston: Irwin/McGraw-Hill.

Ortinau, D. J., Bush, A. J., Bush, R. P., and Twible, J. L. (1989). "The Use of Importance-Performance Analysis for Improving the Quality of Marketing Education: Interpreting Faculty-Course Evaluations," Journal of Marketing Education, 11; pp. 78-86.

Rust, R. T., Zahorik, A. J., and Keiningham, T. L. (1994). Return on Quality: Measuring the Financial Impact of Your Company's Quest for Quality. Chicago: Probus.

Sethna, B. N. (1982). "Extensions and Testing of Importance-Performance Analysis," Business Economics, 17; pp. 28-31.

Vavra, T. G. (1997). Improving Your Measurement of Customer Satisfaction. Milwaukee: ASQC Quality Press.

SUMMARY AND CONCLUSION

The 1999 Malcolm Baldrige National Quality Award Criteria for Performance Excellence emphasizes that "quality is judged by customers…customer-driven quality is directed toward customer retention, market share gain, and growth, and it demands constant sensitivity to changing and emerging customer and market requirements, and the factors that drive customer satisfaction and retention" (p. 1).

A key implication of the preceding excerpt is that priorities for improvement and innovation must originate with and be driven by customers. An outside-in approach can furnish management with a powerful, customer-driven method for supporting the decisions and providing the direction needed to achieve organizational growth and long-term success. Such an approach enables an organization to align and manage internal operations, customer relationships, and business results systematically. As a basis for defining priorities for improvement and innovation, an outside-in approach offers organizations an alternative to distribution-based and/or performance-comparison approaches and their attendant problems.


BURKE INCORPORATED • 805 CENTRAL AVENUE • CINCINNATI, OH 45202 • 1-800-264-9970
burke.com