
Research excellence

Let’s talk about you
The key to getting accurate, actionable ideas from market research is to help respondents tell the truth about themselves

Jan Hofmeyr, Chief Researcher, Behaviour Change

White paper


Research cannot hope to deliver precise plans for growth unless it builds a precise understanding of individuals. It seems an obvious point to make, but it’s one with which brand tracking surveys, in particular, struggle to come to terms.

It’s an important but often ignored truth that survey data can be valid at aggregate level and yet wrong about individual people. This comes about through mutually compensating error: for everyone who says they used a particular brand but didn’t, there is likely to be somebody else who says they didn’t use it when they actually did. Thanks to mutually compensating error, brand tracking can continue to deliver topline aggregate figures that are roughly correct, even if individual data is seriously compromised.

This possibility ought to keep researchers awake at night, since the recommendations that we make about a brand’s potential and actual consumers depend upon individual truth and the way that each individual’s answers correlate together, rather than aggregate data. We require respondent-level validity – and all too often researchers do not push hard enough to achieve this.

TNS is developing a new approach to brand tracking that focuses clearly on respondent-level validity and the adaptations that are required to achieve this. Put simply, we care about whether our respondents tell us the truth about their likely actions – and we are developing new techniques to make it easier for them to do so.

This approach underpins the TNS ConversionModel, a global brand tracking study that has been built around the techniques and principles outlined in this paper.

The problems with ‘big ticket’ tracking – and how to solve them

The structure of today’s brand tracking surveys makes it difficult to get to individual truth. It’s worth pointing out early that this problem doesn’t result from respondents hiding the truth from us – it’s a case of survey techniques making it frustratingly difficult for them to provide us with meaningful information. The four main barriers that surveys put in the way of respondents telling the truth are:

• Brand tracking surveys are far longer than they need to be – and asking too many irrelevant and unnecessary questions has dire consequences for data quality

• They ask the wrong questions and often in the wrong way, using techniques and measures that are simplistic and known to lead to false information

• They ask questions at the wrong time, exposing results to the fallibility of human memory and failing to deliver the real-time insights that marketers need

• They fail to apply enough intelligence to the analysis of data, with the result that clients do not get the information they need in time.


About the author

Jan Hofmeyr is TNS’s leading expert on consumer behaviour, with a career spanning over 20 years advising many of the world’s best-known brands.

He invented ConversionModel whilst working for the Customer Equity Company (acquired by TNS in 2000), recognising a need for better quality insight on consumer motivations. In 2010, following a period of five years at Synovate, Jan returned to TNS to continue his work in this field, updating the ConversionModel methodology to cement its position as the world’s leading measure of consumer commitment.

Prior to working in market research, Jan was a senior political advisor for the African National Congress during and after the first democratic elections in South Africa. He is the co-author (with Butch Rice) of Commitment Led Marketing and the author of numerous, award-winning papers on brand equity.


Focusing on respondent-level validity, weeding out questions that don’t deliver it, and developing new ways of asking questions that do are the keys to delivering nimbler, more effective and more actionable trackers.

The TNS approach leverages available technologies and techniques to create ‘in the moment’ surveys that are able to access consumers’ instinctive responses; to apply intelligence to these surveys to ensure relevant, responsive questions and actionable data; and to link this to data-streams such as economic conditions, sales information, marketing spend and digital behaviour to provide a holistic view.

Flexible, adaptive, faster: cutting survey length

Our core proposition is that current big-budget trackers can be collapsed into one efficient, flexible, and adaptive data stream. This data stream can be integrated with others in a single-source approach.

This new data-stream is built upon an intelligence-driven survey populated by learning algorithms that cut survey length, drive up validity, and automatically create category and brand knowledge over time. The core survey is deliberately and genuinely ‘thin’: it takes no more than two to three minutes to complete. We do not consider ten-minute surveys to be ‘thin’.

The new system will not be modular. It will be adaptive. There is a difference. Modular systems are like a layer-cake that adds survey chunks to a core using dumb criteria. The key to an adaptive system is that it learns from the respondent during the survey what should be asked next. In other words, adaptive surveys go where the respondent wants to go. Modular systems force respondents to go where the researcher thinks they need to go. The adaptive system becomes the ‘conversation with consumers’, part of a tracking approach that integrates attitudes and behavior through the creation of single-source data.

Smarter thinking about which questions to ask

Building intelligence into the tracker system is the key to making all aspects of a survey more relevant to the respondent and so overcoming the problem of boredom whilst improving data quality. At the same time, an intelligent tracker system can reduce costs through saving time – and enabling multiple survey trackers to be consolidated into one.

The task of creating an intelligent tracker system begins with applying a rigorous approach to sample size and covariance, asking smart questions about how many respondents need to answer each question, and how many questions each individual respondent needs to be asked.


Leveraging a database of the standard deviations of variables provides us with an opportunity to reduce the number of irrelevant questions we ask by confirming how large the sample size for each question actually needs to be. If you know that a question has a small standard deviation, then you can reduce the size of the sample you need for that question – which in turn means you can select a random sub-sample to answer the question and allow the rest to skip through, reducing survey time.

Let’s look at a quick example of how this could work: we know from some twenty years of doing brand equity studies that committed users of a brand tend to be homogeneous in the image they have of that and other brands. As a result, their answers to attribute association questions hardly vary. This means that you don’t have to force them all to respond to the attribute association question: measure a few and you will know what the rest would have said. You can allow these few to answer the question for the others.
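To make the mechanics concrete, here is a minimal sketch (in Python) of how a standard-deviation database could drive sub-sampling. It assumes the textbook margin-of-error formula n = (z·σ/E)² and illustrative numbers; it is not TNS’s production logic.

```python
import math
import random

def required_sample_size(std_dev: float, margin_of_error: float, z: float = 1.96) -> int:
    """Respondents needed to estimate a question's mean within +/- margin_of_error
    at roughly 95% confidence, given a known standard deviation."""
    return math.ceil((z * std_dev / margin_of_error) ** 2)

def select_subsample(respondent_ids: list, std_dev: float, margin_of_error: float) -> set:
    """Pick a random subset of respondents to answer a question; everyone else skips it."""
    n = min(required_sample_size(std_dev, margin_of_error), len(respondent_ids))
    return set(random.sample(respondent_ids, n))

ids = list(range(1000))
# A low-variance question can be answered by a small random subset...
print(len(select_subsample(ids, std_dev=0.4, margin_of_error=0.05)))   # ~246 respondents
# ...while a high-variance question still needs everyone.
print(len(select_subsample(ids, std_dev=1.5, margin_of_error=0.05)))   # capped at 1,000
```

In practice the standard deviations would come from the accumulated database described above, refreshed as new waves of data arrive.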

Our approach to leveraging covariance is similar. In this case, we use a database of established covariance to create a skipping, interview-shortening process that is tailored to a particular respondent. We know that some questions are highly inter-correlated. The three questions most commonly used in loyalty studies – satisfaction, purchase intention and recommendation – happen to be great examples. If you know that a person’s answers to a particular question will be highly correlated with answers they have already given, then you can skip that question. Again, survey length could be cut without loss of information.
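A hedged sketch of that skipping rule: the question names, correlation values and threshold below are invented for illustration, but the logic – consult a database of known correlations and skip anything that is already well predicted by an answered question – is the one described above.

```python
# Hypothetical correlation database learned from past studies (absolute Pearson r).
KNOWN_CORRELATIONS = {
    ("satisfaction", "recommendation"): 0.85,
    ("satisfaction", "purchase_intention"): 0.80,
    ("recommendation", "purchase_intention"): 0.78,
}

def can_skip(question: str, already_asked: set, threshold: float = 0.75) -> bool:
    """Skip a question if it is highly correlated with one the respondent has answered."""
    for asked in already_asked:
        r = KNOWN_CORRELATIONS.get((question, asked)) or KNOWN_CORRELATIONS.get((asked, question))
        if r is not None and r >= threshold:
            return True
    return False

answered = {"satisfaction"}
for q in ("recommendation", "purchase_intention", "brand_image"):
    print(q, "-> skip" if can_skip(q, answered) else "-> ask")
```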

Intelligent pathways through heuristics

Applying heuristic (or self-educating) principles can help us to extend the idea of learning from respondents and create intelligent pathways in surveys. The key here lies in adapting each survey in real-time, to reflect the way that the particular respondent makes decisions. Once again, the key focus here is on achieving respondent-level validity.

We know, for example, that people who are uninvolved in a category behave in one of two ways: either they develop shallow habits in which they stick to one brand because they can’t be bothered to think about what to use; or they care so little about brand choice that they’re influenced more by point-of-purchase/consumption phenomena than by brand.


There is very little point in asking people in this frame of mind an attribute association question because the results are highly predictable: their answers will be sparse and restricted to the brand they buy by habit. What’s more pertinent is to ask them questions that measure their response to ‘in the moment’ brand stimuli: discounts, special displays, prominence on the shelves, and so on. The challenge is to develop the right, engaging virtual environments to do this effectively.

By contrast, people who are committed to a brand are less influenced by ‘in the moment’ phenomena. They could skip these kinds of questions. A more complex pathway could be built using attitudinal equity configurations.
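A minimal routing sketch along these lines is shown below. The module names, the 0–1 involvement score and the thresholds are assumptions made for illustration; the point is simply that the pathway is chosen from what the respondent has already told us.

```python
def route_respondent(involvement: float, committed_to_brand: bool) -> list:
    """Choose a question pathway for one respondent (involvement assumed on a 0-1 scale)."""
    if involvement < 0.3:
        # Uninvolved respondents: attribute associations are predictable, so probe
        # 'in the moment' stimuli (discounts, displays, shelf prominence) instead.
        return ["point_of_purchase_stimuli"]
    if committed_to_brand:
        # Committed users are less swayed by in-the-moment phenomena; spend the
        # time on attitudinal equity questions instead.
        return ["attitudinal_equity", "attribute_associations"]
    return ["attribute_associations", "point_of_purchase_stimuli"]

print(route_respondent(involvement=0.2, committed_to_brand=True))
print(route_respondent(involvement=0.8, committed_to_brand=False))
```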

On the whole, we aren’t fans of attribute association questions. However, these same heuristic principles provide an opportunity to make simple changes that can dramatically improve the correlation of attribute responses with actual sales at respondent level. The four key changes that TNS has identified in this area are:

1. Allow respondents to select the attributes that are most relevant to them before asking them to associate attributes with brands.

2. Restrict the scope of the associations to the sub-set of brands that are relevant to each respondent.

3. Replace the free-form association question, in which respondents only tick positive associations, with a binary form in which respondents answer ‘yes’ or ‘no’ for each attribute.

4. For driver analysis: transform the results into ‘share of mentions’ for each brand and attribute at respondent level.

‘Share of mentions’ is a simple transformation: instead of using values of ‘0, 1’ when performing driver analysis, use values that are based on the share of mentions the brand gets for each attribute. So, for example, if a person associates two brands with an attribute, then the values for that attribute for that respondent in a driver analysis would be ‘0.5, 0.5’.
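A minimal sketch of the transformation, assuming hypothetical brand and attribute names:

```python
def share_of_mentions(associations: dict) -> dict:
    """Convert one respondent's binary attribute-brand associations into
    share-of-mentions values: each attribute's mentions sum to 1 across brands."""
    shares = {}
    for attribute, brands in associations.items():
        mentioned = [b for b, ticked in brands.items() if ticked]
        shares[attribute] = {
            b: (1 / len(mentioned) if b in mentioned else 0.0) for b in brands
        }
    return shares

# One respondent associates 'good value' with two brands and 'premium' with one.
respondent = {
    "good value": {"Brand A": 1, "Brand B": 1, "Brand C": 0},
    "premium":    {"Brand A": 0, "Brand B": 0, "Brand C": 1},
}
print(share_of_mentions(respondent))
# {'good value': {'Brand A': 0.5, 'Brand B': 0.5, 'Brand C': 0.0},
#  'premium':    {'Brand A': 0.0, 'Brand B': 0.0, 'Brand C': 1.0}}
```

These per-respondent shares then feed the driver analysis in place of the raw 0/1 values.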

A binary response format results in much greater response stability and reliability [1]. And reducing both the attribute and brand lists ensures that relevant information is collected and reduces the tedium associated with the classical attribute association task.


Mobile capabilities: asking questions at the right time

Mobile capabilities have a vital role to play in improving brand tracking surveys, since they have the potential to solve the problem of fallible human memory and to deliver fast-turnaround results. Leveraging mobile technology enables us to kick-start all surveys at the appropriate moment.

TNS has almost a decade’s experience of creating short-term panels in which panelists record their daily buying and consuming as it happens. These ‘in the moment’ mobile purchase and consumption diaries are less subject to memory errors; they can be used to collect ambient point-of-purchase or consumption information; and they provide a single-source of attitude and behavioral data. The events covered by the diaries could include drinking an alcoholic or non-alcoholic beverage, the complex and varied stages involved in planning a car purchase, exposure to an ad for the first time, and a huge range of other occasions.

In our experience, people create records of each event within an hour. In categories involving many events, people send up to eight records a day. If a panelist hasn’t sent anything for six hours since ‘waking’, they’re sent a reminder. Although each record looks long, it typically takes three minutes or less to complete, and 70 percent of panelists complete their diaries.

We validate overall consumption using external sources such as Kantar Worldpanel, Nielsen and IRI. Respondent-level validation involves setting flags to measure response consistency.
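The specific flags are not spelled out in this paper, so the checks below are illustrative assumptions only – but they show the kind of respondent-level consistency flags that could be set automatically over a diary.

```python
from datetime import datetime, timedelta

def flag_inconsistencies(events: list, max_daily_events: int = 8,
                         reminder_gap: timedelta = timedelta(hours=6)) -> list:
    """Raise simple consistency flags over one panelist's diary.
    `events` is a time-ordered list of (timestamp, brand) tuples."""
    flags = []
    # Flag implausibly heavy days (the text reports up to ~8 records a day in busy categories).
    by_day = {}
    for ts, _brand in events:
        by_day[ts.date()] = by_day.get(ts.date(), 0) + 1
    flags += [f"heavy day: {d} ({n} events)" for d, n in by_day.items() if n > max_daily_events]
    # Flag long silences of the kind that would have triggered a reminder.
    for (t1, _), (t2, _) in zip(events, events[1:]):
        if t2 - t1 > reminder_gap:
            flags.append(f"gap longer than {reminder_gap} before {t2:%Y-%m-%d %H:%M}")
    return flags

diary = [(datetime(2013, 5, 1, 9, 0), "Brand A"),
         (datetime(2013, 5, 1, 18, 30), "Brand B")]
print(flag_inconsistencies(diary))
```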

Mobile as listening device

TNS has developed an app called MobileBehave that leverages the mobile’s potential as a listening device for all manner of brand-consumer communications taking place through the mobile channel. MobileBehave data builds over time as people become relaxed about the fact that the app is on their phone. It has multiple uses:

• A source of passive (i.e. ‘listening’) mobile behavioural data
• A single-source of ‘listening’ data combined with ‘in the moment’ data
• Can be used to recruit panelists for non-mobile ‘listening’
• Enables the building of online communities based on revealed interests
• Can be used as a sample source for instant surveys
• Becomes the basis for creating causal models of behaviour over time


Asking the right questions in the right way

We have always known that there is a gap between what people say in surveys and what they actually do. Thanks to contemporary neuroscience, we know the various reasons why the gap exists – and this can help us to fix it. By basing questions around the parts of the brain that become active when brand attachment forms, we are able to fix common mistakes that our industry makes when it comes to communications modeling.

There are many ways in which current approaches to measuring and modeling communications impacts ignore reality. Here’s a short list:

• Over-reliance on memory to establish communications exposure. As a result, modeled effectiveness coefficients are faulty;

• Failure to take account of what’s already in the brain about brands, in particular pre-existing brand commitments;

• Failure to model communications effects holistically (for example, in the context of other information that affects brand image like competitor communications);

• Overly narrow focus on characteristics of the advert at the expense of measuring impacts on the person.

Neuroscientists tell us that there are genuine differences between the way the brain reacts to favoured and non-favoured brands [2]. All forms of exposure to brands create neural tracks over time that link favoured brands to personal goals and values. Favoured brands then show up in complex networks in the brain that include the areas that guide decision-making, and those that deal with affective memories. By ‘affective’ we mean more than ‘emotional’. Affective refers to feelings with deep personal meaning.

Brand connections are built in multiple ways: through direct brand experience; through endorsements by others – most notably experts, friends, and what can best be called ‘the mass of humanity’; and through own-brand and competitor messaging.

A holistic approach to communications measurement and modeling can help. This is based on the single-source approach to information that we described earlier. We looked at the options for collecting information about brand use in a way that overcomes problems of memory and gives access to context-relevant information. To model this information more effectively, we need to link it to metrics that reflect the neural connections that form around brands. There are two of these metrics: first, a quantified measure of ‘affective impact’ (remembering here that ‘affective’ means more than just ‘emotional’); second, open-ended questions to create verbatims that can measure ‘affective content’.

We can combine these two approaches to measure the affective impact of a communications piece. First, we ask a simple open-ended question: what does the advert bring to mind; in what ways has the brand become part of your life and who you are? Second, we explore the sequence of emotions. The lesson of most current emotional measurement is that ‘positive’ is good. Yet advertising is storytelling, and we know from great storytelling that it’s the management of an emotional sequence that really matters. So, for example, ‘negative’ need not be bad if it’s followed by ‘positive’. Examples might include ‘problem – resolution’, ‘surprise – delight’, ‘threat – victory’, and so on.
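As a sketch of how sequence, rather than overall positivity, might be scored: the arcs below follow the examples in the text, while the weights and coding scheme are assumptions for illustration.

```python
# Transition weights are assumptions, not TNS values; arcs follow the examples above.
REWARDED_ARCS = {
    ("negative", "positive"): 1.0,   # e.g. problem -> resolution, threat -> victory
    ("surprise", "positive"): 1.0,   # e.g. surprise -> delight
    ("positive", "positive"): 0.5,
    ("positive", "negative"): -1.0,  # an arc that ends badly is penalised
}

def score_emotional_sequence(sequence: list) -> float:
    """Score an ad's moment-by-moment emotion codes by the transitions they contain,
    rather than by the overall share of 'positive' moments."""
    return sum(REWARDED_ARCS.get(pair, 0.0) for pair in zip(sequence, sequence[1:]))

print(score_emotional_sequence(["negative", "negative", "positive"]))  # 1.0: problem -> resolution
print(score_emotional_sequence(["positive", "positive", "negative"]))  # -0.5: ends badly
```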

The next step is to relate this view of the affective content of communications to how people actually make decisions in the market. TNS has developed a two-pillar model of brand equity that gets to the heart of what actually drives sales.

Theories of choice based on the idea that what people do is the result of psychological preferences combined with situational factors probably pre-date the ancient philosophers. In modern times, they show up in the distinction between attitudinal and behavioral loyalty. Usually, attitudinally loyal people will buy the brand to which they’re loyal if they can. But sometimes market (i.e. situational) factors nudge people towards an alternative, or even prevent people from buying the brand they want. And sometimes, when people have no strong first choice, market factors tip the scales in favour of one brand rather than another.

We’ve used this simple framework for understanding brand sales for many years [3]. In our framework, sales are a function of a brand’s ‘Power in the mind’ (attitudinal equity) and ‘Power in the market’ (market factors, brand presence, market equity). These two dependent variables anchor our analysis of brand equity and sales drivers.
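To make the two-pillar idea concrete, here is a minimal respondent-level sketch in which share of spend is regressed on the two pillars. The data and the simple linear form are illustrative assumptions; the actual ConversionModel calculation is more sophisticated.

```python
import numpy as np

# Illustrative respondent-level data for one brand: 'power in the mind' (attitudinal
# equity), 'power in the market' (market factors), and observed share of spend.
power_in_mind   = np.array([0.9, 0.2, 0.6, 0.1, 0.8])
power_in_market = np.array([0.7, 0.8, 0.3, 0.2, 0.9])
share_of_spend  = np.array([0.85, 0.40, 0.35, 0.05, 0.90])

# Fit share ~ b0 + b1*mind + b2*market by ordinary least squares.
X = np.column_stack([np.ones_like(power_in_mind), power_in_mind, power_in_market])
coefs, *_ = np.linalg.lstsq(X, share_of_spend, rcond=None)
print(dict(zip(["intercept", "power_in_mind", "power_in_market"], coefs.round(3))))
```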

We’ve recently updated these measures using surveys on behaviour panel data. We can show that our new metrics outperform other similar metrics at respondent level [4], and we expect to continue to improve them in the months to come.


Affective impact

By affective impact we mean the extent to which a piece of communications links the brand to experiences that have a deeper personal meaning. It’s about placing the brand in the context of personal goals and values.

Affective content

By affective content we mean articulating deeper motivations in words. Qualitative researchers use projective techniques and rich stimulus material to try to link instinct and intuition to words – so that a person can say what’s more deeply in their mind.


Power in the mind

Power in the mind is a respondent-level measure of brand attachment that correlates better than similar measures with the real (panel-validated) share of spend that each brand gets from that person. And it achieves this with a significant reduction in survey length.

We measure a brand’s power in the mind in two steps. First, we identify the brands that are relevant to each respondent. Second, we ask for two ratings for each relevant brand. The two dimensions that have to be measured are brand performance and brand involvement. We use scales derived from the most up-to-date neuroimaging survey measures [5], and an algorithm underpinned by our original theories of brand relationship [6], to calculate from these a ‘one number’ measure of attitudinal brand equity. This correlates better with a person’s share of consumption in panel data than other comparable metrics.

We use this number as a dependent variable for equity modeling, and also to create equity segments and a brand health ‘ladder’. Because we leverage heuristic principles [7], this measure typically takes less than 30 seconds of survey time yet results in brand health scores for every respondent for every category and brand in a study. Continued improvements will further enhance accuracy over the coming 12 months.
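The underlying algorithm is proprietary, so the sketch below only illustrates the shape of the calculation: two ratings per relevant brand collapsed into one number and normalised across the respondent’s relevant set. The 0–10 scales and the multiplicative combination are assumptions, not the ConversionModel formula.

```python
def attitudinal_equity(ratings: dict) -> dict:
    """Collapse performance and involvement ratings (assumed 0-10 scales) for each
    relevant brand into a single 0-1 score, normalised across the relevant set.
    An illustrative stand-in, not the ConversionModel algorithm."""
    raw = {brand: (perf / 10) * (inv / 10) for brand, (perf, inv) in ratings.items()}
    total = sum(raw.values()) or 1.0
    return {brand: round(score / total, 3) for brand, score in raw.items()}

# One respondent rates only the brands relevant to them: (performance, involvement).
print(attitudinal_equity({"Brand A": (9, 8), "Brand B": (6, 8), "Brand C": (3, 2)}))
# {'Brand A': 0.571, 'Brand B': 0.381, 'Brand C': 0.048}
```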

Power in the market

Power in the market is a respondent-level measure of the market factors that drive consumer behaviour. It offers a vital improvement by taking into account the law of double jeopardy. According to this law, bigger brands gain in two ways over smaller brands: they have more users, and their users tend to use them more.

There are important problems with the law of double jeopardy, most notably with its assumption that individual brand preferences are stationary over time [8]. Nevertheless, the law highlights the benefits of scale that accrue to big brands. These drive incremental sales for locally dominant brands and create market barriers for smaller brands.

There are a number of important ways in which brands can pull marketing levers to drive sales: distribution, point-of-sale visibility, greater affordability, getting the product mix right (packs and variants), purchaser preference (leveraging the fact that the person who buys isn’t always the end-user), and creating local monopolies.


Like our ‘power in the mind’ measure, our ‘power in the market’ measure leverages heuristic principles to cut survey time while increasing the validity of the results. It typically takes less than 30 seconds and gives granular, respondent-level information about the market drivers of sales for brands.

Put the two together and you have a powerful system of core metrics that takes less than a minute of survey time to deliver equity and market information about all brands at respondent level.

Gamification: a better way to ask questions

The gamification methods pioneered by Puleston and others can help us to solve the problems of length, irrelevance, and boredom; and tap more effectively into less conscious motivations by engaging the parts of the brain that are not activated by classical, word-driven surveys.

Even when gaming methods aren’t very game-like, tests show that respondents are much more engaged by these devices than they are by classical survey methods. Mobile can play an important contributory role in applying gamification more widely, since mobile devices provide a channel for incorporating this approach into face-to-face interviews.

Intelligent, pro-active systems for ‘just-in-time’ information

Besides making surveys shorter, more relevant and more responsive, intelligent systems can also be used to deliver actionable information and insight more pro-actively. We are skeptical about the use of ‘early warning’ systems that rely on single-trend analysis such as moving averages, Bollinger bands, and the like. Our reason is that a single trend doesn’t contain enough information to provide intelligent alerts. We set more store by the analysis of anomalous gaps across trends. By analyzing gaps across multiple trends, we should be able to identify when stresses are developing in the system. These stresses can be a powerful indicator of opportunity or threat.

An example of a potentially anomalous gap would be when sales are under-supported by equity. Over twenty years of brand health modeling, we’ve seen such under-support often enough to know that it’s a sign that the brand’s sales will come under pressure. Similarly, when equity exceeds sales, it’s a sign of potential opportunity.
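A hedged sketch of how such a gap might be flagged automatically: both trends are standardised and the system alerts when they diverge. The data and the one-standard-deviation threshold are illustrative assumptions.

```python
import numpy as np

def gap_alerts(equity: np.ndarray, sales: np.ndarray, threshold: float = 1.0) -> list:
    """Flag periods where standardised sales and equity trends diverge: sales running
    ahead of equity suggests a threat, equity running ahead of sales suggests headroom."""
    z = lambda x: (x - x.mean()) / x.std()
    gap = z(sales) - z(equity)
    alerts = []
    for t, g in enumerate(gap):
        if g > threshold:
            alerts.append((t, "sales under-supported by equity (threat)"))
        elif g < -threshold:
            alerts.append((t, "equity exceeds sales (opportunity)"))
    return alerts

equity = np.array([50.0, 50.0, 50.0, 50.0, 50.0, 40.0])  # equity dips in the last period
sales  = np.array([20.0, 20.0, 20.0, 20.0, 20.0, 21.0])  # sales hold up regardless
print(gap_alerts(equity, sales))   # [(5, 'sales under-supported by equity (threat)')]
```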


How can we build anomalous gaps into analytical systems? A database of relationships between the key variables in data streams can help to establish the key anomalous gap values between such data points as marketing spend, attitudinal equity and sales. We can then build intelligence into the tracking system by automating the discovery of these values in the data. This is a three-fold process: automating the collection of instances such as turning points in market share; populating a database with relevant instances that can trigger analysis; and automating the updating process so that the data-stream delivers new instances.
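As one illustration of automating the first of these steps, a turning point in a market-share trend can be detected from a change of slope in a smoothed series. The moving-average smoothing and three-period window are assumptions for the sketch.

```python
def turning_points(series: list, window: int = 3) -> list:
    """Return (approximate) indices where a smoothed market-share trend changes direction."""
    if len(series) < window + 2:
        return []
    smooth = [sum(series[i:i + window]) / window for i in range(len(series) - window + 1)]
    points = []
    for i in range(1, len(smooth) - 1):
        before, after = smooth[i] - smooth[i - 1], smooth[i + 1] - smooth[i]
        if before * after < 0:               # slope changes sign -> turning point
            points.append(i + window // 2)   # map back to roughly the original index
    return points

share = [10.0, 10.6, 11.2, 11.8, 11.6, 11.1, 10.6, 10.9, 11.4, 12.0]
print(turning_points(share))   # [3, 6]: a peak, then a trough
```

Each detected instance would be written to the database described above so that it can trigger analysis when it recurs.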

Putting it all together: survey architecture for intelligent adaptive tracking

The TNS ConversionModel has been redeveloped in line with the principles set down in this paper, to deliver respondent-level validity within an adaptive tracking approach and reduced survey time. This approach enables the model to deconstruct market share precisely and provide clear guidance on opportunities for brand growth.

The core ConversionModel study will now form the basis of future tracking that is able to leverage an adaptive, heuristic architecture to ensure fewer, more relevant questions and respondent-level validity around individual behaviour. ConversionModel takes into account that people care about some decisions more than others – and that this prioritisation varies by individual as well as by category.

In further developing the ConversionModel, and applying a new approach to tracking more generally, we will develop survey architecture along the following lines:

‘In the moment’ tracking activation

By taking measurement close to behavioral events, we can measure three key things, with no more than three to four minutes for each event, diminishing over time as machine learning kicks in (a sketch of what such an event record might look like follows the list below):

• What people actually buy;

• Basic context information: where were they, what were they doing;

• Complete brand equity and market barrier information at a situational level.
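A sketch of what a single event record might capture, matching the three measurement areas above; the field names and rating scales are illustrative assumptions, not the MobileBehave schema.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class DiaryEvent:
    """One 'in the moment' record: what was bought or consumed, basic context,
    and situational equity/barrier information."""
    timestamp: datetime
    brand: str                                              # what was actually bought/consumed
    location: str                                            # where the respondent was
    activity: str                                            # what they were doing
    situational_equity: dict = field(default_factory=dict)   # brand -> 0-1 rating, this occasion
    market_barriers: list = field(default_factory=list)      # e.g. 'out of stock', 'on promotion'

event = DiaryEvent(
    timestamp=datetime(2013, 6, 4, 17, 45),
    brand="Brand A",
    location="supermarket",
    activity="weekly shop",
    situational_equity={"Brand A": 0.8, "Brand B": 0.3},
    market_barriers=["Brand B on promotion"],
)
print(event.brand, event.location, event.situational_equity)
```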


The development of smart mobile devices and gamification survey techniques will improve compliance and validity of responses. Among respondents from whom we get permission to install the MobileBehave app, we will enable a three-fold integration of event-based behavior, situational brand equity, and mobile ‘listening’ over time.

Analysis and delivery

We apply two levels of near real-time reporting and analysis:

• Basic: feeds back trend information (e.g. buying, consuming) that can be disaggregated according to ‘who’, ‘where’, ‘when’ and ‘for what purpose’

• Analytic: feeds back information that requires algorithms based on trend changes and, more importantly, gaps across trends

Examples of basic feedback include ongoing, real-time trend information about what people are buying and consuming, where, and why. Basic feedback also includes real-time information about category/brand situational equities and situational drivers.

The analytic components of the system will be programmed to learn from experience, identifying when positive or negative equity stresses develop. As an example: when equity is high and consumption is low, this suggests a failure of marketing. When equity is low and consumption is high, this suggests that consumption is unsupported by psychological demand.

Intelligent, adaptive follow-up surveys

The ‘in the moment’ survey process is the thin core. It gives us basic purchase and consumption information coupled to situational brand equities and market barrier information. As the diary builds, fewer questions will need to be asked. Questions about situational equities, for example, only need to be asked once.

The follow-up survey happens after a set time period that could be daily, weekly or monthly. Respondents will be channeled into questions that are relevant to the way they make decisions, with different subsets for people with strong brand preferences and people without, for example. We will know this from our analysis of patterns of attitudes and behavior revealed in the diary survey. By creating live adaptive questioning that is tailored to each respondent, we can integrate big-ticket trackers into one system that combines all relevant measurement areas: actual behaviour, brand equity, market factors, communications influences, path-to-purchase, and point-of-consumption.


The future of tracking conversations

A lot is spoken about the need for brands to engage consumers in meaningful dialogue. Tracking surveys are no exception. The measures outlined in this paper leverage what we know about consumers, markets and the human brain in order to conduct conversations that are relevant and meaningful for each respondent. It makes for a more stimulating and enjoyable experience for those involved in our surveys. And it makes for more valid, holistic and actionable information for our clients.

The early results of this new approach can be seen in the insights delivered by the 2012 TNS ConversionModel. However, evolving trackers to reflect consumer decision-making more closely is an ongoing process. We are passionate about delivering questions and answers that are valid for individual respondents in our surveys. And we will continue to explore and apply new techniques in order to do so.


You may be interested in...

The trouble with tracking by Jan Hofmeyr >

ConversionModel >

Commitment Economy >


About TNS

TNS advises clients on specific growth strategies around new market entry, innovation, brand switching and stakeholder management, based on long-established expertise and market-leading solutions. With a presence in over 80 countries, TNS has more conversations with the world’s consumers than anyone else and understands individual human behaviours and attitudes across every cultural, economic and political region of the world.

TNS is part of Kantar, one of the world’s largest insight, information and consultancy groups.

Please visit www.tnsglobal.com for more information.

Get in touch

If you would like to talk to us about anything you have read in this report, please get in touch via [email protected] or via Twitter @tns_global


Sources

1: Dolnicar, Sara, Bettina Grün, and Friedrich Leisch (2011), ‘Quick, simple, and reliable: Forced binary survey questions,’ International Journal of Market Research, 53:2

2: Plassmann, Hilke, Peter Kenning, and Dieter Ahlert (2007), ‘Why Companies Should Make Their Customers Happy: The Neural Correlates of Customer Loyalty,’ Advances in Consumer Research, 34:2

3: Hofmeyr, Jan H. and Butch Rice (2000), Commitment-Led Marketing, John Wiley and Sons, Chichester

4: Hofmeyr, Jan, Victoria Goodall, Martin Bongers, and Paul Holtzman (2008), ‘A new measure of brand attitudinal equity based on the Zipf distribution,’ International Journal of Market Research, 50:2;

Keiningham, Timothy L., Lerzan Aksoy, Alexander Buoye, and Bruce Cooil (2011), ‘Customer Loyalty isn’t Enough. Grow your Share of Wallet,’ Harvard Business Review, October

5: Reimann, Martin, Raquel Castano, Judith Zaichkowsky, and Antoine Bechara (2011), ‘How we relate to brands: Psychological and Neurophysiological insights into Consumer-Brand Relationships,’ Journal of Consumer Psychology (forthcoming)

6: Hofmeyr, Jan H. and Butch Rice (2000), Commitment-Led Marketing, John Wiley and Sons, Chichester

7: Gigerenzer, Gerd, Peter M. Todd, and the ABC Research Group (2000), Simple Heuristics That Make Us Smart, Oxford University Press, USA.

8: Hofmeyr, Jan, Victoria Goodall, Martin Bongers, and Paul Holtzman (2008), ‘A new measure of brand attitudinal equity based on the Zipf distribution,’ International Journal of Market Research, 50:2