
White paper

Optimizing Audience Buying on Facebook and Instagram

July 2016

Contents

Introduction
Methodology
Results
– Campaign delivery
– Brand lift
– Cost per lift
What it means for marketers

Introduction

Running campaigns simultaneously across different digital platforms should provide advertisers with extended reach into new audiences as well as the ability to reach pre-existing audiences more cost-effectively. In practice, however, it isn't immediately obvious how best to achieve these benefits or how significant they will be for any given campaign.

Over the past few years, Facebook's evolution into a family of apps and services has given advertisers new ways of extending their campaigns to platforms such as Instagram and the Audience Network. While some advertisers and agencies have been experimenting with manual budget allocation, Facebook fully launched placement optimization in late 2015 to give advertisers an easier way to optimize campaign delivery across Facebook, Instagram and the Audience Network. This paper aims to measure the effectiveness of optimized audience buying in delivering value to brand advertisers.

How placement optimization works

Previous Marketing Science research has shown that over the course of a campaign, cost per outcome, whether for mobile app installs, brand awareness or online conversions, tends to vary between and within platforms due to audience behavior and characteristics. While advertisers and agencies can manually allocate budgets across platforms to account for these changes, placement optimization leverages Facebook's ad delivery system to dynamically seek out the lowest cost per outcome wherever it is available at any given point in time, whether on Facebook or, as in the illustrative example in Figure 1, on Instagram. As a result, placement optimization should provide a lower cost per outcome than advertising on a single platform or even manually allocating budgets across platforms.1
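To make the mechanism above concrete, the following is a minimal sketch of the greedy logic it describes: at each delivery opportunity, route the impression to the placement whose running cost per outcome is currently lowest. The function names and numbers are illustrative assumptions, not Facebook's actual delivery system, which also accounts for bids, pacing and auction dynamics.

```python
# Illustrative sketch only; names and numbers are assumptions,
# not Facebook's ad delivery system.

def cost_per_outcome(spend: float, outcomes: int) -> float:
    """Running cost per outcome; infinite before the first outcome."""
    return spend / outcomes if outcomes else float("inf")

def choose_placement(stats: dict) -> str:
    """Route the next impression to the placement whose running
    cost per outcome is currently lowest."""
    return min(stats, key=lambda p: cost_per_outcome(
        stats[p]["spend"], stats[p]["outcomes"]))

# Hypothetical running totals partway through a campaign.
stats = {
    "facebook":  {"spend": 120.0, "outcomes": 40},  # $3.00 per outcome
    "instagram": {"spend":  90.0, "outcomes": 36},  # $2.50 per outcome
}
print(choose_placement(stats))  # -> "instagram"
```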



In this context, placement optimization is both an important and an interesting topic for the Facebook Marketing Science team to study. Marketing Science's mission is to help marketers extract the maximum possible value from their advertising dollars. Determining whether placement optimization provides additional value, and quantifying those gains, is thus an important and straightforward goal for the team.

We are also interested in placement optimization because it offers a novel opportunity to robustly measure multichannel, cross-platform campaigns. Cross-platform and cross-device campaigns have traditionally been difficult to measure effectively; cookie-based measurement does a poor job of tracking individuals across devices and platforms and tends to undervalue the true contribution of mobile.2 Not surprisingly, a recent eMarketer survey showed that when it comes to cross-platform measurement, 35% of advertisers are "not using a robust measurement technique," while 34% say they "evaluate each channel individually and optimize based on channel-specific performance."3 Rather than relying on cookies, Facebook measurement solutions rely on people-based identities and conversion pixels, meaning we are in the unique position of being able to accurately measure the impact of a cross-platform campaign and better understand what factors might be contributing to any efficiency gains we observe.

For this study, we used placement optimization across Facebook and Instagram as a starting point to measure its effectiveness for brand advertisers. In the future, we plan to extend our methodology to direct response objectives as well as across our family of apps.

Figure 1. Graphic illustration of the placement optimization mechanism. The chart plots cost per outcome against time elapsed for three series: the Instagram-only average cost per outcome, the Facebook-only average cost per outcome, and the placement optimization average cost per outcome.


    Methodology

Test design

We focused our placement optimization research on brand campaigns that ran across Facebook and Instagram, using randomized controlled trials (RCTs) as the primary methodology. We worked with a total of 10 brand advertisers across a range of verticals, countries, budgets and target audiences. While we deliberately chose a variety of advertisers and campaigns to test, we used the same underlying methodological approach throughout. Working closely with each advertiser, we split their budget and audience equally between a placement optimization test cell, in which an ad impression could be delivered on either Facebook or Instagram, and a Facebook-only test cell, in which all impressions were delivered exclusively on Facebook.

Our choice of test cells was deliberate. We could have added other test cells or combinations of cells, such as an Instagram-only cell or cells with budget manually allocated across Facebook and Instagram. While these are valid test designs that could have provided meaningful insights, our decision was driven primarily by the most common question we hear from advertisers: those who are comfortable with their Facebook campaigns and are looking for the best way to incorporate Instagram into their campaign strategy. With that question as our priority, we felt the Facebook-only versus placement optimization design was the most logical starting point for understanding this topic.

With the two test cells decided up front, every Facebook and Instagram user in a given test was randomly assigned to either the Facebook-only cell or the placement optimization cell prior to campaign launch. Each test cell also had its own control group that did not receive any ads. This setup meant that the only difference between the two test cells was that consumers in the placement optimization cell could be reached across both Facebook and Instagram, whereas consumers in the Facebook-only cell could be reached only on Facebook. Apart from that important difference, every other campaign feature, such as campaign objectives, optimization method, maximum bid (if using the auction), audience targeting and creative, was exactly the same across both test cells. Figure 2 provides an overview of our test design.

    This randomization of consumers across rigidly defined cells is the hallmark of the RCT methodology and ultimately provides for greater measurement accuracy compared to observational studies.4
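As a concrete illustration of this randomization, the sketch below shows one way pre-launch assignment could be implemented with deterministic hashing, so that a given user always lands in the same cell. The 50/50 cell split mirrors the design above; the hash scheme, names and 10% control holdout are hypothetical assumptions, not Facebook's internal implementation.

```python
import hashlib

def assign(user_id: str, study_name: str) -> tuple[str, str]:
    """Deterministically assign a user to a test cell and an
    exposed/control group before the campaign launches.
    Hypothetical scheme, not Facebook's internal implementation."""
    digest = hashlib.sha256(f"{study_name}:{user_id}".encode()).digest()
    bucket = int.from_bytes(digest[:8], "big")
    # 50/50 split between the two test cells, per the design above.
    cell = "placement_optimization" if bucket % 2 == 0 else "facebook_only"
    # Each cell keeps its own ad-free control group (10% is assumed).
    group = "control" if (bucket // 2) % 10 == 0 else "exposed"
    return cell, group

print(assign("user_12345", "brand_study_01"))
```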



Figure 2. Test design. Users are first randomized into two groups. In the placement optimization cell, impressions can be delivered across Facebook and Instagram (alongside other media); in the Facebook-only cell, impressions are delivered on Facebook only (alongside other media). Brand lift is then measured in both cells through brand polling. Note: campaign budget, target audience, creative and campaign duration are held constant for both cells.



Data collection procedure

Despite some limitations, surveying has traditionally been the data collection method of choice for brand advertisers that want to understand the impact of a given campaign on their brand equity. With that in mind, we measured the impact of each of the two test cells (placement optimization and Facebook-only) by surveying a randomly selected subset of Facebook and Instagram users from both the exposed and control groups. We collected on average about 10,000 survey completions per test. The surveys consisted of three questions asked sequentially to the same person: while all the campaigns we measured asked about ad recall as the first question, advertisers were allowed to customize the other two survey questions. These questions generally ranged from upper- and mid-funnel metrics around awareness and affinity to lower-funnel metrics around purchase intent or willingness to recommend.5
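To show how such survey responses translate into a lift estimate, here is a minimal sketch of the standard exposed-versus-control comparison: absolute brand lift is the positive-response rate among surveyed exposed users minus the rate in the control group, with a two-proportion z-score as a simple significance check. The counts below are hypothetical, not results from this study.

```python
from math import sqrt

def brand_lift(pos_exposed: int, n_exposed: int,
               pos_control: int, n_control: int) -> tuple[float, float]:
    """Absolute lift (exposed rate minus control rate) and a
    two-proportion z-score. Inputs are counts of positive survey
    responses and total completions per group."""
    p_exp = pos_exposed / n_exposed
    p_ctl = pos_control / n_control
    lift = p_exp - p_ctl
    # Pooled standard error for the difference in proportions.
    p_pool = (pos_exposed + pos_control) / (n_exposed + n_control)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_exposed + 1 / n_control))
    return lift, lift / se

# Hypothetical counts from ~10,000 completions split across groups.
lift, z = brand_lift(pos_exposed=2750, n_exposed=5000,
                     pos_control=2500, n_control=5000)
print(f"lift = {lift:.1%}, z = {z:.2f}")  # lift = 5.0%, z = 5.01
```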

One important decision in our test design concerned the platform on which to administer the surveys. We ultimately elected to survey on Facebook mobile feed regardless of whether a respondent had seen ad impressions on Facebook or Instagram. The main reason was to control for the fact that survey respondents on Instagram are not necessarily the same kind of survey respondents we see on Facebook; they likely have different baseline attitudes toward and perceptions of the brands we were testing, which could complicate comparing survey results between platforms and between test cells. We therefore chose to limit this potential bias by surveying exclusively on Facebook. The tradeoff of this approach is that respondents who saw ad impressions on Instagram may not have been active on Facebook and thus not eligible to respond to the survey there. This in turn could have led us to underestimate Instagram's true impact, since we might have collected relatively fewer responses from people who were exposed on Instagram.

    To address this potential bias, we compared the distr