A Comparison of Approaches to Advertising Measurement

Evidence from Big Field Experiments at Facebook

Brett Russell Gordon, Florian Zettelmeyer, Neha Bhargava, Dan Chapsky

Research output: Working paper

Abstract

We examine how common techniques used to measure the causal impact of ad exposures on users’ conversion outcomes compare to the “gold standard” of a true experiment (randomized controlled trial). Using data from 12 US advertising lift studies at Facebook, comprising 435 million user-study observations and 1.4 billion total impressions, we contrast the experimental results to those obtained from observational methods, such as comparing exposed to unexposed users, matching methods, model-based adjustments, synthetic matched-markets tests, and before-after tests. We show that observational methods often fail to produce the same results as true experiments, even after conditioning on information from thousands of behavioral variables and using non-linear models. We explain why this is the case. Our findings suggest that common approaches used to measure advertising effectiveness in industry fail to accurately measure the true effect of ads.
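To make the abstract's central contrast concrete, the following is a minimal simulation sketch (not the authors' code; the single "activity" confounder and all parameter values are hypothetical) of how a naive exposed-versus-unexposed comparison can overstate ad lift relative to a randomized experiment when a hidden trait drives both exposure and conversion:

# Illustrative sketch only: a hypothetical confounder ("activity") makes
# users both likelier to be served an ad and likelier to convert.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Hidden trait driving both exposure probability and baseline conversion.
activity = rng.uniform(size=n)

true_lift = 0.01  # simulated causal effect of exposure on conversion prob.

# --- Randomized controlled trial ---------------------------------------
# Random test/control split; only test users are eligible for exposure,
# and exposure within the test group depends on activity.
test = rng.random(n) < 0.5
exposed = test & (rng.random(n) < activity)

base_prob = 0.02 + 0.05 * activity  # baseline conversion rises with activity
converted = rng.random(n) < base_prob + true_lift * exposed

# Intent-to-treat difference scaled by the test group's exposure rate
# recovers the effect on the exposed (a standard lift-study estimator).
itt = converted[test].mean() - converted[~test].mean()
rct_lift = itt / exposed[test].mean()

# --- Naive observational comparison ------------------------------------
# Compare exposed to unexposed test users, ignoring activity.
naive_lift = converted[exposed].mean() - converted[test & ~exposed].mean()

print(f"true effect:  {true_lift:.4f}")
print(f"RCT estimate: {rct_lift:.4f}")   # close to the true effect
print(f"naive diff:   {naive_lift:.4f}") # inflated by the confounder

In this sketch the RCT estimator recovers the simulated effect on exposed users, while the naive difference also absorbs the baseline conversion gap between high- and low-activity users; this is the kind of selection effect the abstract describes, here reduced to one observable variable for illustration.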
Original language: English (US)
Number of pages: 54
State: Published - Jul 1 2016

Fingerprint

Experiment
Facebook
Field experiment
Industry
Gold standard
User studies
Matching method
Advertising effectiveness
Conditioning
Randomized controlled trial

Cite this

@techreport{1ef28b298c214727b9d700c8bb3666c4,
title = "A Comparison of Approaches to Advertising Measurement: Evidence from Big Field Experiments at Facebook",
abstract = "We examine how common techniques used to measure the causal impact of ad exposures on users’ conversion outcomes compare to the “gold standard” of a true experiment (randomized controlled trial). Using data from 12 US advertising lift studies at Facebook, comprising 435 million user-study observations and 1.4 billion total impressions, we contrast the experimental results to those obtained from observational methods, such as comparing exposed to unexposed users, matching methods, model-based adjustments, synthetic matched-markets tests, and before-after tests. We show that observational methods often fail to produce the same results as true experiments, even after conditioning on information from thousands of behavioral variables and using non-linear models. We explain why this is the case. Our findings suggest that common approaches used to measure advertising effectiveness in industry fail to accurately measure the true effect of ads.",
author = "Gordon, {Brett Russell} and Florian Zettelmeyer and Neha Bhargava and Dan Chapsky",
year = "2016",
month = "7",
day = "1",
language = "English (US)",
type = "WorkingPaper",

}

TY - UNPB

T1 - A Comparison of Approaches to Advertising Measurement

T2 - Evidence from Big Field Experiments at Facebook

AU - Gordon, Brett Russell

AU - Zettelmeyer, Florian

AU - Bhargava, Neha

AU - Chapsky, Dan

PY - 2016/7/1

Y1 - 2016/7/1

N2 - We examine how common techniques used to measure the causal impact of ad exposures on users’ conversion outcomes compare to the “gold standard” of a true experiment (randomized controlled trial). Using data from 12 US advertising lift studies at Facebook, comprising 435 million user-study observations and 1.4 billion total impressions, we contrast the experimental results to those obtained from observational methods, such as comparing exposed to unexposed users, matching methods, model-based adjustments, synthetic matched-markets tests, and before-after tests. We show that observational methods often fail to produce the same results as true experiments, even after conditioning on information from thousands of behavioral variables and using non-linear models. We explain why this is the case. Our findings suggest that common approaches used to measure advertising effectiveness in industry fail to accurately measure the true effect of ads.

AB - We examine how common techniques used to measure the causal impact of ad exposures on users’ conversion outcomes compare to the “gold standard” of a true experiment (randomized controlled trial). Using data from 12 US advertising lift studies at Facebook, comprising 435 million user-study observations and 1.4 billion total impressions, we contrast the experimental results to those obtained from observational methods, such as comparing exposed to unexposed users, matching methods, model-based adjustments, synthetic matched-markets tests, and before-after tests. We show that observational methods often fail to produce the same results as true experiments, even after conditioning on information from thousands of behavioral variables and using non-linear models. We explain why this is the case. Our findings suggest that common approaches used to measure advertising effectiveness in industry fail to accurately measure the true effect of ads.

M3 - Working paper

BT - A Comparison of Approaches to Advertising Measurement

ER -