Minimizing waste should be every advertiser’s top priority.  But until recently, measuring the effectiveness of out-of-home (OOH) advertising campaigns wasn’t easy. That’s why marketers have primarily used the medium for branding in the past.

Fortunately, the black hole of OOH performance data is a thing of the past. With the right partner, you can directly quantify the effectiveness of your OOH campaigns and optimize across multiple campaign variables to improve your outcomes.

At AdQuick, we recommend two methodologies most often: attribution analysis and causal lift analysis.

Understanding the Best-Performing Components of Your Campaigns

Attribution analysis helps you pinpoint the campaign components that drove the most (and most cost-effective) outcomes – i.e.,  the relative contribution of different ad units, ad creative, media vendors, etc.  This means attribution is best for campaign optimization: it helps you understand “what’s working” and what’s not working, relatively speaking, so you can shift dollars away from the latter and toward the former.

Ideally, you should perform attribution analysis on every campaign, so you can continually improve your campaign performance.  AdQuick makes this easy, with an intuitive attribution dashboard that’s included with every attribution-eligible campaign entirely free of charge.

AdQuick makes it easy to directly attribute online and offline events to your out-of-home ad exposures –– all surfaced in an intuitive, interactive dashboard.

With that said, attribution does not help you truly understand the effectiveness of your campaigns, because attribution does not imply causation.  It provides a useful relative measure of effectiveness, but it cannot definitively tell you that an OOH advertising campaign was the direct cause of a desired outcome.

Quantifying the True Impact of Your Campaigns

To demonstrate causality, marketers must isolate the impact of their OOH media.  After all, there are many things that could cause an increase in sales (above and beyond a marketing campaign), and not all correlations are causal.   For example, a competitor could have supply issues that cause their customers to switch to your product instead of theirs. If their supply snafu aligns with the timing of your marketing campaign, you may incorrectly credit 100% of the sales increase to your campaign.

The best way to measure causal effect is with experiments. And the gold standard in measurement is a randomized controlled trial – a scientific experiment where a test and control group are randomly drawn from the same pool.  This, of course, is not possible with out-of-home media. So marketers have relied on a less-randomized version of controlled experiments to measure the effectiveness of their out-of-home campaigns…

The Old Guard: Control-vs-Exposed Market Analysis

Historically, there have been a limited number of options for marketers hoping to use controlled experiments to understand the true effectiveness of their out-of-home advertising.  The most common approach is to set up control-vs.-exposed experiments using different markets.  These usually work as follows:

1. Start with a market in which you plan to run your OOH campaign. This is your “exposed” or “test” group.
2. Choose a separate market in which you will not run any media. This is your “control” group.
3. Measure outcomes from both groups. These may be survey results (if, for example, you hope to measure lift in brand metrics like unaided or aided awareness, message association, brand preference, or purchase intent) or sales-related metrics like revenue or store visits.
4. Once you’ve collected your data, compare the difference between the two groups. This difference represents the impact of your out-of-home advertising investment.

For example, if sales in the control market were $2.5M, and sales in the exposed market were $4.5M, then you’d conclude that your OOH campaign drove an 80% lift in sales.
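The arithmetic behind that conclusion is a simple percentage-change calculation. Here it is as a quick sketch, using the example figures above:

```python
# Example figures from above: sales in each market.
control_sales = 2_500_000   # market with no OOH media
exposed_sales = 4_500_000   # market where the campaign ran

# Lift = (exposed - control) / control, expressed as a percentage.
lift_pct = (exposed_sales - control_sales) / control_sales * 100
print(f"Estimated lift from the OOH campaign: {lift_pct:.0f}%")  # prints 80%
```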

Challenges with Control-vs-Exposed Market Analysis

There are, of course, challenges with these control-vs-exposed studies.

First, since the test and control groups are not assigned at random, there are many external “confounding” variables that can influence the outcome of a study.  For example, there may be a different number of physical store locations in the control markets vs. the test markets. Or there may be different demographics among their populations.  Variables like these can be taken into account during market selection and even smoothed over during the final analysis.  

But what about the confounding variables that you don’t realize exist?  An infamous study once suggested that nightlights cause myopia, because kids who slept with nightlights were more likely to grow up nearsighted.  Further research later uncovered that nearsighted parents were more likely to put nightlights in their kids’ rooms than other parents, so heredity was the real cause all along.  The point?  There is a practically infinite number of potentially confounding variables, and it’s unrealistic to think you can account for them all.

Second, it can be quite expensive and time-intensive to set up control and test markets. Tests have to be designed, and analyses must be done. This prep work is a drain on your internal resources and your budget.

Lastly, appropriate test setup requires time that isn’t always available.  Perhaps you secured the budget for measurement initiatives too late, or you thought you had set up the test correctly only to realize after the campaign started that you’d overlooked an important element.  With control-vs-exposed market analysis, the preparation must occur in advance.  If it doesn’t, you’re simply out of luck.

Causal Lift Analysis to the Rescue

Causal Lift Analysis (or causal impact analysis) is an approach to estimating the causal effect of your out-of-home marketing campaign on a desired outcome – like sales, app installs, web conversions, or in-store visits.  As opposed to controlled experiments, causal lift analysis uses what statisticians refer to as “observational analysis methods” to understand causal effect.

Here’s how it works: we build a model (technically speaking: a Bayesian structural time-series model) that looks at a historical sales pattern and creates a prediction of its future course.  This is similar to the control group in a controlled experiment (in fact, some describe them as “synthetic controls”, because in the absence of an actual experiment, there’s no “control” in the usual sense).  We then compare this prediction to actual results (similar to the exposed group), and the difference represents the impact of your out-of-home media investment.

Now you might ask, “How do you come up with a specific prediction for the group that didn’t see my ad campaign?”  The trick is to use other time-series data (ideally multiple datasets) related to the desired outcome – i.e., time-series data that could not have been influenced by the campaign, but that are predictive of your desired outcome.  Examples include web searches for your industry, searches for your competitors’ products, or sales data from multiple other markets. These series are typically correlated with your outcome metric but unaffected by your campaign.  We then use these data to train the model that ultimately generates our prediction.

Of course, we run these estimates many, many times.  And this allows us to create a distribution of the causal effects, which in turn allows us to quantify a confidence interval for our final estimate.
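To make the mechanics concrete, here is a deliberately simplified sketch. AdQuick’s production model is a Bayesian structural time-series model; this toy version swaps in an ordinary linear regression plus a residual bootstrap, and every number is made up. But the shape of the procedure is the same: train a counterfactual on pre-campaign data, compare it to post-campaign actuals, and quantify the uncertainty of the estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Toy data (entirely synthetic): 80 pre-campaign weeks, 20 campaign weeks. ---
n_pre, n_post = 80, 20
covariate = rng.normal(100, 10, n_pre + n_post)   # e.g. category-level search volume
sales = 2.0 * covariate + rng.normal(0, 5, n_pre + n_post)
sales[n_pre:] += 30.0                             # the "true" campaign effect to recover

pre_x, pre_y = covariate[:n_pre], sales[:n_pre]
post_x, post_y = covariate[n_pre:], sales[n_pre:]

# Fit on the pre-period only: the covariate is predictive of sales
# but cannot have been influenced by the campaign.
slope, intercept = np.polyfit(pre_x, pre_y, 1)
counterfactual = slope * post_x + intercept       # the "synthetic control"

# The estimated lift is actual minus predicted.
effect = post_y - counterfactual

# Bootstrap the pre-period residuals to put a confidence interval on the estimate.
residuals = pre_y - (slope * pre_x + intercept)
draws = [effect.mean() + rng.choice(residuals, n_post).mean() for _ in range(2000)]
lo, hi = np.percentile(draws, [2.5, 97.5])
print(f"Estimated weekly lift: {effect.mean():.1f} (95% interval: {lo:.1f} to {hi:.1f})")
```

A single-covariate regression like this can’t handle trend or seasonality, which is part of why the real approach uses a structural time-series model; the sketch only illustrates the counterfactual-plus-uncertainty logic.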

Source: Inferring the effect of an event using CausalImpact by Kay Brodersen, Big Data Conference Spain

Easy peasy.  :)

Ultimately, this approach allows us to calculate the ROI of a specific marketing campaign far more accurately, because we can be much more confident that the campaign (and not some unidentified confounding variable) caused the lift.

Validating the Accuracy of Causal Lift Analysis

“Wait”, you might say, “how do we know these are accurate estimates?”  Good question.  The best way to validate this approach is to – you guessed it – run a controlled experiment.  And data scientists at Google have done just that: they’ve run causal impact studies on data for which they’d already run a controlled experiment.  Here’s a snapshot comparison of the two results:

Source: Inferring the effect of an event using CausalImpact by Kay Brodersen, Big Data Conference Spain

As you can see, the two charts are almost identical.  This tells us that the method did an excellent job of estimating how many outcomes would have occurred without a campaign, so the difference between the “synthetic control” and exposed groups can be accurately quantified.

The Benefits of Causal Lift Analysis

Now that we understand how causal lift analysis works (and that it does in fact work!), let’s review some of its unique benefits.

It’s flexible.  Unlike many other measurement approaches, causal impact analysis can be used to measure many different types of outcomes – including offline store visits, online web visits, online sales, offline sales, app installs, and more.  

It’s retroactive.  Because causal lift analysis uses your backend sales (or other outcome) data, you can run the analysis at any time.  No long lead-times or additional pre-planning required.

It’s easy to set up. Causal impact analysis requires only a minimal amount of information.  No pixel implementation is required (though that is an option!), and it doesn’t involve any third-party partners, which tend to come with incremental costs.
For example, here is a sample data template that we provide to customers:

… Four columns.  It doesn’t get much easier than that.

It’s secure.  At AdQuick, we employ robust security measures to protect all the data we receive and collect.  But with causal impact analysis, you have complete control over the data you share.  If you’re uncomfortable sharing sales or other outcome data with outside parties, we have a simple solution to obfuscate your data: pick a random number, and multiply your event counts by it before sharing.  When we provide the estimated number of attributed sales events, simply divide by the same number to recover the true figure.
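Here’s a minimal sketch of that obfuscation scheme. The event counts are hypothetical, and the 10% “model output” is a stand-in for whatever the analysis actually reports:

```python
import random

# Hypothetical daily sales-event counts you'd rather not share in the clear.
true_events = [120, 95, 143, 210, 188]

secret_factor = random.uniform(2.0, 10.0)   # known only to the advertiser

# Share the scaled counts with the measurement partner instead.
shared_events = [count * secret_factor for count in true_events]

# Suppose the analysis attributes 10% of the shared events to the campaign
# (a placeholder for the model's real output, still in scaled units).
attributed_scaled = sum(shared_events) * 0.10

# Dividing by the secret factor recovers the true attributed count.
attributed_true = attributed_scaled / secret_factor
print(f"Attributed events (real units): {attributed_true:.1f}")
```

Because the model estimates lift proportionally, scaling every count by the same factor leaves the analysis intact while hiding your real numbers.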

It’s reliable. At its core, causal impact analysis is just a lot of advanced math.  There are no biases in math.  Plus, we provide confidence intervals so you know exactly how much to trust the results.


Every advertiser deserves to understand their campaign effectiveness via consistent attribution and lift analysis.

At AdQuick, we recommend running attribution on every campaign (remember: we include this as added value for every applicable campaign), and periodically running causal lift studies to measure your ROI as well as improvements on upper funnel and lower funnel KPIs.

Just be forewarned: once you employ modern measurement solutions, you’ll probably get addicted to seeing what your OOH campaigns can do, well beyond brand awareness alone. Take a recent campaign we ran for a banking app seeking to improve brand consideration and drive subscriptions for their new reward-earning debit card.  We found that the campaign generated an impressive 39% lift in new subscriptions – far above the brand’s benchmark metrics – at a 98.7% confidence level.  And if they hadn’t run a causal impact study, the brand wouldn’t have had the opportunity to demonstrate those impressive results!

Want to learn more about measuring the effectiveness of your out-of-home advertising campaigns?  Connect with one of our out-of-home experts today!