
How Significant Is Statistical Significance?

by Amobee, May 01, 2017

In the world of advertising, the number of actions a marketer can take is virtually limitless. Many savvy marketers have started to utilize analytics and adopt the discipline of statistics to help them make smarter decisions that lead to better outcomes. One of the most common statistics-related questions marketers ask when looking at a test result is, “Are the results statistically significant?”

Some marketers care very little about the significance of a test, while other marketers treat statistical significance as the make-or-break criterion for whether a campaign ran successfully, regardless of the results. Just how significant is statistical significance (see what I did there)? Well, first let’s understand what the term “statistical significance” truly means in the world of statistics.

Statistical Significance Explained

What does it mean when a campaign manager reports, “Based on the survey responses from the control group and the exposed group, our campaign has raised purchase intent from 10% to 14%, a 4-percentage-point increase (a 40% relative lift), with the result being statistically significant at a 95% confidence level”? The most important thing marketers need to know is that this statement does not mean there is 95% confidence that the campaign achieved a 4-point increase in purchase intent! In fact, the statement merely says that we are 95% confident the campaign drove an increase in purchase intent that is different from zero. In other words, statistical significance only tells marketers the increase is likely not zero; it does not convey the real magnitude of the increase.
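
As a quick aside on the arithmetic, the “4% increase” and “40% lift” in that statement are two different measures of the same change. Here is a minimal sketch using only the 10% and 14% figures from the example:

```python
# The survey values (10% control, 14% exposed) come from the example above;
# everything else is plain arithmetic.
control_intent = 0.10   # purchase intent in the control group
exposed_intent = 0.14   # purchase intent in the exposed group

absolute_increase = exposed_intent - control_intent   # 0.04 -> 4 percentage points
relative_lift = absolute_increase / control_intent    # 0.40 -> 40% relative lift

print(f"Absolute increase: {absolute_increase:.0%} (percentage points)")
print(f"Relative lift:     {relative_lift:.0%}")
```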

To understand why, we have to remember what we learned in that college statistics intro class in which we just couldn’t stay awake. When a result achieves statistical significance, it merely suggests that we can reject the null hypothesis with a pre-specified degree of confidence (e.g., 95%). In most tests, that null hypothesis states that the difference in the parameter being measured (e.g., purchase intent) between our testing groups is equal to zero. Hence, rejecting the null hypothesis merely concludes that the lift we measured is greater than, or sometimes less than, 0% (the alternative hypothesis).
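
To make the mechanics concrete, here is a minimal sketch of such a test as a two-proportion z-test in Python. The 10% and 14% purchase-intent rates come from the example above; the sample size of 1,000 respondents per group is an assumption for illustration:

```python
from statsmodels.stats.proportion import proportions_ztest

successes = [140, 100]    # "intends to purchase" counts: exposed, control (assumed)
samples   = [1000, 1000]  # respondents surveyed in each group (assumed)

# H0: the difference in purchase intent between the two groups is zero.
z_stat, p_value = proportions_ztest(successes, samples)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")

# "Significant at the 95% confidence level" simply means p < 0.05,
# i.e., we reject H0 that the true lift is exactly 0%.
if p_value < 0.05:
    print("Reject H0: the lift is statistically different from zero")
```

Note that rejecting the null hypothesis says nothing about whether the true lift is 1% or 8%; it only rules out zero.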

In the statement above, the reported 4% increase in purchase intent is merely an estimate, and “statistically significant at a 95% confidence level” just means we are 95% sure the value of the difference is not 0%, period. In fact, it is statistically improbable that the true lift is exactly 4.00%.

A Better Way to Gauge Performance

If statistical significance doesn’t tell marketers much other than that the tested difference is not zero, how can marketers better gauge the actual performance of the test? For starters, marketers can ask, “What is the 95% confidence interval of the test results?” The nice thing about the confidence interval (CI) is that instead of giving a single statistic to describe the parameter, it gives two, a lower and an upper bound, which better describe the parameter estimate and offer better insight into the “robustness” of the test results.
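
Here is a sketch of how such an interval can be computed for a difference in proportions, using the standard normal approximation. The rates come from the earlier example; the per-group sample size of 1,000 is again an assumed figure:

```python
import math
from scipy.stats import norm

p_exposed, n_exposed = 0.14, 1000  # exposed-group rate; sample size assumed
p_control, n_control = 0.10, 1000  # control-group rate; sample size assumed

diff = p_exposed - p_control  # point estimate: 0.04 (4 percentage points)

# Standard error of the difference between two independent proportions
se = math.sqrt(p_exposed * (1 - p_exposed) / n_exposed
               + p_control * (1 - p_control) / n_control)

z = norm.ppf(0.975)  # ~1.96 for a two-sided 95% interval
lower, upper = diff - z * se, diff + z * se
print(f"95% CI: [{lower:.1%}, {upper:.1%}]")  # two numbers, not one
```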

Take the following three test results from Figure 1 as an example. All three tests measured exactly the same result: a 4% increase in purchase intent between the exposed group and the control group. That said, Test A was not statistically significant. When we draw the 95% CI under the probability density curve, we see the true difference lies somewhere between -1% and 9%, and because this range includes the 0% mark, the value under the aforementioned null hypothesis, the test is considered statistically insignificant: we cannot reject the null hypothesis at the 95% confidence level.

On the other hand, Test B and Test C are both statistically significant because neither of their 95% CIs includes the 0% mark. But while Test B has a 95% CI ranging from 1% to 7%, Test C has a narrower 95% CI ranging from 3% to 5%. It should now be fairly obvious why knowing the 95% CI gives marketers far greater information about their test results. Not only does it tell us whether the test is significant (does the interval include zero or not?), it also tells us how far the lower bound of the CI sits from 0%, our null hypothesis, allowing marketers to gauge the “robustness” of the reported results as well as the magnitude of the least favorable outcome (1% vs. 3%). Smart marketers should always ask for the CI rather than just the significance.
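
Reading significance and robustness off an interval is mechanical, as this sketch using the three CIs quoted above illustrates:

```python
# Lower and upper bounds of the 95% CIs for Tests A, B, and C (Figure 1)
tests = {
    "Test A": (-0.01, 0.09),  # includes 0% -> not significant
    "Test B": (0.01, 0.07),   # excludes 0% -> significant
    "Test C": (0.03, 0.05),   # excludes 0% and narrower -> more robust
}

for name, (lower, upper) in tests.items():
    significant = lower > 0   # for a positive lift, the CI must exclude 0%
    print(f"{name}: significant={significant}, "
          f"least favorable lift={lower:.0%}, CI width={upper - lower:.0%}")
```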

Should Marketers Dismiss Insignificant Results?

Now that we understand what it means to be statistically significant, should marketers ignore results that are statistically insignificant, assuming no additional tests can be run? Not necessarily, so long as we are sure the test was conducted properly (which is a whole new blog topic). Going back to Test A in our prior example: although the 4% increase was not significant, its 95% CI still tells us that we are 95% sure the true lift is somewhere between -1% and 9%. While there is a chance that the true lift is below zero, there is also a chance that it is far greater than zero, with 4% as the point estimate in the middle.

In fact, regardless of whether the result was statistically significant, this 4% point estimate is a marketer’s “best bet” for how well the campaign performed in the absence of any other measurements. Statistically speaking, this point estimate is also known as the “maximum likelihood estimate,” because it can be shown mathematically that 4% is the value of the true difference that maximizes the likelihood of the observed data; it always sits at the peak of the probability density curves in Figure 1. Marketers just need to be aware that the result from Test A in our prior example is not as “robust” as the results from the other two tests. A decision based on an insignificant result should be made with greater caution, and if a budget is associated with such a decision, marketers can consider reducing that budget or slowing its pacing according to the width of the CI to help manage the associated risk.
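
For readers who want to see the “maximum likelihood” claim in action, here is a sketch that grid-searches candidate true rates and confirms the likelihood of the data peaks at the observed proportions. The raw counts and the 1,000-per-group sample sizes are assumptions, as before:

```python
import numpy as np
from scipy.stats import binom

n = 1000
exposed_yes, control_yes = 140, 100   # observed "intends to purchase" counts (assumed)

grid = np.linspace(0.01, 0.30, 2901)  # candidate true rates, step 0.01%

# Log-likelihood of each group's observed count under each candidate rate
ll_exposed = binom.logpmf(exposed_yes, n, grid)
ll_control = binom.logpmf(control_yes, n, grid)

mle_exposed = grid[np.argmax(ll_exposed)]  # peaks at the observed 14%
mle_control = grid[np.argmax(ll_control)]  # peaks at the observed 10%
print(f"MLE lift: {mle_exposed - mle_control:.1%}")  # ~4.0%
```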

Note that some vendors might advocate using lower confidence levels, such as 70% or 60%, so they can report that a result is “statistically significant” at that lower level. While that is statistically valid, it is this author’s humble opinion that marketers still benefit more from knowing the exact 95%, or at least 90%, confidence intervals.
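
To see why lowering the confidence level makes significance easier to claim, consider this sketch. It back-calculates Test A’s standard error (roughly 2.55 points, an assumption derived from its 95% CI of -1% to 9%), then recomputes the interval at lower confidence levels; the same result that fails at 95% “passes” at 70%, simply because the interval is narrower:

```python
from scipy.stats import norm

diff, se = 0.04, 0.0255  # point estimate; SE back-calculated from Test A's CI

for level in (0.95, 0.90, 0.70, 0.60):
    z = norm.ppf(0.5 + level / 2)  # two-sided critical value
    lower, upper = diff - z * se, diff + z * se
    print(f"{level:.0%} CI: [{lower:.1%}, {upper:.1%}]  "
          f"{'significant' if lower > 0 else 'not significant'}")
```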

What Does This All Mean for Marketers?

So how important is statistical significance? While it is important to know whether the results are significant, marketers should always ask for confidence intervals to better gauge the robustness of the reported statistics. At the same time, marketers should not dismiss a non-significant positive result too quickly; they should proceed with any decision-making with a higher degree of caution while looking for additional indicators to help validate the test result. Finally, if a test result comes back negative, marketers should question whether the experiment was conducted properly, and whether what they wanted to measure was what the test actually ended up measuring, before deeming that the campaign had no effect.

As marketers use their test results to make better, more actionable decisions, understanding the confidence intervals of these test statistics can help take the guesswork out of the decision-making process and quantify the risk and uncertainty associated with those decisions.

For more on using analytics in marketing, see our blog series on building an analytic framework.

