Why ad agencies should embrace A/B testing, too
A few years ago, when Facebook engagement ads were just taking off, Kevin Colleran, at the time still working for the social media behemoth (he was employee number 10 and its first sales executive; now a venture capitalist, what else?), told me that the way to make your Facebook ads really effective was to give the network three or four versions and let Facebook test them in a real environment. That way Facebook could virtually guarantee the efficacy of your brand message.
He mentioned that you’d be surprised at what performed best. For example, you might have thought that videos would drive deeper engagement, and you’d be wrong. You could hypothesize anything, in fact, but why bother when it was so easy to get proof of what worked simply by trying a few options?
I asked Kevin how many agencies took Facebook up on the offer, and he answered: hardly any. Unless they were a direct response firm, it just wasn’t in their DNA. So Facebook would itself initiate comparisons for brands in order to prove the value.
This week Wired had a great piece on A/B testing and how it has become the “open secret of high-stakes web development.” It’s the formula by which almost all of Silicon Valley (maybe not Apple) improves its online products. Real time focus group testing in real life environments.
It’s really just a technique that derives from classic direct marketing: beat the control. In the days of envelopes and stamps, however, it took multiple tries, and that could take many months, as you had to conceive, write, print, mail, analyze data and try again.
On the web, of course, the process takes but a few hours. Change a color, an image or a headline and the impact on action taken could be significant. You may never know why, but that’s not the point.
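For readers who want to see what “measuring the impact on action taken” actually looks like, here is a minimal sketch of the arithmetic behind a basic A/B comparison. The click counts are entirely hypothetical, and this is the standard two-proportion z-test rather than anything specific to Facebook’s tools:

```python
import math

def ab_z_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variant B's conversion rate
    significantly different from variant A's?"""
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    # Pooled rate under the null hypothesis that the variants are equal
    p = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return p_a, p_b, z

# Hypothetical numbers: headline A got 120 clicks in 10,000
# impressions; headline B got 165 clicks in 10,000.
p_a, p_b, z = ab_z_test(120, 10_000, 165, 10_000)
print(f"A: {p_a:.2%}  B: {p_b:.2%}  z = {z:.2f}")
# A |z| above roughly 1.96 suggests significance at the 95% level
```

The point of the post stands either way: you don’t need to know *why* headline B wins, only that the numbers say it does.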
Yet many ad agencies now getting into the digital business – creating websites, apps, online experiences and more – remain averse to A/B testing, or at least unaware of its potential. Why? For no other reason than that the old linear process by which we made advertising – strategy, concept, approval, production, distribution – remains embedded in muscle memory. Or, even more likely, because most ad agencies, along with plenty of companies in other industries, still practice HiPPO decision making: the highest-paid person’s opinion determines what to do.
But read the Wired piece. Consider the dramatic improvements that A/B testing can yield, as well as the frequency with which the HiPPOs are wrong, and you’ll certainly conclude the strategy has a place in anything we do online. Ads, websites, apps, social engagement.
Maybe I should have done two versions of this post to see which one gets the most traffic.
Your thoughts? Are you using A/B testing for any of your online initiatives? Why not give it a try?
edwardboches It's worth checking out vansydow's presentation from Cannes in 2010 on A/B testing display ads, which puts this into the larger context of Agile Advertising: http://www.slideshare.net/gustavvs/agile-advertising-burt-at-cannes-lions
Thanks Edward for the post. Hang in with me on this longer reply.
While I would always agree that testing is critical and not done enough, A/B testing is tough due to many practical constraints online. This is particularly true in the display advertising world. In fact, there is academic literature highlighting the many challenges of running A/B tests online (http://ronnyk.web.officelive.com/UnexpectedResults.aspx).
The majority of the issues described there arise in a controlled environment, a single web site. When you consider the complex world of display advertising that exists across a gazillion sites and real-time bidding exchanges, these problems become even more difficult. This doesn't even get into the costs of running A/B tests and the overhead of making sure they are done right. These costs include paying for control advertisements that don't advertise for your company, the overhead of implementing the A/B tests correctly, and the costs associated with waiting for results. We have come across many advertisers that have been completely confused about the A/B tests they were running because the results made absolutely no sense. This was almost certainly due to some unknown bias in the implementation of the A/B test. In fact, some practitioners have gone as far as suggesting that the A/B tests themselves need to be A/B tested to make sure they work correctly. Honest. This drives up the costs even further.
Moreover, in cases where the outcome of interest is rare, A/B tests become extremely expensive, since you need to show so many controls to have enough signal to measure effectiveness. For this reason, it is important that we as an industry also begin to explore observational methods for estimating the impact of advertising. These approaches will allow companies to make use of the valuable data they collect and use it to measure effectiveness. We should think of these as observational experiments, and they will allow us to address some of the issues that make A/B testing cost prohibitive. We at m6d, led by one of our data scientists, Ori Stitelman, have begun to take the first steps in looking at these approaches (http://m6d.com/blog/the-science-behind-nice/). In addition to the cost savings, observational methods allow one to look back at completed campaigns as well as ask other questions that may interest advertisers. Though observational approaches have their own set of issues and limitations, it is important that we as an industry explore all possible ways to estimate advertising effectiveness, as finding a good solution is important for all of us. Ultimately, all advertising should be judged by how it impacts the people who see and interact with the message. Isn't that the reason we show the advertising to begin with?
PenryPrice Penry, thanks so much for this. I may have taken an overly simplified view, at least compared to the thoroughness and clarification you've added. Will look at your links and get smarter. But it does seem that with tools like Optimizely, it will be easier for anyone putting something online to at least evaluate options when it comes to driving purchase or signup, requesting more information, going deeper into the engagement, or simply reducing bounce rates. More to learn.
edwardboches I would say not a simplified view at all, but a call to measure and test more. Bravo. And yes, there will be new tools and testing methods that will make us smarter, better, and more productive. What a great time for advertising!
I find it baffling that no A/B testing is done. I thought it was done for most advertising forms. How does an agency know it is maximizing performance if it does not do this type of testing? I mean, products spend forever in R&D, focus groups and surveys before reaching market, and there is still only a 10% success rate after one year. Do ad agencies view the short life of a campaign as a license to waste brand money? Digital or otherwise? Do brands allow this because they just assume that what is sent out was tested, or the best it can be?
HowieSPM I think there is plenty done for things like online ads. But not that much, yet, for sites. Depends on the client, their willingness to pay for alternatives, and whether they value learning that way rather than simply approving stuff.
edwardboches This might be a question for offline, Edward, but Mullen seems to be far ahead of even your sister agencies in how you view advertising, creativity, platforms, and spending client money wisely. Does this help your client pitches? Meaning, do the clients ask about these things vs. "wow, that idea seems neato"?
@HowieSPM I think the biggest advantage we have right now is a "hyper bundled" approach. Combined disciplines that genuinely work together let us put all the pieces together. On the digital side, an investment in social, CT/developers and analytics helps check off the accountability ones, too. That being said, it's still about the "work" and business-building ideas, whether media, social, strategic or creative.
I agree that A/B testing is a powerful, and often overlooked, tool. However, I'd argue the reason for this has to do with the ad community's focus on branding. If frequency builds brand awareness, why would a brand change its message? Look? Feel? The old-school rules of branding seem to directly oppose the rapid iteration approach of the tech community.
brookerandel That, too. Though I'm not sure that branding and results are incompatible. Test two things that are on brand. And never test anything that's crap anyway. Seems to me that Obama is the same brand whether he posts an image or a photo, whether he says "Make a donation" or "Learn more." Yet the two options yield dramatically different results.