Controlling for Impression Volatility in Digital Ad Spend Tests @DataXu
I've recently been involved in evaluating the results of a matched market test that looked at the impact of changes in digital advertising spend by comparing test vs. control markets and by comparing the differential lift in those markets over prior periods (e.g., year on year). One of the challenges in such tests is significant "impression volatility" across time periods -- basically, each dollar can buy you very different volumes of impressions from year to year.
You can unpack this volatility into at least three components:
- changes in overall macro-economic conditions that drive target audiences' attention,
- changes in the buying approach you took and the networks you bought through, driven by network-specific structural factors (like which publishers are included) and supply-demand factors (like the relative effectiveness of the network's targeting approach),
- changes in "buy-specific" parameters (like the audiences and placements sought).
Let's assume you handle the first with your test / control market structure. Let's also assume the third is held as constant as possible for the purposes of the test (that is, buying the same properties and audiences, and using the same ad positions and placements). So my question was: how much volatility does the second factor contribute, and what can be done to control for it in a test?
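For concreteness, here's a rough sketch of the differential-lift (difference-in-differences) structure I'm describing. The market names and figures are made up, just to show the shape of the comparison and why the test / control structure soaks up the macro-economic factor:

```python
# Hypothetical difference-in-differences lift comparison: test vs. control
# markets, current period vs. prior period. All figures are invented.
sales = {
    # market:          (prior period, current period)
    "test_market":    (100_000, 118_000),  # spend was changed here
    "control_market": (100_000, 106_000),  # spend held steady here
}

def lift(prior, current):
    """Period-on-period lift as a fraction."""
    return (current - prior) / prior

test_lift = lift(*sales["test_market"])        # 0.18
control_lift = lift(*sales["control_market"])  # 0.06

# Macro conditions (the first factor) hit both markets alike, so
# differencing the lifts nets them out of the comparison.
differential_lift = test_lift - control_lift   # 0.12
print(f"Differential lift attributed to the spend change: {differential_lift:.1%}")
```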
Surfing around, I came across DataXu's March 2011 Market Pulse study. DataXu is a service that lets you buy across networks more efficiently in real time -- sort of what Kayak would be to travel if it were a fully automated agent and you flew every day. The firm noted a year-on-year drop in average daily CPM volatility from 102% to 42% between May 2010 and February 2011 (meaning, I think, the average day-to-day change in price across all networks in each of the two months compared). They attributed this to "dramatically increased volume of impressions bought and sold as well as maturation of trading systems". Even so, the study still pointed to a 342% difference in average indexed CPMs across networks during February 2011.
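The snippet below just applies that reading of the metric -- mean absolute day-to-day change in CPM, averaged across networks -- to invented price series. It's my interpretation, not DataXu's methodology or data:

```python
# Reading "average daily CPM volatility" as the mean absolute day-to-day
# percentage change in CPM, averaged across networks. CPMs are invented.
daily_cpms = {
    "network_a": [2.00, 3.10, 1.90, 2.80, 2.10],
    "network_b": [1.50, 1.55, 1.48, 1.60, 1.52],
}

def avg_daily_volatility(prices):
    """Mean absolute percentage change between consecutive days."""
    changes = [abs(today - yesterday) / yesterday
               for yesterday, today in zip(prices, prices[1:])]
    return sum(changes) / len(changes)

per_network = {name: avg_daily_volatility(p) for name, p in daily_cpms.items()}
overall = sum(per_network.values()) / len(per_network)
print(f"Average daily CPM volatility across networks: {overall:.0%}")
```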
A cross-network spread this big naturally piqued my interest, so I read further into the report to understand it. The top of page 2 of the report summary presents a nice graph of average monthly indexed CPMs across 11 networks, and indeed shows a 342% difference between the highest- and lowest-priced networks. Applying "Olympic scoring" (tossing out the highest- and lowest-priced networks) cuts that difference to about 180%, or roughly in half -- still a significant discrepancy, of course. Looking further, one standard deviation across the whole sample (including the top and bottom values) is about 44%. Again, perhaps a bit less dramatic for marketers' tastes, but still a lot.
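To show the arithmetic behind those three numbers, here are the same calculations run on an invented set of indexed CPMs for 11 networks (the summary doesn't publish the per-network values), so the outputs roughly echo, but won't exactly match, the figures above:

```python
import statistics

# Invented indexed CPMs for 11 networks (cheapest network indexed to 100).
indexed_cpms = [100, 110, 130, 150, 170, 190, 215, 245, 275, 308, 442]

# Spread between the highest- and lowest-priced networks (the "342%" figure).
full_spread = (max(indexed_cpms) - min(indexed_cpms)) / min(indexed_cpms)

# "Olympic scoring": toss out the highest- and lowest-priced networks.
trimmed = sorted(indexed_cpms)[1:-1]
olympic_spread = (max(trimmed) - min(trimmed)) / min(trimmed)

# One standard deviation of the whole sample, relative to the mean.
stdev_vs_mean = statistics.stdev(indexed_cpms) / statistics.mean(indexed_cpms)

print(f"Full spread: {full_spread:.0%}, "
      f"Olympic spread: {olympic_spread:.0%}, "
      f"1 std dev: {stdev_vs_mean:.0%} of the mean")
```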
(It's hard to know how "equivalent" the buys compared were, in terms of volumes, contextual consistency, and audience consistency, since the summary doesn't address these. But let's assume they were, roughly.)
So what? If your (display) ad buys are not so property-specific or tightly audience-targeted that run-of-network buys in contextual or audience categories are out of the question, future tests might channel buys through services like DataXu and declare the buys "fully price-optimized" across the periods and markets compared, letting you ignore +/- ~50% "impression volatility" swings (assuming the February 2011 spreads hold).
However, if what you're buying is very specific -- available only through direct purchase, or through one or two specialized networks at most -- then you ignore the second factor, trust the laws of supply and demand, and assume you've bought essentially the same "attention" regardless of the difference in impression volume.
I've asked some knowledgeable friends to weigh in with their perspectives, and will pass along their ideas. Other feedback is welcome, especially from digital advertising / testing pros! Oh, and if you're really interested, check out the DataXu TC50 2009 pitch video.