
March 25, 2009

MITX Measurement 2.0 Panel Recap

Yesterday morning I went to a MITX panel discussion titled "Measurement 2.0: How to Tell the Full Digital Story". With 110 folks, it was SRO at Google's pad in Kendall Square. Charlie Ballard from One to One Interactive (sponsor of other cool MITX panels) moderated, and the other panelists included Paul Botto, head of GA Enterprise Sales at Google; Morris Martin from Microsoft's Atlas Institute (that's him in the banner picture); Visible Measures' VP of Marketing and Analytics Matt Cutler; Mike Schneider from Allen & Gerritsen; and my friend and colleague Ms. Perry Hewitt, CMO at the Cambridge-based social media measurement firm Crimson Hexagon.

Notwithstanding that it's so very 2004 to call anything "2.0" these days, Mike was right to point out that before we can expect dollars to move toward "Web 3.0", we've got to get Measurement 2.0 right first. Charlie usefully framed the distinction: if "1.0" is about optimizing within channel silos, "2.0" is about optimizing across them. Whether you like the moniker or not, I agree (not uniquely) with his premise.

Paul pushed the point further, arguing that to really understand a customer's experience, we need to move beyond a page-based measurement model to an event-based one. This is especially necessary in a rich media world (think YouTube), where an experience spanning interaction across multiple rich media objects can happen within the context of a single page. (Whether or not you agree, it's thought-provoking that while some pressures push us to think more macro (multi-channel), other technological developments push us to go more micro (intra-page). I wonder if the same design concepts (pathways, handoffs) apply "fractally".)

However, Mike took the view that we should be careful about introducing new, more exotic frameworks into a world where standards are such that we still can't agree on what defines a visit. Matt pointed out that event tracking generates 10-100x the data, further complicating matters. I'm in between: if you've got a whole lotta Flash, you have no choice but to implement event-based measurement. But without agreed standards, you give up benchmarking, because your own site (and perhaps others your agency has implemented) will be your only apples-to-apples point of reference. (Paul indicated that event-based measurement is an invitation-only feature of GA. I asked for an invitation, and will report what I learn when I get to try it.)
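For the curious, here's a minimal sketch of what event tracking looks like with ga.js, the GA tracker of this era. The account ID and the category/action/label values below are hypothetical placeholders, not anything Paul showed.

```typescript
// Minimal sketch of GA event tracking with the ga.js tracker (circa 2009).
// The account ID and event names are hypothetical placeholders.
declare const _gat: {
  _getTracker(account: string): {
    _trackPageview(): void;
    _trackEvent(category: string, action: string, label?: string, value?: number): void;
  };
};

const pageTracker = _gat._getTracker("UA-XXXXXX-X");

// Page-based measurement: one hit per page load.
pageTracker._trackPageview();

// Event-based measurement: one hit per interaction, so a single rich media
// page can fire many of these -- hence Matt's 10-100x data volume point.
pageTracker._trackEvent("Videos", "Play", "homepage-demo");
pageTracker._trackEvent("Videos", "Pause", "homepage-demo", 42); // e.g. seconds in
```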

Charlie kicked off the questions for the panel by referring to the ur-text of multi-channel measurement tools, Suresh Vittal's Forrester Research report "Defining The Online Marketing Suite". Specifically, he asked if the centralized, "command and control" notion of tracking folks through a purchase pipeline across multiple channels still makes sense.

Matt's take was that the explosive rise of social media has pushed the centralized model toward obsolescence (so soon!). He argued that with the "conversation" happening in places that don't (yet) let you slip measurement tags into their "vessels", marketing needs to be more about tracking what's happening out there using tools (like Perry's firm's) that Suresh didn't cover then but since has. "Today, the center of gravity has moved, and marketing is much more like portfolio management," said Matt. He then pointed to a silver-lining opportunity: getting value from what he called "big data". He described how, in some presentations, he's successfully used tagcrowd.com to crunch a big bolus of comments on a video and visually convey their collective meaning.
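Under the hood, a tag cloud like tagcrowd's is essentially a word-frequency count. Here's a toy sketch of the idea; the sample comments and stop-word list are made up.

```typescript
// Toy word-frequency count of the kind behind a tag cloud.
// Sample comments and stop words are hypothetical.
const comments: string[] = [
  "love this video",
  "this blender is amazing",
  "amazing video, love it",
];

const stopWords = new Set(["this", "is", "it", "a", "the"]);
const counts = new Map<string, number>();

for (const comment of comments) {
  for (const word of comment.toLowerCase().match(/[a-z']+/g) ?? []) {
    if (!stopWords.has(word)) {
      counts.set(word, (counts.get(word) ?? 0) + 1);
    }
  }
}

// Rank by frequency; a tag cloud scales font size by these counts.
const ranked = [...counts.entries()].sort((a, b) => b[1] - a[1]);
console.log(ranked); // [["love", 2], ["video", 2], ["amazing", 2], ["blender", 1]]
```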

One question is, if we take his comments literally, are we back to local optimization of social stovepipes? And "big data" is only valuable if you've got a big pile of comments to crunch. What if no one comes to your party? In Long Tail space, no one can hear you scream. (Aside: this puts a premium on understanding viral propagation of your social media efforts as part of your portfolio management.)

Morris argued that the central model's value is just beginning to be realized, since it enables us to better understand the value of "upstream" investments and slowly ease away from over-emphasizing the "last click" -- whether that means being the last click (if you're a publisher) or paying for it (if you're an advertiser). Setting aside that Atlas is a display ad network with a natural interest in making this point, others have confirmed that display campaigns lift searches 15-20%. Knowing this, I think the opportunity is to do the math to determine the "effective CPA" of an extra dollar to search vs. an extra dollar to display.
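Here's a back-of-the-envelope version of that math. All the spend and conversion figures are hypothetical; the only number taken from the panel is the 15-20% search lift.

```typescript
// Back-of-the-envelope "effective CPA" for search vs. display.
// All figures are hypothetical except the 15-20% search lift cited above.
const searchSpend = 100_000;       // $/month on paid search
const searchConversions = 2_000;   // conversions credited to search
const displaySpend = 50_000;       // $/month on display
const displayLastClickConv = 250;  // conversions display earns on a last-click basis
const searchLift = 0.175;          // midpoint of the 15-20% lift

// Last-click view: display looks expensive.
const searchCPA = searchSpend / searchConversions;               // $50
const displayCPALastClick = displaySpend / displayLastClickConv; // $200

// Lift-adjusted view: credit display with the search conversions it induced.
// Without display, search would have produced searchConversions / (1 + searchLift).
const inducedSearchConv = searchConversions - searchConversions / (1 + searchLift);
const displayEffectiveCPA = displaySpend / (displayLastClickConv + inducedSearchConv);

console.log({ searchCPA, displayCPALastClick, displayEffectiveCPA });
// displayEffectiveCPA ≈ 50000 / (250 + 298) ≈ $91 -- still pricier than search
// in this example, but less than half what last-click accounting suggests.
```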

Charlie next asked, "How do we move from measurement to optimization?"  

Morris asserted that you've got to be able to track everything first, and that you shouldn't try to retrain media planners to work with a different process -- it's just too hard. He pointed us to Atlas' Engagement Mapping tool (launched a year ago; here's a review) as one option for optimizing within existing constructs.

Perry noted that one client told her that the client's thinking about optimization has shifted from "measure twice, cut once" to "measure twice, cut fast" -- the point being that media usage patterns are shifting quickly enough that a rough optimum appropriate to today is better than a perfect optimum appropriate to patterns we saw six months ago. Perry continued, "Agility is the core competence in optimization efforts today."

Picking up Perry's thread, Matt urged the audience to think carefully about what data to collect, distinguishing "just-in-time" from "just-in-case" data collection efforts. "A bigger regression won't help," he noted. "Even if it's more accurate, if people can't understand it they're unlikely to be able to act on it." He suggested focusing on a narrow set of metrics and trying to move the needle 10% first, then adding complexity to your models. And, as a way to avoid analysis paralysis, Mike advised starting with a likely story in mind to prove or disprove, rather than boiling the ocean (testing/regressing everything against everything) to find "emergent stories". Truly men after my own heart.

A logical extension of the points above, particularly Perry's, is to shift the balance in media mix modeling efforts away from back-testing and toward live A/B testing and passive measurement. Charlie moved to this question next, asking, "How far can it go?"

Paul described the great results Google has had (using Google Website Optimizer, natch) optimizing the Picasa download page. Testing 200 different versions, they settled on one that "none of us would have ever thought of" and that drove downloads 30% higher. Surprisingly, the words "free download" don't help. And for those who fear that testing curbs creative freedom, reducing us to no better than Shakespeare's monkeys, Paul pointed out that ironically the opposite has been true -- creative teams feel they don't have to "play it safe" and can explore more possibilities, knowing that testing will ultimately discipline the process. (Of course, this is true when experiments are as small as having or not having "free" on your page, but gets harder as the creative execution gets more expensive.)
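As a toy illustration of how a test like that separates a real winner from noise, here's a two-proportion z-test on download conversion rates. The visitor and conversion counts are invented, sized to mimic a 30% lift; this is the generic statistics behind such tools, not Website Optimizer's actual internals.

```typescript
// Toy two-proportion z-test for an A/B conversion result.
// Visitor and conversion counts are hypothetical, sized to mimic a 30% lift.
function zTwoProportions(cA: number, nA: number, cB: number, nB: number): number {
  const pA = cA / nA;
  const pB = cB / nB;
  const pPool = (cA + cB) / (nA + nB);                 // pooled conversion rate
  const se = Math.sqrt(pPool * (1 - pPool) * (1 / nA + 1 / nB));
  return (pB - pA) / se;                               // |z| > 1.96 ≈ 95% confidence
}

const control = { conversions: 1_000, visitors: 20_000 }; // 5.0% download rate
const variant = { conversions: 1_300, visitors: 20_000 }; // 6.5% = a 30% lift

const z = zTwoProportions(
  control.conversions, control.visitors,
  variant.conversions, variant.visitors,
);
console.log(z.toFixed(1)); // ≈ 6.4 -- far past 1.96, so the lift isn't noise
```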

Charlie's next question: "What about brand-focused advertising measurement?" Matt talked about how the emergence of online video and social media has brought the left and right brains together: in these media, it's now simultaneously possible to craft a story that traditional brand marketers love and to measure its impact, at least better than before, if not yet well enough. In particular, he told the story of a credit card company that syndicated a video widget and saw a big jump in applications from folks who viewed it. Perry told the story of how semantic analysis of an online crafting community's conversations (about vinyl home decor -- go figure) is being recycled to shape the creative execution of television spots for one of her firm's clients (ahem, Perry, interesting crowd you're hanging with). Matt further pointed to opportunities for "viral packaging", like paying Blendtec $10k to ask "Will It Blend?" of your product, after their clever YouTube experiment with the iPhone drove millions of views and hundreds of thousands of subscribers to Blendtec's channel. Paul suggested folks try Google Insights for Search as a way of getting a better view of what's happening upstream.

Panelists suggested the following additional resources:


Q&A:

  • I asked whether the assembled players had explored letting members of social media services like Facebook and LinkedIn extend their member profiles to include "analytics tracking tags" fields, so members could track visits to, and interaction with, content they publish or syndicate there. It seemed to me a win-win for advertisers, members, and social media platforms. The answer: good idea in principle, but social media platforms still guard that data jealously, and there are privacy concerns that folks like Google and Microsoft in particular are sensitive to. Paul did note, though, that YouTube provides some of this data to branded-channel customers today. My view: if I can track you, dear reader, in GA using the tag embedded on this page through the Typepad template that wraps this content, it won't be long before Facebook makes the same thing happen. Advertisers want it and will pay for it (indirectly, via CPMs), and who knows -- Facebook might even get a buck or a few each month from publishers to whom that information is really valuable.
  • Another person asked about the validity of the "view-through" as a metric -- that is, what credit do you give to display ads that aren't clicked on? Here's an article that describes the issue further (I love the author's concluding sentence: "Something between 0 and 100 percent credit is appropriate, depending on the advertiser's unique environmental, programmatic, and analytic profile. Each advertiser has to find its own answer."). Morris referred folks to the Engagement Mapping research cited above, noting that "You can't grow search from the bottom of the funnel." (A sketch of one simple crediting scheme follows this list.)
  • A third question was about the degree to which marketers should try to identify "emergent" funnels from the data versus operate/test "pre-defined" purchase funnels. The panelists were pretty much aligned on the practicality of focusing on the latter. Matt said, "We're reinforcing for advertisers the importance of stories -- as humans we're tuned to listen to stories deep in our DNA, and it's much harder to infer them from oceans of data and analyses." (From my end, I see an opportunity here: services that collect stories as hypotheses, so that you can test the fit between stories and stats, mad-libs style.) Charlie told a story about how they had tracked anonymous user 110135 via this cookie ID, and used this journey in a presentation to a cable company CEO, to huge effect. Mike put it beautifully: "No story, no value."
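On the view-through question above, here's a minimal sketch of the kind of fractional-credit arithmetic the quoted article implies. The touch data and the 30% view-through weight are hypothetical settings, not recommendations; as the article says, each advertiser has to find its own number.

```typescript
// Minimal sketch of fractional view-through crediting. The events and the
// 30% view-through weight are hypothetical; each advertiser must pick its own.
interface Touch {
  channel: "display" | "search";
  type: "impression" | "click";
}

const VIEW_THROUGH_WEIGHT = 0.3; // credit an un-clicked impression at 30% of a click

function creditConversion(path: Touch[]): Map<string, number> {
  // Weight each touch, then normalize so credit sums to one conversion.
  const raw = path.map(t => (t.type === "click" ? 1 : VIEW_THROUGH_WEIGHT));
  const total = raw.reduce((a, b) => a + b, 0);
  const credit = new Map<string, number>();
  path.forEach((t, i) => {
    credit.set(t.channel, (credit.get(t.channel) ?? 0) + raw[i] / total);
  });
  return credit;
}

// A converter who saw a display ad (no click), then clicked a search ad:
console.log(creditConversion([
  { channel: "display", type: "impression" },
  { channel: "search", type: "click" },
]));
// Map { "display" => ~0.23, "search" => ~0.77 } -- vs. 0/100 under last-click.
```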
