Marketing Analytics Management
As marketing goes digital, it becomes more accountable: targetable, testable, trackable. This is good -- in theory. In practice, digital marketing has turned many marketers into sorcerer's apprentices who just can't get ahead of the flood. The problem isn't a lack of analytic smarts, or of technology to support the work. Rather, the challenge is collective coordination.
Some marketing teams compensate by hunkering down in channel-defined silos: "You do email, I do affiliates, he does SEM, she does catalogs, those guys run the stores." But by now most people are hip to the idea that the best, most effective customer experiences optimize efforts to attract, engage, and convert customers across channels. Unfortunately, most marketing teams, and the ecosystem of agencies, technology vendors, research firms, and consultants that support them, are not organized to integrate analytics and tie them comprehensibly and usefully to the mix-, campaign-, and customer-level decisions such experiences require. Marketing analytics today are 110V appliances trying to plug into 220V strategic sockets, and what the world needs is a good adapter.
"Marketing analytics management", to coin a term, needs to move from an ad hoc set of informal activities run by a "priesthood" to a formally-defined and run process that is more broadly accessible. As with any process, the basic questions are:
- what are the outputs?
- what are the inputs?
- what happens along the way?
- who's involved, and how?
- what resources are needed to run the process?
- how do we evaluate and manage the process?
As a senior retailing executive said to me last fall, "I need my marketing team to think less like accountants, and more like decision makers!" The point: stop thinking the process ends with a regular report whose chief virtue is that it's accurate -- or even with one or more particular trends or insights about customer behavior that we should pay attention to.
Rather, each and every analytic effort needs to have a keep, change, drop, or add implication for the current marketing plan -- itself defined as the set of decisions (expressed as actions) about whom to target and how (what products, prices, promotions, and so on). In a formal "marketing analytics management" context, this means an explicit, documented two-way association. Top-down, following explicit logic (expressed through models like the ones suggested by The Pyramid Principle), the marketing plan needs to selectively "commission" analytics efforts to support and monitor documented hypotheses and decisions about what to spend on and how to execute. Conversely, any ad hoc, bottom-up analytic effort should seek a relevant decision as a "patron" for the effort. A more complete "outputs data structure" might look like this (a code sketch follows the list):
- potential decision (action)
- analytic initiative
- stakes (in dollars, likely multiple fields)
- deadline for decision (marketing plan due dates, important events, and so on)
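To make that two-way association concrete, here's a minimal sketch in Python of how one such record might be captured. The class names, field names, and example values are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AnalyticInitiative:
    """One analytic effort commissioned to support a decision."""
    question: str          # what the analysis must answer
    method: str = ""       # e.g. query, regression, test, benchmark
    owner: str = ""        # analyst responsible

@dataclass
class DecisionRecord:
    """Links a potential marketing-plan decision to its supporting analytics."""
    action: str                                       # the keep/change/drop/add decision under consideration
    initiatives: list[AnalyticInitiative] = field(default_factory=list)
    stakes_low_usd: float = 0.0                       # stakes, likely multiple fields
    stakes_high_usd: float = 0.0
    deadline: date | None = None                      # e.g. marketing plan due date, key event

# Hypothetical example: a bottom-up analysis seeking a decision as its "patron"
record = DecisionRecord(
    action="Change: shift 10% of SEM budget to email retargeting",
    initiatives=[AnalyticInitiative(
        question="Does email retargeting convert lapsed buyers better than SEM?",
        method="holdout test",
    )],
    stakes_low_usd=250_000,
    stakes_high_usd=750_000,
    deadline=date(2026, 3, 1),
)
```

The point of even this crude structure is that no initiative exists without an action it informs, and no action is left without the stakes and deadline that scope the analysis behind it.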
For each decision to be made, and for any associated analysis to be done, we have to have:
- data that's easily findable, sufficiently understood, sufficiently clean, and sufficient;
- tools up to the complexity of the analytic task;
- a sense for how accurate and rigorous the answer has to be given the stakes involved in the decision to be made;
- a collective memory of relevant past decisions and the analysis supporting them;
- a map that shows where all the data lives;
- a dictionary that explains what each piece of data means;
- a directory that explains how to get to the data;
- a log (just like the inspection logs in elevators) that documents periodic audits making sure that everything is flowing as it should;
- a guide that describes how routine analytics get done and associated decisions get made;
- a library (searchable at minimum, if not indexed) that stores all of the individual bits of research (again: queries, regressions, tests, benchmarks) and conclusions you've reached in the past (a sketch of how these pieces might fit together follows).
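One way to picture how the map, dictionary, directory, log, and library hang together is as entries in a lightweight catalog. Here's a sketch, assuming nothing fancier than a Python dictionary; the dataset, field definitions, and values are made up for illustration:

```python
# A hypothetical catalog entry for one data source, combining the "map"
# (where it lives), "dictionary" (what fields mean), "directory" (how to
# get access), "log" (audit trail), and pointers into the "library".
email_campaign_catalog_entry = {
    "dataset": "email_campaign_results",
    "map": {"system": "ESP export", "location": "warehouse.marketing.email_results"},
    "dictionary": {
        "open_rate": "Unique opens / delivered, per campaign",
        "margin": "Attributed gross margin, USD, 30-day window",
    },
    "directory": {"owner": "email ops", "access": "request via analytics lead"},
    "log": [
        {"date": "2026-01-15", "audit": "row counts reconciled with ESP dashboard"},
    ],
    "library_refs": ["holiday send-time test", "list-fatigue regression"],
}
```

Whether this lives in a wiki, a spreadsheet, or a proper metadata tool matters far less than that it exists, is kept current, and is findable by the people making the decisions.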
Just as data need to be scoped to the stakes of a given decision, so does the process -- the steps and people you need to go through to get to a decision. Figure out the deadline for the decision, subtract the days required for the review process, along with the days required to get higher-priority work done, and you're left with the limited time you have to do the necessary design, data collection, analysis, and packaging. Presto, discipline.
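The arithmetic is simple enough to write down. A back-of-the-envelope sketch, with day counts that are purely illustrative:

```python
from datetime import date

def days_available_for_analysis(decision_deadline: date,
                                review_days: int,
                                higher_priority_days: int,
                                today: date) -> int:
    """Days left for design, data collection, analysis, and packaging."""
    total = (decision_deadline - today).days
    return max(total - review_days - higher_priority_days, 0)

# Example: a decision due in 30 days, minus a 5-day review cycle and
# 10 days already committed to higher-priority work, leaves 15 days.
print(days_available_for_analysis(date(2026, 3, 31),
                                  review_days=5,
                                  higher_priority_days=10,
                                  today=date(2026, 3, 1)))  # -> 15
```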
At dinner recently, IBM Research's Kate Ehrlich observed that she's seen three main roles in heavy-duty analytics efforts:
- decision-maker
- subject-matter expert
- data analyst
A recurring theme behind the points above is that you should know where you're trying to go with your analytic efforts, and scope the work, time, tools, and people involved accordingly. It's been my experience that there are exponentially diminishing returns, and similarly exponentially increasing costs, to analyses that seek to explain variance beyond a handful of variables (corroborating case in point: predicting tweet virality), to cut uncertainty by a few more percentage points (as above), or to involve more than a handful (3-5) of people in primary roles in a review process. Again, none of this is meant to dissuade us from such complexity -- it's just to make sure the stakes make it worth it. I've also experienced the costs of NOT maintaining a tight ship when it comes to the data framework described above -- not just in terms of assembling data and re-learning lessons, but in terms of missed revenue and margin opportunities when bad data hide underperforming operations.