"Analytics Is Too Important To Be Left To Analysts", published February 20, 2015
I lead Force Five Partners, a marketing analytics consulting firm (bio). I've been writing here about marketing, technology, e-business, and analytics since 2003 (blog name explained).
Warning: due to author sloth, this post is dated. Most of you have likely moved on with your lives. Me, my Heart Is Full for los albicelestes, despite a disappointing outcome, but I still have a lingering case of futbolytics to get over.
Fivethirtyeight.com contributor Benjamin Morris had a fascinating article a couple of weeks ago that examined Argentine footballer Lionel Messi's play in recent years. Until this year's World Cup, of course, Messi had come under some criticism for under-delivering for Argentina, against the backdrop of his ethereal play for Barcelona. The article -- warning, 4600+ words long, with charts and videos and hyperlinks and footnotes, as in "Ask not, Bill Simmons, for whom the bell tolls..." -- explains persuasively that Messi is an outlier among outliers, even in his performances for the Argentine national team.
The analysis of his shooting is fascinating enough. But what really caught my attention was the analysis of his passing and other influences on the game. In particular, here's a chart from Morris' article that makes the point neatly.
My friends at the multi-channel marketing attribution analytics firm Visual IQ are fond of a soccer metaphor to explain what they do. "Giving all the conversion credit to the last-touched marketing channel is like giving Mario Götze all the credit for his goal in the 113th minute of the World Cup final," they say (with a zesty cruelty that borders on the sociopathic).
In the sales world, it's a holy grail to get to this kind of dynamic, or even an understanding of where on Morris' chart members of your team would be. It can be a thorny path to get there though, because unlike in Messi's world, a helping hand in sales can be harder to observe, and even if you try to measure it, often it can be (will be) gamed mercilessly and unhelpfully.
One way you can track this sort of thing is through online and offline knowledge sharing by members of your team. Winning proposals, presentations, good answers to FAQs, and then views of these by others can all be tracked in relatively painless, game-free ways. A number of years ago when I worked at ArsDigita, we worked with Siemens to build ShareNet, a global sales and marketing knowledge management system that for many years was a poster child for applications of its kind (see here for the HBR case study). The secret behind Siemens' success with ShareNet was the flexibility with which it could adapt what was captured, and how, to make it easy for people to contribute and consume. Today, fortunately, the tools and costs for building capabilities like this are far more accessible. And now as attribution analysis moves closer to the center of the marketing analytics agenda, we have the opportunity to put the resulting data to work in a way that moves the dominant motivation for this kind of behavior beyond altruism to proper credit.
So if you'd like to improve your organization's gol-orientation, perhaps it's time to compile and publish your own assist chart?
Mary Meeker's annual Internet Trends report is out. It's a very helpful survey and synthesis of what's going on, as ever, all 164 pages of it. But for the past few years it's contained a bit of analysis that's bugged me.
Page 15 of the report (embedded below) is titled "Remain Optimistic About Mobile Ad Spend Growth... Print Remains Way Over-Indexed." The main chart on the page compares the percentage of time people spend in different media with the percentage of advertising budgets that are spent in those media. The assumption is that percentage of time and percentage of budget should roughly be equal for each medium. Thus Meeker concludes that if -- as is the case for mobile -- the percentage of user time spent is greater than budget going there, then more ad dollars (as a percent of total) will flow to that medium, and vice versa (hence her point about print).
I can think of demand-side, supply-side, and market maturity reasons that this equivalency thesis would break down, which also suggest directions for improving the analysis.
On the demand side, different media may have different mixes of people, with different demographic characteristics. For financial services advertisers, print users skew older -- and thus have more money, on average -- making each minute the average user spends there more valuable to advertisers. Different media may also have different advertising engagement power. For example, in mobile, in either highly task-focused use cases or in distracted, skimming/snacking ones, ads may be either invisible or intrusive, diminishing their relative impact (either in terms of direct interaction or view-through stimulation). By contrast, deeper lean-back-style engagement with TV, with more room for an ad to maneuver, might, if the ad is good, make a bigger impression. I wonder if there's also a reach premium at work. Advertisers like to find the most efficient medium, but they also need to reach a large enough number of folks to execute campaigns effectively. TV and print are more reach-oriented media, in general.
On the supply side, different media have different power distributions of the content they can offer, and different barriers to entry that can affect pricing. On TV and in print, prime ad spots are more limited, so simple supply and demand dynamics drive up prices for the best spots beyond what the equivalency idea might suggest.
In favor of Meeker's thesis, though representing another short term brake on it, is yet another factor she doesn't speak to directly. This is the relative maturity of the markets and buying processes for different media, and the experience of the participants in those markets. A more mature, well-trafficked market, with well-understood dynamics, and lots of liquidity (think the ability for agencies and media brokers to resell time in TV's spot markets, for example), will, at the margin, attract and retain dollars, in particular while the true value of different media remain elusive. (This of course is one reason why attribution analysis is so hot, as evidenced by Google's and AOL Platform's recent acquisitions in this space.) I say in favor, because as mobile ad markets mature over time, this disadvantage will erode.
So for advertisers, agency and media execs, entrepreneurs, and investors looking to play the arbitrage game at the edges of Meeker's observation, the question is, what adjustment factors for demand, supply, and market maturity would you apply this year and next? It's not an idle question: tons of advertisers' media plans and publishers' business plans ride on these assumptions about how much money is going to come to or go away from them, and Meeker's report is an influential input into these plans in many cases.
A tactical limitation of Meeker's analysis is that while she suggests the overall potential shift in relative allocation of ad dollars (her slide suggests a "~$30B+" digital advertising growth opportunity in the USA alone - up from $20B last year*), she doesn't suggest a timescale and trendline for the pace with which we'll get there. One way to come at this is to look at the last 3-4 annual presentations she's made, and see how the relationships she's observed have changed over time. Interestingly, in her 2013 report using 2012 data, on page 5, 12% of time is spent on mobile devices, and 3% of ad dollars are going there, for a 4x difference in percentages. In the 2014 report using 2013 data, 20% of time is spent on mobile, and 5% of media dollars are going there -- again, a 4x relationship.
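The ratio check above is simple enough to script. Here's a minimal sketch in Python, using only the figures cited above from the 2013 and 2014 reports, of the time-vs-budget index calculation:

```python
# Time-vs-budget index for mobile, per the Meeker report figures cited above.
# Percentages are of total media time and total ad spend, respectively.
reports = {
    "2012 data (2013 report)": {"time_pct": 12, "budget_pct": 3},
    "2013 data (2014 report)": {"time_pct": 20, "budget_pct": 5},
}

for label, d in reports.items():
    ratio = d["time_pct"] / d["budget_pct"]
    print(f"{label}: mobile time/budget index = {ratio:.1f}x")
```

Under the equivalency thesis, an index persistently above 1.0 flags an under-indexed medium that should attract dollars; the striking thing here is that mobile's index held at 4x year over year rather than converging.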
So, if the equivalency zeitgeist is at work, for the moment it may be stuck in a phone booth. But in the end I'm reminded of the futurist Roy Amara's saying: "We tend to overestimate the effect of a technology in the short term and underestimate its effect in the long term." Plus let's not forget new technologies (Glass, Oculus Rift, both portable and large/immersive) that will further jumble relevant media categories in years to come.
I've written a second book. It's called Marketing and Sales Analytics: Proven Techniques and Powerful Applications From Industry Leaders (so named for SEO purposes). Pearson is publishing it (special thanks to Judah Phillips, author of Building A Digital Analytics Organization, for introducing me to Jeanne Glasser at Pearson). The ebook version will be available on May 23, and the print version will come out June 23.
The book examines how to focus, build, and manage analytics capabilities related to sales and marketing. It's aimed at C-level executives who are trying to take advantage of these capabilities, as well as other senior executives directly responsible for building and running these groups. It synthesizes interviews with 15 senior executives at a variety of firms across a number of industries, including Abbott, La-Z-Boy, HSN, Condé Nast, Harrah's, Aetna, The Hartford, Bed Bath & Beyond, Paramount Pictures, Wayfair, Harvard University, TIAA-CREF, Talbots, and Lenovo. My friend and former boss Bob Lord, author of Converge was kind enough to write the foreword.
I'm in the final editing stages. More to follow soon, including content, excerpts, nice things people have said about it, slideshows, articles, lunch talk...
I'm working on a book. It will be titled Marketing and Sales Analytics: Powerful Lessons from Leading Practitioners. My first book, Pragmalytics, described some lessons I'd learned; this book extends those lessons with interviews with more than a dozen senior executives grappling with building and applying analytics capabilities in their companies. Pearson's agreed to publish it, and it will be out this spring. Right now I'm in the middle of the agony of writing it. Thank you Stephen Pressfield (and thanks to my wife Nan for introducing us).
A common denominator in the conversations I've been having is the importance of culture. Culture makes building an analytics capability possible. In some cases, pressure for culture change comes outside-in: external conditions become so dire that a firm must embrace data-driven objectivity. In others, the pressure comes top-down: senior leadership embodies it, leads by example, and is willing to re-staff the firm in its image. But what do you do when the wolf's not quite at the door, or when it makes more sense (hopefully, your situation) to try to build the capability largely within the team you have than to make wholesale changes?
There are a lot of models for understanding culture and how to change it. Here's a caveman version (informed by behavioral psychology principles, and small enough to remember). Culture is a collection of values -- beliefs -- about what works, and doesn't: what behaviors lead to good outcomes for customers, shareholders, and employees; and, what behaviors are either ignored or punished.
Values, in turn, are developed through chances individuals have to try target behaviors, the consequences of those experiences, and how effectively those chances and their consequences are communicated to other people working in the organization.
Chances are to culture change as reps (repetitions) are to sports. If you want to drive change, to get better, you need more of them. Remember that not all reps come in games. Test programs can support culture change the same way practices work for teams. Also, courage is a muscle: to bench press 500 pounds once, start with one pushup, then ten, and so on. If you want your marketing team to get comfortable conceiving and executing bigger and bolder bets, start by carving out, frequently, many small test cells in your programs. Then, add weight: define and bound dimensions and ranges for experimentation within those cells that don't just have limits, but also minimums for departure from the norm. If you can't agree on exactly what part of your marketing mix needs the most attention, don't study it forever. A few pushups won't hurt, even if it's your belly that needs the attention. A habit is easier to re-focus than it is to start.
Consequences need to be both visible and meaningful. Visible means good feedback loops to understand the outcome of the chance taken. Meaningful can run to more pay and promotion of course, but also to opportunity and recognition. And don't forget: a sense of impact and accomplishment -- of making a difference -- can be the most powerful reinforcer of all. For this reason, a high density of chances with short, visible feedback loops becomes really important to your change strategy.
Communication magnifies and sustains the impact of chances taken and their consequences. If you speak up at a sales meeting, the client says Good Point, and I later praise you for that, the culture change impact is X. If I then relate that story to everyone at the next sales team meeting, the impact is X * 10 others there. If we write down that behavior in the firm's sales training program as a good model to follow, the impact is X * 100 others who will go through that program.
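The multiplier arithmetic above can be made explicit. A toy sketch, where the audience sizes are just the illustrative figures from the paragraph:

```python
# Communication as a multiplier on the impact of one reinforced behavior.
# X is an arbitrary unit of culture-change impact; audience sizes (10, 100)
# are the illustrative figures from the text, not measurements.
X = 1

impact_private_praise = X          # only the individual is reinforced
impact_team_meeting = X * 10       # ~10 others hear the story retold
impact_training_program = X * 100  # ~100 future trainees see it codified

print(impact_private_praise, impact_team_meeting, impact_training_program)
```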
Summing up, here's a simple set of questions to ask for managing culture change: What chances did we create for people to try the target behaviors? Were the consequences of those chances both visible and meaningful? And how broadly did we communicate those chances and their consequences?
Ask these questions daily, tote up the score -- chances taken, consequences realized, communications executed -- weekly or monthly. Track the trend, slice the numbers by the behaviors and people you're trying to influence, and the consequences and communications that apply. Don't forget to keep culture change in context: frame it with the business results culture is supposed to serve. Re-focus, then wash, rinse, repeat. Very soon you'll have a clear view of and strong grip on culture change in your organization.
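As an illustration of that weekly tally, here's a hypothetical sketch; the event categories mirror the chances/consequences/communications framing above, and all names and sample entries are my own illustrative assumptions:

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical culture-change scorecard: log events as they happen,
# then tally them by category each week. Illustrative only.

@dataclass
class Event:
    week: int
    category: str  # "chance" | "consequence" | "communication"
    behavior: str  # the behavior being encouraged

def weekly_scorecard(events, week):
    """Tally one week's events by category; slice further by behavior as needed."""
    return Counter(e.category for e in events if e.week == week)

events = [
    Event(1, "chance", "bolder test cells"),
    Event(1, "consequence", "bolder test cells"),
    Event(1, "communication", "bolder test cells"),
    Event(1, "chance", "speaking up with clients"),
]
print(weekly_scorecard(events, 1))
# Counter({'chance': 2, 'consequence': 1, 'communication': 1})
```

Tracking the trend of these counts over weeks is the point; a week with many chances but no consequences or communications tells you exactly where the loop is breaking.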
I originally got to know Judah Phillips through Web Analytics Wednesdays events he organized, and in recent years he's kindly participated on panels I've moderated and has been helpful to my own writing and publishing efforts. I've even partnered with some of the excellent professionals who have worked for him. So while I'm biased as the beneficiary of his wisdom and support, I can also vouch first-hand for the depth and credibility of his advice. In short, in an increasingly hype-filled category, Judah is the real deal, and this makes "Building A Digital Analytics Organization" a book to take seriously.
For me the book was useful on three levels. One, it's a foundational text for framing how to come at business analysis and reporting. Specifically, he presents an Analytics Value Chain that reminds us to bookend our analytic efforts per se with a clear set of objectives and actions, an orientation that's sadly missing in many balkanized corporate environments. Two, it's a blueprint for your own organization-building efforts. He really covers the waterfront, from how to approach analysis, to different kinds of analysis you can pursue, to how to organize the function and manage its relationships with other groups that play important supporting roles. For me, Chapter 6, "Defining, Planning, Collecting, and Governing Data in Digital Analytics" is an especially useful section. In it, he presents a very clear, straightforward structure for how you should set up and run these crucial functions. Finally, three, Judah offers a strong point of view on certain decisions. For example, I read him to advocate for a strongly centralized digital analytics function, rooted in the "business" side of the house, to make sure that you have both critical mass for these crucial skills, as well as proximity to the decisions they need to support.
These three uses had me scribbling in the margins and dog-earing extensively. But if you still need one more reason to pull the trigger, it helps that the book is very up-to-date and has a final chapter that looks forward very thoughtfully into how Judah expects what he describes as the "Analytical Economy" to evolve. This section is both a helpful survey of the different capabilities that will shape this future as well as an exploration of the issues these capabilities and associated trends will raise, in particular as they relate to privacy. It's a valuable checklist, to make sure you're not just building for today, but for the next few years to come.
I moderated this panel at the Massachusetts Innovation and Technology Exchange's (mitx.org) "The Science of Marketing: Using Data & Analytics for Winning" summit on August 1, 2013. Thanks to T. Rowe Price's Paul Musante, Visual IQ's Manu Mathew, iKnowtion's Don Ryan, and Google's Sonia Chung for participating!
We're now in the blood-sugar-crash phase of the Analytics / Big Data hype cycle, where the gap between promise and reality is greatest. Presenting symptoms of the gap include complaints about alignment, access to data, capacity to act on data-driven insights, and talent. This September 2012 HBR blog post by Paul Barth and Randy Bean of NewVantage Partners underscores this with some interesting data.
Executives' anxiety about this gap is also at its peak. Many of them turn to organization as their prime lever for solving things. A question I get a lot is "How should we organize our analytic capabilities?" Related ones include "How centralized should they be?", and "What should be on the business side, and what belongs in IT?"
This post suggests a few criteria for helping to answer these questions. But first, I'd like to offer a principle for tackling this generally:
Think organization last, not first.
A corollary to this might be, "Role is as role does." Too much attention today is paid to developing and organizing for analytic capability. Not enough attention is paid to defining and managing a portfolio of important business opportunities that leverage this capability. In our work with clients, we focus on building capability through practice and results. Our litmus test for whether we're making progress is a rule we call "3-2-1": In each quarter, the portfolio of business opportunities we're supporting with analytic efforts has to yield at least three "news you can use" insights, two experiments based on these insights, and one "scaling" of prior experiments to "production", with commensurate results. (The specific goals we set for each of these vary of course from situation to situation, but the approach is the same.)
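The litmus test is mechanical enough to check automatically. A minimal sketch of a 3-2-1 check; the function and field names here are illustrative assumptions, not anything from actual tooling:

```python
# "3-2-1" quarterly litmus test: at least 3 actionable insights,
# 2 experiments based on them, and 1 scaling to production per quarter.
THRESHOLDS = {"insights": 3, "experiments": 2, "scalings": 1}

def shortfalls_3_2_1(quarter_results):
    """Return categories that fell short this quarter (empty dict = pass)."""
    return {k: need - quarter_results.get(k, 0)
            for k, need in THRESHOLDS.items()
            if quarter_results.get(k, 0) < need}

q3 = {"insights": 4, "experiments": 2, "scalings": 0}
print(shortfalls_3_2_1(q3))  # {'scalings': 1} -> one more scaling needed
```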
Approaching things this way has several benefits:
Now, two critiques that can be made of this approach are, first, that it's too ad hoc and therefore misses opportunities to leverage experience beyond each individual opportunity addressed, and second, that it ignores that most people are "tribal" and that their behaviors are shaped accordingly. So once you've got a decent portfolio assembled and you're managing it along, here are some organizational considerations you can apply to help decide where folks should "live":
In our work we'll typically apply these criteria using scoresheets to evaluate the specific business challenges we're solving for, the organizational models we're evaluating as possible options, or both. Sometimes we just use "high-medium-low" assessments, and other times we'll do the math to help us stay objective about different ways to go. The main things are to keep attention to organization in balance with attention to progress, and to keep discussions about organization focused on the needs of the business, rather than allowing them to devolve into proxy battles for executive power and influence.
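For the "do the math" version, here's a minimal sketch of such a scoresheet; the criteria, weights, and ratings are assumptions invented for the example, not recommendations:

```python
# Weighted scoresheet for comparing organizational models.
# Map "high-medium-low" ratings to numbers, weight each criterion,
# and sum. All criteria, weights, and ratings below are illustrative.
RATING = {"high": 3, "medium": 2, "low": 1}

def score_model(ratings, weights):
    """Weighted sum of H/M/L ratings for one organizational model."""
    return sum(RATING[ratings[c]] * w for c, w in weights.items())

weights = {"proximity_to_decisions": 2, "critical_mass": 1, "data_access": 1}
centralized = {"proximity_to_decisions": "medium", "critical_mass": "high", "data_access": "high"}
embedded = {"proximity_to_decisions": "high", "critical_mass": "low", "data_access": "medium"}

print(score_model(centralized, weights))  # 2*2 + 3*1 + 3*1 = 10
print(score_model(embedded, weights))     # 3*2 + 1*1 + 2*1 = 9
```

The arithmetic won't decide for you, but writing the weights down forces the real conversation: which criteria matter most for the business, rather than for any one executive's span of control.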