I've been working with a global financial services firm to develop its marketing analytics / intelligence capability, and we're now building a highly capable team to further extend and sustain the results and lessons so far. This includes a Marketing Analytics Director to lead a strong team doing advanced data mining and predictive modeling to support high-impact opportunities in various areas of the firm. Here's the job description on LinkedIn. If you are currently working at a large marketer, major analytics consulting firm, or advertising agency, and have significant experience analyzing, communicating, and implementing sophisticated multi-channel marketing programs, and are up for the challenge of leading a new team in this area for a world-class firm in a great city, please get in touch!
What strategic path for Yahoo! satisfies the following important requirements?
Solves a keenly felt customer / user / audience / human problem?
Fits within but doesn't totally overlap what other competitors provide?
Builds off things Yahoo! has / does well?
Fits Ms. Mayer's experiences, so she's playing from a position of strength and confidence?
As a consequence of all this, will bring advertisers back at premium prices?
Yahoo!'s company profile is a little buzzwordy but offers a potential point of departure. What Yahoo! says:
"Our vision is to deliver your world, your way. We do that by using technology, insights, and intuition to create deeply personal digital experiences that keep more than half a billion people connected to what matters the most to them – across devices, on every continent, in more than 30 languages. And we connect advertisers to the consumers who matter to them most – the ones who will build their businesses – through our unique combination of Science + Art + Scale."
What Cesar infers:
Yahoo! is a filter.
Here are some big things the Internet helps us do:
Every one of these functions has an 800 lb. gorilla, and a few aspirants, attached to it:
Argue -- Wordpress, Typepad, [insert major MSM digital presence here]
Relax -- Netflix, Hulu, Pandora, Spotify
Filter -- ...
Um, filter... Filter. There's a flood of information out there. Who's doing a great job of filtering it for me? Google alerts? Useful but very crude. Twitter? I browse my followings for nuggets, but sometimes these are hard to parse from the droppings. Facebook? Sorry friends, but my inner sociopath complains it has to work too hard to sift the news I can use from the River of Life.
Filtering is still a tough, unsolved problem, arguably the problem of the age (or at least it was last year when I said so). The best tool I've found for helping me build filters is Yahoo! Pipes. (Example)
As far as I can tell, Pipes has remained this slightly wonky tool in Yahoo's bazaar suite of products. Nerds like me get a lot of leverage from the service, but it's a bit hard to explain the concept, and the semi-programmatic interface is powerful but definitely not for the general public.
Now, what if Yahoo! were to embrace filtering as its core proposition, and build off the Pipes idea and experience under the guidance of Google's own UI guru -- the very same Ms. Mayer, hopefully applying the lessons of iGoogle's rise and fall -- to make it possible for its users to filter their worlds more effectively? If you think about it, there are various services out there that tackle individual aspects of the filtering challenge: professional (e.g. NY Times, Vogue, Car and Driver), social (Facebook, subReddits), tribal (online communities extending from often offline affinities), algorithmic (Amazon-style collaborative filtering), sponsored (e.g., coupon sites). No one is doing a good job of pulling these all together and allowing me to tailor their spews to my life. Right now it's up to me to follow Gina Trapani's Lifehacker suggestion, which is to use Pipes.
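For the nerds: at its core, a Pipe is just merge-filter-sort. Here's a minimal sketch in Python -- the feeds, items, and keywords are all made up for illustration, and a real version would fetch and parse live RSS rather than hand-built dictionaries:

```python
from datetime import datetime

def pipe(feeds, keywords):
    """Merge feed items, keep those matching any keyword, newest first.

    Each feed is a list of dicts with 'title', 'summary', 'published'.
    """
    merged = [item for feed in feeds for item in feed]
    kept = [
        item for item in merged
        if any(k.lower() in (item["title"] + " " + item["summary"]).lower()
               for k in keywords)
    ]
    return sorted(kept, key=lambda i: i["published"], reverse=True)

# Hypothetical items standing in for parsed RSS entries.
tech = [{"title": "New analytics tool", "summary": "dashboards",
         "published": datetime(2012, 7, 1)}]
news = [{"title": "Election season", "summary": "politics",
         "published": datetime(2012, 7, 2)},
        {"title": "Analytics hiring up", "summary": "marketing analytics jobs",
         "published": datetime(2012, 6, 30)}]

for item in pipe([tech, news], ["analytics"]):
    print(item["title"])
```

The hard part, of course, isn't the ten lines of logic -- it's the interface that lets a normal person express the keywords and sources without knowing they're programming.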
OK so let's review:
Valuable unsolved problem for customers / users: check.
Fragmented, undominated competitive space: check.
Yahoo! has credible assets / experience: check.
Marissa Mayer plays from position of strength and experience: check.
Advertisers willing to pay premium prices, in droves: ...
Well, let's look at this a bit. I'd argue that a good filter is effectively a "passive search engine". Basically through the filters people construct -- effectively "stored searches" -- they tell you what it is they are really interested in, and in what context and time they want it. With cookie-based targeting under pressure on multiple fronts, advertisers will be looking for impression inventories that provide search-like value propositions without the tracking headaches. Whoever can do this well could make major bank from advertisers looking for an alternative to the online ad biz Hydra (aka Google, Facebook, Apple, plus assorted minor others).
Savvy advertisers and publishers will pooh-pooh the idea that individual Pipemakers would be numerous enough or consistent enough on their own to provide the reach that is the reason Yahoo! is still in business. But I think there are lots of ways around this. For one, there's already plenty of precedent at other media companies for suggesting proto-Pipes -- usually called "channels"; Yahoo! calls them "sites" (example), and they have RSS feeds. Portals like Yahoo!, major media like the NYT, and universities like Harvard suggest categories, offer pre-packaged RSS feeds, and even give you the ability to roll your own feed out of their content. The problem is that it's still marketed as RSS, which even in this day and age is still a bit beyond most folks. But if you find a more user-friendly way to "clone and extend" suggested Pipes, friends' Pipes, sponsored Pipes, etc., you've got a start.
Check? Lots of hand-waving, I know. But what's true is that Yahoo! has suffered from a loss of a clear identity. And the path to re-growing its value starts with fixing that problem.
Via my friends at VisualIQ, this wonderful post from Avinash Kaushik on doing multi-channel attribution and mix optimization in the real world. Plus a really rich set of conversations in the comments. My summary of his advice (reassuringly consistent with my own experiences with "pragmalytic" approaches):
Start by solving for specific attribution / optimization use cases you face in the real world, not the more general form of the challenge. He names three dominant ones he sees: "O2S -- Online to Store", "AMS -- Across Multiple Screens", and "ADC -- Across Digital Channels"
Use multiple analytic techniques to compensate for imperfect data that any one technique might rely on. For example, if there are holes or quality problems with your data, supplement it with controlled tests
Don't cop out, but accept that there are no perfect answers, just better ones, and that you should bias toward acting on acceptably imperfect information and learning and improving based on actual experience
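Kaushik's second point -- supplementing holey data with controlled tests -- boils down, in its simplest form, to a holdout comparison. A quick sketch with illustrative numbers (mine, not from his post):

```python
def lift(exposed_conv, exposed_n, holdout_conv, holdout_n):
    """Percent lift of the exposed group's conversion rate over the holdout's."""
    exposed_rate = exposed_conv / exposed_n
    holdout_rate = holdout_conv / holdout_n
    return (exposed_rate - holdout_rate) / holdout_rate * 100

# Illustrative: 1.2% conversion among the exposed vs. 1.0% in the
# holdout implies the campaign drove a 20% lift.
print(round(lift(120, 10_000, 100, 10_000), 1))  # → 20.0
```

The virtue of the approach is exactly the third point above: it gives you an acceptably imperfect answer you can act on, without waiting for perfect tracking data.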
Absolutely terrific stuff here, gets even better on the third and subsequent reads.
In May 2007, Microsoft paid $6 billion to buy aQuantive. Today, only five years later, they wrote off the whole investment. Since I wrote about this a lot five years ago (here, here and here), it prompted me to think about what happened, and what I might learn. Here are a few observations:
1. 2006 / 2007 was a frothy time in the ad network market, both for ads and for the firms themselves, reflecting the economy in general.
2. Microsoft came late to the party, chasing aQuantive (desperately) after Google had taken DoubleClick off the table.
3. So, Microsoft paid a 100% premium to aQuantive's market cap to get the firm.
4. Here's the way Microsoft might have been seeing things at the time:
a. "Thick client OS and productivity applications business in decline -- the future is in the cloud."
b. "Cloud business model uncertain, but certainly lower price point than our desktop franchise; must explore all options; maybe an ad-supported version of a cloud-based productivity suite?"
c. "We have MSN. Why should someone else sit between us and our MSN advertisers and collect a toll on our non-premium, non-direct inventory? In fact, if we had an ad network, we could sit between advertisers and other publishers and collect a toll!"
5. Here's the way things played out:
a. The economy crashed a year later.
b. When budgets came back, they went first to the most accountable digital ad spend: search.
c. Microsoft had a new horse in that race: Bing (launched June 2009). Discretionary investment naturally flowed there.
d. Meanwhile, "display" evolved: video display, social display (aka Facebook), mobile display (Dadgurnit! Google bought AdMob, Apple has iAd! Scraps again for the rest of us...). (Good recent eMarketer presentation on trends here.)
e. Whatever's left of "traditional" display: Google / DoubleClick, as the category leader, eats first.
f. Specialized players do continue to grow in "traditional" display, through better targeting technologies like behavioral targeting (BT) and through facilitating more efficient buys (for example, DataXu, which I wrote about here). But to grow you have to invest and innovate, and at Microsoft, by this point, as noted above, the money was going elsewhere.
g. So, if you're Microsoft, and you're getting left behind, what do you do? Take 'em with you! "Do not track by default" in IE 10 as of June 2012. That's old school medieval, dressed up in hipster specs and a porkpie hat. Steve Ballmer may be struggling strategically, but he's still as brutal as ever.
a. $6 Big Ones is only 2% of MSFT's market cap. aQuantive may have come at a 2x premium, but it was worth the hedge. The rich are different from you and me.
b. The bigger issue though is how does MSFT steal a march on Google, Apple, Facebook? Hmmm. Video's hot. Still bandwidth-constrained, but that'll get better. And there's interactive video. Folks will eventually spend lots of time there, and ads will follow them. Google's got Hangouts, Apple's got FaceTime and iChat, Facebook's got Skype-powered video calling... and now MSFT has Skype itself, for $8.5B. Hmm.
a. Some of the smartest business guys I worked with at Bain in the late 90's (including Torrence Boone and Jason Trevisan) ended up at aQuantive and helped to build it into the success it was. An interesting alumni diaspora to follow.
b. Some of the smartest folks I worked with at Razorfish in the early 2000's (including Bob Lord) ended up at aQuantive. The best part is that Microsoft may have gotten more value from buying and selling Razorfish (to Publicis) than from buying and writing off the rest of aQuantive. Sweet, that.
Sorry for the buzzwordy title of this post, but hopefully you'll agree that sometimes buzzwords can be useful for communicating an important Zeitgeist.
I'm working with one of our clients right now to develop a new, advanced business intelligence capability that uses state-of-the-art in-memory data visualization tools like Tableau and Spotfire and that will ultimately connect multiple data sets to answer a range of important questions. I've also been involved recently in a major analysis of advertising effectiveness that included a number of data sources that were either external to the organization, or non-traditional, or both. In both cases, these efforts are likely to evolve toward predictive models of behavior to help prioritize efforts and allocate scarce resources.
Simultaneously, today's NYT carried an article about Clear Story, a Silicon Valley startup that aggregates APIs to public data sources about folks, and provides a highly simplified interface to those APIs for analysts and business execs. I haven't yet tried their service, but I'll save that for a separate post. The point here is that the emergence of services like this represents an important step in the evolution of Web 2.0 -- call it Web 2.2 -- that's very relevant for marketing analytics in enterprise contexts.
So, what's significant about these experiences?
Readers of Ralph Kimball's classic Data Warehouse Toolkit will appreciate both the wisdom of his advice, but also today, how the context for it has changed. Kimball is absolutely an advocate for starting with a clear idea of the questions you'd like to answer and for making pragmatic choices about how to organize information to answer them. However, the major editions of the book were written in a time when three things were true:
You needed to organize information more thoughtfully up front, because computing resources to compensate for poor initial organization were less capable and more expensive
The number of data sources you could integrate was far more limited, allowing you to be more definitive up front about the data structures you defined to answer your target questions
The questions themselves, or the range of possible answers to them, were more limited and less dynamic, because the market context was so as well
Together, these things made for business intelligence / data warehouse / data management efforts that were longer, and a bit more "waterfall" and episodic in execution. However, over the past decade, many have critiqued such efforts for high failure rates, with most failed efforts collapsing under their own weight: too much investment, too much complexity, too few results. Call this Planned Data Modeling.
Now back to the first experience I described above. We're using the tools I mentioned to simultaneously hunt for valuable insights that will help pay the freight of the effort, define useful interfaces that users will keep coming back to, and, through these efforts, determine the optimal data structures we need underneath to scale from the few million rows in one big flat file we've started with to something that will no doubt be larger, more multi-faceted, and thus more complex. In particular, we're using the ability of these tools to calculate synthetic variables on the fly out of the raw data to point the way toward summaries and indices we'll eventually have to develop in our data repository. This improves the likelihood that the way we architect that repository will directly support real reporting and analysis requirements, prioritized based on actual usage in initial pilots, rather than speculative requirements obtained through more conventional means. Call this Organic Data Modeling.
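To make "synthetic variables" concrete, here's a toy sketch -- the column names and values are made up for illustration, not our client's actual data:

```python
# Raw rows from the flat file (hypothetical columns).
rows = [
    {"customer": "A", "visits": 10, "revenue": 250.0},
    {"customer": "B", "visits": 4,  "revenue": 20.0},
]

def with_synthetics(row):
    """Derive candidate variables on the fly; the ones analysts keep
    reusing become precomputed columns or indices in the repository."""
    out = dict(row)
    out["revenue_per_visit"] = row["revenue"] / row["visits"]
    out["high_value"] = out["revenue_per_visit"] > 20
    return out

enriched = [with_synthetics(r) for r in rows]
print(enriched[0]["revenue_per_visit"])  # → 25.0
```

In Tableau or Spotfire these would be calculated fields; the point is that actual usage of such fields in the pilot, not up-front speculation, tells you which ones deserve to be baked into the back end.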
Further, the work we've done anticipates that we will be weaving together a number of new sources of data, many of them externally provided, and that we'll likely swap sources in and out as we find that some are more useful than others. It occurred to me that this large, heterogeneous, and dynamic collection of data sources would have characteristics sufficiently different in terms of their analytic and administrative implications that a different name altogether might be in order for the sum of the pieces. Hence, the Extrabase.
These terms are not meant to cover up a cop-out. In other words, some might say that mashing up a bunch of files in an in-memory visualization tool could reflect and further contribute to a lack of intellectual discipline and wherewithal to get it right. In our case, we're hedging that risk by having the data modelers responsible for figuring out the optimal data repository structure work extremely closely with the "front-end" analysts, so that as potential data structure implications flow out of the rubber-meets-the-road analysis, we're able to sift them and decide which should stick and which we can ignore.
But, as they say sometimes in software, "that's a feature, not a bug." Meaning, mashing up files in these tools and seeing what's useful is a way of paying for and disciplining the back end data management process more rigorously, so that what gets built is based on what folks actually need, and gets delivered faster to boot.
Here's one summary of the experience that's making the rounds:
I wasn't able to be there all that long, but my impression was different. Men of all colors (especially if you count tattoos), and lots more women (many tattooed also, and extensively). I had a chance to talk with Doc Searls (I'm a huge Cluetrain fan) briefly at the Digital Harvard reception at The Parish; he suggested (my words) the increased ratio of women is a good barometer for the evolution of the festival from narcissistic nerdiness toward more sensible substance. Nonetheless, on the surface, it does remain a sweaty mosh pit of digital love and frenzied networking. Picture Dumbo on spring break on 6th and San Jacinto. With light sabers:
Sight that will haunt my dreams for a while: VC-looking guy, blazer and dress shirt, in a pedicab piloted by skinny grungy student (?) Dude, learn Linux, and your next tip from The Man at SXSW might just be a term sheet.
So whom did I meet, and what did I learn:
I had a great time listening to PRX.org's John Barth. The Public Radio Exchange aggregates independent content suitable for radio (think The Moth), adds valuable services like consistent content metadata and rights management, and then acts as a distribution hub for stations that want to use it. We talked about how they're planning to analyze listenership patterns with that metadata and other stuff (maybe gleaning audience demographics via Quantcast) for shaping content and targeting listeners. He related for example that stations seem to prefer either 1 hour programs they can use to fill standard-sized holes, or two- to seven- minute segments they can weave into pre-existing programs. Documentary-style shows that weave music and informed commentary together are especially popular. We explored whether production templates ("structured collaboration": think "Mad Libs" for digital media) might make sense. Maybe later.
Paul Payack explained his Global Language Monitor service to me, and we explored its potential application as a complement if not a replacement for episodic brand trackers. Think of it as a more sophisticated and source-ecumenical version of Google Insights for Search.
Kara Oehler's presentation on her Mapping Main Street project was great, and it made me want to try her Zeega.org service (a Harvard metaLAB project) as soon as it's available, to see how close I can get to replicating The Yellow Submarine for my son, with other family members spliced in for The Beatles. Add it to my list of other cool projects I like, such as mrpicassohead.
Peter Boyce and Zach Hamed from Hack Harvard, nice to meet you. Here's a book that grew out of the class at MIT I mentioned -- maybe you guys could cobble together an O'Reilly deal out of your work!
Finally, congrats to Perry Hewitt (here with Anne Cushing) and all her Harvard colleagues on a great evening!
I've recently been involved in evaluating the results of a matched market test that looked at the impact of changes in digital advertising spend by comparing test vs. control markets, and by comparing differential lift in these markets over prior periods (e.g., year on year). One of the challenges involved in such tests is significant "impression volatility" across time periods -- basically, each dollar can buy you very different volumes of impressions from year to year.
You can unpack this volatility into at least three components:
changes in overall macro-economic conditions that drive target audiences' attention,
changes in the buying approach you took / networks you bought through, due to network-specific structural (like what publishers are included) and supply-demand drivers (like the relative effectiveness of the network's targeting approach)
changes in "buy-specific" parameters (like audiences and placements sought).
Let's assume that you handle the first with your test / control market structure. Let's also assume that the third is to be held constant as much as possible, for the purposes of the test (that is, buying the same properties / audiences, and using the same ad positions / placements for the tests). So my question was, how much volatility does the second factor contribute, and what can be done to control for that in a test?
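For concreteness, the test / control, year-on-year structure amounts to a difference-in-differences calculation. A sketch with illustrative numbers (mine, not from the actual test):

```python
def diff_in_diff(test_now, test_prior, control_now, control_prior):
    """Year-on-year lift in test markets net of lift in control markets.

    Netting out the control markets' lift removes macro conditions
    (factor 1) shared by both groups; it does nothing about factor 2.
    """
    test_lift = test_now / test_prior - 1
    control_lift = control_now / control_prior - 1
    return (test_lift - control_lift) * 100  # percentage points

# Illustrative sales indexes: test markets +10% YoY, control +4% YoY,
# so the incremental spend gets credit for ~6 points of lift.
print(round(diff_in_diff(110, 100, 104, 100), 1))  # → 6.0
```

The open question in the post is whether network-driven CPM swings (factor 2) are big enough to swamp a signal of this size.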
Surfing around I came on DataXu's March 2011 Market Pulse study. DataXu is a service that allows you to buy across networks more efficiently in real time, sort of like what Kayak would be to travel if it were a fully automated agent and you flew every day. The firm noted a year-on-year drop in average daily CPM volatility from 102% to 42% from May 2010 to February 2011 (meaning I think the average day-to-day change in price across all networks in each of the two months compared). They attributed this to "dramatically increased volume of impressions bought and sold as well as maturation of trading systems". Notwithstanding, the study still pointed to a 342% difference in average indexed CPMs across networks during February 2011.
A number this big naturally piqued my interest, and so I read into the report to understand it better. The top of page 2 of the report summary presents a nice graph that shows average monthly indexed CPMs across 11 networks, and indeed shows the difference between the highest-priced and the lowest-priced network to be 342%. Applying "Olympic scoring" (tossing out highest- and lowest-priced exchanges) cuts that difference to about 180%, or roughly by half -- still a significant discrepancy of course. Looking further, one standard deviation in the whole sample (including the top and bottom values) is about 44%. Again, though perhaps a bit less dramatic for marketers' tastes, still lots.
(It's hard to know how "equivalent" the buys compared were, in terms of volumes, contextual consistency, and audience consistency, since the summary doesn't address these. But let's assume they were, roughly.)
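For the curious, here's how the report's two spread measures can be computed, using made-up indexed CPMs rather than DataXu's actual 11-network figures:

```python
import statistics

def spreads(cpms):
    """Max-min spread, 'Olympic' spread (drop the highest and lowest),
    and standard deviation, each as a percent of the sample mean."""
    mean = statistics.mean(cpms)
    full = (max(cpms) - min(cpms)) / mean * 100
    trimmed = sorted(cpms)[1:-1]          # toss top and bottom networks
    olympic = (max(trimmed) - min(trimmed)) / mean * 100
    sd = statistics.pstdev(cpms) / mean * 100
    return full, olympic, sd

# Hypothetical indexed CPMs for 11 networks.
cpms = [40, 55, 60, 70, 80, 90, 100, 110, 130, 160, 210]
full, olympic, sd = spreads(cpms)
print(round(full), round(olympic), round(sd))
```

As in the report, tossing the extremes cuts the headline spread roughly in half, and one standard deviation is smaller still -- which measure you care about depends on whether your buys can avoid the outlier networks.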
So what? If your (display) ad buys aren't so property-specific / audience-targeted that run-of-network buys in contextual or audience categories are ruled out, future tests might channel buys through services like DataXu and declare the buys "fully-price-optimized" across the periods and markets compared, allowing you to ignore +/- ~50% "impression volatility" swings, assuming the Feb 2011 spreads hold.
However, if what you're buying is very specific -- and only available through direct purchase, or one or two specialized networks at most -- then you ignore factor 2, trust the laws of supply and demand, and assume that you've bought essentially the same "attention" regardless of the difference in impressions.
I've asked some knowledgeable friends to suggest some perspectives on this, and will pass along their ideas. Other feedback welcome, especially from digital advertising / testing pros! Oh and if you're really interested, check out the DataXu TC50 2009 pitch video.
Think of the protest sites as outdoor ad inventory. This inventory is in great locations -- in the hearts of the world's financial districts, with lots of people with very high disposable incomes to see your ads every day, all day, right outside their windows -- the same people that fancy watchmakers pay the WSJ big bucks to reach.
Yet currently, this valuable inventory is filled with PSAs...
...Or it goes begging altogether:
So it dawned on me: "Sponsored Occupations" -- the outdoor ad network that monetizes protest movements! This concept meets several needs simultaneously:
One stated objective of the movement is to "Make Them Pay". The concept creates a practical mechanism for realizing this goal.
Events and guerrilla marketing in premium locations without a permitting process -- an advertiser's dream!
Plus, sponsors could negotiate special perks, like keeping the protesters from "Going all Oakland" (just heard that term) on their retail stores.
Cash-strapped municipalities can muscle a cut of the publishers' share, turning what's today a drag on public resources (police, etc.) into a money-maker.
There's another important benefit. This idea is a job creator. After all, the network needs people to pitch the "publishers" at each location, and sales folks to recruit the advertisers, and staff to traffic the ads, keep the books, etc. Politicians right and left could fold this into their platforms immediately.
Finally, for the entrepreneur who starts it all, there's the chance to Sell Out To The Man -- at a very attractive premium! And, for the protesters who back the venture, and get options working for it, a chance to cash out too, just like the guys they're protesting.
Something you hear a lot these days is, "The 'Marketing Funnel' concept's dead. It's just not clear what's replaced it." At the recent OMMA Metrics conference in NY, IBM/Unica's Yuchun Lee described its successor as some sort of "spaghetti". Judah Phillips had an excellent article in yesterday's Online Metrics Insider (Thanks to Rob Schmults for pointing me to it!) in which he suggested a "tumbler" metaphor and "seeking-shopping-sharing" structure for what we now do.
Let's consider our requirements for a metaphor to succeed the "funnel":
1. We still have a "current" whose power combines the "push" of customer needs and desires with the "pull" of companies with products that could satisfy those. ("I have testosterone, Porsche makes 911s.") To me, that still makes "linear" metaphors useful.
2. "Attract", "Engage", and "Convert", plus sometimes "Retain" -- or variants thereof, like Judah's -- still work for me as basic stage descriptors. What's changed is that channels have exploded in number, and audiences have fragmented as they use different ones. So a good metaphor will describe a journey that is less predictable in both flow path and rate. (It's probably also useful to use stage descriptors that reflect the customer's perspective, not the marketer's -- "Awareness", "Consideration", "Purchase", plus sometimes hopefully "Loyalty" and "Advocacy" -- but they're not quite as punchy. Nonetheless, you get the point.)
3. The channel system that lies between nascent demand and final purchase is, these days, much more replete with advice that educates folks as they flow through, qualifying and intensifying final demand. The metaphor we use has to describe the "chemistry" that happens in these intermediate spaces and interactions. And, the behaviors we observe in these stages should inform, as Judah suggests in his article, how we market afterward.
4. Over time, however, the system in between gets clogged with a lot of junky information, or becomes technologically obsolete, so you need to refresh or replace your presences in this system on a regular basis. The metaphor has to anticipate this need as well.
5. The operational and analytic processes for marketing within this flow are higher-tech than a simple funnel.
OK, so let's look at the Brita filter:
1. Water still generally flows through in a linear fashion; and, not all of it flows through at once.
2. As water flows through the charcoal, however, it breaks up into much smaller droplets that flow through at different rates and along less predictable paths.
3. The interaction of the water with activated charcoal only got rid of some impurities; Brita's engineers learned to add an ionic coating to get some minerals out of the water too.
4. Ideally, you change it now and then to keep its performance up.
5. It's higher-tech than a funnel (and more expensive too, but the benefits are sometimes worth it).
So, you say, "Cute but esoteric -- how is this conceptualization useful?"
2. It also helps me to extend them, by considering differential flow rates through discrete paths, and to focus on what we can learn from interactions at one stage and channel that might inform what we do in subsequent, different ones.
3. It helps to remind me that when populating and managing any such framework, when it gets too complex or overly-"excepted" with rules, I might be better off replacing it.
4. It helps me to remember that at best I can only be probabilistic and not deterministic in my understanding of "customer flow dynamics", and that what's important is to be explicit about probability levels so that group decisions can be helped by a shared understanding of these.
5. Finally, it's oddly memorable (to me anyway), and the filter's specific properties help me remember the requirements better.
So, what do you think? Please answer the poll below, and comment with questions / alternatives.
There are two notable things to me about this development / achievement.
The first is to ask whether this puts us ahead or behind Ray Kurzweil's schedule for 2019 (as predicted in 1999). (Really worth reading his predictions, since we're within shouting distance! What would you "keep / change / drop / add"?)
The second is a little closer in. Given the pace of this development, what does it mean for us as humans / users / consumers / citizens on the one hand, and as marketers / investors, etc. on the other -- from "now" to, say, "two years out"?
Imagine for example that in two years, IBM provides access to a more generalized form of Watson as a cloud-based API. What might you, as a person or as a business or other organization, do with a service that can understand speech, parse meanings, and optimize spending and investment recommendations based on how sure it is of the answer?
Cesar: "Watson, our lease is up soon, can you suggest some available space options nearby that would make sense for a business like Force Five Partners?"
Watson: "Cesar, here are five choices, with suggestions for what you should be paying for each, based on what I can find out right now..."
Wow. We had barely figured out SEO, when we got slammed with SNO -- Social Network Optimization (as well as the frozen kind)! Now we have to figure out Computational Engine Optimization? (Confusingly, natch, "CEO" -- you read it here first!) How do I optimize for "What inexpensive steakhouses are nearby?" How do we even think about that?
(Possible direction: Semantic Web Optimization -- "SWO", of course. Make sure you are well tagged-for, and indexed-by, the data stores and services where the terms "inexpensive", "steakhouse", and "nearby" would be judged. Or, in plain English: if Wolfram Alpha looks to Yelp to help answer this question, make sure your restaurant's entry there is labeled as a steakhouse, has an accurate address, and is accurately price-rated as "$". Whatever gaming ensues, just don't blame IBM / Apple / Wolfram /(Google too) for going for the mega-cheddar.)
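To make the SWO point concrete, here's a toy sketch of what "well tagged-for" might mean in practice -- the business, fields, and matching logic are entirely hypothetical:

```python
# Hypothetical structured entry -- the kind of metadata a computational
# engine would need to answer "What inexpensive steakhouses are nearby?"
entry = {
    "name": "Joe's Steaks",           # made-up business
    "category": "steakhouse",         # matches "steakhouse"
    "price_rating": "$",              # matches "inexpensive"
    "address": "123 Main St, Boston, MA",
    "lat": 42.3601, "lon": -71.0589,  # lets "nearby" be computed
}

def answers(entry, category, max_price_dollars):
    """Would this entry surface for a category + price query?"""
    return (entry["category"] == category
            and len(entry["price_rating"]) <= max_price_dollars)

print(answers(entry, "steakhouse", 2))  # → True
```

Every missing or mislabeled field above is a query your restaurant silently drops out of -- that's the whole "optimization" opportunity.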
It's trite to say that change is accelerating as technology develops. ("We're only in the second inning!") Some dismiss this (as Arthur C. Clarke said, we always overestimate the impact of technology in the short term, but underestimate it in the long term). But, if you doubt, this chart is worth a look. And then think about the degree to which "social" and "mobile" are now reinforcing, amplifying, and accelerating each other...
(Insert shameless commercial:) What are you doing to help your organization keep up?