About

I lead Force Five Partners, a marketing analytics consulting firm (bio). I've been writing here about marketing, technology, e-business, and analytics since 2003 (blog name explained).


May 29, 2014

Mary Meeker's @KPCB #InternetTrends Report: Critiquing The "Share of Time, Share of Money" Analysis

Mary Meeker's annual Internet Trends report is out.  It's a very helpful survey and synthesis of what's going on, as ever, all 164 pages of it. But for the past few years it's contained a bit of analysis that's bugged me.

Page 15 of the report (embedded below) is titled "Remain Optimistic About Mobile Ad Spend Growth... Print Remains Way Over-Indexed."  The main chart on the page compares the percentage of time people spend in different media with the percentage of advertising budgets that are spent in those media.  The assumption is that the percentage of time and the percentage of budget should be roughly equal for each medium.  Thus Meeker concludes that if -- as is the case for mobile -- the percentage of user time spent is greater than the percentage of budget going there, then more ad dollars (as a percent of total) will flow to that medium, and vice versa (hence her point about print).

I can think of demand-side, supply-side, and market-maturity reasons why this equivalency thesis might break down, which also suggest directions for improving the analysis.

On the demand side, different media may have different mixes of people, with different demographic characteristics.  For financial services advertisers, print users skew older -- and thus have more money, on average -- making each minute of the average user's time there more valuable to advertisers.  Different media may also have different advertising engagement power.  For example, in mobile, in either highly task-focused use cases or in distracted, skimming/snacking ones, ads may be either invisible or intrusive, diminishing their relative impact (whether through direct interaction or view-through stimulation). By contrast, deeper lean-back-style engagement with TV, with more room for an ad to maneuver, might, if the ad is good, make a bigger impression. I wonder if there's also a reach premium at work.  Advertisers like to find the most efficient medium, but they also need to reach a large enough number of folks to execute campaigns effectively.  TV and print are more reach-oriented media, in general.

On the supply side, different media have different power distributions of the content they can offer, and different barriers to entry that can affect pricing.  On TV and in print, prime ad spots are more limited, so simple supply and demand dynamics drive up prices for the best spots beyond what the equivalency idea might suggest.  

In favor of Meeker's thesis, though representing another short-term brake on it, is yet another factor she doesn't speak to directly: the relative maturity of the markets and buying processes for different media, and the experience of the participants in those markets.  A more mature, well-trafficked market, with well-understood dynamics and lots of liquidity (think of the ability of agencies and media brokers to resell time in TV's spot markets, for example), will, at the margin, attract and retain dollars, particularly while the true value of different media remains elusive. (This of course is one reason why attribution analysis is so hot, as evidenced by Google's and AOL Platforms' recent acquisitions in this space.)  I say "in favor" because as mobile ad markets mature over time, this disadvantage will erode.

So for advertisers, agency and media execs, entrepreneurs, and investors looking to play the arbitrage game at the edges of Meeker's observation, the question is, what adjustment factors for demand, supply, and market maturity would you apply this year and next?  It's not an idle question: tons of advertisers' media plans and publishers' business plans ride on these assumptions about how much money is going to come to or go away from them, and Meeker's report is an influential input into these plans in many cases.

A tactical limitation of Meeker's analysis is that while she suggests the overall potential shift in relative allocation of ad dollars (her slide suggests a "~$30B+" digital advertising growth opportunity in the USA alone - up from $20B last year*), she doesn't suggest a timescale and trendline for the pace with which we'll get there. One way to come at this is to look at the last 3-4 annual presentations she's made, and see how the relationships she's observed have changed over time.  Interestingly, in her 2013 report using 2012 data, on page 5, 12% of time is spent on mobile devices, and 3% of ad dollars are going there, for a 4x difference in percentages. In the 2014 report using 2013 data, 20% of time is spent on mobile, and 5% of media dollars are going there -- again, a 4x relationship.  
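To make the arithmetic concrete, here's a trivial back-of-envelope sketch in Python of the time-share-to-spend-share ratio described above, using the percentages quoted from the two reports. It's just the division spelled out, not Meeker's methodology:

    # Share-of-time vs. share-of-spend ratios, using the figures quoted above
    # from the 2013 report (2012 data) and the 2014 report (2013 data).
    reports = {
        2013: {"time_share": 0.12, "spend_share": 0.03},
        2014: {"time_share": 0.20, "spend_share": 0.05},
    }

    for year, shares in sorted(reports.items()):
        ratio = shares["time_share"] / shares["spend_share"]
        print("%d report: time %.0f%%, spend %.0f%%, ratio %.1fx"
              % (year, shares["time_share"] * 100, shares["spend_share"] * 100, ratio))
    # -> 2013 report: time 12%, spend 3%, ratio 4.0x
    # -> 2014 report: time 20%, spend 5%, ratio 4.0x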

So, if the equivalency zeitgeist is at work, for the moment it may be stuck in a phone booth. But in the end I'm reminded of the futurist Roy Amara's saying: "We tend to overestimate the effect of a technology in the short term and underestimate its effect in the long term."  Plus let's not forget new technologies (Glass, Oculus Rift, both portable and large/immersive) that will further jumble relevant media categories in years to come.

(*eMarketer seems to think we'll hit the $30B mobile advertising run rate sometime during 2016-2017.)

 

October 13, 2013

Unpacking Healthcare.gov

So healthcare.gov launched, with problems.  I'm trying to understand why, so I can apply some lessons in my professional life.  Here are some ideas.

First, I think it helps to define some levels of the problem.  I can think of four:

1. Strategic / policy level -- what challenges do the goals we set create?  In this case, the objective is basically two-fold: first, reduce the costs of late-stage, high-cost uncompensated care by enrolling the people who ultimately use that care (middle-aged poor folks and other unfortunates) in health insurance that will get them care earlier and reduce stress / improve outcomes (for them and for society) later; second, reduce the cost of this insurance through exchanges that drive competition.  So, basically, bring a bunch of folks from, in many cases, the wrong side of the Digital Divide, and expose them to a bunch of eligibility- and choice-driven complexity (proof: the need for "Navigators"). Hmm.  (Cue the folks who say that's why we need a simple single-payor model, but the obvious response would be that it simply wasn't politically feasible.  We need to play the cards we're dealt.)

2. Experience level -- In light of that need, let's examine what the government did for each of the "Attract / Engage / Convert / Retain" phases of a Caveman User Experience.  It did promote the ACA -- arguably insufficiently, or not creatively enough to distinguish itself from the opposing signal levels it should have anticipated (one take here).  But more problematically, from what I can tell, the program skips "Engage" and emphasizes "Convert": Healthcare.gov immediately asks you to "Apply Now" (see screenshot below, where "Apply Now" is prominently featured over "Learn More", even on the "Learn" tab of the site). This is technically problematic (see #3 below), but also experientially a lot to ask when you don't yet know what's behind the curtain.

(Screenshot: Healthcare.gov home page)
3. Technical level -- Excellent piece in Washington Post by Timothy B. Lee. Basically, the system tries to do an eligibility check (for participation and subsidies) before sending you on to enrollment.  Doing this requires checking a bunch of other government systems.  The flowchart explains very clearly why this could be problematic.  There are some front end problems as well, described in rawest form by some of the chatter on Reddit, but from what I've seen these are more superficial, a function of poor process / time management, and fixable.

4. Organizational level -- Great article here in Slate by David Auerbach. Basically, poor coordination structure and execution by HHS of the front and back ends.

Second, here are some things HHS might do differently:

1. Strategic level: Sounds like some segmentation of the potential user base would have suggested a much greater investment in explanation / education, in advance of registration.  Since any responsible design effort starts with users and use cases, I'm sure they did this.  But what came out the other end doesn't seem to reflect that.  What bureaucratic or political considerations got in the way, and what can be revisited, to improve the result? Or, instead of allowing political hacks to infiltrate and dominate the ranks of engineers trying to design a service that works, why not embed competent technologists, perhaps drawn from the ranks of Chief Digital Officers, into the senior political ranks, to advise them on how to get things right online?

2. Experience level: Perhaps the first couple of levels of experience on healthcare.gov should have been explanatory?  "Here's what to expect, here's how this works..." Maybe video (could have used YouTube!)? Maybe also ask a couple of quick anonymous questions to determine whether the eligibility / subsidy check would be relevant, to spare the load on that engine, before seeing what plans might be available, at what price?  You could always re-ask / confirm that data later, once the user's past the shopping / evaluation stage, before formally enrolling them into a plan.  In ecommerce, we don't ask untargeted shoppers to enter discount codes until they're about to check out, right?

Or, why not pre-process and cache the answer to the eligibility question the system currently tries to calculate on the fly?  After all, the government already has all our social security numbers and green card numbers, and our tax returns.  So by the time any of us go to the site, it could have pre-determined the size of any potential subsidy we'd be eligible for, and it could have used this *estimated* subsidy to calculate a *projected* premium we might pay.  We'd need a little registration / security, maybe "enter your last name and social security number, and if they match we'll tell you your estimated subsidy". (I suppose returning a subsidy answer would confirm for a crook who knows my last name that he had my correct SSN, but maybe we could prevent the brute-force querying this requires with a CAPTCHA. Security friends, please advise.  Naturally, I'd make sure the pre-cached lookup file stays server-side, and isn't exposed as an array in a client-side Javascript snippet!)
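To illustrate the idea (and only the idea), here's a minimal Python sketch of a pre-computed, server-side subsidy lookup. Everything in it -- the field names, the hash-keyed table, keying on last name plus SSN -- is a hypothetical illustration, and the real eligibility rules are obviously far more involved:

    import hashlib

    # Pre-computed offline from tax and immigration records; stays server-side.
    # Keys are hashes of (lowercased last name + SSN); values are estimated
    # annual subsidies in dollars. The entries here are made up.
    PRECACHED_SUBSIDIES = {
        hashlib.sha256(b"doe123456789").hexdigest(): 2400,
    }

    def estimated_subsidy(last_name, ssn):
        """Return the pre-computed subsidy estimate, or None if no match."""
        key = hashlib.sha256((last_name.lower() + ssn).encode()).hexdigest()
        return PRECACHED_SUBSIDIES.get(key)

    # A CAPTCHA and rate limiting in front of this endpoint would be needed
    # to blunt the brute-force SSN-guessing risk noted above.
    print(estimated_subsidy("Doe", "123456789"))  # -> 2400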

3. I see from viewing the page source they have Google Tag Manager running, so perhaps they also have Google Analytics running too, alongside whatever other things...  Since they've open-sourced the front end code and their content on Github, maybe they could also share what they're learning via GA, so we could evaluate ideas for improving the site in the context of that data?

4. It appears they are using Optimizely to test / optimize their pages (javascript from page source here).  While the nice pictures with people smiling may be optimal, there's plenty of research suggesting that by pushing many of the links to site content below the fold, and forcing us to scroll to see them, they might be burying the very resources the "experience perspective" I've described suggests they need to highlight.  So maybe this layout is in fact what maximizes the results they're looking for -- pressing the "Apply Now" button -- but maybe that's the wrong question to be asking!

Postscript, November 1:

Food for thought (scroll to bottom).  How does this happen?  Software engineer friends, please weigh in!

 

May 10, 2013

Book Review: Converge by @rwlord and @rvelez #convergebook

I just finished reading Converge, the new book on integrating technology, creativity, and media by Razorfish CEO Bob Lord and his colleague Ray Velez, the firm’s CTO.  (Full disclosure: I’ve known Bob as a colleague, former boss, and friend for more than twenty years and I’m a proud Razorfish alum from a decade ago.)

Reflecting on the book I’m reminded of the novelist William Gibson’s famous comment in a 2003 Economist interview that “The future’s already here, it’s just not evenly distributed.”  In this case, the near-perfect perch that two already-smart guys have on the Digital Revolution and its impact on global brands has provided them a view of a new reality most of the rest of us perceive only dimly.

So what is this emerging reality?  Somewhere along the line in my business education I heard the phrase, “A brand is a promise.”  Bob and Ray now say, “The brand is a service.”  In virtually all businesses that touch end consumers, and extending well into relevant supply chains, information technology has now made it possible to turn what used to be communication media into elements of the actual fulfillment of whatever product or service the firm provides.  

One example they point to is Tesco’s virtual store format, in which images of stocked store shelves are projected on the wall of, say, a train station, and commuters can snap the QR codes on the yogurt or quarts of milk displayed and have their order delivered to their homes by the time they arrive there: Tesco’s turned the billboard into your cupboard.  Another example they cite is Audi City, the Kinect-powered configurator experience through which you can explore and order the Audi of your dreams.  As the authors say, “marketing is commerce, and commerce is marketing.”

But Bob and Ray don’t just describe, they also prescribe.  I’ll leave you to read the specific suggestions, which aren’t necessarily new.  What is fresh here is the compelling case they make for them; for example, their point-by-point case for leveraging the public cloud is very persuasive, even for the most security-conscious CIO.  Also useful is their summary of the Agile method, and of how they’ve applied it for their clients.

Looking more deeply, the book isn’t just another surf on the zeitgeist, but is theoretically well-grounded.  At one point early on, they say, “The villain in this book is the silo.”  On reading this (nicely turned phrase), I was reminded of the “experience curve” business strategy concept I learned at Bain & Company many years ago.  The experience curve, based on the idea that the more you make and sell of something, the better you (should) get at it, describes a fairly predictable mathematical relationship between experience and cost, and therefore between relative market share and profit margins.  One of the ways you can maximize experience is through functional specialization, which of course has the side effect of encouraging the development of organizational silos.  A hidden assumption in this strategy is that customer needs and associated attention spans stay pinned down and stable long enough to achieve experience-driven profitable ways to serve them.  But in today’s super-fragmented, hyper-connected, kaleidoscopic marketplace, this assumption breaks down, and the way to compete shifts from capturing experience through specialization, to generating experience “at-bats” through speedy iteration, innovation, and execution.  And this latter competitive mode relies more on the kind of cross-disciplinary integration that Bob and Ray describe so richly.

The book is a quick, engaging read, full of good stories drawn from their extensive experiences with blue-chip brands and interesting upstarts, and with some useful bits of historical analysis that frame their arguments well (in particular, I liked their exposition of the television upfront).  But maybe the best thing I can say about it is that it encouraged me to push harder and faster to stay in front of the future that’s already here.  Or, as a friend says, “We gotta get with the ‘90’s, they’re almost over!”

(See this review and buy the book on Amazon.com)


April 10, 2013

Fooling Around With Google App Engine @googlecloud

A simple experiment: the "Influence Reach Factor" Calculator. (Um, it just multiplies two numbers together.  But that's beside the point, which was to sort out what it's like to build and deploy an app to Google's App Engine, their cloud computing service.)

Answer: pretty easy.  Download the App Engine SDK.  Write your program (mine's in Python, code here, be kind, props and thanks to Bukhantsov.org for a good model to work from).  Deploy to GAE with a single click.
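For the curious, here's a minimal sketch of what such a tiny app looked like on the Python 2.7 App Engine runtime of that era, using the webapp2 framework. This is not the code linked above; the handler and parameter names are made up for illustration:

    import webapp2

    class CalculatorHandler(webapp2.RequestHandler):
        def get(self):
            # Hypothetical query parameters; the real app's inputs may differ.
            reach = float(self.request.get("reach", "0"))
            influence = float(self.request.get("influence", "0"))
            self.response.headers["Content-Type"] = "text/plain"
            self.response.write("Influence Reach Factor: %s" % (reach * influence))

    app = webapp2.WSGIApplication([("/", CalculatorHandler)], debug=True)

Point the SDK at a folder containing this plus a small app.yaml, and the one-click deploy mentioned above does the rest.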

By contrast, let's go back to 1999.  As part of getting up to speed at ArsDigita, I wanted to install the ArsDigita Community System (ACS), an open-source application toolkit and collection of modules for online communities.  So I dredged up an old PC from my basement, installed Linux, then Postgres, then AOLServer, then configured all of them so they'd welcome ACS when I spooled it up (oh so many hours RTFM-ing to get various drivers to work).  Then once I had it at "Hello World!" on localhost, I had to get it networked to the Web so I could show it to friends elsewhere (this being back in the days before the cable company shut down home-served websites).  

At which point, cue the Dawn Of Man.

Later, I rented servers from co-los. But I still had to worry about whether they were up, whether I had configured the stack properly, whether I was virus-free or enrolled as a bot in some army of darkness, or whether demand from the adoring masses was going to blow the capacity I'd signed up for. (Real Soon Now, surely!)

Now, Real Engineers will say that all of this served to educate me about how it all works, and they'd be right.  But unfortunately it also crowded out the time I had to learn about how to program at the top of the stack, to make things that people would actually use.  Now Google's given me that time back.

Why should you care?  Well, isn't it the case that you read everywhere about how you, or at least certainly your kids, need to learn to program to be literate and effective in the Digital Age?  And yet, like Kubrick's monolith, it all seems so opaque and impenetrable.  Where do you start?  One of the great gifts I received in the last 15 years was to work with engineers who taught me to peel it back one layer at a time.  My weak effort to pay it forward is this small, unoriginal advice: start by learning to program using a high-level interpreted language like Python, and by letting Google take care of the underlying "stack" of technology needed to show your work to your friends via the Web.  Then, as your functional or performance needs demand (which for most of us will be rarely), you can push to lower-level "more powerful" (flexible but harder to learn) languages, and deeper into the stack.

April 06, 2013

Dazed and Confused #opensource @perryhewitt @oreillymedia @roughtype @thebafflermag @evgenymorozov

Earlier today, my friend Perry Hewitt pointed me to a very thoughtful essay by Evgeny Morozov in the latest issue of The Baffler, titled "The Meme Hustler: Tim O'Reilly's Crazy Talk".  

A while back I worked at a free software firm (ArsDigita, where early versions of the ArsDigita Community System were licensed under GPL) and was deeply involved in developing  an "open source" license that balanced our needs, interests, and objectives with our clients' (the ArsDigita Public License, or ADPL, which was closely based on the Mozilla Public License, or MPL).  I've been to O'Reilly's conferences (<shameless> I remember a ~20-person 2001 Birds-of-a-Feather session in San Diego with Mitch Kapor and pre-Google Eric Schmidt on commercializing open source </shameless>).  Also, I'm a user of O'Reilly's books (currently have Charles Severance's Using Google App Engine in my bag).  So I figured I should read this carefully and have a point of view about the essay.  And despite having recently read Nicholas Carr's excellent and disturbing  2011 book The Shallows about how dumb the Internet has made me, I thought nonetheless that I should brave at least a superficial review of Morozov's sixteen-thousand-word piece.

To summarize: Morozov describes O'Reilly as a self-promoting manipulator who wraps and justifies his evangelizing of Internet-centered open innovation in software, and more recently government, in a Randian cloak sequined with Silicon Valley rhinestones.  My main reaction: "So, your point would be...?" More closely:

First, there's what Theodore Roosevelt had to say about critics. (Accordingly, I fully cop to the recursive hypocrisy of this post.) If, as Morozov says of O'Reilly, "For all his economistic outlook, he was not one to talk externalities..." then Morozov (as most of my fellow liberals do) ignores the utility of motivation.  I accept and embrace that with self-interest and the energy to pursue it, more (ahem, taxable) wealth is created.  So when O'Reilly says something, I don't reflexively reject it because it might be self-promoting; rather, I first try to make sure I understand how that benefits him, so I can better filter for what might benefit me. For example, Morozov writes:

In his 2007 bestseller Words That Work, the Republican operative Frank Luntz lists ten rules of effective communication: simplicity, brevity, credibility, consistency, novelty, sound, aspiration, visualization, questioning, and context. O’Reilly, while employing most of them, has a few unique rules of his own. Clever use of visualization, for example, helps him craft his message in a way that is both sharp and open-ended. Thus, O’Reilly’s meme-engineering efforts usually result in “meme maps,” where the meme to be defined—whether it’s “open source” or “Web 2.0”—is put at the center, while other blob-like terms are drawn as connected to it.
Where Morozov offers a warning, I see a manual! I just have to remember my obligation to apply it honestly and ethically.

Second, Morozov chooses not to observe that if O'Reilly and others hadn't broadened the free software movement into an "open source" one that ultimately offered more options for balancing the needs and rights of software developers with those of users (who themselves might also be developers), we might all still be in deeper thrall to proprietary vendors.  I know from first-hand experience that the world simply was not and is still not ready to accept GPL as the only option.

Nonetheless, good on Morozov for offering this critique of O'Reilly.  Essays like this help keep guys like O'Reilly honest, as far as that's necessary.  They also force us to think hard about what O'Reilly's peddling -- a responsibility that should be ours.  I used to get frustrated by folks who slapped the 2.0 label on everything, to the point of meaninglessness, until I appreciated that the meme and its overuse drove me to think and presented me with an opportunity to riff on it.  I think O'Reilly and others like him do us a great service when they try to boil down complexities into memes.  The trick for us is to make sure the memes are the start of our understanding, not the end of it.

August 31, 2012

#Data #Visualization To Soothe The Savage Beast @mbostock

So last night I'm sitting on the tarmac waiting for my flight to take off, chillin' to a Coldplay's-"Hurts-Like-Heaven"+poor-screaming-child-in-exhausted-parent's-lap-two-rows-behind-me mashed up mix worthy of Eminem and Dido's "Stan".  After we took off, the music moved into a second movement in which the child's keening seemed to slide seamlessly into the many sonic layers of "Paradise", to the point where I thought maybe Chris Martin was two years old once again.

Eventually Mom decided to bring Junior, a real cutie barely two feet tall, to the lavatory.  As the little guy passed by my seat he had a look at my laptop screen, where I was busy trying to decipher and hack the d3 Javascript in a clone of the NYT's beautiful visualization of the Obama budget.

(http://www.nytimes.com/interactive/2012/02/13/us/politics/2013-budget-proposal-graphic.html)

He paused as Mom forged ahead, lingering by my seat to watch as I clicked from view to view of the data, the bubbles bouncing and re-forming to convey the vectors and magnitudes of our collective fiscal choices from one perspective to another.  His eyes moved back and forth from the screen to mine.  He became very quiet, and for a few seconds, the cabin was silent.  

Thank you Mike Bostock.  Among your life's achievements, you can count, for a few brief moments of one night, 100 grateful passengers, one relieved mother, and one happy little boy.

August 08, 2012

A "Common Requirements Framework" for Campaign Management Systems and Marketing Automation

In our "marketing analytics agency" model, as distinguished from a more traditional consulting one, we measure success not just by the quality of the insights and opportunities we can help clients find, but by their ability to act on the ideas and get value for their investments.  Sometimes this means we simultaneously work both ends to an acceptable middle: even as we torture data and research for bright ideas, we help to define and influence the evolution of a marketing platform to be more capable.

This raises the question, "What's a marketing platform, and what's a good roadmap for making it more capable?"  Lots of vendors, including big ones like IBM, are now investing in answering these questions, especially as they try to reach beyond IT to sell directly to the CMO. These vendors provide myriad marketing materials to describe both the landscape and their products, which are variously described as "campaign management systems" or, even more gloriously, as "marketing automation solutions".  The proliferation of solutions is so mind-blowing that analyst firms build whole practices making sense of the category.  Here's a recent chart from Terence Kawaja at LUMA Partners (via Scott Brinker's blog) that illustrates the point beautifully:

(Chart: LUMA Partners landscape of marketing technology vendors)

Yet even with this guidance, organizations struggle to get relevant stakeholders on the same page about what's needed and how to proceed. My own experience has been that this is because they're missing a simple "Common Requirements Framework" that everyone can share as a point of departure for the conversation.  Here's one I've found useful.

Basically marketing is about targeting the right customers and getting them the right content (product information, pricing, and all the before-during-and-after trimmings) through the right channels at the right time.  So, a marketing automation solution, well, automates this.  More specifically, since there are lots of homegrown hacks and point solutions for different pieces of this, what's really getting automated is the manual conversion and shuffling of files from one system to the next, aka the integration of it all.  Some of these solutions also let you run analysis and tests out of the same platform (or partnered components).

Each of these functions has increasing levels of sophistication, which as of this writing I've characterized as "basic", "threshold", and "advanced".  For simple roadmapping / prioritization purposes, you might also call these "now", "next", and "later".

Targeting

The simplest form of targeting uses a single data source, past experience at the cash register, to decide whom to go back to, on the idea that you build a business inside out from your best, most loyal customers.  Cataloguers have a fancy term for this, "RFM", which stands for "Recency, Frequency, and Monetary Value", and which grades customers, typically into deciles, according to... how recently, how frequently, and how much they've bought from you.  Folks who score high get solicited more intensively (for example, more catalog drops).  By looking back at a customer's past RFM-defined marginal value to you (e.g., gross margin you earned from stuff you sold her), you can make a decision about how much to spend marketing to her.
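Here's a minimal sketch of that decile-grading step in Python / pandas. The column names and the 1-to-10 grading convention are my own illustration, not a standard:

    import pandas as pd

    def rfm_scores(orders, as_of):
        """Grade each customer 1-10 on Recency, Frequency, and Monetary value."""
        summary = orders.groupby("customer_id").agg(
            last_order=("order_date", "max"),
            frequency=("order_date", "count"),
            monetary=("revenue", "sum"),
        )
        summary["recency_days"] = (as_of - summary["last_order"]).dt.days

        # Decile ranks: 10 = best (most recent, most frequent, biggest spender).
        # rank(method="first") breaks ties so qcut always finds ten bins.
        summary["R"] = pd.qcut(summary["recency_days"].rank(method="first"),
                               10, labels=list(range(10, 0, -1))).astype(int)
        summary["F"] = pd.qcut(summary["frequency"].rank(method="first"),
                               10, labels=list(range(1, 11))).astype(int)
        summary["M"] = pd.qcut(summary["monetary"].rank(method="first"),
                               10, labels=list(range(1, 11))).astype(int)
        return summary[["R", "F", "M"]]

    # Usage (hypothetical, assuming enough customers to fill ten deciles):
    # scores = rfm_scores(orders_df, as_of=pd.Timestamp("2012-08-08"))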

One step up, you add demographic and behavioral information about customers and prospects to refine and expand your lists of folks to target.  Demographically, for example, you might say, "Hey, my best customers all seem to come from Greenwich, CT.  Maybe I should target other folks who live there."  You might add a few other dimensions to that, like age and gender. Or you might buy synthetic, "psychographic" definitions from data vendors who roll a variety of demographic markers into inferred attitudes.  Behaviorally, you might say "Let's retarget folks who walk into our store, or who put stuff into our online shopping cart but don't check out."  These are conceptually straightforward things to do, but are logistically harder, because now you have to integrate external and internal data sources, comply with privacy policies, etc.

In the third level, you begin to formalize the models implicit in these prior two steps, and build lists of folks to target based on their predicted propensity to buy (lots) from you.  So for example, you might say, "Folks who bought this much of this product this frequently, this recently who live in Greenwich and who visited our web site last week have this probability of buying this much from me, so therefore I can afford to target them with a marketing program that costs $x per person."  That's "predictive modelling".
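A hedged, self-contained sketch of what that propensity-modelling step might look like with scikit-learn is below; the features, the synthetic data, and the dollar figures are all invented for illustration:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 5000
    # Hypothetical features: R, F, M decile scores plus two behavioral flags.
    X = np.column_stack([
        rng.integers(1, 11, n),   # recency decile
        rng.integers(1, 11, n),   # frequency decile
        rng.integers(1, 11, n),   # monetary decile
        rng.integers(0, 2, n),    # lives in the target geography
        rng.integers(0, 2, n),    # visited the web site last week
    ])
    # Synthetic outcome, just so the example runs end to end.
    logit = 0.3 * X[:, 0] + 0.2 * X[:, 2] + 0.8 * X[:, 4] - 4.0
    y = rng.random(n) < 1 / (1 + np.exp(-logit))

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # Predicted propensities drive the "how much can I afford to spend on
    # this person" decision, e.g. against an assumed margin per converter.
    propensity = model.predict_proba(X_test)[:, 1]
    expected_margin = propensity * 120      # assume $120 gross margin per buyer
    worth_targeting = expected_margin > 5   # vs. an assumed $5 cost per contact
    print("Share of test customers worth targeting: %.0f%%"
          % (100 * worth_targeting.mean()))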

Some folks evaluate the sophistication of a targeting capability by how fine-grained the target segments get, or by how close to 1-1 personalization you can get.  In my experience, there are often diminishing returns to this, because the firm can't always practically execute differentiated experiences even when the marginal value of a personalized experience warrants it.  This isn't universally the case, of course: promotional offers and similar experience variables (e.g., credit limits) are easier to vary than, say, a hotel lobby.

Content

Again, a simple progression here, for me defined by the complexity of the content you can provide ("plain", "rich", "interactive") and by the flexibility and precision ("none", "pre-defined options", "custom options") with which you can target it through any given channel or combination of channels.

Another dimension to consider here is the complexity of the organizations and processes necessary to produce this content.  For example, in highly regulated environments like health care or financial services, you may need multiple approvals before you can publish something.  And the more folks involved, the more sophisticated and valuable the coordination tools, ranging from central repositories for templates and version control systems to alerts and even joint editing.  Beware, though, of simply paving cowpaths -- be sure you need all that content variety and process complexity before enabling it technologically, or it will simply expand to fit what the technology permits (the same way computer operating systems bloat as processors get more powerful).

Channels

The big dimension here is the number of channels you can string together for an integrated experience.  So for example, in a simple case you've got one channel, say email, to work with.  In a more sophisticated system, you can say, "When people who look like this come to our website, retarget them with ads in the display ad network we use." (Google just integrated Google Analytics with Google Display Network to do just this, for example, an ingenious move that further illustrates why they lead the pack in the display ad world.)  Pushing it even further, you could also say, "In addition to re-targeting web site visitors who do X, out in our display network, let's also send them an email / postcard combination, with connections to a landing page or phone center."

Analysis and Testing

In addition to executing campaigns and programs, a marketing solution might also support exploration of what campaigns and programs, or components thereof, might work best.  This happens in a couple of ways.  You can examine past behavior of customers and prospects to look for trends and build models that explain how changes and saliencies along one or more dimensions might have been associated with buying.  Also, you can define and execute A/B and multi-variate tests (with control groups) for targeting, content, and channel choices.
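As a deliberately simple illustration of evaluating such a test, here's a two-proportion z-test on made-up treatment and control response counts, using only the Python standard library:

    import math

    def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
        """Return (z, two-sided p-value) for the difference in response rates."""
        p_a, p_b = conv_a / float(n_a), conv_b / float(n_b)
        pooled = (conv_a + conv_b) / float(n_a + n_b)
        se = math.sqrt(pooled * (1 - pooled) * (1.0 / n_a + 1.0 / n_b))
        z = (p_a - p_b) / se
        p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided
        return z, p_value

    # Made-up example: new creative vs. control in an email campaign.
    z, p = two_proportion_ztest(conv_a=260, n_a=10000, conv_b=200, n_b=10000)
    print("z = %.2f, p = %.4f" % (z, p))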

Again, the question here is not just about how much data flexibility and algorithmic power you have to work with within the system, but how many integration hoops you have to go through to move from exploration to execution.  Obviously you won't want to run exploration and execution off the same physical data store, or even the same logical model, but it shouldn't take a major IT initiative to flip the right operational switches when you have an insight you'd like to try, or scale.

Concretely, the requirement you're evaluating here is best summarized by a couple of questions.  First, "Show me how I can track and evaluate differential response in the marketing campaigns and programs I execute through your proposed solution," and then, "Show me how I can define and test targeting, content, and channel variants of the base campaigns or programs, and then work the winners into a dominant share of our mix."

A Summary Picture

Here's a simple table that tries to bundle all of this up.  Notice that it focuses more on functions than features, and on capabilities instead of components.

Marketing Automation Common Requirements Framework

 

What's Right For You?

The important thing to remember is that these functions and capabilities are means, not ends.  To figure out what you need, you should reflect first on how any particular combination of capabilities would fit into your marketing organization's "vector and momentum".  How is your marketing performance trending?  How does it compare with competitors'?  In what parts -- targets, content, channels -- is it better or worse? What have you deployed recently and learned through its operation? What kind of track record have you established in terms of successful deployment and leverage from your efforts?  

If your answers are more like "I don't know" and "Um, not a great one", then you might be better off signing onto a mostly-integrated, cloud-based (so you don't compound business value uncertainty with IT risk), good-enough-across-most-things solution for a few years until you sort out -- affordably (read: rent, don't buy) -- what works for you, and what capability you need to go deep on. If, on the other hand, you're confident you have a good grip on where your opportunities are and you've got momentum with and confidence in your team, you might add best-of-breed capabilities at the margins of the more general "logical model" this proposed framework provides.  What's generally risky is to start with an under-performing operation built on spaghetti and plan for a smooth multi-year transition to a fully-integrated on-premise option.  That just puts too many moving parts into play, with too high an up-front, bet-on-the-come investment.

Again, remember that the point of a "Common Requirements Framework" isn't to serve as an exhaustive checklist for evaluating vendors.  It's best used as a simple model you can carry around in your head and share with others, so that when you do dive deep into requirements, you don't lose the forest for the trees, in a category that's become quite a jungle.  Got a better model, or suggestions for this one?  Let me know!

August 06, 2012

Zen and the Art of IT Planning #cio

It's been on my reading list forever, but this year I finally got around to Robert Pirsig's Zen and the Art of Motorcycle Maintenance.  It was heavy going in spots, but it didn't disappoint. So many wonderful ideas to think about and do something with. Among a thousand other things, I was taken with Pirsig's exposition of "gumption".  He describes it as a variable property developed in someone when he or she "connects with Quality" (the principal object of his inquiry).  He associates it with "enthusiasm", and writes:

A person filled with gumption doesn't sit around dissipating and stewing about things.  He's at the front of the train of his own awareness, watching to see what's up the track and meeting it when it comes.  That's gumption. (emphasis mine; Pirsig, Zen, p. 310, First Harper Perennial Modern Classics edition 2005)

In recent years I've tested my gumption limits in trivial and meaningful ways: built a treehouse, fixed an old snowblower, serviced sailboat winches, messed around in SQL and Python, started a business. For me, gumption was the "Well, here goes..." evanescent sense of that moment when preparation ends and experimentation begins, an amplified mix of anxiety and anticipation at the edge of the sort-of-known and the TBD.  Or, like the joy of catching a wave,  it's feeling for a short time what it's like to have your brain light up an order of magnitude more brightly than it manages on average, and watching your productivity soar.

So what's this got to do with IT planning?

For a while now I've been working with both big and small companies, and I've seen two types of IT planning happen in both settings. In one case there's endless talk of 3-year end-state architectures that seem to recede and disappear like mirages as you Gantt-crawl toward them.  In the other, there are endless hacks that "scratch itches" and make you feel like you're among the tribe of Real Men Who Ship, but which toast you six months later with security holes or scaling limits.

Getting access to data and having enough operational flexibility to act on the insights we help produce with this data are crucial to the success we try to help our clients achieve, and hold ourselves accountable for. So, (sticking with the motorcycle metaphor) a big part of my job is to be able to read what "gear" an IT organization is in, and to help it shift into the right one if needed -- in other words, to find a proper balance of planning and execution, or "the right amount of gumption".  One crude measure I've learned to apply is what I'm calling the "slide-to-screen" ratio (aka the ".ppt-to-.php" score for nerdier friends).

It's a simple calculation.  Take the number of components yet to be delivered in an IT architecture chart or slide, and divide it by the number of components or applications delivered over the same length of time looking backward.  For example, if the chart says 24 components will be delivered over the next three years, and the same number of comparable items have been delivered over the prior three years, you're running at "1".
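In code the ratio is as trivial as it sounds; here it is as a small Python function, using the example figures from the paragraph above:

    def slide_to_screen_ratio(components_planned, components_delivered):
        """Components promised ahead, divided by comparable components shipped
        over a look-back period of the same length."""
        return components_planned / float(components_delivered)

    # The example above: 24 promised over the next three years,
    # 24 comparable items delivered over the prior three.
    print(slide_to_screen_ratio(24, 24))  # -> 1.0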

Admittedly, the standard's arbitrary, and hard to compare across situations. It's the question that's valuable.  In one situation, there's lots of coding, but little clear sense of where it needs to go, tantamount to trying to drive fast in first gear.  In the other, there's lots of ambition, but not much seems to happen -- like trying to leave the driveway in fifth gear.  When I'm listening to an IT plan, I'm not only looking at the slides and the demos, I'm also feeling for the "gumption" of the authors, and where they are with respect to the "wave".  The best plans always seem to say something like, "Well, here's what we learned -- very specifically -- from the last 24 months' deployments, and here's what we think we need to do (and not) in the next 24 months as a result." They're simultaneously thoughtful and action-oriented.  Conversely, when I don't see this specifics-laden reflection, and instead get a generic look forward, and a squishy, over-hedged, non-committal roadmap for getting there, warning bells go off.

Pushing for the implications of the answer -- to downshift, or upshift, and how -- is incredibly valuable.  Above "1", pushing might sound like, "OK, so what pieces of this vision will you ship in each of the next 4 quarters, and what critical assumptions and dependencies are embedded in your answers?"  Below "1", the question might be, "So, what complementary capabilities, and security / usability / scalability enhancements do you anticipate needing to make these innovations commercially viable?"  The answers you get in that moment -- a "Blink"-style gumption test -- are more useful than any six-figure IT process or organizational audit will yield.

 

July 16, 2012

Congratulations @marissamayer on your new #Yahoo gig. Now what? Some ideas

Paul Simon wrote, "Every generation throws a hero at the pop charts."  Now it's Marissa Mayer's turn to try to make Yahoo!'s chart pop.  This will be hard because few tech companies are able to sustain value creation much past their IPOs.  

What strategic path for Yahoo! satisfies the following important requirements?

  • Solves a keenly felt customer / user / audience / human problem?
  • Fits within but doesn't totally overlap what other competitors provide?
  • Builds off things Yahoo! has / does well?
  • Fits Ms. Mayer's experiences, so she's playing from a position of strength and confidence?
  • As a consequence of all this, will bring advertisers back at premium prices?

Yahoo!'s company profile is a little buzzwordy but offers a potential point of departure.  What Yahoo! says:

"Our vision is to deliver your world, your way. We do that by using technology, insights, and intuition to create deeply personal digital experiences that keep more than half a billion people connected to what matters the most to them – across devices, on every continent, in more than 30 languages. And we connect advertisers to the consumers who matter to them most – the ones who will build their businesses – through our unique combination of Science + Art + Scale."

What Cesar infers:

Yahoo! is a filter.

Here are some big things the Internet helps us do:

  • Find
  • Connect
  • Share
  • Shop
  • Work
  • Learn
  • Argue
  • Relax
  • Filter

Every one of these functions has an 800 lb. gorilla, and a few aspirants, attached to it:

  • Find -- Google
  • Connect -- Facebook, LinkedIn
  • Share -- Facebook, Twitter, Yahoo!/Flickr (well, for the moment...)
  • Shop -- Amazon, eBay
  • Work -- Microsoft, Google, GitHub
  • Learn -- Wikipedia, Khan Academy
  • Argue -- Wordpress, Typepad, [insert major MSM digital presence here]
  • Relax -- Netflix, Hulu, Pandora, Spotify
  • Filter -- ...

Um, filter...  Filter.   There's a flood of information out there.  Who's doing a great job of filtering it for me?  Google alerts?  Useful but very crude.  Twitter?  I browse my followings for nuggets, but sometimes these are hard to parse from the droppings.  Facebook?  Sorry friends, but my inner sociopath complains it has to work too hard to sift the news I can use from the River of Life.

Filtering is still a tough, unsolved problem, arguably the problem of the age (or at least it was last year when I said so).  The best tool I've found for helping me build filters is Yahoo! Pipes.  (Example)

As far as I can tell, Pipes has remained this slightly wonky tool in Yahoo's bazaar suite of products.  Nerds like me get a lot of leverage from the service, but it's a bit hard to explain the concept, and the semi-programmatic interface is powerful but definitely not for the general public.

Now, what if Yahoo! were to embrace filtering as its core proposition, and build off the Pipes idea and experience under the guidance of Google's own UI guru -- the very same Ms. Mayer, hopefully applying the lessons of iGoogle's rise and fall -- to make it possible for its users to filter their worlds more effectively?  If you think about it, there are various services out there that tackle individual aspects of the filtering challenge: professional (e.g. NY Times, Vogue, Car and Driver), social (Facebook, subReddits), tribal (online communities extending from often offline affinities), algorithmic (Amazon-style collaborative filtering), sponsored (e.g., coupon sites).  No one is doing a good job of pulling these all together and allowing me to tailor their spews to my life.  Right now it's up to me to follow Gina Trapani's Lifehacker suggestion, which is to use Pipes.
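To make the "filter" idea tangible, here's a crude Python sketch of the kind of thing Pipes lets you wire up visually: pull a few RSS feeds and keep only the items matching your keywords. The feed URLs and keywords are placeholders, and feedparser is a third-party library:

    import feedparser

    FEEDS = [
        "https://example.com/tech.rss",
        "https://example.com/marketing.rss",
    ]
    KEYWORDS = {"analytics", "attribution", "privacy"}

    def filtered_items(feeds, keywords):
        """Yield (title, link) for entries whose title or summary hits a keyword."""
        for url in feeds:
            for entry in feedparser.parse(url).entries:
                text = (entry.get("title", "") + " " + entry.get("summary", "")).lower()
                if any(k in text for k in keywords):
                    yield entry.get("title", ""), entry.get("link", "")

    for title, link in filtered_items(FEEDS, KEYWORDS):
        print(title, "->", link)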

OK so let's review:

  • Valuable unsolved problem for customers / users: check.
  • Fragmented, undominated competitive space: check.
  • Yahoo! has credible assets / experience: check.
  • Marissa Mayer plays from position of strength and experience: check.
  • Advertisers willing to pay premium prices, in droves: ...

Well, let's look at this a bit.  I'd argue that a good filter is effectively a "passive search engine".  Basically through the filters people construct -- effectively "stored searches" -- they tell you what it is they are really interested in, and in what context and time they want it.  With cookie-based targeting under pressure on multiple fronts, advertisers will be looking for impression inventories that provide search-like value propositions without the tracking headaches.  Whoever can do this well could make major bank from advertisers looking for an alternative to the online ad biz Hydra (aka Google, Facebook, Apple, plus assorted minor others).

Savvy advertisers and publishers will pooh-pooh the idea that individual Pipemakers would be numerous enough or consistent enough on their own to provide the reach that is the reason Yahoo! is still in business.  But I think there are lots of ways around this.  For one, there's already plenty of precedent at other media companies for suggesting proto-Pipes -- usually called "channels"; Yahoo! calls them "sites" (example) -- and they have RSS feeds.  Portals like Yahoo!, major media like the NYT, and universities like Harvard suggest categories, offer pre-packaged RSS feeds, and even give you the ability to roll your own feed out of their content.  The problem is that it's still marketed as RSS, which even in this day and age is still a bit beyond most folks.  But if you find a more user-friendly way to "clone and extend" suggested Pipes, friends' Pipes, sponsored Pipes, etc., you've got a start.

Check?  Lots of hand-waving, I know.  But what's true is that Yahoo! has suffered from a loss of a clear identity.  And the path to re-growing its value starts with fixing that problem.

Good luck Marissa!


March 20, 2012

Organic Data Modeling in the Age of the Extrabase #analytics

Sorry for the buzzwordy title of this post, but hopefully you'll agree that sometimes buzzwords can be useful for communicating an important Zeitgeist.

I'm working with one of our clients right now to develop a new, advanced business intelligence capability that uses state-of-the-art in-memory data visualization tools like Tableau and Spotfire and that will ultimately connect multiple data sets to answer a range of important questions.  I've also been involved recently in a major analysis of advertising effectiveness that included a number of data sources that were either external to the organization, or non-traditional, or both.  In both cases, these efforts are likely to evolve toward predictive models of behavior to help prioritize efforts and allocate scarce resources.

Simultaneously, today's NYT carried an article about Clear Story, a Silicon Valley startup that aggregates APIs to public data sources about folks, and provides a highly simplified interface to those APIs for analysts and business execs.  I haven't yet tried their service, but I'll save that for a separate post.  The point here is that the emergence of services like this represents an important step in the evolution of Web 2.0 -- call it Web 2.2 -- that's very relevant for marketing analytics in enterprise contexts.

So, what's significant about these experiences?

Readers of Ralph Kimball's classic Data Warehouse Toolkit will appreciate both the wisdom of his advice and how, today, the context for it has changed.  Kimball is absolutely an advocate for starting with a clear idea of the questions you'd like to answer and for making pragmatic choices about how to organize information to answer them.  However, the major editions of the book were written in a time when three things were true:

  • You needed to organize information more thoughtfully up front, because the computing resources available to compensate for poor initial organization were less capable and more expensive
  • The number of data sources you could integrate was far more limited, allowing you to be more definitive up front about the data structures you defined to answer your target questions
  • The questions themselves, or the range of possible answers to them, were more limited and less dynamic, because the market context was as well

Together, these things made for business intelligence / data warehouse / data management efforts that were longer, and a bit more "waterfall" and episodic in execution.  However, over the past decade, many have critiqued such efforts for high failure rates, mostly cases in which they collapse under their own weight: too much investment, too much complexity, too few results.  Call this Planned Data Modeling.

Now back to the first experience I described above.  We're using the tools I mentioned to simultaneously hunt for valuable insights that will help pay the freight of the effort, define useful interfaces that users will keep using, and, through these efforts, determine the optimal data structures we need underneath to scale from the few million rows in the one big flat file we've started with to something that will no doubt be larger, more multi-faceted, and thus more complex.  In particular, we're using the ability of these tools to calculate synthetic variables on the fly out of the raw data to point the way toward the summaries and indices we'll eventually have to develop in our data repository.  This will improve the likelihood that the way we architect the repository will directly support real reporting and analysis requirements, prioritized based on actual usage in initial pilots, rather than speculative requirements obtained through more conventional means.  Call this Organic Data Modeling.
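Here's a tiny illustration of that "synthetic variable" workflow in Python / pandas; the column names and the derived metric are hypothetical stand-ins for the one-big-flat-file extract described above:

    import pandas as pd

    # Stand-in for the flat extract (columns hypothetical).
    df = pd.DataFrame({
        "acquisition_channel": ["email", "email", "display", "search", "search"],
        "marketing_touches":   [3, 1, 5, 2, 0],
        "revenue":             [120.0, 40.0, 60.0, 200.0, 35.0],
    })

    # Synthetic variable calculated on the fly -- not persisted anywhere yet.
    df["revenue_per_touch"] = df["revenue"] / df["marketing_touches"].clip(lower=1)

    # If this view earns its keep with analysts, it becomes a candidate for a
    # pre-computed summary (and supporting indices) in the eventual repository.
    summary = (df.groupby("acquisition_channel")["revenue_per_touch"]
                 .agg(["mean", "count"]))
    print(summary)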

Further, the work we've done anticipates that we will be weaving together a number of new sources of data, many of them externally provided, and that we'll likely swap sources in and out as we find that some are more useful than others.  It occurred to me that this large, heterogeneous, and dynamic collection of data sources would have characteristics sufficiently different, in terms of their analytic and administrative implications, that a different name altogether might be in order for the sum of the pieces.  Hence, the Extrabase.

These terms are not meant to cover up a cop-out.  In other words, some might say that mashing up a bunch of files in an in-memory visualization tool could reflect and further contribute to a lack of intellectual discipline and wherewithal to get it right.  In our case, we're hedging that risk, by having the data modelers responsible for figuring out the optimal data repository structure work extremely closely with the "front-end" analysts so that as potential data structure implications flow out of the rubber-meets-the-road analysis, we're able to sift them and decide which should stick and which we can ignore. 

But, as they say sometimes in software, "that's a feature, not a bug."  Meaning, mashing up files in these tools and seeing what's useful is a way of paying for and disciplining the back end data management process more rigorously, so that what gets built is based on what folks actually need, and gets delivered faster to boot.
