The May 20th 2013 edition of The New Yorker has an article by Vogue writer Nathan Heller on Massive Open Online Courses (MOOCs) titled "Laptop U: Has the future of college moved online?" The author explores, or at least raises, a number of related questions. How (well) does the traditional offline learning experience transfer online? Is the online learning experience more or less effective than the traditional one? (By what standard? For what material? What is gained and lost?) What will MOOCs mean for different colleges and universities, and their faculties? How will the MOOC revolution be funded? (In particular, what revenue model will emerge?)
Having worked a lot in the sector, for both public and private university clients, developing everything from technology, to online-enabled programs themselves, to analytic approaches, and even marketing and promotion, I found the article a good prompt to try to boil out some ways to think about answering these questions.
The article focuses almost exclusively on Harvard and EdX, the 12-school joint venture through which it's pursuing MOOCs. Obviously this skews the evaluation. Heller writes:
Education is a curiously alchemical process. Its vicissitudes are hard to isolate. Why do some students retain what they learned in a course for years, while others lose it through the other ear over their summer breaks? Is the fact that Bill Gates and Mark Zuckerberg dropped out of Harvard to revolutionize the tech industry a sign that their Harvard educations worked, or that they failed? The answer matters, because the mechanism by which conveyed knowledge blooms into an education is the standard by which MOOCs will either enrich teaching in this country or deplete it.
For me, the first step to boiling things out is to define what we mean by -- and want from -- an "education". So, let's try to unpack why people go to college. In most cases, Reason One is that you need a degree to get any sort of decent job. Reason Two is to plug into a network of people -- fellow students, alumni, faculty -- that provide you a life-long community. Of course you need a professional community for that Job thing, but also because in an otherwise anomic society you need an archipelago to seed friendships, companionships, and self-definition (or at least, as scaffolding for your personal brand: as one junior I heard on a recent college visit put it memorably, "Being here is part of the personal narrative I'm building.") Reason Three -- firmly third -- is to get an "education" in the sense that Heller describes. (Apropos: check this recording of David Foster Wallace's 2005 commencement address at Kenyon College.)
This hierarchy of needs then gives us a way to evaluate the prospects for MOOCs.
If organization X can produce graduates demonstrably better qualified (through objective testing, portfolios of work, and experience) to do job Y, at a lower cost, then it will thrive. If organization X can do this better and cheaper by offering and/or curating/aggregating MOOCs, then MOOCs will thrive. If a MOOC can demonstrate an adequately superior result/contribution to the end outcome, and do it inexpensively enough to hold its place in the curriculum, and do it often enough that its edge becomes a self-fulfilling prophecy -- a brand, in other words -- then it will crowd out its competitors, as surely as one plant blocks the sunlight from another.
If a MOOC-mediated social experience can connect you to a Club You Want To Be A Member Of, you will pay for that. And if a Club That Would Have You As A Member can attract you to its clubhouse with MOOCs, then MOOCs will line the shelves of its bar. The winning MOOC cocktails will be the ones that best produce the desired social outcomes, with the greatest number of satisfying connections.
Finally, learning is as much about the frame of mind of the student as it is about the quality of the teacher. If through the MOOC the student is able to choose a better time to engage, and can manage better the pace of the delivery of the subject matter, then the MOOC wins.
Beyond general prospects, as you consider these principles, it becomes clear that the question is less whether MOOCs win than which ones win, for what, for whom, and how.
The more objective and standardized -- and thus measurable and comparable -- the learning outcome and the standard of achievement, the greater the potential for a MOOC to dominate. My program either works, or it doesn't.
If a MOOC facilitates the kinds of content exchanges that seed and stimulate offline social gatherings -- pitches to VCs, or mock interviewing, or poetry, or dance routines, or photography, or music, or historical tours, or bird-watching trips, or snowblower-maintenance workshops -- then it has a better chance of fulfilling the longings of its students for connection and belonging.
And, the more well-developed the surrounding Internet ecosystem (Wikipedia, discussion groups, Quora forums, and beyond) is around a topic, the less I need a Harvard professor, or even a Harvard grad student, to help me, however nuanced and alchemical the experience I miss might otherwise have been. The prospect of schlepping to class or office hours on a cold, rainy November night has a way of diluting the urge to be there live in case something serendipitous happens.
Understanding how MOOCs win then also becomes a clue to understanding potential revenue models.
If you can get accredited to offer a degree based in part or whole on MOOCs, you can charge for that degree, and get students or the government to pay for it (Exhibit A: University of Phoenix). That's hard, but as a variant of this, you can get hired by an organization, or a syndicate of organizations you organize, to produce tailored degree programs -- think corporate training programs on steroids -- that use MOOCs to filter and train students. (Think "You, Student, pay for the 101-level stuff; if you pass you get a certificate and an invitation to attend the 201-level stuff that we fund; if you pass that we give you a job.")
Funding can come directly, or be subsidized by sponsors and advertisers, or both.
You can try to charge for content: if you produce a MOOC that someone else wants to include in a degree-based program, you can try to license it, in part or in whole.
You can make money via the service angle, the way self-publishing firms support authors, with a variety of best-practice based production services. Delivery might be offered via a freemium model -- the content might be free, but access to premium groups, with teaching assistant support, might come at a price. You can also promote MOOCs -- build awareness, drive distribution, even simply brand -- for a cut of the action, the way publishers and event promoters do.
Perhaps in the not-too-distant future we'll get the Academic Upfront, in which Universities front a semester's worth of classes in a MOOC, then pitch the class to sponsors, the way TV networks do today. Or maybe the retail industry offers a window into how MOOCs will be monetized. Today's retail environment is dominated by global brands (think professors as fashion designers) and big-box (plus Amazon) firms that dominate supply chains and distribution networks. Together, Brands and Retailers effectively act as filters: we assume that the products on their shelves are safe, effective, reasonably priced, acceptably stylish, and well-supported. In exchange, we'll pay their markup. This logic sounds a cautionary note for many schools: boutiques can survive as part of, or at the edges of, the mega-retailers' ecosystems, but small-to-mid-size firms reselling commodities get crushed.
Of course, these are all generic, unoriginal (see Ecclesiastes 1:9) speculations. Successful revenue models will blend careful attention to segmenting target markets and working back from their needs, resources, and processes (certain models might be friendlier to budgets and purchasing mechanisms than others) with thoughtful in-the-wild testing of the ideas. Monolithic executions with Neolithic measurement plans ("Gee, the focus group loved it, I can't understand why no one's signing up for the paid version!") are unlikely to get very far. Instead, be sure to design with testability in mind (make content modular enough to package or offer a la carte, for example). Maybe even use Kickstarter as a lab for different models!
I just finished reading Converge, the new book on integrating technology, creativity, and media by Razorfish CEO Bob Lord and his colleague Ray Velez, the firm’s CTO. (Full disclosure: I’ve known Bob as a colleague, former boss, and friend for more than twenty years and I’m a proud Razorfish alum from a decade ago.)
Reflecting on the book I’m reminded of the novelist William Gibson’s famous comment in a 2003 Economist interview that “The future’s already here, it’s just not evenly distributed.” In this case, the near-perfect perch that two already-smart guys have on the Digital Revolution and its impact on global brands has provided them a view of a new reality most of the rest of us perceive only dimly.
So what is this emerging reality? Somewhere along the line in my business education I heard the phrase, “A brand is a promise.” Bob and Ray now say, “The brand is a service.” In virtually all businesses that touch end consumers, and extending well into relevant supply chains, information technology has now made it possible to turn what used to be communication media into elements of the actual fulfillment of whatever product or service the firm provides.
One example they point to is Tesco’s virtual store format, in which images of stocked store shelves are projected on the wall of, say, a train station, and commuters can snap the QR codes on the yogurt or quarts of milk displayed and have their order delivered to their homes by the time they arrive there: Tesco’s turned the billboard into your cupboard. Another example they cite is Audi City, the Kinect-powered configurator experience through which you can explore and order the Audi of your dreams. As the authors say, “marketing is commerce, and commerce is marketing.”
But Bob and Ray don’t just describe, they also prescribe. I’ll leave you to read the specific suggestions, which aren’t necessarily new. What is fresh here is the compelling case they make for them; for example, their point-by-point case for leveraging the public cloud is very persuasive, even for the most security-conscious CIO. Also useful is their summary of the Agile method, and of how they’ve applied it for their clients.
Looking more deeply, the book isn’t just another surf on the zeitgeist, but is theoretically well-grounded. At one point early on, they say, “The villain in this book is the silo.” On reading this (nicely turned phrase), I was reminded of the “experience curve” business strategy concept I learned at Bain & Company many years ago. The experience curve, based on the idea that the more you make and sell of something, the better you (should) get at it, describes a fairly predictable mathematical relationship between experience and cost, and therefore between relative market share and profit margins. One of the ways you can maximize experience is through functional specialization, which of course has the side effect of encouraging the development of organizational silos. A hidden assumption in this strategy is that customer needs and associated attention spans stay pinned down and stable long enough to achieve experience-driven profitable ways to serve them. But in today’s super-fragmented, hyper-connected, kaleidoscopic marketplace, this assumption breaks down, and the way to compete shifts from capturing experience through specialization, to generating experience “at-bats” through speedy iteration, innovation, and execution. And this latter competitive mode relies more on the kind of cross-disciplinary integration that Bob and Ray describe so richly.
The book is a quick, engaging read, full of good stories drawn from their extensive experiences with blue-chip brands and interesting upstarts, and with some useful bits of historical analysis that frame their arguments well (in particular, I liked their exposition of the television upfront). But maybe the best thing I can say about it is that it encouraged me to push harder and faster to stay in front of the future that’s already here. Or, as a friend says, “We gotta get with the ‘90’s, they’re almost over!”
A simple experiment: the "Influence Reach Factor" Calculator. (Um, it just multiplies two numbers together. But that's beside the point, which was to sort out what it's like to build and deploy an app to Google's App Engine, their cloud computing service.)
Answer: pretty easy. Download the App Engine SDK. Write your program (mine's in Python, code here, be kind, props and thanks to Bukhantsov.org for a good model to work from). Deploy to GAE with a single click.
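For the curious, the calculator's core logic really is just that multiplication. Here's a minimal sketch (my actual deployed code is linked above; the function and parameter names here are purely illustrative):

```python
def influence_reach_factor(audience_size, engagement_rate):
    """Illustrative core of the calculator: multiply reach by rate.

    The deployed app wraps a calculation like this in a Google App
    Engine request handler that reads the two inputs from a web form
    and writes back the product.
    """
    return audience_size * engagement_rate
```

The point stands: the hard part used to be the serving stack, not the arithmetic.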
By contrast, let's go back to 1999. As part of getting up to speed at ArsDigita, I wanted to install the ArsDigita Community System (ACS), an open-source application toolkit and collection of modules for online communities. So I dredged up an old PC from my basement, installed Linux, then Postgres, then AOLServer, then configured all of them so they'd welcome ACS when I spooled it up (oh so many hours RTFM-ing to get various drivers to work). Then once I had it at "Hello World!" on localhost, I had to get it networked to the Web so I could show it to friends elsewhere (this being back in the days before the cable company shut down home-served websites).
At which point, cue the Dawn Of Man.
Later, I rented servers from co-los. But I still had to worry about whether they were up, whether I had configured the stack properly, whether I was virus-free or enrolled as a bot in some army of darkness, or whether demand from the adoring masses was going to blow the capacity I'd signed up for. (Real Soon Now, surely!)
Now, Real Engineers will say that all of this served to educate me about how it all works, and they'd be right. But unfortunately it also crowded out the time I had to learn about how to program at the top of the stack, to make things that people would actually use. Now Google's given me that time back.
Why should you care? Well, isn't it the case that you read everywhere about how you, or at least certainly your kids, need to learn to program to be literate and effective in the Digital Age? And yet, like Kubrick's monolith, it all seems so opaque and impenetrable. Where do you start? One of the great gifts I received in the last 15 years was to work with engineers who taught me to peel it back one layer at a time. My weak effort to pay it forward is this small, unoriginal advice: start by learning to program using a high-level interpreted language like Python, and by letting Google take care of the underlying "stack" of technology needed to show your work to your friends via the Web. Then, as your functional or performance needs demand (which for most of us will be rarely), you can push to lower-level "more powerful" (flexible but harder to learn) languages, and deeper into the stack.
A while back I worked at a free software firm (ArsDigita, where early versions of the ArsDigita Community System were licensed under GPL) and was deeply involved in developing an "open source" license that balanced our needs, interests, and objectives with our clients' (the ArsDigita Public License, or ADPL, which was closely based on the Mozilla Public License, or MPL). I've been to O'Reilly's conferences (<shameless> I remember a ~20-person 2001 Birds-of-a-Feather session in San Diego with Mitch Kapor and pre-Google Eric Schmidt on commercializing open source </shameless>). Also, I'm a user of O'Reilly's books (currently have Charles Severance's Using Google App Engine in my bag). So I figured I should read this carefully and have a point of view about the essay. And despite having recently read Nicholas Carr's excellent and disturbing 2011 book The Shallows about how dumb the Internet has made me, I thought nonetheless that I should brave at least a superficial review of Morozov's sixteen-thousand-word piece.
To summarize: Morozov describes O'Reilly as a self-promoting manipulator who wraps and justifies his evangelizing of Internet-centered open innovation in software, and more recently government, in a Randian cloak sequined with Silicon Valley rhinestones. My main reaction: "So, your point would be...?" More closely:
First, there's what Theodore Roosevelt had to say about critics. (Accordingly, I fully cop to the recursive hypocrisy of this post.) If, as Morozov says of O'Reilly, "For all his economistic outlook, he was not one to talk externalities..." then Morozov (as most of my fellow liberals do) ignores the utility of motivation. I accept and embrace that with self-interest and the energy to pursue it, more (ahem, taxable) wealth is created. So when O'Reilly says something, I don't reflexively reject it because it might be self-promoting; rather, I first try to make sure I understand how that benefits him, so I can better filter for what might benefit me. For example, Morozov writes:
In his 2007 bestseller Words That Work, the Republican operative Frank Luntz lists ten rules of effective communication: simplicity, brevity, credibility, consistency, novelty, sound, aspiration, visualization, questioning, and context. O’Reilly, while employing most of them, has a few unique rules of his own. Clever use of visualization, for example, helps him craft his message in a way that is both sharp and open-ended. Thus, O’Reilly’s meme-engineering efforts usually result in “meme maps,” where the meme to be defined—whether it’s “open source” or “Web 2.0”—is put at the center, while other blob-like terms are drawn as connected to it.

Where Morozov offers a warning, I see a manual! I just have to remember my obligation to apply it honestly and ethically.
Second, Morozov chooses not to observe that if O'Reilly and others hadn't broadened the free software movement into an "open source" one that ultimately offered more options for balancing the needs and rights of software developers with those of users (who themselves might also be developers), we might all still be in deeper thrall to proprietary vendors. I know from first-hand experience that the world simply was not and is still not ready to accept GPL as the only option.
Nonetheless, good on Morozov for offering this critique of O'Reilly. Essays like this help keep guys like O'Reilly honest, as far as that's necessary. They also force us to think hard about what O'Reilly's peddling -- a responsibility that should be ours. I used to get frustrated by folks who slapped the 2.0 label on everything, to the point of meaninglessness, until I appreciated that the meme and its overuse drove me to think and presented me with an opportunity to riff on it. I think O'Reilly and others like him do us a great service when they try to boil down complexities into memes. The trick for us is to make sure the memes are the start of our understanding, not the end of it.
We're currently working with a leading investment management firm to help deploy and refine a new retirement guidance process and related tools. As part of this, we're helping our client find a freelance project/business manager with broad new venture launch experience (not just management of a software development project, but coordination of promotional and operational aspects as well) for the balance of 2013. We would refer interested candidates to contract directly with our mid-Atlantic region client. (The work would be largely on-site.)
About the role:
If you're interested, please fill out the short form below, or please pass this on to someone you know who might be a good fit! Thanks.
I've written a short book. It's called "Pragmalytics: Practical Approaches to Marketing Analytics in the Digital Age". It's a collection and synthesis of some of the things I've learned over the last several years about how to take better advantage of data (Big and little) to make better marketing decisions, and to get better returns on your investments in this area.
The main point of the book is the need for orchestration. I see too much of the focus today on "If we build It (the Big Data Machine, with some data scientist high priests to look after it), good things will happen." My experience has been that you need to get "ecosystemic conditions" in balance to get value. You need to agree on where to focus. You need to get access to the data. You need to have the operational flexibility to act on any insights. And, you need to cultivate an "analytic marketer" mindset in your broader marketing team that blends perspectives, rather than cultivating an elite but blinkered cadre of "marketing analysts". Over the next few weeks, I'll further outline some of what's in the book in a few posts here on my blog.
I'm really grateful to the folks who were kind enough to help me with the book. The list includes: Mike Bernstein, Tip Clifton, Susan Ellerin, Ann Hackett, Perry Hewitt, Jeff Hupe, Ben Kline, Janelle Leonard, Sam Mawn-Mahlau, Bob Neuhaus, Judah Phillips, Trish Gorman Clifford, Rob Schmults, Michelle Seaton, Tad Staley, and my business partner, Jamie Schein. As I said in the book, if you like any of it, they get credit for salvaging it. The rest -- including several bits that even on the thousandth reading still aren't as clear as they should be, plus a couple of typos I need to fix -- are entirely my responsibility.
I'm also grateful to the wonderful firms and colleagues and clients I've had the good fortune to work for and with. I've named the ones I can, but in general have erred on the side of respecting their privacy and confidentiality where the work isn't otherwise in the public domain. To all of them: Thank You!
This field is evolving quickly in some ways, but there are also some timeless principles that apply to it. So, there are bits of the book that I'm sure won't age well (including some that are already obsolete), but others that I hope might. While I'm not one of those coveted Data Scientists by training, I'm deep into this stuff on a regular basis at whatever level is necessary to get a positive return from the effort. So if you're looking for a book on selecting an appropriate regression technique, or tuning Hadoop, you won't find that here, but if you're looking for a book about how to keep all the balls in the air (and in your brain), it might be useful to you. It's purposefully short -- about half the length of a typical business book. My mental model was to make it about as thick as "The Elements of Style", since that's something I use a lot (though you probably won't think so!). Plus, it's organized so you can jump in anywhere and snack as you wish, since this stuff can be toxic in large doses.
In writing it amidst all the Big Data craziness, I was reminded of Gandhi's saying (paraphrased) "First they ignore you... then they fight you, then you win." Having been in the world of marketing analytics now for a while, it seems appropriate to say that "First they ignore you, then they hype you, then you blend in." We're now in the "hype" phase. Not a day goes by without some big piece in the media about Big Data or Data Scientists (who have now hit the highly symbolic "$300k" salary benchmark -- the last time we saw that, in the online ad sales world of the middle of the last decade, it was a sell signal, BTW). "Pragmalytics" is more about the "blend in" phase, when all this "cool" stuff is part of the furniture that needs to work in harmony with the rest of the operation to make a difference.
"Pragmalytics" is available via Amazon (among other places). If you read it please do me a favor and rate and review it, or even better, please get in touch if you have questions or suggestions for improving it. FWIW, any earnings from it will go to Nashoba Learning Group (a school for kids with autism and related disorders).
Where it makes sense, I'd be very pleased to come talk to you and your colleagues about the ideas in the book and how to apply them, and possibly to explore working together. Also, in a triumph of Hope over Experience, my next book (starting now) will be a collection and synthesis of interviews with other senior marketing executives trying to put Big Data to work. So if you would be interested in sharing some experiences, or know folks who would, I'd love to talk.
About the cover: it's meant to convey the harmonious convergence of "Mars", "Venus", and "Earth" mindsets: that is, a blend of analytic acuity, creativity and communication ability, and practicality and results-orientation that we try to bring to our work. Fellow nerds will appreciate that it's a Cumulative Distribution Function where the exponent is, in a nod to an example in the book, 1.007.
(Nerd alert! You have been warned.)
Unoriginally, I'm a big fan of Nate Silver's fivethirtyeight blog. I've learned a ton from him (currently also reading his book The Signal and the Noise). For a little while now I've been puzzling over the relationship between his "Nowcast" on the presidential election and the price of Obama 2012 contracts at Intrade. Take a look at this chart I made based on the data from each of these sources:
If we look past Obama's disastrous first debate, the difference between the seven-day moving averages of the 538 Obama win probability and the Intrade Obama 2012 contract price fluctuates roughly around 10-15 points; call it 12. Also, judging by the volumes, the heaviest trading happens roughly midweek, before Friday. So if you trust Nate's projections, and unless you've got inside scoop about big negative surprises to come, the logical thing to do is to buy Obama 2012s tomorrow, with an expected profit of about $1.20 on each contract (about a 20% gain).
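The arithmetic behind that trade can be sketched as follows. Intrade's winner-take-all contracts paid $10 if the event occurred, and the probability and price below are illustrative stand-ins for the chart's actual values:

```python
def expected_profit(model_prob, market_price, payout=10.0):
    """Expected profit per contract if the model's probability is right.

    model_prob: win probability from the 538 Nowcast (0 to 1)
    market_price: dollars paid per contract now
    payout: dollars received if the event occurs
    """
    return model_prob * payout - market_price

# A ~12-point gap: the model says 72%, the market prices 60%
# (i.e., $6.00 per $10 contract).
edge = expected_profit(0.72, 6.00)
```

With those illustrative numbers the edge works out to about $1.20 per $6.00 contract, roughly the 20% figure above.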
Now for the nerdy part:
First, the easy job: Intrade lets you download historical prices on its contracts.
Next, the harder job: Nate doesn't provide a .csv of his data. But if you "view source" on his page, you'll see a file called:
right after a preceding description "Data URL".
For a little while I fiddled with the Stanford Visualization Group's Data Wrangler tool to reshape the remaining data into the .csv I needed. It's a powerful tool, but it turned out to be easier in this case to wrangle the file structure I wanted manually:
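A sketch of that manual wrangling step in Python, under the assumption that each record in the raw 538 feed carries a date first and the Obama win probability last (the real file's layout differs, and the field names are mine):

```python
import csv

def wrangle(raw_lines):
    """Reshape whitespace-delimited records into rows ready for a .csv."""
    rows = []
    for line in raw_lines:
        fields = line.split()
        rows.append({"date": fields[0], "obama_win_pct": float(fields[-1])})
    return rows

def write_csv(rows, path):
    """Write the reshaped rows out for joining against the Intrade download."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["date", "obama_win_pct"])
        writer.writeheader()
        writer.writerows(rows)
```

In practice, a text editor with good find-and-replace got me there faster than generalizing code like this.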
Combining the Intrade and 538 data and then plotting the Intrade close and the "Obama win pct" series results in the chart above.
My son Ben and I participated in the Dover Sherborn Boosters annual triathlon this past Sunday. We really enjoyed it. It was his first, and my first in 22 years. Well over 300 folks competed, well-mixed in age and gender. They seemed like a pretty competitive, well-trained bunch to us, judging by the 95%+ who had lean cheeks and wetsuits and fancy bikes and bags that said "Boston Triathlon Team".
After the race, I was curious to get a better handle on how we'd done. All Sports Events had done a great job of running and timing the event, and their table of results was very detailed and useful. But I wanted to see it a bit more visually. The All Sports Events folks were kind enough to share the data file, and with a little fiddling to parse and convert strings to times, I got to this (click on the image to launch the Tableau Public interactive visualization):
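The string-to-time conversion was the only fiddly part. A minimal version of what I mean (a sketch, not the exact code I used):

```python
def finish_seconds(t):
    """Convert a finish-time string like '1:23:45' or '59:30' to seconds."""
    seconds = 0
    for part in t.split(":"):
        seconds = seconds * 60 + int(part)
    return seconds
```

Once every split and finish time is a number of seconds, plotting and ranking become straightforward.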
Before the race, as I shivered un-rubbered on the beach waiting for the swim to start, I overheard a couple of guys my age talking about how now that they were in their forties, with their kids a little older and with more control at home and work (a state of grace I'm not yet familiar with), they had more time to train, especially on Saturday mornings.
Plotting 6th-order polynomial trend lines through the data revealed an interesting, if weak, pattern that seems to confirm this life-stage effect, for both men and women. Average performance improves radically as you move from your teens to your twenties, declines as the realities of family life intrude in your thirties, improves once again as you rediscover your inner narcissist in your forties, and then begins to decline again as Father Time eventually asserts himself (though with plenty of variance around the mean to give us hope). Like Shakespeare said, more or less.
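For fellow nerds, the trend lines are easy to reproduce. A hedged sketch with numpy, using made-up ages and finish times in place of the real results file:

```python
import numpy as np

# Illustrative data: ages and finish times (seconds); the real analysis
# used the All Sports Events results, split by gender.
ages = np.array([17, 22, 28, 34, 39, 44, 49, 55, 61])
times = np.array([5400, 4900, 5100, 5300, 5250, 5050, 5200, 5500, 5900])

# A 6th-order polynomial least-squares fit. High-order fits like this
# are ill-conditioned and wiggle freely, hence the "weak pattern" caveat.
coeffs = np.polyfit(ages, times, 6)
trend = np.poly1d(coeffs)  # callable: trend(40) estimates a 40-year-old's time
```

Tableau does the equivalent fit natively when you add a polynomial trend line to a scatter plot.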
What do you see? Thanks again to the organizers and volunteers for a great event!
So last night I'm sitting on the tarmac waiting for my flight to take off, chillin' to a Coldplay's-"Hurts-Like-Heaven"+poor-screaming-child-in-exhausted-parent's-lap-two-rows-behind-me mashed up mix worthy of Eminem and Dido's "Stan". After we took off, the music moved into a second movement in which the child's keening seemed to slide seamlessly into the many sonic layers of "Paradise", to the point where I thought maybe Chris Martin was two years old once again.
He paused as Mom forged ahead, lingering by my seat to watch as I clicked from view to view of the data, the bubbles bouncing and re-forming to convey the vectors and magnitudes of our collective fiscal choices from one perspective to another. His eyes moved back and forth from the screen to mine. He became very quiet, and for a few seconds, the cabin was silent.
Thank you Mike Bostock. Among your life's achievements, you can count, for a few brief moments of one night, 100 grateful passengers, one relieved mother, and one happy little boy.
Facebook's Sponsored Stories feature is one of the ad targeting horses the firm's counting on to pull it out of its current valuation morass (read this, via @bussgang).
Sponsored Stories is a virality-enhancing mechanism (no jokes please, that was an "a" not an "i") that allows Facebook advertisers to increase the reach of Facebook users' interactions with the advertisers' brands on Facebook (Likes, Check-ins, etc.). It does this by increasing the number of a user's Facebook friends who see such engagements with the advertisers' brands beyond the limited number who would, under normal application of the Facebook news feed algorithm, see those endorsements.
Many users are outraged that this unholy Son-Of-Beacon feature violates their privacy, to the point that they sue-and-settle (or try to, oops).
What they are missing perhaps is the opportunity to "surf" an advertiser's Sponsored Stories investment to amplify their own self-promotion or mere narcissism.
Consider the following simple example. Starbucks is / has been using this ad program. Let's say I go to Starbucks and "check in" on Facebook. Juiced by Sponsored Stories (via the additional impressions Starbucks has paid for), all of my Facebook friends browsing their news feeds will see I've checked in at Starbucks (and presumably feel all verklempt about a brand that could attract such a valued friend).
Now, what if I, savvy small business person, comment in my check in that I'm "at Starbucks, discussing my <link>NEW BOOK</link> with friends!" I've pulled off the social media equivalent of pasting my bumper sticker on Starbucks' billboard.
I need to look more closely into this, but as I understand it, the Sponsored Stories feature can't today prevent users from including negative feedback in their brand engagements, where commenting is allowed. And if it can't yet filter out the negative, it probably can't prevent more general forms of off-brand messaging either.
I'm sure others have considered this and other possibilities. Comments very welcome! Meanwhile, I'm off to Starbucks to discuss my upcoming NEW BOOK.
In our "marketing analytics agency" model, as distinguished from a more traditional consulting one, we measure success not just by the quality of the insights and opportunities we can help clients to find, but on their ability to act on the ideas and get value for their investments. Sometimes this means we simultaneously work both ends to an acceptable middle: even as we torture data and research for bright ideas, we help to define and influence the evolution of a marketing platform to be more capable.
This raises the question, "What's a marketing platform, and a good roadmap for making it more capable?" Lots of vendors, including big ones like IBM, are now investing in answering these questions, especially as they try to reach beyond IT to sell directly to the CMO. These vendors provide myriad marketing materials to describe both the landscape and their products, which are variously described as "campaign management systems" or even more gloriously as "marketing automation solutions". The proliferation of solutions is so mind-blowing that analyst firms build whole practices making sense of the category. Here's a recent chart from Terence Kawaja at LUMA Partners (via Scott Brinker's blog) that illustrates the point beautifully:
Yet even with this guidance, organizations struggle to get relevant stakeholders on the same page about what's needed and how to proceed. My own experience has been that this is because they're missing a simple "Common Requirements Framework" that everyone can share as a point of departure for the conversation. Here's one I've found useful.
Basically marketing is about targeting the right customers and getting them the right content (product information, pricing, and all the before-during-and-after trimmings) through the right channels at the right time. So, a marketing automation solution, well, automates this. More specifically, since there are lots of homegrown hacks and point solutions for different pieces of this, what's really getting automated is the manual conversion and shuffling of files from one system to the next, aka the integration of it all. Some of these solutions also let you run analysis and tests out of the same platform (or partnered components).
Each of these functions has increasing levels of sophistication I've characterized, as of this writing, into "basic", "threshold", and "advanced". For simple roadmapping / prioritization purposes, you might also call these "now", "next", and "later".
Targeting

The simplest form of targeting uses a single data source -- past experience at the cash register -- to decide whom to go back to, on the idea that you build a business inside out from your best, most loyal customers. Cataloguers have a fancy term for this, "RFM", which stands for "Recency, Frequency, and Monetary Value". RFM grades customers, typically into deciles, according to... how recently, how frequently, and how much they've bought from you. Folks who score high get solicited more intensively (for example, more catalog drops). By looking back at a customer's past RFM-defined marginal value to you (e.g., gross margin you earned from stuff you sold her), you can make a decision about how much to spend marketing to her.
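Here's a toy sketch of RFM decile scoring in Python. The customer records and the "today" date are made up, and with only four customers the deciles collapse to a handful of distinct grades -- a real implementation would score a full customer base:

```python
from datetime import date

# Hypothetical customer summaries: (customer_id, last_purchase, n_orders, total_spend)
customers = [
    ("a", date(2013, 5, 1), 12, 840.00),
    ("b", date(2012, 11, 3), 2, 65.00),
    ("c", date(2013, 4, 20), 7, 310.00),
    ("d", date(2013, 1, 15), 4, 120.00),
]

def decile_scores(values, reverse=False):
    """Rank values and spread the ranks across scores 1..10 (10 = best)."""
    order = sorted(range(len(values)), key=lambda i: values[i], reverse=reverse)
    scores = [0] * len(values)
    for rank, i in enumerate(order):
        scores[i] = 1 + (rank * 10) // len(values)
    return scores

today = date(2013, 6, 1)
recency = [(today - c[1]).days for c in customers]   # fewer days = better
frequency = [c[2] for c in customers]                # more orders = better
monetary = [c[3] for c in customers]                 # more spend = better

r = decile_scores(recency, reverse=True)  # sort descending so the most recent ranks highest
f = decile_scores(frequency)
m = decile_scores(monetary)

# Each customer gets an (R, F, M) grade; high scorers get solicited more intensively.
rfm = {c[0]: (r[i], f[i], m[i]) for i, c in enumerate(customers)}
```

Customer "a" (recent, frequent, big spender) grades out at the top on all three dimensions; customer "b" lands at the bottom.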
One step up, you add demographic and behavioral information about customers and prospects to refine and expand your lists of folks to target. Demographically, for example, you might say, "Hey, my best customers all seem to come from Greenwich, CT. Maybe I should target other folks who live there." You might add a few other dimensions to that, like age and gender. Or you might buy synthetic, "psychographic" definitions from data vendors who roll a variety of demographic markers into inferred attitudes. Behaviorally, you might say "Let's retarget folks who walk into our store, or who put stuff into our online shopping cart but don't check out." These are conceptually straightforward things to do, but are logistically harder, because now you have to integrate external and internal data sources, comply with privacy policies, etc.
In the third level, you begin to formalize the models implicit in these prior two steps, and build lists of folks to target based on their predicted propensity to buy (lots) from you. So for example, you might say, "Folks who bought this much of this product this frequently, this recently who live in Greenwich and who visited our web site last week have this probability of buying this much from me, so therefore I can afford to target them with a marketing program that costs $x per person." That's "predictive modelling".
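The "therefore I can afford to target them" arithmetic can be sketched in a few lines. Everything here is hypothetical -- the propensity scores, margins, and cost figure are made up, and the model that produced `p_buy` is assumed to exist already:

```python
# Hypothetical prospects, pre-scored by an assumed propensity model.
# p_buy: predicted probability of purchase; exp_margin: expected gross margin if they buy.
prospects = [
    {"id": "a", "p_buy": 0.12, "exp_margin": 180.0},
    {"id": "b", "p_buy": 0.02, "exp_margin": 300.0},
    {"id": "c", "p_buy": 0.30, "exp_margin": 40.0},
]

COST_PER_PERSON = 9.00  # assumed cost of the marketing program, per target

def worth_targeting(p):
    """Target when expected value (propensity x expected margin) exceeds program cost."""
    return p["p_buy"] * p["exp_margin"] > COST_PER_PERSON

targets = [p["id"] for p in prospects if worth_targeting(p)]
```

Prospect "a" (0.12 x $180 = $21.60) and "c" (0.30 x $40 = $12.00) clear the $9 bar; "b" (0.02 x $300 = $6.00) doesn't, despite the bigger potential ticket.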
Some folks evaluate the sophistication of a targeting capability by how fine-grained the target segments get, or by how close you can come to 1-1 personalization. In my experience, there are diminishing returns to this, often because the firm can't practically execute differentiated experiences even when the marginal value of a personalized experience warrants it. This isn't universally the case, of course: promotional offers and similar experience variables (e.g., credit limits) are easier to vary than, say, a hotel lobby.
Content

Again, a simple progression here, for me defined by the complexity of the content you can provide ("plain", "rich", "interactive") and by the flexibility and precision ("none", "pre-defined options", "custom options") with which you can target it through any given channel or combination of channels.
Another dimension to consider here is the complexity of the organizations and processes necessary to produce this content. For example, in highly regulated environments like health care or financial services, you may need multiple approvals before you can publish something. And the more folks involved, the more sophisticated and valuable the coordination tools, ranging from central repositories for templates, to version control systems, alerts, and even joint editing. Beware, though, of simply paving cowpaths -- be sure you need all that content variety and process complexity before enabling it technologically, or it will simply expand to fit what the technology permits (the same way computer operating systems bloat as processors get more powerful).
Channels

The big dimension here is the number of channels you can string together for an integrated experience. So for example, in a simple case you've got one channel, say email, to work with. In a more sophisticated system, you can say, "When people who look like this come to our website, retarget them with ads in the display ad network we use." (Google recently integrated Google Analytics with the Google Display Network to do exactly this, an ingenious move that further illustrates why they lead the pack in the display ad world.) Pushing it even further, you could also say, "In addition to re-targeting web site visitors who do X, out in our display network, let's also send them an email / postcard combination, with connections to a landing page or phone center."
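That kind of cross-channel orchestration boils down to trigger-and-action rules. A minimal sketch, with entirely hypothetical trigger names, creatives, and rules:

```python
def plan_touches(visitor):
    """Map a visitor event to a coordinated set of (channel, creative) actions.

    The triggers and creative names are hypothetical -- a real system would
    drive these rules from the targeting and content capabilities above.
    """
    touches = []
    if visitor.get("abandoned_cart"):
        # Coordinated multi-channel follow-up for a high-intent behavior
        touches.append(("display", "retarget_cart_creative"))
        touches.append(("email", "cart_reminder_with_landing_page"))
        touches.append(("postcard", "offer_with_phone_center_number"))
    elif visitor.get("visited_product_page"):
        # Lighter touch for weaker signals
        touches.append(("display", "category_awareness_creative"))
    return touches

plan = plan_touches({"abandoned_cart": True})
```

The sophistication question is how many channels the `touches` list can legitimately span without manual file-shuffling between systems.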
Analysis and Testing
In addition to executing campaigns and programs, a marketing solution might also support exploring which campaigns and programs, or components thereof, might work best. This happens in a couple of ways. You can examine past behavior of customers and prospects to look for trends, and build models that explain how changes and saliencies along one or more dimensions might have been associated with buying. You can also define and execute A/B and multi-variate tests (with control groups) for targeting, content, and channel choices.
Again, the question here is not just about how much data flexibility and algorithmic power you have to work with within the system, but how many integration hoops you have to go through to move from exploration to execution. Obviously you won't want to run exploration and execution off the same physical data store, or even the same logical model, but it shouldn't take a major IT initiative to flip the right operational switches when you have an insight you'd like to try, or scale.
Concretely, the requirement you're evaluating here is best summarized by a couple of questions. First, "Show me how I can track and evaluate differential response in the marketing campaigns and programs I execute through your proposed solution," and then, "Show me how I can define and test targeting, content, and channel variants of the base campaigns or programs, and then work the winners into a dominant share of our mix."
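On the "define and test variants" piece, the core evaluation step is standard statistics. Here's a sketch of a two-proportion z-test for an A/B test, applied to made-up conversion counts for a control and a variant:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for the difference in conversion rates between variants A and B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under the null
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical test: control converts 200/10000 (2.0%), variant 260/10000 (2.6%)
z = two_proportion_z(200, 10000, 260, 10000)
significant = abs(z) > 1.96  # ~95% confidence, two-sided
```

Any platform will run this math for you; the requirement in the questions above is the unglamorous part -- wiring the winner back into the execution side without a major IT project.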
A Summary Picture
Here's a simple table that tries to bundle all of this up. Notice that it focuses on functions and capabilities rather than on features and components.
What's Right For You?
The important thing to remember is that these functions and capabilities are means, not ends. To figure out what you need, you should reflect first on how any particular combination of capabilities would fit into your marketing organization's "vector and momentum". How is your marketing performance trending? How does it compare with competitors'? In what parts -- targets, content, channels -- is it better or worse? What have you deployed recently and learned through its operation? What kind of track record have you established in terms of successful deployment and leverage from your efforts?
If your answers are more like "I don't know" and "Um, not a great one", then you might be better off signing onto a mostly-integrated, cloud-based (so you don't compound business value uncertainty with IT risk), good-enough-across-most-things solution for a few years until you sort out -- affordably (read: rent, don't buy) -- what works for you, and what capability you need to go deep on. If, on the other hand, you're confident you have a good grip on where your opportunities are and you've got momentum with and confidence in your team, you might add best-of-breed capabilities at the margins of the more general "logical model" this framework provides. What's generally risky is to start with an under-performing operation built on spaghetti and plan for a smooth multi-year transition to a fully-integrated on-premise option. That just puts too many moving parts into play, with too high an up-front, bet-on-the-come investment.
Again, remember that the point of a "Common Requirements Framework" isn't to serve as an exhaustive checklist for evaluating vendors. It's best used as a simple model you can carry around in your head and share with others, so that when you do dive deep into requirements, you don't lose the forest for the trees, in a category that's become quite a jungle. Got a better model, or suggestions for this one? Let me know!
It's been on my reading list forever, but this year I finally got around to Robert Pirsig's Zen and the Art of Motorcycle Maintenance. It was heavy going in spots, but it didn't disappoint. So many wonderful ideas to think about and do something with. Among a thousand other things, I was taken with Pirsig's exposition of "gumption". He describes it as a variable property developed in someone when he or she "connects with Quality" (the principal object of his inquiry). He associates it with "enthusiasm", and writes:
A person filled with gumption doesn't sit around dissipating and stewing about things. He's at the front of the train of his own awareness, watching to see what's up the track and meeting it when it comes. That's gumption. (emphasis mine; Pirsig, Zen, p. 310, First Harper Perennial Modern Classics edition 2005)
In recent years I've tested my gumption limits in trivial and meaningful ways: built a treehouse, fixed an old snowblower, serviced sailboat winches, messed around in SQL and Python, started a business. For me, gumption was the "Well, here goes..." evanescent sense of that moment when preparation ends and experimentation begins, an amplified mix of anxiety and anticipation at the edge of the sort-of-known and the TBD. Or, like the joy of catching a wave, it's feeling for a short time what it's like to have your brain light up an order of magnitude more brightly than it manages on average, and watching your productivity soar.
So what's this got to do with IT planning?
For a while now I've been working with both big and small companies, and seen two types of IT planning happen in both settings. In one case there's endless talk of 3-year end-state architectures that seem to recede and disappear like mirages as you Gantt-crawl toward them. In the other, there's endless hacks that "scratch itches" and make you feel like you're among the tribe of Real Men Who Ship, but which toast you six months later with security holes or scaling limits.
Getting access to data and having enough operational flexibility to act on the insights we help produce with this data are crucial to the success we try to help our clients achieve, and hold ourselves accountable for. So, (sticking with the motorcycle metaphor) a big part of my job is to be able to read what "gear" an IT organization is in, and to help it shift into the right one if needed -- in other words, to find a proper balance of planning and execution, or "the right amount of gumption". One crude measure I've learned to apply is what I'm calling the "slide-to-screen" ratio (aka the ".ppt-to-.php" score for nerdier friends).
It's a simple calculation. Take the number of components yet to be delivered in an IT architecture chart or slide, and divide them by the number of components or applications delivered over the same time period looking backward. For example, if the chart says 24 components will be delivered over the next three years, and the same number of comparable items have been delivered over the prior three years, you're running at "1".
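In code, the calculation is trivial -- which is the point; the value is in the conversation it forces. A sketch, using the example counts from the text:

```python
def slide_to_screen(planned_components, delivered_components):
    """Ratio of components promised on forward-looking slides to comparable
    components actually shipped over the same trailing period."""
    if delivered_components == 0:
        return float("inf")  # all plan, no delivery: a warning sign in itself
    return planned_components / delivered_components

ratio = slide_to_screen(24, 24)  # 24 planned over 3 years, 24 shipped in the prior 3: running at "1"
```

Well above 1 suggests fifth gear in the driveway; well below 1, driving fast in first.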
Admittedly, the standard's arbitrary, and hard to compare across situations. It's the question that's valuable. In one situation, there's lots of coding, but little clear sense of where it needs to go, tantamount to trying to drive fast in first gear. In the other, there's lots of ambition, but not much seems to happen -- like trying to leave the driveway in fifth gear. When I'm listening to an IT plan, I'm not only looking at the slides and the demos, I'm also feeling for the "gumption" of the authors, and where they are with respect to the "wave". The best plans always seem to say something like, "Well, here's what we learned -- very specifically -- from the last 24 months' deployments, and here's what we think we need to do (and not) in the next 24 months as a result." They're simultaneously thoughtful and action-oriented. Conversely, when I don't see this specifics-laden reflection, and instead get a generic look forward, and a squishy, over-hedged, non-committal roadmap for getting there, warning bells go off.
Pushing for the implications of the answer -- to downshift, or upshift, and how -- is incredibly valuable. Above "1", pushing might sound like, "OK, so what pieces of this vision will you ship in each of the next 4 quarters, and what critical assumptions and dependencies are embedded in your answers?" Below "1", the question might be, "So, what complementary capabilities, and security / usability / scalability enhancements do you anticipate needing to make these innovations commercially viable?" The answers you get in that moment -- a "Blink"-style gumption test -- are more useful than any six-figure IT process or organizational audit will yield.
I've been working with a global financial services firm to develop its marketing analytics / intelligence capability, and we're now building a highly capable team to further extend and sustain the results and lessons so far. This includes a Marketing Analytics Director to lead a strong team doing advanced data mining and predictive modeling to support high-impact opportunities in various areas of the firm. Here's the job description on LinkedIn. If you are currently working at a large marketer, major analytics consulting firm, or advertising agency, and have significant experience analyzing, communicating, and implementing sophisticated multi-channel marketing programs, and are up for the challenge of leading a new team in this area for a world-class firm in a great city, please get in touch!
As many of you know (having been barraged with a Twit-tensity worthy of @justinbieber), Saturday I rode in the Nashoba Learning Group annual bike-a-thon. Nashoba Learning Group is a school in Bedford, Massachusetts for children with Autistic Spectrum disorders. Our family has been involved with the school since its founding over a decade ago; it now has 90 students. It achieves wonderful results, and shares what it learns generously. And now we're building an adult program as well.
This year's ride was among the most beautiful I can remember -- a lovely, relatively cool and dry New England summer day. Nonetheless, experience has taught me to seek any advantage possible. So, at breakfast, I spied this number, and imagined the drafting possibilities of a one-machine peloton:
10 miles into my admittedly parasitic strategy (Hey, I did offer to take my turn at the front, but I think they laughed), I thought I heard "Activate les contre-measures!" I thought I saw tacks, but I really can't be sure. Slowly though, the sound of the breeze in my ears was replaced with a slow hiss...
Furiously I pedaled - no, clawed - my way back. Well, scratched a bit. Let's just say it was a nice day for a ride.
NLG gets results...
...and makes people happy
Hi folks, a reminder to please sponsor me for this year's NLG Bike-a-thon! Here's the link to the donations site. Below for your reading pleasure is my recap of the 2007 ride. Thank you!