About

I'm a partner in the advanced analytics group at Bain & Company, the global management consulting firm. My primary focus is on marketing analytics (bio). I've been writing here (views my own) about marketing, technology, e-business, and analytics since 2003 (blog name explained).


66 posts categorized "e-business"

June 14, 2010

OMMA Metrics & Measurement "Modeling Attribution" Panel SF 7/22: Hope To See You There

I'll be moderating a panel at the OMMA Metrics & Measurement Conference in San Francisco on July 22.  

The topic of the panel is, "Modeling Attribution: Practitioner Perspectives on the Media Mix".  Here's the conference agenda page.

The panel description:

How do you determine the channels that influence offline and online behavior and marketing performance?  

How should you allocate your budget across CRM emails, display ads, print advertising, television and radio commercials, direct mail, and other marketing sources? 

What models, techniques, and technologies should you use to develop attribution and predictive models that can drive your business? 

Do you need SAS, SPSS, and a PhD in Statistics? 

First click, last click, direct, indirect, or appropriate attribution: which matters, and which is best?

What about multiple logistic regression? 

What is the impact of survey and voice-of-the-customer data on attribution? 

Hear from experts who have to answer these questions and tackle these tough issues as they work hard in the field every day for their consultancies, agencies, and brands.

So far, Manu Mathew, CEO of VisualIQ, and Todd Cunningham, SVP of Research at MTV Networks, will also be participating on the panel.

Hope to see you there.  Meanwhile, please suggest questions you'd like to ask the panelists by commenting here.  Thanks!

January 26, 2010

What's NYT.com Worth To You, Part II

OK, with the response curve for my survey tailing off, I'm calling it.  Here, dear readers, is what you said (click on the image to enlarge it):

[Chart: Octavianworld NYT.com paid content survey results]

(First, the stats: with ~40 responses -- there are fewer points because of some duplicate answers -- you can be 95% sure that answers from the rest of the ~20M people who read the NYT online would be +/- 16% from what's here.)
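For the curious, here's a rough sketch of the margin-of-error arithmetic behind that figure. It assumes a simple random sample and the worst-case 50/50 split; the sample size is the approximate one above.

```python
# Rough margin-of-error check for the survey stats above.
# Assumes a simple random sample and the worst-case proportion (p = 0.5);
# the ~20M-reader population is large enough to ignore the finite-population correction.
import math

n = 40    # approximate number of responses
p = 0.5   # worst-case proportion
z = 1.96  # z-score for 95% confidence

margin_of_error = z * math.sqrt(p * (1 - p) / n)
print(f"95% margin of error: +/- {margin_of_error:.1%}")  # roughly +/- 15-16%
```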

90% of respondents would pay at least $1/month, and several would pay as much as $10/month. And, folks are ready to start paying after only ~2 articles a day.  Pretty interesting!  More latent value than I would have guessed.  At the same time, it's also interesting to note that no one went as high as the $14 / month Amazon wants to deliver the Times on the Kindle. (I wonder how many Kindle NYT subs are also paper subs getting the Kindle as a freebie tossed in?)

Only a few online publishers aiming at "the general public" will be able to charge for content on the web as we have known it, or through other newer channels.  Aside from highly-focused publishers whose readers can charge subscriptions to expense accounts, the rest of the world will scrape by on pennies from AdSense et al.

But, you say, what about the Apple Tablet (announcement tomorrow! details yesterday), and certain publishers' plans for it?  I see several issues:

  • First, there's the wrestling match to be had over who controls the customer relationship in Tabletmediaworld. 
  • Second, I expect the rich, chocolatey content (see also this description of what's going on in R&D at the Times) planned for this platform and others like it to be more expensive to produce than what we see on the web today, both because a) a greater proportion of it will be interactive (it must be, to be worth paying for), and because b) producing for multiple proprietary platforms will drive costs up (see for example today's good article in Ad Age by Josh Bernoff on the "Splinternet"). 
  • Third, driving content behind pay walls lowers traffic, and advertising dollars with it, raising the break-even point for subscription-based business models. 
  • Fourth, last time I checked, the economy isn't so great. 
The most creative argument I've seen "for" so far is that pushing today's print readers/ subscribers to tablets will save so much in printing costs that it's almost worth giving readers tablets (well, Kindles anyway) for free -- yet another edition of the razor-and-blade strategy, in "green" wrapping perhaps.

The future of paid content is in filtering information and increasing its utility.  Media firms that deliver superior filtering and utility at fair prices will survive and thrive.  Alongside its innovations in visual displays of information (which, though creative, I'd guess have limited monetization impact), there is evidence that the Times agrees with this, at least in part (from the article on Times R&D linked to above):

When Bilton swipes his Times key card, the screen pulls up a personalized version of the paper, his interests highlighted. He clicks a button, opens the kiosk door, and inside I see an ordinary office printer, which releases a physical printout with just the articles he wants. As it prints, a second copy is sent to his phone.

The futuristic kiosk may be a plaything, but it captures the essence of R&D’s vision, in which the New York Times is less a newspaper and more an informative virus—hopping from host to host, personalizing itself to any environment.

Aside from my curiosity about the answers to the survey questions themselves, I had another reason for doing this survey.  All the articles I saw on the Times' announcement that it would start charging had the usual free-text commenting going.  Sprinkled through the comments were occasional suggestions from readers about what they might pay, but it was virtually impossible to take any sort of quantified pulse on this issue in this format.  Following "structured collaboration" principles, I took five minutes to throw up the survey to make it easy to contribute and consume answers.  Hopefully I've made it easier for readers to filter / process the Times' announcement, and made the analysis useful as well -- for example, feel free to stick the chart in your business plan for a subscription-based online content business ;-)  If anyone can point me to other, larger, more rigorous surveys on the topic, I'd be much obliged.

The broader utility of structuring the data capture this way is perhaps greatest to media firms themselves:  indirectly for ad and content targeting value, and perhaps because once you have lots of simple databases like this, it becomes possible to weave more complex queries across them, and out of these queries, some interesting, original editorial possibilities.

Briefly considered, then rejected for its avarice and stupidity: personalized pricing offers to subscribe to the NYT online based on how you respond to the survey :-)

Postscript: via my friend Thomas Macauley, NY (Long Island) Newsday is up to 35 paid online subs.

January 17, 2010

What's NYT.com Worth To You?

Via Chris Schroeder's (@cmsschroed) RT of Henry Blodget (@hblodget): the news of the NYT's decision to start charging (again) for content.

Blodget's prior analysis suggested this might be worth ~$100 million per year (my deduction based on his math) to NYT Co.  If a tenth of its 130M monthly unique visitors end up being "heavy users" who pay, 4 bucks a month gets them ~$600 million annually (13M * $4 * 12 months = $624 million).  (Seems high; better data, anyone?)
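A quick back-of-the-envelope sketch of that arithmetic, using only the figures quoted above:

```python
# Back-of-envelope check of the paid-content revenue estimate above.
monthly_uniques = 130_000_000  # NYT.com monthly unique visitors (figure quoted above)
paying_share = 0.10            # assume a tenth become paying "heavy users"
price_per_month = 4            # dollars per month

payers = monthly_uniques * paying_share         # 13 million
annual_revenue = payers * price_per_month * 12  # dollars per year
print(f"{payers / 1e6:.0f}M payers -> ${annual_revenue / 1e6:.0f}M per year")  # ~$624M
```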

What's it worth to you?  See what some folks had to say in the chart below.  Please take this survey to add your perspective, and let your friends know about it:

Note: I removed one response of "1 million articles for free, willing to pay $0 thereafter" because it messed up the display, but am mentioning it here for full disclosure. And to the respondent, thank you for participating!

Postscript: conclusions and analysis

October 22, 2009

Fly-By-Wire Marketing

One of the big innovations in the F-16 fighter jet was its "fly-by-wire" flight control system.  Instead of directly connecting the pilot's movements of the control stick and the rudder pedals to the aircraft's control surfaces through cables (as in WWI-era biplanes) or hydraulics, the pilot's commands were now communicated electronically to an intermediate computer, which then interpreted those inputs and made the appropriate adjustments. 

This saved a lot of weight, and channeling some of those weight savings into redundant control circuits made planes safer.  Taken to its extreme in planes like the B2 bomber, "fly-by-wire" made it possible for pilots to "fly" inherently unstable airplanes by leaving microsecond-by-microsecond adjustments to the intermediate computer, while the pilot (or autopilot) provided broader guidance about climbs, turns, and descents.

Now we have "fly-by-wire marketing".

A couple of days ago I read Daniel Roth's October 19 article on Wired.com titled "The Answer Factory: Fast, Disposable, and Profitable as Hell", describing Demand Media's algorithmic approach to deciding what content to commission and publish.  The article is a real eye-opener.  While we watch traditional publishers talk about turning "print dollars into digital dimes", Demand has built a $200 million annual revenue business with a $1 billion valuation.  How?  As Roth puts it, "Instead of trying to raise the market value of online content to match the cost of producing it — perhaps an impossible proposition — the secret is to cut costs until they match the market value."  More specifically,

Before Reese came up with his formula, Demand Media operated in the traditional way. Contributors suggested articles or videos they wanted to create. Editors, trained in the ways of search engine optimization, would approve or deny each while also coming up with their own ideas. The process worked fine. But once it was automated, every algorithm-generated piece of content produced 4.9 times the revenue of the human-created ideas. So Rosenblatt got rid of the editors. Suddenly, profit on each piece was 20 to 25 times what it had been. It turned out that gut instinct and experience were less effective at predicting what readers and viewers wanted — and worse for the company — than a formula.

I'm currently in situations where either the day-to-day optimization of the marketing process is too complex to manage fully through direct human intervention, or some of the optimizations to be performed are still sufficiently vague that we can only anticipate them at a broader, categorical level, so a subsequent process -- perhaps an automated one -- will be needed to realize them fully.  I also recently went to and blogged about a very provocative MITX panel on personalization, where a key insight (thanks to Scott Brinker, Co-founder and CTO of ion Interactive) was how the process to support personalization needs to change as you move to finer and finer-grained targeting.  So it was with these contexts in mind that I read Roth's article, and the question it prompted for me was, "In a future dominated by digital channels, is there a generic roadmap for appropriate algorithmic abstractions of marketing optimization efforts that I can then adapt for (client-) specific situations?" 

That may sound a little out there, but Demand Media is further proof that "The future's already here, it's just not evenly distributed yet."  And, I'm not original in pointing out that we've had automated trading on Wall Street for a while; with the market for our attention becoming as digital as the markets for financial securities, this analogy is increasingly apt.

So here are some bare bones of what such a roadmap might look like.

Starting with the end in mind, an ultimate destination might be that we could vary as many elements of the marketing mix as needed, as quickly as needed, for each customer (you laugh, but the holodeck isn't that far away...), where the end result of that effort would generate some positive marginal profit contribution. 

At the other end of the road, where we stand today, in most companies these optimization efforts are done mostly by hand.  We design and set campaigns into motion by hand, we use our eyes to read the results, and we make manual adjustments.

One step forward, we have mechanistic approaches.  We set up rules that say, "Read the incoming data; if you see this pattern, then make this adjustment."  More concretely, "When a site visitor with these cookies set in her browser arrives, serve her this content." This works fine as long as the patterns to be recognized, and the adjustments to be made, are few and relatively simple.  It's a lot of work to define the patterns to look for.  And, it can be lots of work to design, implement, and maintain a campaign, especially if it has lots of variants for different target segments and offers (even if you take a "modular" approach to building campaign elements).  Further, at this level, while what the customer experiences is automated, the adjustments to the approach are manual, based on human observation and interpretation of the results.
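To make the "mechanistic" level concrete, here's a minimal sketch of a rules-based content decision.  The cookie names, segments, and content IDs are invented for illustration, not taken from any particular platform.

```python
# Minimal sketch of a rules-based ("mechanistic") content decision.
# Cookie names, segments, and content IDs are hypothetical.
RULES = [
    # (predicate over the visitor's cookies, content to serve)
    (lambda c: c.get("recent_cart_abandon") == "1", "offer_10_pct_discount"),
    (lambda c: c.get("segment") == "loyal_customer", "new_product_preview"),
    (lambda c: c.get("referrer") == "email_campaign", "campaign_landing_page"),
]
DEFAULT_CONTENT = "generic_homepage"

def choose_content(cookies: dict) -> str:
    """Return the content for the first rule that matches the visitor's cookies."""
    for predicate, content in RULES:
        if predicate(cookies):
            return content
    return DEFAULT_CONTENT

print(choose_content({"segment": "loyal_customer"}))  # -> new_product_preview
```

Every new pattern means another hand-written rule -- which is exactly the maintenance burden described above.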

Two steps down the road, we have self-optimizing approaches where the results are fed back into the rule set automatically.  The Big Machine says,  "When we saw these patterns and executed these marketing activities, we saw these results; crunching a big statistical model / linear program suggests we should modify our marketing responses for these patterns in the following ways..."  At this level, the human intervention is about how to optimize -- not what factors to consider, but which tools to use to consider them.
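One simple version of that feedback loop is a multi-armed bandit.  Here's an epsilon-greedy sketch, where the "arms" are alternative marketing treatments and observed results automatically shift traffic toward what's working; the treatment names and simulated response rates are made up.

```python
# Epsilon-greedy sketch of a self-optimizing loop: observed results feed back
# into which marketing treatment gets served next.  Treatment names and the
# simulated response rates are hypothetical.
import random

treatments = {"offer_a": [0, 0], "offer_b": [0, 0], "offer_c": [0, 0]}  # name -> [shows, conversions]
true_rates = {"offer_a": 0.02, "offer_b": 0.05, "offer_c": 0.03}        # unknown in real life
epsilon = 0.1  # fraction of traffic reserved for exploration

def pick_treatment():
    if random.random() < epsilon:  # explore occasionally
        return random.choice(list(treatments))
    # otherwise exploit the best observed conversion rate so far
    return max(treatments, key=lambda t: treatments[t][1] / max(treatments[t][0], 1))

for _ in range(10_000):
    t = pick_treatment()
    converted = random.random() < true_rates[t]  # stand-in for a real observed result
    treatments[t][0] += 1
    treatments[t][1] += int(converted)

for name, (shows, conversions) in treatments.items():
    print(name, shows, f"{conversions / max(shows, 1):.3f}")
```

The human's job shifts from writing the rules to choosing the optimization machinery and its guardrails.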

I'm not clear yet about what's beyond that.  Maybe Skynet.  Or, maybe I get a Kurzweil-brand math co-processor implant, so I can keep up with the machines.

The next question you ask yourself is, "How far down this road does it make sense for me to go, and by when?"  Up until recently, I thought about this with the fairly simplistic idea that there's a single curve describing exponentially decreasing returns and another describing exponentially increasing complexity.  The reality is that there are different relationships between complexity and returns at different points -- what my old boss George Bennett used to call "step-function" change.

For me, the practical question-within-a-question this raises is: for each of these "step-functions", is there a version of the algorithm that's only 20% as complex, that gets me 80% of the benefit?  My experience has been that the answer is usually "yes".  But even if that weren't the case, my approach to jumping into the uncharted territory of a "step-function" change in process, with new supporting technology and people roles, would be to start simple and see where that goes.

At minimum, given the "step-function" economics demonstrated by the Demand Medias of the world, I think senior marketing executives should be asking themselves, "What does the next 'step-function' look like?", and "What's the simplest version of it we should be exploring?" (Naturally, marketing efforts in different channels might proceed down this road at different paces, depending on a variety of factors, including the volume of business through that channel, the maturity of the technology involved, and the quality of the available data.  I've pushed the roadmap idea further to help organizations make decisions based on this richer set of considerations.)

So, what are your plans for Fly-By-Wire Marketing?

Postscript: Check out "The Value Of The New Machine", by Steve Smith in Mediapost's "Behavioral Insider" e-newsletter today.  Clearly things are well down the road -- or should be -- at most firms doing online display and search buys and campaigns.  Email's probably a good candidate for some algorithmic abstraction.

September 17, 2009

#Adobe + #Omniture: Further Thoughts ( #Analytics )

I've been following the Web Analytics Forum on Yahoo! -- David Simmons' ideas here are especially thoughtful -- and I listened to the Q3 Adobe con call.  Plus last night at Web Analytics Wednesday in Cambridge I had a chance to talk about it with VisualIQ CTO Anto Chittilappilly and Visible Measures' Tara Chang.  Here are some further ideas based on what I've read and heard so far:

  • Rich media -- video, interactive ads, etc. -- are a growing piece of the Internet experience. (Google's rumored acquisition of Brightcove reflects this.)  (The flip side: Semphonic CEO Gary Angel writes that "It's taking a long time, but HTML is dying.")
  • Since they're growing, tracking user interactions with them effectively is increasingly important, not just on the Internet but across platforms, and as the CIMM challenge to Nielsen suggests, not yet well addressed.
  • User tracking in rich media platforms like Flash is more granular and more persistent than cookie-based tracking (David Simmons explains how and recommends Eric Peterson's Web Site Measurement Hacks for more).
  • But, support in existing web analytics platforms for event-based tracking of users' interactions with rich internet media is in its infancy (though vendors say otherwise).
  • So, publishers want tighter, simpler integration of event-based tracking.
  • In the con call, Adobe CEO Shantanu Narayen mentions the framework "Create - Deploy - Optimize" as a way to understand their overall product roadmap vision.
  • Adobe has the "Create" (e.g., Photoshop; this part of their business is ~60% of their revenues) and "Deploy" (e.g., Flash Server / Flash Player) pieces covered.  But "Optimize" was still uncovered, up until this announcement.
That's the product strategy logic.  The financial logic:
  • Adobe is too dependent on the "Create" leg of their stool, and hasn't been able to monetize the "Deploy" piece as much as they might have hoped -- removing  license fees from the Open Screen Project is one recent example of this limitation.  So they're betting that "Optimize" has legs, and that buying Omniture in this economic climate at ~5x revenues is good timing.
  • Adobe's traditional software-licensing business model has gotten crushed year over year.  Omniture's revenues are >90% recurring subscriptions based on a SaaS model.  Adobe's revenues (~$3B) are 10x Omniture's, but the enhanced value proposition of the A-O integration and cross-selling through Adobe's sales force will accelerate O's growth.  Over the next 2-3 years, this will help to reduce the volatility of A's revenues / revenue growth.

What's ahead?  One direction, as I've previously discussed, is that the workflows associated with "Create-Deploy-Optimize" are increasingly complex, and that platforms that support these workflows in a simple, more integrated way will become important to have.  Managing these processes through hairballs built with Excel spreadsheets scattered across file servers just won't cut it.

Postscript: Eric Peterson's take.  And more from him.  The gist -- not so high on Omniture's relative value to customers and the experience of working with them, and not sure why Adobe did this.  Anil Batra has some interesting ideas for product directions that emerge from the combination.

September 15, 2009

Adobe + Omniture: Pragmalytically Perfect Sense

Big news (via, in my case, Eric Peterson's "Web Analytics Forum" on Yahoo!): Adobe's buying Omniture.

Simple logic:

  • Adobe makes great tools for developing custom dashboards and other data visualization apps.  I know because in one engagement this summer, we've worked with a terrific client and another (also terrific) engineering firm to build a Flex-based prototype of an advanced predictive analytics application.  But prototyping is easy; tying a front end to a working, real-world analytics data model is much harder.
  • Omniture leads the pack of web analytics platform vendors, who all have more features and capabilities in their left pinkies than many of us could dream of in six lifetimes.  But exposing mere mortals to the interfaces these leading firms provide is like showing kryptonite to  Joe / Jane Superexecutive.  As analytics get more complex, it's even more important to focus on key questions and expose only the data / views on that data that illuminate those key questions.
  • So if you believe that this web / digital / multichannel analytics thing has legs, then putting these two firms together and working both ends to the middle faster than might otherwise have happened is a smart thing to do.
  • The other reason to do this is to anticipate the trend toward "custom reporting" and "advanced segmentation" capabilities in the "lower-end" analytics offerings (e.g., GA) from folks like Google.  I've been using these capabilities recently, and they get you a meaningful part of the way, eroding the value of higher-end offerings on both the front (Adobe) and middle-back (Omniture) ends.

July 29, 2009

Microsoft-Yahoo Search Deal: And Then There Were Two And A Half

Microsoft and Yahoo! announced today that the former's new Bing search engine would now power the latter's search capability.  And, Yahoo! will sell paid search campaigns for the combined capability.  At least for starters, this means Google owns a bit more than two-thirds of pro forma search volume (per Comscore), and MSFT+YHOO own the other third.  

Yahoo! had no choice but to accept colonization by Microsoft, as its own search technology was going nowhere.  Similarly, Microsoft had to do this deal with Yahoo because in today's tough market (even paid search is seeing only low single digit growth), it couldn't afford to divide its sales force to fight Yahoo in the face of their shared, colossal foe.

It will be interesting to see how much search volume sticks with / moves to each.  I'm guessing there is a curiosity factor that has Bing volume temporarily high, but that Google will hold serve, since Bing's advantages seem concentrated in some commerce-oriented niches.  (Here's a wacky idea though:  Microsoft could reverse one important structural disadvantage with a pre-emptive bid to embed Bing by default in Firefox, when Mozilla's current deal with Google expires in 2011.  Maybe not so wacky if, following long-term trends, IE's browser share drops close to 50% by then, and Google chooses to focus on Chrome.)

The other reason for this deal is for each side to achieve a critical mass of search volume to usefully inform display ad targeting.  Display ad networks have been a surprisingly profitable business, even in tough times (see also, for example, NET income for VCLK in Q1 of this year).  And the tip of the spear in the trend toward multi-channel optimization of marketing spend has been attribution analysis between search and display channels ("To what degree is a search triggered by display ad exposure, and how can we better target display ads to behaviorally-defined user segments across a network based on their search history?").  So, this deal would seem to be as much about shoring up each side's profit centers as it would be about huddling for warmth in the paid search world.

Given the complexities of having Yahoo! take over Bing sales and operations (through AdCenter), folks at Google should make hay in the short term, since their opponents will be distracted and the transition will create work for customers that arguably doesn't add much value either.  Hopefully, much of this complexity can be absorbed for advertisers by their search agencies, and hopefully they can get paid for some of that.

Notwithstanding interesting riffs on top of the established platforms, and newcomers that are useful in niches, this consolidation is further proof of the diminishing returns to innovation in search as we know it, that is, indexing and ranking unstructured content.  The next big wave of disruption in this category won't happen until we have radical change in the nature of the stuff being searched -- specifically, the advent of "Semantic Web" standards that will structure the underlying data in useful ways.  

It will be interesting to see how The Big Two And A Half left in search react to the propagation of open standards that could level the search playing field.  If the DOJ wants to fight the Search Trusts, perhaps its money is better spent on seeding and supporting those efforts than undoing what nature hath already wrought.

July 21, 2009

Facebook at 250 (Million): What's Next? And What Will Your Share Be?

Facebook announced last week that it had passed 250 million members.  Since no social network grows to the sky (as MySpace experienced before it), it's useful to reflect on the enablers and constraints to that growth, and on the challenges and opportunities those constraints present to other major media franchises (old and new) that are groping for a way ahead.

"Structured Collaboration" principles say social media empires survive and thrive based on how well they support value, affinity, and simplicity.  That is,

  • how useful (rationally and emotionally) are the exchanges of information they support?
  • how well do they support group structures that maximize trust and lower information vetting costs for members? 
  • how easy do they make it for users to contribute and consume information? 
(There are of course additional, necessary "means to these ends" factors, like "liquidity" -- the seed content and membership necessary to prime the pump -- and "extensibility" -- the degree to which members can adapt the service to their needs -- but that's for another post.)

My own experience with Facebook as a user, as well as my professional experience with it in client marketing efforts, has been:
  • Facebook focuses on broad, mostly generic emotional exchanges -- pictures, birthday reminders, pokes.  I get the first two, and I admire the economy of meaning in the third.  The service leaves it to you to figure out what else to share or swap.  As a result, it is (for me anyway) <linkbait> only sometimes  relevant as an element in a B2C campaign, and rarely relevant in a B2B campaign </linkbait>
  • Facebook roared past MySpace because it got affinity right -- initially.  That is, Facebook's structure was originally constrained -- you had to have an email address from the school whose Facebook group you sought to join.  Essentially, there had to be some pre-existing basis for affinity, and Facebook just helped (re-)build this connective tissue.  Then, Facebook allowed anyone to join, and made identifying the nature of relationships established or reinforced there optional.  Since most of us including me are some combination of busy and lazy, we haven't used this feature consistently to describe the origins and nature  of these relationships.  And, it's cumbersome and awkward to have to go back and re-categorize "friends". (An expedient hack on this might be to allow you to organize your friends into groups, and then ask you which groups you want to publish items to, as you go.)
  • Facebook is a mixed bag as a UI.  On one hand, by allowing folks to syndicate blogs and tweets into Facebook, they've made our life easier.  On the other, the popular unstructured communications vehicles -- like the "Wall" -- have created real problems for some marketers.  Structured forms of interaction that would have created less risky choices for marketers, like polls, have come later than they should have and are still problematic ( for example, you can't add polls to groups yet, which would be killer).  And, interacting with Facebook through my email client -- on my PC and on my smartphone -- is still painful.  To their credit, Facebook opened up a great API to enable others to build specialized forms of structured interaction on its social graph. But in doing so it's ceded an opportunity to own the data associated with potentially promising ones.  (Like prediction markets; Inkling Markets, for example, lets you syndicate notices of your trades to Facebook, but the cupboard's pretty bare still for pm apps running against Facebook directly.)
The big picture: Facebook's optimizing size of the pie versus share of the pie.  It can't be all things to all people, so it's let others extend it and share in the revenue and create streams of their own.  Estimates of the revenues to be earned this year by the ecosystem of third party app developers running on Facebook and MySpace run to $300-500 million, growing at 35% annually.  
Them's not "digital dimes", especially in the context of steep declines in September ad pages (and revenues) at leading magazine franchises, as well as stalled television network upfronts.  But, folks might argue, "Do I want to live in thrall to the fickle Facebook API, and rent their social graph at a premium?"  The answer isn't binary -- how much of an app's functionality lives in Facebook, versus living on a publisher's own server, is a choice.  Plus, there are ways to keep Facebook honest, like getting behind projects like OpenSocial, as other social networks have done.  (OpenSocial is trying to become to Facebook's social graph what Linux is to Windows.  Engineer friends, I know -- only sort of.)  And, for Old Media types who don't feel they are up to the engineering tasks necessary, there are modern-day Levi Strausses out there selling jeans to the miners -- like Ning, which just today raised more money at a high valuation.  Still too risky?  Old Media could farm out app development to their own third-party developer networks, improving viral prospects by branding and promoting (to their subscriber lists) the ones they like in exchange for a cut of any revenues.  In this scenario, content gets added as an ingredient, not the whole main course.  

What is true in the new environment is that reach-based ad network plays surfing on aggregated content won't pay any more.  Rather we have to think about services that would generate more revenue from narrower audiences.  The third-party games created by Facebook app developers referenced above demonstrate how those revenues might stem from value through entertainment.  As we speak, Apple and its developers are earning non-trivial sums from apps.  Phonetag has its hands in folks' pockets (mine included) for $10/month for its superuseful -- albeit non-social -- transcription service.  Filtering for relevant content is a big challenge and opportunity.  Might someone aggregate audiences with similar interests and offer a retail version sourced at wholesale from filtering service firms like Crimson Hexagon?  Looks like Porter Novelli may already be thinking down these lines...

Let's push the math: a winner service by anyone's measure would earn, say, $50M a year.  Four bucks a month from each person is roughly $50/year.  You'd then need a million folks -- 1/250th of Facebook's user base -- to sign up.  Reasonability check: consider the US circulation of some major magazine titles.

If your application service is especially useful, maybe you can get $2/month directly from each person.  Maybe you can make the rest up in ecommerce affiliate commissions (a 10% commission on $125 in annual purchases by each person gets you ~$1/month) and ad revenue (the remaining ~$12 million/year nut works out to roughly one dollar per member per month; at a $10 CPM, that means getting each of your million users to account for roughly 100 ad impressions a month -- a few a day, more or less -- to cover that nut).
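Here's the same back-of-the-envelope math in one place, using the targets and rates quoted above (it rounds slightly differently from the rough figures in the text):

```python
# Back-of-envelope check of the "winner service" math above.
target_annual_revenue = 50_000_000  # $50M/year target
members = 1_000_000                 # ~1/250th of Facebook's 250M users

subscription = 2 * 12               # $2/month direct -> $24 per member per year
affiliate = 0.10 * 125              # 10% of $125 in purchases -> $12.50 per member per year
ad_gap = target_annual_revenue / members - subscription - affiliate  # what ads must cover

cpm = 10.0                          # $10 CPM -> $0.01 per ad impression
impressions_per_member_per_month = ad_gap / 12 / (cpm / 1000)

print(f"Ad revenue needed: ${ad_gap:.2f} per member per year")
print(f"Impressions needed: ~{impressions_per_member_per_month:.0f} per member per month")
```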

We also have to be prepared to live in a world where the premiums such services earn are as evanescent as mayflies, especially if we build them on open social graphs.  But that's ok -- just as Old Media winners built empires on excellent, timely editorial taste in content, New Media winners will build their franchises on "editorial noses" for function-du-jour, and function-based insights relevant to their advertisers.  And last time I checked, function and elegance were not mutually exclusive.

So, even as we salute the Facebook juggernaut as it steams past Media Beach, it's time to light some design workshop campfires, and think application services that have "Value, Affinity, Simplicity."

June 27, 2009

Future Forward "What's Next In Tech": "DARC" Days Ahead

Thursday evening I attended Future Forward's "What's Next In Tech: Exploring The Growth Opportunities of 2009 and Beyond" at the BU School of Management.  The second of the evening's two panels (moderated with his usual aplomb by Scott Kirsner) included Hubspot CEO Brian Halligan.  Brian described the criteria Hubspot uses as part of its hiring process using the acronym "DARC":

  • Digital natives -- active presences in a number of places on the Web
  • Analytic -- not just comfortable with, but passionate about data and the tools to play with them 
  • Reach -- their digital presences have a large number of friends and followers that potentially help Hubspot's viral marketing efforts 
  • Content creators -- their digital contributions provide signs of intelligent life 
Very useful and memorable.  Also, echoed the "Show, don't tell" philosophy we had at ArsDigita a decade ago.

Brian noted the emergence of Boston as a center of digital marketing thought leadership, citing (among others) local heroes David Meerman Scott, Chris Brogan, and Paul Gillin, and mentioning firms like Communispace and Crimson Hexagon (where my friend Ms. Perry Hewitt was the first CMO before leaving a few weeks ago to lead online communications for Harvard University).

So what did the panelists think would be the opportunities to track going forward (generally, and in Massachusetts in particular)?  My notes (please correct any inaccuracies and omissions):

  • Tim Healy (CEO of Enernoc, and our former landlord in Contact Networks' early days -- thanks Tim!) -- Water
  • Brian -- (seconding Michael Greeley) -- Connected Healthcare
  • Ellen Rubin -- Business Intelligence (naturally, I agree) 
  • Helen Greiner -- Cloud Computing 
  • Mike Dornbrook  -- Smart Grids (After my recent MITX judging experience I think there's lots of possibilities here too!)
  • Neal Sequeira (GC VC who backed video ad network Scanscout, another piece of the ArsDigita diaspora) -- The "real-time web" (here's what I think that could mean)
  • Michael Greeley -- robotics specifically, "connected healthcare" via the intersection of robotics and healthcare digitization/ informatics more generally 
  • Bijan Sabet -- Education (specifically mentioned using online games to teach, citing 8D World as an example)
Pet peeve of the evening: student entrepreneurs who complain that VCs don't do enough to reach out to student entrepreneurs.  Kind of a self-fulfilling prophecy if they keep at it, no?  "Capture-it-in-a-bottle" moment of the evening: Scott Kirsner disarming said student as "cranky".

June 01, 2009

The Future Of Paid Content

Some are trying to put the "free content" genie back into the bottle and return to a pay model of some sort.  

This will be tough.  One problem is that (most, though not all) publishers have taught us to expect a lot for "free".  Another is that the world is awash in content, so if you're a publisher, hiding yours behind a pay wall just makes room for someone else to try to have his (ad-supported) day in the sun.  Snobs contend, "Water everywhere, but only a few drops (ours) worth drinking."   Maybe, but with production and communication costs low, and lots of people out there, there are enough exceptions to disprove the rule.  Regardless, focusing on these issues misses the point about where the value for the average reader is today.  The future of paid content lies not in the content itself, but in serving two adjacent needs:  filtering what's relevant, and helping audiences to use it productively.

Let's look at filtering first, and let's take Twitter as an example.  At north of 20 million users, and even with a churn rate fluctuating around 50%, you can't ignore it (and recent research suggests business people are paying attention).  The challenge is finding useful tweeters. (Digerati friends please help -- is that what one who tweets is called?  Or, is it "tweeps", or "tweeple", or some such?)  There are some early-stage services probing at this: besides Twitter Search (formerly Summize / monetized via... TBD) and its upcoming "Discovery Engine", there's Hashtags (search by / subscribe to... wait for it... hashtags; monetized via tip jar), Microplaza (tweets from people you follow; monetized via subsidy from parent co, which is an enterprise-focused collaboration platform ASP), Tweetmeme (Digg for Twitter; monetized via sponsorships), Wefollow (like the Yellow Pages of Twitter), plus half a dozen more I've heard of and tried and doubtless dozens I haven't (see here for more).  (Michael Yoon and I are working on one, stay tuned.)  Is some refined, scalable version of one or more of these systems worth $2-3 bucks a month to some reasonable sub-segment of the Web-using public?  Related memo to Google: it would be worth $2-3 a month to me to have Google suggest good posts from my blogroll (I use Google Reader) based on parsing my emails, which it currently does to serve me ads in Gmail.

Second, and perhaps far more lucrative, are services to help audiences do stuff with content.  Be an affiliate for schools that sell courses related to the content, for example.  Last time I checked, the market for education, particularly online / just-in-time education, was growing at a healthy clip.  More simply, offer lectures by content authors / editors and sell tickets to these events, or be an affiliate for others who do that with your content. 

My favorite creative approach to segmenting audience needs and monetizing accordingly comes from the musician Jill Sobule, whose http://jillsnextrecord.com/ (scroll down to "A Message From Jill") does a nice job of unpacking all the reasons why folks engage with her music, and then pricing related offers accordingly.  Folks wonder about Myspace's future, what with the Google deal expiring soon and all.  I wonder:  does Jill's approach suggest one path might be to leapfrog Eventful and function as an uber-agent for the bands making their homes on Myspace?