About

I'm a partner in the advanced analytics group at Bain & Company, the global management consulting firm. My primary focus is on marketing analytics (bio). I've been writing here (views my own) about marketing, technology, e-business, and analytics since 2003 (blog name explained).

Email or follow me:

63 posts categorized "Advertising"

October 15, 2010

Extending Marketing Integration to Agencies: The "Agency API" @rwlord #rzcs

My friends at Razorfish kindly invited me to their client summit in Boston this week.  It was a great event; they and their clients are working on some pretty cool stuff.  Social is front and center.  Close behind: lots of interesting touch / surface computing innovations (Pranav Mistry from the MIT Media Lab really blew our minds). 

In his opening comments Wednesday, Razorfish CEO Bob Lord made the point that modern marketing, to be effective, has to be integrated across silos, of course; but further, that this integration has to extend to agencies working together effectively on behalf of their clients, just as clients responsible for different channels and functions need to work together themselves.

I've been wondering about this as well recently, and Bob's comments prompted me to write up some notes.

One observation is that *if* marketers are addressing agency collaboration at all, they usually start with an *organizational* solution that brings agencies together from time to time.  While this is great, it's insufficient.  To make bodies like these effective, it helps to lay a foundation that *registers* and *reconciles* the different things agencies have and do for their clients.  This foundation, realized through a simple, cheap tool like Basecamp (if not the marketer's own intranet), could include:

  • a data registry.  Agencies executing campaigns on behalf of their clients end up controlling data sets (display ad impressions and clicks, email opens, TV ratings, focus group / panel results) that are crucial to understanding the performance of marketing investments, but are typically beyond the scope of what IT's "enterprise data architectures" encompass.  I'm not suggesting that agencies need to ship this data to their clients; rather, that they simply register what's collected, who's got it, and how a client can get it if needed.
  • an insight registry.  Agency folks crunch data into fancy powerpoints bearing the insights on which campaign decisions get made.  It would be very helpful if these decks were tagged and linked from the appropriately-permissioned online workspace.
  • a campaign registry.  Think http://adverblog.com, only for the marketer's own (and perhaps direct competitors') campaigns of any stripe.  A place to put creative briefs and link to campaign assets / executions that implement them.
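To make the idea concrete, here's a minimal sketch of what a data-registry entry might look like, in Python; the field names, agencies, and contact details are illustrative, not a prescription:

```python
from dataclasses import dataclass

@dataclass
class RegistryEntry:
    """One row in the marketer's data registry."""
    dataset: str   # what's collected, e.g. "display ad impressions and clicks"
    agency: str    # who's got it
    contact: str   # whom a client should ask
    access: str    # how a client can get it if needed

registry = [
    RegistryEntry("display ad impressions and clicks", "MediaAgencyCo",
                  "analytics@mediaagencyco.example", "weekly CSV export on request"),
    RegistryEntry("email opens", "CRMAgencyCo",
                  "reporting@crmagencyco.example", "ESP dashboard login"),
]

def holdings_for(agency_name):
    """List the data sets a given agency has registered."""
    return [e.dataset for e in registry if e.agency == agency_name]
```

A Basecamp project or intranet page holding rows like these would do just as well; the point is simply that each entry records what's collected, who's got it, and how to get it.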

These approaches are simple, cheap, and "many hands make light work".  Implementing them collectively as a marketer's "Agency API" would help marketers and agencies to "reconcile" their work, in the following ways:

  • discover campaign conflicts and integration opportunities
  • unpack possible disagreements into manageable bites to resolve -- data conflicts, insight conflicts, analytic technique conflicts, creative brief conflicts
  • better prepare in advance of "interagency council" meetings

Of course the registries are not a panacea for inter-agency "issues" that a marketer needs to sort through, but they help to make any problematic issues more transparent and straightforward to focus on.  

Please take the poll below.  Reactions / suggestions welcome from folks with relevant experience!

 

June 14, 2010

OMMA Metrics & Measurement "Modeling Attribution" Panel SF 7/22: Hope To See You There

I'll be moderating a panel at the OMMA Metrics & Measurement Conference in San Francisco on July 22.  

The topic of the panel is, "Modeling Attribution: Practitioner Perspectives on the Media Mix".  Here's the conference agenda page.

The panel description:

How do you determine the channels that influence offline and online behavior and marketing performance?  

How should you allocate your budget across CRM emails, display ads, print advertising, television and radio commercials, direct mail, and other marketing sources? 

What models, techniques, and technologies should you use to develop attribution and predictive models that can drive your business? 

Do you need SAS, SPSS, and a PhD in Statistics? 

Does first click, last click, direct, indirect, or appropriate attribution matter – which is best?

What about multiple logistic regression? 

What is the impact of survey and voice-of-the-customer data on attribution? 

Hear from experts who have to answer these questions and tackle these tough issues as they work hard in the field every day for their consultancies, agencies, and brands.

So far, Manu Mathew, CEO of VisualIQ, and Todd Cunningham, SVP of Research at MTV Networks, will be participating on the panel as well.

Hope to see you there.  Meanwhile, please suggest questions you'd like to ask the panelists by commenting here.  Thanks!

April 13, 2010

MITX Panel: "Integrating Cross-Channel Customer Experiences" (April 29, 2010)

On the morning of April 29 I'll be moderating a MITX panel discussion titled "Integrating Cross-Channel Customer Experiences", in Cambridge, MA (Kendall Square).  More here, more posts to follow.  Hope to see you there!

March 13, 2010

Fly-By-Wire Marketing, Part II: The Limits Of Real Time Personalization

A few months ago I posted on what I called "Fly-By-Wire Marketing", or the emergence of the automation of marketing decisions -- and sometimes the automation of the development of rules for guiding those decisions.

More recently Brian Stein introduced me to Hunch, the new recommendation service founded by Caterina Fake of Flickr fame.  (Here's their description of how it works.  Here's my profile, I'm just getting going.)  When you register, you answer questions to help the system get to know you.  When you ask for a recommendation on a topic, the system not only considers what others have recommended under different conditions, but also what you've told it about you, and how you compare with others who have sought advice on the subject.

It's an ambitious service, both in terms of its potential business value (as an affiliate on steroids), but also in terms of its technical approach to "real time personalization".  Via Sim Simeonov's blog, I read this GigaOm post by Tom Pinckney, a Hunch co-founder and their VP of Engineering.  Sim's comment sparked an interesting comment thread on Tom's post.  They're useful to read to get a feel for the balance between pre-computation and on-the-fly computation, as well as the advantages of and limits to large pre-existing data sets about user preferences and behavior, that go into these services today.

One thing neither post mentions is that there may be diminishing returns to increasingly powerful recommendation logic if the set of things from which a recommendation can ultimately be selected is limited at a generic level.  For example, take a look at Hunch's recommendations for housewarming gifts.  The results more or less break down into wine, plants, media, and housewares.  Beyond this level, I'm not sure the answer is improved by "the wisdom of Hunch's crowd" or "Hunch's wisdom about me", as much as my specific wisdom about the person for whom I'm getting the gift, or maybe by what's available at a good price. (Perhaps this particular Hunch "topic" could be further improved by crossing recommendations against the intended beneficiary's Amazon wish list?)

My point isn't that Hunch isn't an interesting or potentially useful service.  Rather, as I argued several months ago,

The [next] question you ask yourself is, "How far down this road does it make sense for me to go, by when?"  Up until recently, I thought about this with the fairly simplistic idea that there are single curves that describe exponentially decreasing returns and exponentially increasing complexity.  The reality is that there are different relationships between complexity and returns at different points -- what my old boss George Bennett used to call "step-function" change.

For me, the practical question-within-a-question this raises is, for each of these "step-functions", is there a version of the algorithm that's only 20% as complex, that gets me 80% of the benefit?  My experience has been that the answer is usually "yes".  But even if that weren't the case, my approach in jumping into the uncharted territory of a "step-function" change in process, with new supporting technology and people roles, would be to start simple and see where that goes.

At minimum, given the "step-function" economics demonstrated by the Demand Medias of the world, I think senior marketing executives should be asking themselves, "What does the next 'step-function' look like?", and "What's the simplest version of it we should be exploring?" (Naturally, marketing efforts in different channels might proceed down this road at different paces, depending on a variety of factors, including the volume of business through that channel, the maturity of the technology involved, and the quality of the available data...)

Hunch is an interesting specific example of the increasingly broad RTP trend.  The NYT had an interesting article on real time bidding for display ads yesterday, for example.  The deeper issue in the trend I find interesting is the shift in power and profit toward specialized third parties who develop the capability to match the right cookie to the right ad unit (or, for humans, the right user to the right advertiser), and away from publishers with audiences.  In the case of Hunch, they're one and the same, but they're the exception.  How much of the increased value advertisers are willing to pay for better targeting goes to the specialized provider with the algorithm and the computing power, versus the publisher with the audience and the data about its members' behavior?  And for that matter, how can advertisers better optimize their investments across the continuum of targeting granularity?  Given the dollars now flooding into digital marketing, these questions aren't trivial.

January 29, 2010

Ecommerce On The Edge In 2010 #MITX

Yesterday morning I attended MITX's "What's Next For E-Commerce" panel at Microsoft in Cambridge.  Flybridge Capital's Jeff Bussgang moderated a panel that included Shoebuy.com CEO Scott Savitz, CSN CEO Niraj Shah, Mall Networks CEO Tom Beecher, and Avenue 100 Media Solutions CEO Brian Eberman.

The session was well-attended and the panelists didn't disappoint. Across the board they provided a consistent cross-section of the sophistication and energy that characterize life 2 SDs to the right on the ecommerce success curve.

My notes and observations follow. But first, courtesy of Jeff, a quiz (answers at the end of the post):

1. Name the person, company, and city that originated the web-based shopping cart and secure payment process?

2. Name the person, company, and city that originated affiliate marketing on the web?

3. Name the largest email marketing firm in the world, and the city where it's headquartered?

Jeff opened by asking each of the panelists to talk about how they drive traffic, and how they try to distinguish themselves in doing so.

Brian described (my version) what his firm does as "performance marketing in the long tail", historically for education-sector customers (for- and non-profit) but now beyond that category. What that means is that they manage bidding and creative for 2 million less-popular keywords across all the major search engines for their customers. Their business is entirely automated and uses sophisticated models to predict when a customer should be willing to pay price X and use creative Y for keyword Z to reel in a likely-profitable order. The idea is that the boom in SEM demand has driven prices way up for popular keywords, but that there are still efficient marketing deals to be mined in the "long tail" of keyword popularity (e.g., "structured collaboration").
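At its core, the kind of automated bidding model Brian describes presumably reduces to an expected-value calculation per keyword: what a click is worth, given the odds it converts. A hedged sketch (my illustration, not Avenue 100's actual model; the rates and margins are invented):

```python
def max_profitable_cpc(conv_rate, profit_per_order, target_roi=0.2):
    """Highest cost-per-click that still clears the target return.

    Expected profit per click = conv_rate * profit_per_order; bidding
    below that by the desired margin keeps each keyword profitable.
    """
    return conv_rate * profit_per_order / (1.0 + target_roi)

# A hypothetical long-tail keyword: 2% conversion rate, $50 profit
# per order, and a 20% target return on ad spend.
bid = max_profitable_cpc(0.02, 50.0)
```

Run across 2 million keywords with continuously re-estimated conversion rates, a rule like this is what lets "efficient deals" surface in the tail while staying clear of the bid wars on popular terms.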

Niraj noted that there's an increasing returns dynamic in the SEM channel that raises entry barriers for upstarts and helps firms like CSN preserve and expand their position.  Namely, as firms like his get more sophisticated about conversion through scale and experience, they can afford to pay higher prices for a given keyword than smaller competitors can, and can reinvest in extending their SEM capabilities.  CSN now has a 10-person search marketing team within its total staff of 500. Since SEM is, to some degree, a jump-starter for firms that don't yet have a web presence sufficient to drive traffic organically, this edge is a powerful competitive weapon.  CSN is up to $200 million in annual revenues, and now manages the online furniture stores for folks like Walmart.

Scott sounded a different note, with similar results.  Shoebuy has focused more on cultivating its relationship with its existing customers and on Lifetime Value -- including referrals.  This focus has had a salutary effect on SEO, allowing them to rely less on SEM as it gets pricier.  Last year Shoebuy experienced double-digit top-line growth and hit 8M uniques for December's shopping season, while realizing its lowest marketing expense as a percentage of sales since 2002.  They've continued to plow the savings into a better overall customer experience.  One way Shoebuy guides this reinvestment is through extensive use of Net Promoter-based surveys.  They keep the surveys brutally simple: 1) "Were you satisfied?"  2) "Would you shop with us again?"  3) "Would you recommend us?"  Then they map the resulting NPS scores to the different things they try in their marketing mix, which gives them more nuanced insight than the binary outcome of an order can provide.
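For readers unfamiliar with the mechanics: the standard Net Promoter calculation is the share of promoters minus the share of detractors on a 0-10 "would you recommend us?" scale. A quick sketch (Shoebuy's actual survey wording is simpler than this, per the above; the scores below are invented):

```python
def nps(responses):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6),
    on the standard 0-10 'would you recommend us?' scale."""
    promoters = sum(1 for r in responses if r >= 9)
    detractors = sum(1 for r in responses if r <= 6)
    return 100.0 * (promoters - detractors) / len(responses)

# Hypothetical scores from customers exposed to two marketing-mix variants
scores_variant_a = [10, 9, 8, 7, 9, 10, 6, 9]
scores_variant_b = [10, 5, 7, 8, 6, 9, 4, 7]
```

Comparing `nps(scores_variant_a)` with `nps(scores_variant_b)` is the kind of per-variant readout that goes beyond the order/no-order binary.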

Tom described how, while Mall Networks' traffic is "free" -- it all comes from their loyalty program partners' sites (e.g. the Delta SkyMiles award redemption page) -- they still have to jockey for Mall Networks' placement on those pages. (Though Tom was too polite to say so, the process for deciding who goes where on popular pages is often a blood sport, and ripe in most organizations for a more structured, rational approach.)

Former Molecular founder and CEO Ralph Folz asked about display -- is that making a comeback?  Brian indicated the lack of performance and the lack of placement control through ad networks made that a highly negative experience.  He did note that they are now experimenting with participation in real-time-bidding through ad exchanges for inventory that ad networks make available, sometimes for time windows only a hundred milliseconds long.  Jeff reinforced the emergence of "RTB" and mentioned MIT Prof. Ed Crawley's Cambridge-based DataXu (which Flybridge has invested in) as a leader in the field.

Affiliate marketing came up next.  Tom explained the basics (in response to a question): each of the 600 stores in Mall Networks' stable pays Mall Networks, say, a 10% commission on orders that come through Mall Networks.  Mall Networks gives a chunk to the members of the various loyalty programs that shop through it -- say 3-5% of the value of the order; some goes to the loyalty programs themselves, as partial inducement for sending traffic to Mall Networks; and the rest goes to Mall Networks to cover costs and yield profits.
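Tom's example arithmetic works out roughly like this; a small sketch with hypothetical rates in the ranges he mentioned:

```python
def split_order(order_value, commission_rate=0.10,
                member_reward_rate=0.04, loyalty_program_rate=0.02):
    """Illustrative split of an affiliate commission on one order.

    Rates are made up, in line with the 'say, 10% commission,
    3-5% back to the member' example above.
    """
    commission = order_value * commission_rate          # paid by the store
    member = order_value * member_reward_rate           # member's reward
    program = order_value * loyalty_program_rate        # loyalty program's cut
    mall = commission - member - program                # what Mall Networks keeps
    return {"commission": commission, "member": member,
            "loyalty_program": program, "mall_networks": mall}

split = split_order(100.0)
```

On a $100 order at these rates: a $10 commission in, $4 back to the member, $2 to the loyalty program, and $4 left for Mall Networks to cover costs and yield profit.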

All the other panelists include affiliates in their marketing mix, and all appeared satisfied to have them play a healthy role.  Niraj specifically mentioned the ShareASale and Google Affiliate networks.  Jeff asked about everyone's frenemy Amazon; the answers were uniformly respectful: "they're a tough competitor, but they build general confidence and familiarity with the ecommerce channel, and that's good for everyone."  Niraj noted the 800-lb.-gorilla nature of their category dominance: "They're at $20B and NewEgg is the next biggest pure play at $2B.  They're a fact of life. We just have to be better at what we focus on."

Someone in the audience raised email.  All of the panelists use it, with lists ranging from millions to hundreds of millions of recipients in size.  They noted that this traditional pillar of online marketing has now gotten very sophisticated.  In their world, they look well beyond top line metrics like open- and clickthrough rates to root-cause analysis of segment-based performance.  Re-targeting came up, and Niraj noted that for them, email and re-targeting weren't substitutes (as some have seen them) but in fact played complementary roles in their mix.  (Jeff explained re-targeting for the audience: using an ad network to cookie visitors to your site, and then serving them "please come back!" ads on other sites in the network they go to after they've abandoned a shopping cart or otherwise left your site.  A twist: serving ads inviting them to *your* site after they've abandoned one of your competitors' sites.  Hey, all's fair in love, war, and ecommerce...).  A common theme:  unlike most of the rest of the world, email teams at these leading firms are tightly integrated with other channels' operators to better integrate the overall experience, even to the point of shared metrics.

What about social?  Scott: "Building community is key for us.  We run contests -- "What are you hoping will be under your tree this Christmas?" -- to stimulate input from our customers.  And, while we have social media coordinators, many people here participate in channels like Twitter in support of our efforts."  Niraj: "Our PR team came up with a 'Living Room Rescue' contest which we did in partnership with [a popular] HGTV host [whose name escaped me -- C.B.].  We got six thousand entries; we used a panel of professional decorators to narrow the list to a hundred, and then used social voting to choose a winner.  We publicized the contest, and it took on a life of its own, as local papers tried to drum up support for their local [slobs -- my word, not Niraj's].  While we couldn't / didn't measure conversion directly from this campaign, our indirect assessment was that it had a great ROI."  Jeff observed that social's potential seems greater when the object of the buzz is newsworthy.

It was a short leap from this to a question about attribution analysis, the simultaneous-dream-and-nightmare-du-jour for web analytics geeks out there.  Brian was surprisingly dismissive.  In his experience (if I understood correctly), at most 20% -- and usually only 5-10% -- of order-placing customers touch two or more of the properties he sources clicks from, across the broad landscape they cover, over time frames ranging from a day to a month.  "In the end, only a couple of dollars would shift from one channel to another if we did attribution analysis, so in general it's not worth it."  We chatted briefly after the panel about this; there are large-ticket, high-margin exceptions to this rule (cars).  I need to learn more about this one; it surprised me.

Mobile!  Is it finally here?  Scott reports that 6-9 months ago *customers* finally began asking for it (as opposed to having it pushed by vendors), so now they have a Shoebuy.com iPhone app.  Jeff noted that customers are rolling their own mobile strategies -- some folks are now going into (say) Best Buy, having a look at products in the flesh, then checking Amazon for the items and buying them through their iPhone if the price is right.  So, your store is now Amazon's showroom.  If you can't find something, or didn't even know you wanted it, but happen to stray near a store carrying it, location-based services will push offers at you -- and the offers may come from competitors.  (Gratuitous told-you-so here.)  Niraj:  "Say you're in Home Depot.  You want a mailbox.  Their selection is 'limited' [his description was more colorful]. We have 300 to choose from.  Wouldn't you want to know that?" Jeff:  Soon we'll also see the death of the checkout line: you'll take a picture of the barcode on the object of your desire, your smartphone will tell the store's POS system about it, and the POS system will send back a digital receipt you can show someone (or in the future, something) on your way out of the store. 

With all these channels in use, I asked how often they make decisions to reallocate investments across (as opposed to within) them -- say from search to email, as opposed to from keyword to keyword.  Brian: "Every day, each morning.  Some things -- like affiliate relationships -- may take 3-4 days to unwind.  But the optimization is basically non-stop."  Later we talked about the parallels with Wall Street trading floors.  For him, the analogy is apt.  Effectively he's a market-maker, only the securities are clicks, not stocks.  It's now reflected in their recruiting: many recent hires are former Wall Street quants.

A final note: The cultures in these shops are intensely customer-focused, flat, and data-driven.  Scott reads *every one* of the hundreds of thousands (yes you read right) of customer survey responses Shoebuy gets each year.  He also described the enthusiasm with which their customer service team embraced having all company communications to customers end with an invitation to email senior management with any concerns.  Niraj described CSN's floor plan:  500 people, no offices.  Everyone in the company takes a regular turn in customer service.  Everyone has access to the firm's data warehouse.  Brian told us about a digital display they have up in their offices showing hour-by-hour, source-by-source performance.  They also recently ran a "Query Day" in which everyone in the company -- including sales, finance, HR -- got training in how to use their databases to answer business questions.  Tom described that they “watch the cash register every minute, hour, day during the Christmas shopping season.”

This was a terrific session, and I've only captured half of it here.  Further comments / corrections / observations very welcome.

Quiz Answers:

1. MIT Prof. David K. Gifford, Open Market, Cambridge

2. Tom Gerace, BeFree, Cambridge

3. Constant Contact, Waltham

January 26, 2010

What's NYT.com Worth To You, Part II

OK, with the response curve for my survey tailing off, I'm calling it.  Here, dear readers, is what you said (click on the image to enlarge it):

[Chart: Octavianworld NYT.com paid-content survey results]

(First, stats: with ~40 responses -- there are fewer points because of some duplicate answers -- you can be 95% sure that answers from the rest of the ~20M people that read the NYT online would be +/- 16% from what's here.)
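For the curious, that +/- 16% figure follows from the standard worst-case margin of error for a proportion; a quick check in Python:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case 95% margin of error for a proportion estimated
    from n responses (p=0.5 maximizes the variance, so this is the
    conservative bound)."""
    return z * math.sqrt(p * (1 - p) / n)

moe = margin_of_error(40)   # ~40 survey responses
```

With p = 0.5 and n = 40 the bound comes out to about +/- 15.5%, which squares with the +/- 16% quoted above.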

90% of respondents would pay at least $1/month, and several would pay as much as $10/month. And, folks are ready to start paying after only ~2 articles a day.  Pretty interesting!  More latent value than I would have guessed.  At the same time, it's also interesting to note that no one went as high as the $14 / month Amazon wants to deliver the Times on the Kindle. (I wonder how many Kindle NYT subs are also paper subs getting the Kindle as a freebie tossed in?)

Only a very few online publishers aiming at "the general public" will be able to charge for content on the web as we have known it, or through other newer channels.  Aside from highly-focused publishers whose readers can charge subscriptions to expense accounts, the rest of the world will scrape by on pennies from AdSense et al.

But, you say, what about the Apple Tablet (announcement tomorrow! details yesterday), and certain publishers' plans for it?  I see several issues:

  • First, there's the wrestling match to be had over who controls the customer relationship in Tabletmediaworld. 
  • Second, I expect the rich, chocolatey content (see also this description of what's going in R&D at the Times) planned for this platform and others like it to be more expensive to produce than what we see on the web today, both because a) a greater proportion of it will be interactive (must be, to be worth paying for), but also because b) producing for multiple proprietary platforms will also drive costs up (see for example today's good article in Ad Age by Josh Bernoff on the "Splinternet"). 
  • Third, driving content behind pay walls lowers traffic, and advertising dollars with it, raising the break-even point for subscription-based business models. 
  • Fourth, last time I checked, the economy isn't so great. 

The most creative argument I've seen "for" so far is that pushing today's print readers / subscribers to tablets will save so much in printing costs that it's almost worth giving readers tablets (well, Kindles anyway) for free -- yet another edition of the razor-and-blades strategy, in "green" wrapping perhaps.

The future of paid content is in filtering information and increasing its utility.  Media firms that deliver superior filtering and utility at fair prices will survive and thrive.  Alongside its innovations in visual displays of information (which, though creative, I'd guess have limited monetization impact), there's evidence that the Times agrees with this, at least in part (from the article on Times R&D linked to above):

When Bilton swipes his Times key card, the screen pulls up a personalized version of the paper, his interests highlighted. He clicks a button, opens the kiosk door, and inside I see an ordinary office printer, which releases a physical printout with just the articles he wants. As it prints, a second copy is sent to his phone.

The futuristic kiosk may be a plaything, but it captures the essence of R&D’s vision, in which the New York Times is less a newspaper and more an informative virus—hopping from host to host, personalizing itself to any environment.

Aside from my curiosity about the answers to the survey questions themselves, I had another reason for doing this survey.  All the articles I saw on the Times' announcement that it would start charging had the usual free-text commenting going.  Sprinkled through the comments were occasional suggestions from readers about what they might pay, but it was virtually impossible to take any sort of quantified pulse on this issue in this format.  Following "structured collaboration" principles, I took five minutes to throw up the survey to make it easy to contribute and consume answers.  Hopefully I've made it easier for readers to filter / process the Times' announcement, and made the analysis useful as well -- for example, feel free to stick the chart in your business plan for a subscription-based online content business ;-)  If anyone can point me to other, larger, more rigorous surveys on the topic, I'd be much obliged.

The broader utility of structuring the data capture this way is perhaps greatest to media firms themselves:  indirectly for ad and content targeting value, and perhaps because once you have lots of simple databases like this, it becomes possible to weave more complex queries across them, and out of these queries, some interesting, original editorial possibilities.

Briefly considered, then rejected for its avarice and stupidity: personalized pricing offers to subscribe to the NYT online based on how you respond to the survey :-)

Postscript: via my friend Thomas Macauley, NY (Long Island) Newsday is up to 35 paid online subs.

January 01, 2010

Grokking Google Wave: The Homeland Security Use Case (And Why You Should Care)

A few people asked me recently what I thought of Google Wave.  Like others, I've struggled to answer this.

In the past few days I've been following the news about the failed attempt to blow up Northwest 253 on Christmas Day, and the finger-pointing among various agencies that's followed it.  More particularly, I've been thinking less about whose fault it is and more about how social media / collaboration tools might be applied to reduce the chance of a Missed Connection like this.

A lot of the comments by folks in these agencies went something like, "Well, they didn't tell us that they knew X," or "We didn't think we needed to pass this information on."  What most of these comments have in common is that they're rooted in a model of person-to-person (or point-to-point) communication, which creates the possibility that one might "be left out of the loop" or "not get the memo".

For me, this created a helpful context for understanding how Google Wave is different from email and IM, and why the difference is important.  Google Wave's issue isn't that the fundamental concept's not a good idea.  It is.  Rather, its problem is that it's paradigmatically foreign to how most people (excepting the wikifringe) still think.

Put simply, Google Wave makes conversations ("Waves") primary, and who's participating secondary.  Email, in contrast, makes participants primary, and the subjects of conversations secondary.  In Google Wave, with the right permissions, folks can opt into reading and participating in conversations, and they can invite others.  The onus for awareness shifts from the initiator of a conversation to folks who have the permission and responsibility to be aware of the conversation.  (Here's a good video from the Wave team that explains the difference right up front.)  If the conversation about Mr. Abdulmutallab's activities had been primary, the focus today would be about who read the memo, rather than who got it.  That would be good.  I'd rather we had a filtering problem than an information access / integration problem.
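One way to see the paradigm difference is as two tiny data models: in email, the recipient list is baked into the message, while in a Wave-style system the conversation carries a permission rule that eligible people can satisfy on their own. A toy illustration (my sketch, not Wave's actual data model; all names and addresses are invented):

```python
# Email model: participants are primary. If you weren't on the "to"
# line, the conversation effectively doesn't exist for you.
email = {
    "from": "agency_a@example.gov",
    "to": ["agency_b@example.gov"],
    "subject": "Re: watchlist review",
}

# Wave-style model: the conversation is primary. Anyone whom the
# permission rule admits can discover, read, and join it.
wave = {
    "topic": "watchlist review",
    "readable_by": lambda user: user.endswith("@example.gov"),
}

def can_follow(conversation, user):
    """True if the user is allowed to opt into the conversation."""
    return conversation["readable_by"](user)
```

In the email model, a third agency is out of the loop unless the sender thinks to add it; in the conversation-primary model, the onus shifts to anyone the rule admits to go read it.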

You may well ask, "Isn't the emperor scantily clad -- how is this different from a threaded bboard?"  Great question.   One answer might be that "Bboards typically exist either independently, or as features of separate purpose-specific web sites.  Google Wave is to threaded bboard discussions as Google Reader is to RSS feeds -- a site-independent conversation aggregator, just as Google Reader is a site-independent content aggregator."   Nice!  Almost: one problem of course is that Google Wave today only supports conversations that start natively in Google Wave.  And, of course, that you can (sometimes) subscribe to RSS feeds of bboard posts, as in Google Groups, or by following conversations by subscribing to RSS feeds for Twitter hashtags.  Another question: "How is Google Wave different from chat rooms?"  In general, most chats are more evanescent, while Waves appear (to me) to support both synchronous chat and asynchronous exchanges equally well.

Now the Big Question: "Why should I care?  No one is using Google Wave anyway."  True (only 1 million invitation-only beta accounts as of mid-November, active number unknown) -- but at least 146 million people use Gmail.  Others already expect Google Wave eventually will be introduced as a feature for Gmail: instead of / in addition to sending a message, you'll be able to start a "Wave".  It's one of the top requests for the Wave team.  (Gmail already approximates Wave by organizing its list of messages into threads, and by supporting labeling and filtering.)  Facebook, with groups and fan pages, appears to have stolen a march on Google for now, but for the vast bulk of the world that still lives in email, it's clunky to switch back and forth.  The killer social media / collaboration app is one that tightly integrates conversations and collaboration with messaging, and the prospect of Google-Wave-in-Gmail is the closest solution with any realistic adoption prospects that I can imagine right now.

So while it's absurdly early, marketers, you read it here first: Sponsored Google Waves :-)  And for you developers, it's not too early to get started hacking the Google Wave API and planning how to monetize your apps.

Oh, and Happy New Year!

Postscript: It was the software's fault...

Postscript #2: Beware the echo chamber

December 26, 2009

A Springier #iPhone Springboard: Why, When, and How

Once again it's the Year Of Mobile.  Let's put aside for the moment whether you think this is still another macromyopic projection.  Assuming you buy that, there's no denying the iPhone's leadership position in the mobile ecosystem.  If mobile's important to you, the iPhone desktop is "strategic ground" whose evolution you should care about.

A frequent beef about the iPhone is that all apps are accessed from a single-level desktop, and that you have to swipe across several screens to get to the app you want.  (Sometimes, this can be life-threatening, as when a friend launches PhoneSaber, and you're slow on the draw.)  Today we're mostly stuck with this AFAIK, since my cursory research (browsing plus buttonholing some Apple Store folks) didn't reveal any immediate plans to upgrade the iPhone OS to address this.

It's interesting to see how what tribe you're from influences how you'd solve this.  If Microsoft (of yore, anyway) made the iPhone, the solution might likely be some sort of Windows Explorer-type hierarchical folders.  If Google made the iPhone, the answer to this challenge might be Gmail-style labels / tags.  If you come from the Apple/ Adobe RIA world, Expose might appeal to you.

From the business side, my mind runs to the "Why" that will shape the "When" and "How".  Here's a 2010 prediction:  big firms will stop thinking in terms of having one iPhone app, and start thinking in terms of fielding "branded suites" of iPhone apps.

Let's say you're a media firm, with multiple media properties.  These properties might share a similar functional need solved by a common app, like a reader.  Or, a single media property (say, a men's lifestyle one) might want a collection of lighter-weight, function-specific apps like a wine-chooser, a tie-chooser (take pictures of your ties, then have the app suggest -- via expert opinion, crowdsourcing, or an API for your significant other to code to -- which of your ties might go well with a shirt you see / snap a picture of at the store), and so on.

Without more dimensionality to Springboard, the BigCo app developer has two choices:

  • Lard up a single app to do more within the "brand experience" it creates with its iPhone app.  But monolithic apps are slower and less reliable, presumably even if you're using the Tab Bar framework.  Plus, monolithic apps don't expand BigCo's share of the iPhone Springboard desktop, presumably a desirable strategic objective.
  • Build multiple apps that get scattered across the Springboard, compromising the "critical mass" feel of the "branded suite."  (Apps that appear together make more of a brand impression than apps appearing separately on different screens.  I don't have any data to support this, and you could argue the opposite, that apps scattered across screens provide more frequent brand reminders; but I think folks are mnemonically likelier to remember "five swipes to the men's lifestyle screen".  Anybody got data?)

The BigCo marketing department has a choice not available to the lowly app developer, however, and that's to write Apple a check.  It's reasonable to expect that we won't all get access to the new "MDS" (Multi-Dimensional Springboard) API BigCo gets.  Today, Apple already price-discriminates among iPhone developers: the Standard enrollment charge is $99, while the Enterprise is $299.  As this platform becomes even more important, and as BigCos want to do more with it, it's reasonable to expect that Apple will get even more creative with its pricing, privately or publicly.

So that's the "Why".  As for "When", I'm guessing no earlier than 2011, given Apple's Cathedral-style approach to iPhone development (this might provide an opportunity for Android, BTW). (Thanks for re-tweeting this, @perryhewitt .)

And "How"? I'm betting on an Expose-style interface.  Swipe down to "zoom in" to a single screen, swipe up to move to a "higher altitude" and view multiple screens at once, perhaps with a subtle label or background (brand-appropriate, natch) for each one.

Who's closer to this?  What do you know?

December 22, 2009

New Year's Resolutions, 2010, Part I: Less Is More

Tony Haile was my gracious host last week for a short visit to Betaworks in Manhattan's meatpacking district.  Fascinating conversation (Thanks Tony!), more about that in a separate post to follow. 

Across the street from Betaworks' offices was this sign:

 IMG01101-1

Hit me like a ton of bricks (no irony).  My gold standard for trying to boil down what I'm doing -- and for that matter what anyone else I'm working with is doing:

  • Clarity -- it's veal.
  • "Promise" -- not just veal: quality veal.
  • Accountability -- got a beef?  Talk to Dave.
  • Brevity -- more questions?  Knock on #425.

Probably not the same epiphany for you as it was for me, and Seth Godin's got nothing to worry about to be sure, but nonetheless a high signal-to-noise moment for me given what's been on my mind.  Hope to use it to full effect in the new year.

October 22, 2009

Fly-By-Wire Marketing

One of the big innovations used in the F-16 fighter jet was the "fly-by-wire" flight control system.  Instead of directly connecting the pilot's movements of the control stick and the rudder pedals to the aircraft's control surfaces through cables (for WWI-era biplanes) or hydraulics, the pilot's commands were now communicated electronically to an intermediate computer, which then interpreted those inputs and made appropriate adjustments. 

This saved a lot of weight, and channeling some of those weight savings into redundant control circuits made planes safer.  Taken to its extreme in planes like the B-2 bomber, "fly-by-wire" made it possible for pilots to "fly" inherently unstable airplanes by leaving microsecond-by-microsecond adjustments to the intermediate computer, while the pilot (or autopilot) provided broader guidance about climbs, turns, and descents.

Now we have "fly-by-wire marketing".

A couple of days ago I read Daniel Roth's October 19 article on Wired.com titled "The Answer Factory: Fast, Disposable, and Profitable as Hell", describing Demand Media's algorithmic approach to deciding what content to commission and publish.  The article is a real eye-opener.  While we watch traditional publishers talk about turning "print dollars into digital dimes", Demand has built a $200 million annual revenue business with a $1 billion valuation.  How?  As Roth puts it, "Instead of trying to raise the market value of online content to match the cost of producing it — perhaps an impossible proposition — the secret is to cut costs until they match the market value."  More specifically,

Before Reese came up with his formula, Demand Media operated in the traditional way. Contributors suggested articles or videos they wanted to create. Editors, trained in the ways of search engine optimization, would approve or deny each while also coming up with their own ideas. The process worked fine. But once it was automated, every algorithm-generated piece of content produced 4.9 times the revenue of the human-created ideas. So Rosenblatt got rid of the editors. Suddenly, profit on each piece was 20 to 25 times what it had been. It turned out that gut instinct and experience were less effective at predicting what readers and viewers wanted — and worse for the company — than a formula.

I'm currently in situations where either the day-to-day optimization of the marketing process is too complex to manage fully through direct human intervention, or some of the optimizations to be performed are still sufficiently vague that we can only anticipate them at a broader, categorical level, from which a subsequent process -- perhaps an automated one -- will be necessary to fully realize them.  I also recently went to and blogged about a very provocative MITX panel on personalization, where a key insight (thanks to Scott Brinker, Co-founder and CTO of ion Interactive) was how the process to support personalization needs to change as you cut to finer and finer-grained targeting.  So it was with these contexts in mind that I read Roth's article, and the question it prompted for me was, "In a future dominated by digital channels, is there a generic roadmap for appropriate algorithmic abstractions of marketing optimization efforts that I can then adapt for (client-) specific situations?" 

That may sound a little out there, but Demand Media is further proof that "The future's already here, it's just not evenly distributed yet."  And, I'm not original in pointing out that we've had automated trading on Wall Street for a while; with the market for our attention becoming as digital as the markets for financial securities, this analogy is increasingly apt.

So here are some bare bones of what such a roadmap might look like.

Starting with end in mind, an ultimate destination might be that we could vary as many elements of the marketing mix as needed, as quickly as needed, for each customer (You laugh, but the holodeck isn't that far away...), where the end result of that effort would generate some positive marginal profit contribution. 

At the other end of the road, where we stand today, in most companies these optimization efforts are done mostly by hand.  We design and set campaigns into motion by hand, we use our eyes to read the results, and we make manual adjustments.

One step forward, we have mechanistic approaches.  We set up rules that say, "Read the incoming data; if you see this pattern, then make this adjustment."  More concretely, "When a site visitor with these cookies set in her browser arrives, serve her this content." This works fine as long as the patterns to be recognized, and the adjustments to be made, are few and relatively simple.  It's a lot of work to define the patterns to look for.  And, it can be lots of work to design, implement, and maintain a campaign, especially if it has lots of variants for different target segments and offers (even if you take a "modular" approach to building campaign elements).  Further, at this level, while what the customer experiences is automated, the adjustments to the approach are manual, based on human observation and interpretation of the results.
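The mechanistic level can be sketched in a few lines of code: a hand-authored, ordered rule set that maps observed visitor patterns to content choices. This is just an illustration; the cookie names, segments, and content IDs are all hypothetical, and a real implementation would live in a personalization or ad-serving platform rather than application code.

```python
# Hand-authored rules: (predicate over the visitor's cookies, content to serve).
# Order matters: the first matching rule wins. All names here are made up.
RULES = [
    (lambda c: c.get("segment") == "returning" and c.get("cart") == "abandoned",
     "win-back-offer"),
    (lambda c: c.get("segment") == "returning",
     "loyalty-content"),
    (lambda c: True,  # fallback for everyone else
     "default-landing"),
]

def choose_content(cookies):
    """Return the content for the first rule whose pattern matches this visitor."""
    for predicate, content in RULES:
        if predicate(cookies):
            return content

print(choose_content({"segment": "returning", "cart": "abandoned"}))  # win-back-offer
print(choose_content({}))  # default-landing
```

Note how the maintenance burden shows up directly in the code: every new segment or offer means another hand-written rule, and nothing here adjusts itself based on results.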

Two steps down the road, we have self-optimizing approaches where the results are fed back into the rule set automatically.  The Big Machine says,  "When we saw these patterns and executed these marketing activities, we saw these results; crunching a big statistical model / linear program suggests we should modify our marketing responses for these patterns in the following ways..."  At this level, the human intervention is about how to optimize -- not what factors to consider, but which tools to use to consider them.
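One minimal sketch of this self-optimizing level is a multi-armed bandit: instead of a human reading results and rewriting rules, the system feeds observed outcomes back into its own choices. The epsilon-greedy strategy below is just one simple option (the "big statistical model" could be far richer), and the content names and conversion rates are invented for illustration.

```python
import random

class EpsilonGreedy:
    """Self-optimizing chooser: explores occasionally, otherwise exploits
    the variant with the best running estimate of conversion rate."""

    def __init__(self, arms, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = {a: 0 for a in arms}    # times each variant was shown
        self.values = {a: 0.0 for a in arms}  # running mean observed reward

    def choose(self):
        if random.random() < self.epsilon:
            return random.choice(list(self.counts))  # explore
        return max(self.values, key=self.values.get)  # exploit

    def update(self, arm, reward):
        # Fold the observed result back into the running estimate.
        self.counts[arm] += 1
        n = self.counts[arm]
        self.values[arm] += (reward - self.values[arm]) / n

# Simulated campaign with two hypothetical offers and made-up true rates.
random.seed(0)
true_rates = {"offer-a": 0.02, "offer-b": 0.05}
bandit = EpsilonGreedy(list(true_rates))
for _ in range(5000):
    arm = bandit.choose()
    converted = 1.0 if random.random() < true_rates[arm] else 0.0
    bandit.update(arm, converted)

print(max(bandit.values, key=bandit.values.get))  # the current best-estimated offer
```

The human's role here is exactly as described above: picking the optimization machinery (epsilon-greedy vs. something more sophisticated, the value of epsilon, what counts as a reward), not hand-specifying which patterns get which content.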

I'm not clear yet about what's beyond that.  Maybe Skynet.  Or, maybe I get a Kurzweil-brand math co-processor implant, so I can keep up with the machines.

The next question you ask yourself is, "How far down this road does it make sense for me to go, by when?"  Up until recently, I thought about this with the fairly simplistic idea that a single pair of curves describes exponentially decreasing returns and exponentially increasing complexity.  The reality is that there are different relationships between complexity and returns at different points -- what my old boss George Bennett used to call "step-function" change.

For me, the practical question-within-a-question this raises is, for each of these "step-functions", is there a version of the algorithm that's only 20% as complex, that gets me 80% of the benefit?  My experience has been that the answer is usually "yes".  But even if that weren't the case, my approach in jumping into the uncharted territory of a "step-function" change in process, with new supporting technology and people roles, would be to start simple and see where that goes.

At minimum, given the "step-function" economics demonstrated by the Demand Medias of the world, I think senior marketing executives should be asking themselves, "What does the next 'step-function' look like?", and "What's the simplest version of it we should be exploring?" (Naturally, marketing efforts in different channels might proceed down this road at different paces, depending on a variety of factors, including the volume of business through that channel, the maturity of the technology involved, and the quality of the available data.  I've pushed the roadmap idea further to help organizations make decisions based on this richer set of considerations.)

So, what are your plans for Fly-By-Wire Marketing?

Postscript: Check out "The Value Of The New Machine", by Steve Smith in Mediapost's "Behavioral Insider" e-newsletter today.  Clearly things are well down the road -- or should be -- at most firms doing online display and search buys and campaigns.  Email's probably a good candidate for some algorithmic abstraction.