A few months ago I posted on what I called "Fly-By-Wire Marketing", or the emergence of the automation of marketing decisions -- and sometimes the automation of the development of rules for guiding those decisions.
More recently Brian Stein introduced me to Hunch, the new recommendation service founded by Caterina Fake of Flickr fame. (Here's their description of how it works. Here's my profile; I'm just getting started.) When you register, you answer questions to help the system get to know you. When you ask for a recommendation on a topic, the system considers not only what others have recommended under different conditions, but also what you've told it about yourself, and how you compare with others who have sought advice on the subject.
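Hunch hasn't published its algorithm, but the mechanism described above -- weighting the crowd's recommendations by how much each recommender resembles you -- can be sketched as a simple similarity-weighted vote. Everything below (the answer-vector representation, the cosine measure, the function names) is my own illustrative assumption, not Hunch's actual implementation:

```python
import math
from collections import defaultdict

def cosine(a, b):
    """Similarity between two users' profile-question answers,
    each represented as a dict of question -> numeric answer."""
    shared = set(a) & set(b)
    dot = sum(a[q] * b[q] for q in shared)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def recommend(me, others, topic_votes, top_n=3):
    """Score each candidate item by votes from other users,
    weighted by how similar each voter's answers are to mine."""
    scores = defaultdict(float)
    for user, answers in others.items():
        weight = cosine(me, answers)
        for item in topic_votes.get(user, []):
            scores[item] += weight
    return sorted(scores, key=scores.get, reverse=True)[:top_n]
```

In this toy version, a recommender who answered the profile questions exactly as I did counts fully toward my result, while one who answered oppositely counts for nothing -- which is the intuition behind "how you compare with others who have sought advice on the subject."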
It's an ambitious service, not only in terms of its potential business value (as an affiliate on steroids) but also in terms of its technical approach to "real time personalization". Via Sim Simeonov's blog, I read this GigaOm post by Tom Pinckney, a Hunch co-founder and their VP of Engineering. Sim's comment sparked an interesting comment thread on Tom's post. Both are worth reading to get a feel for the balance between pre-computation and on-the-fly computation, as well as the advantages of, and limits to, the large pre-existing data sets about user preferences and behavior that go into these services today.
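To make the pre-computation vs. on-the-fly distinction concrete, here's a hypothetical sketch of the split (my own construction, not drawn from Tom's post): the expensive aggregation over all users runs as an offline batch job, and only a cheap per-user re-ranking happens at request time:

```python
from collections import Counter

def precompute_popularity(all_votes):
    """Offline batch job: aggregate every user's votes into one
    popularity table. Run periodically, not per request."""
    return Counter(item for votes in all_votes.values() for item in votes)

def rerank_for_user(popularity, user_boosts, top_n=3):
    """Request time: cheaply adjust the precomputed scores with
    per-user multipliers (e.g., derived from profile answers)."""
    scored = {item: count * user_boosts.get(item, 1.0)
              for item, count in popularity.items()}
    return sorted(scored, key=scored.get, reverse=True)[:top_n]
```

The design question the comment thread circles around is where to draw this line: the more personalization you push into the offline job, the cheaper each request, but the staler and less individual the results.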
One thing neither post mentions is that there may be diminishing returns to increasingly powerful recommendation logic if the set of things from which a recommendation can ultimately be selected is limited at a generic level. For example, take a look at Hunch's recommendations for housewarming gifts. The results more or less break down into wine, plants, media, and housewares. Beyond this level, I'm not sure the answer is improved by "the wisdom of Hunch's crowd" or "Hunch's wisdom about me", as much as my specific wisdom about the person for whom I'm getting the gift, or maybe by what's available at a good price. (Perhaps this particular Hunch "topic" could be further improved by crossing recommendations against the intended beneficiary's Amazon wish list?)
My point isn't that Hunch isn't an interesting or potentially useful service. Rather, as I argued several months ago,
The [next] question you ask yourself is, "How far down this road does it make sense for me to go, by when?" Up until recently, I thought about this with the fairly simplistic idea that single curves describe exponentially decreasing returns and exponentially increasing complexity. The reality is that there are different relationships between complexity and returns at different points -- what my old boss George Bennett used to call "step-function" change.
For me, the practical question-within-a-question this raises is, for each of these "step-functions", is there a version of the algorithm that's only 20% as complex, that gets me 80% of the benefit? My experience has been that the answer is usually "yes". But even if that weren't the case, my approach to jumping into the uncharted territory of a "step-function" change in process, with new supporting technology and people roles, would be to start simple and see where that goes.
At minimum, given the "step-function" economics demonstrated by the Demand Medias of the world, I think senior marketing executives should be asking themselves, "What does the next 'step-function' look like?", and "What's the simplest version of it we should be exploring?" (Naturally, marketing efforts in different channels might proceed down this road at different paces, depending on a variety of factors, including the volume of business through that channel, the maturity of the technology involved, and the quality of the available data...)
Hunch is an interesting specific example of the increasingly broad real-time personalization (RTP) trend. The NYT had an interesting article on real time bidding for display ads yesterday, for example. The deeper issue I find interesting in this trend is the shift in power and profit toward specialized third parties who develop the capability to match the right cookie to the right ad unit (or, for humans, the right user to the right advertiser), and away from publishers with audiences. In the case of Hunch, they're one and the same, but they're the exception. How much of the premium advertisers are willing to pay for better targeting goes to the specialized provider with the algorithm and the computing power, versus the publisher with the audience and the data about its members' behavior? And for that matter, how can advertisers better optimize their investments across the continuum of targeting granularity? Given the dollars now flooding into digital marketing, these questions aren't trivial.