About

I'm a partner in the advanced analytics group at Bain & Company, the global management consulting firm. My primary focus is on marketing analytics (bio). I've been writing here (views my own) about marketing, technology, e-business, and analytics since 2003 (blog name explained).

Email or follow me:


28 posts categorized "ecommerce"

November 18, 2009

@Chartbeat: Biofeedback For Your Web Presence

Via an introduction by my friend Perry Hewitt, I had a chance yesterday to learn more about Chartbeat, the real-time web analytics product, from its GM Tony Haile.

Chartbeat provides a tag-based tracking mechanism, dashboard, and API for understanding your site's users in real time.  So, you say, GA and others are only slightly lagged in their reporting.  What makes Chartbeat differentially useful?

I recently wrote a post titled "Fly-By-Wire Marketing" that reacted to an article in Wired on Demand Media's business model, and suggested a roadmap for firms interested in using analytics to automate web publishing processes. 

After listening to Tony (partly with "Fly-By-Wire Marketing" notions in mind), it occurred to me that perhaps the most interesting possibilities lay in tying a tool like Chartbeat into a web site's CMS, or more ambitiously into a firm's marketing automation / CRM platform, to adjust on the fly what's published / sent to users.

Have a look at their live dashboard demo, which tracks user interactions with Fred Wilson's blog, avc.com.  Here's a question: how would avc.com evolve during the day if Fred -- or Fred's readers -- could see this information live on the site, perhaps via a widget that allowed toggling through different views?  Here are some ideas:

1. If I saw a disproportionate share of visitors coming through from a particular location, I might push stories tagged with that location to a "featured stories" section / widget, on the theory that local friends tell local friends, who might then visit the home page URL directly.

2. If I saw that a particular story was proving unusually popular, I might (as above) feature "related content", both on a home page and on the story page itself.

3. If I saw that traffic was being driven disproportionately by a particular keyword, I might try to wire a threshold / trigger into my AdWords account (or SEM generally) to boost spending on that keyword, and I might ask relevant friends for some link-love (though this obviously is slowed by how frequently search engines re-index you).  (A rough sketch of this kind of threshold / trigger rule follows the list.)

(Note: pushing this further, as we discussed with Tony, we'd subscribe to a service that would give us a sense for how much of the total traffic being driven to Chartbeat users by that keyword is coming our way, and use that as a metric for optimizing our traffic-driving efforts in real time.  Of course such a service would have to anonymize competitor information, be further aggregated to protect privacy, and be offered on an opt-in basis, but could be valuable even at low opt-in rates, since what we're after is relative improvement indications, and not absolute shares.)

4. If you saw lots of traffic from a particular place, or keyword, or on a particular product, you might connect this information to your email marketing system and have it influence what goes out that day.  Or, you might adjust prices, or promotions, dynamically based on some of this information.
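
To make the threshold / trigger idea concrete, here's a minimal sketch of how ideas 1-3 might be wired up.  Everything specific in it -- the visitor feed, the field names, the CMS and SEM hooks -- is a hypothetical stand-in, not Chartbeat's actual API.

```python
# A rough sketch of the threshold / trigger wiring behind ideas 1-3: poll a
# real-time analytics feed and fire simple rules when a location or keyword
# becomes "disproportionate".  The feed function, field names, and the CMS /
# SEM hooks are hypothetical stand-ins, not Chartbeat's actual API.
from collections import Counter

SHARE_THRESHOLD = 0.25   # "disproportionate" = more than 25% of current visitors

def fetch_current_visitors():
    # Stand-in for a call to the real-time analytics API: one dict per
    # concurrent visitor.  The field names are assumptions.
    return [
        {"geo": "Boston",   "referrer_keyword": "venture capital"},
        {"geo": "Boston",   "referrer_keyword": "term sheets"},
        {"geo": "New York", "referrer_keyword": "venture capital"},
    ]

def feature_stories_tagged(tag):
    print(f"CMS: promote stories tagged '{tag}' to the featured-stories widget")

def boost_keyword_spend(keyword):
    print(f"SEM: raise spend on '{keyword}' for the next hour")

def run_rules():
    visitors = fetch_current_visitors()
    if not visitors:
        return
    total = len(visitors)
    for field, action in (("geo", feature_stories_tagged),
                          ("referrer_keyword", boost_keyword_spend)):
        counts = Counter(v[field] for v in visitors if v.get(field))
        if not counts:
            continue
        top_value, count = counts.most_common(1)[0]
        if count / total > SHARE_THRESHOLD:
            action(top_value)

# In production this would run on a timer (say, every minute); here, once:
run_rules()
```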

Some of you will wonder how these ideas relate to personalization, which is already a big if imperfectly implemented piece of many web publishers' and e-retailers' capabilities.  I say personalization is great for recognizing and adjusting to each of you, but not to all of you.  For example, pushing this further, I wonder about the potential for "analytics as content".  NYT's "most-emailed" list is a good example of this, albeit in a graphically unexciting form.  What if you had a widget that plotted visitors on a map (which exists today of course) but also color-coded them according to their source, momentarily flashing the site or keyword that referred them?  At minimum it would be entertaining, but it would also hold a mirror up to the site's users, showing them who they are (their locations and interests), in a way that would reinforce the sense of community the site may be trying to foster through other means.

Reminds me a bit of Spinvision, and by proxy of this old post.

October 22, 2009

Fly-By-Wire Marketing

One of the big innovations used in the F-16 fighter jet was the "fly-by-wire" flight control system.  Instead of directly connecting the pilot's movements of the control stick and the rudder pedals to the aircraft's control surfaces through cables (for WWI-era biplanes) or hydraulics, the pilot's commands were now communicated electronically to an intermediate computer, which then interpreted those inputs and made appropriate adjustments. 

This saved a lot of weight, and channeling some of those weight savings into redundant control circuits made planes safer.  Taken to its extreme in planes like the B-2 bomber, "fly-by-wire" made it possible for pilots to "fly" inherently unstable airplanes by leaving microsecond-by-microsecond adjustments to the intermediate computer, while the pilot (or autopilot) provided broader guidance about climbs, turns, and descents.

Now we have "fly-by-wire marketing".

A couple of days ago I read Daniel Roth's October 19 article on Wired.com titled "The Answer Factory: Fast, Disposable, and Profitable as Hell", describing Demand Media's algorithmic approach to deciding what content to commission and publish.  The article is a real eye-opener.  While we watch traditional publishers talk about turning "print dollars into digital dimes", Demand has built a $200 million annual revenue business with a $1 billion valuation.  How?  As Roth puts it, "Instead of trying to raise the market value of online content to match the cost of producing it — perhaps an impossible proposition — the secret is to cut costs until they match the market value."  More specifically,

Before Reese came up with his formula, Demand Media operated in the traditional way. Contributors suggested articles or videos they wanted to create. Editors, trained in the ways of search engine optimization, would approve or deny each while also coming up with their own ideas. The process worked fine. But once it was automated, every algorithm-generated piece of content produced 4.9 times the revenue of the human-created ideas. So Rosenblatt got rid of the editors. Suddenly, profit on each piece was 20 to 25 times what it had been. It turned out that gut instinct and experience were less effective at predicting what readers and viewers wanted — and worse for the company — than a formula.

I'm currently in situations where either the day-to-day optimization of the marketing process is too complex to manage fully through direct human intervention, or some of the optimizations to be performed are still vague enough that we can only anticipate them at a broad, categorical level, and a subsequent process -- perhaps an automated one -- will be needed to fully realize them.  I also recently went to and blogged about a very provocative MITX panel on personalization, where a key insight (thanks to Scott Brinker, Co-founder and CTO of ion Interactive) was how the process to support personalization needs to change as you move to finer and finer-grained targeting.  So it was with these contexts in mind that I read Roth's article, and the question it prompted for me was, "In a future dominated by digital channels, is there a generic roadmap for appropriate algorithmic abstractions of marketing optimization efforts that I can then adapt for (client-) specific situations?"

That may sound a little out there, but Demand Media is further proof that "The future's already here, it's just not evenly distributed yet."  And, I'm not original in pointing out that we've had automated trading on Wall Street for a while; with the market for our attention becoming as digital as the markets for financial securities, this analogy is increasingly apt.

So here are some bare bones of what such a roadmap might look like.

Starting with the end in mind, an ultimate destination might be that we could vary as many elements of the marketing mix as needed, as quickly as needed, for each customer (you laugh, but the holodeck isn't that far away...), where the end result of that effort would generate some positive marginal profit contribution.

At the other end of the road, where we stand today, in most companies these optimization efforts are done mostly by hand.  We design and set campaigns into motion by hand, we use our eyes to read the results, and we make manual adjustments.

One step forward, we have mechanistic approaches.  We set up rules that say, "Read the incoming data; if you see this pattern, then make this adjustment."  More concretely, "When a site visitor with these cookies set in her browser arrives, serve her this content." This works fine as long as the patterns to be recognized, and the adjustments to be made, are few and relatively simple.  It's a lot of work to define the patterns to look for.  And, it can be lots of work to design, implement, and maintain a campaign, especially if it has lots of variants for different target segments and offers (even if you take a "modular" approach to building campaign elements).  Further, at this level, while what the customer experiences is automated, the adjustments to the approach are manual, based on human observation and interpretation of the results.
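
A bare-bones sketch of what that rule set might look like (the cookie names and content slugs are invented for illustration):

```python
# A minimal sketch of the "mechanistic" level: hand-written pattern ->
# adjustment rules evaluated against each visitor.  The cookie names and
# content slugs are made up for illustration.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    matches: Callable[[dict], bool]      # the pattern to recognize
    content: str                         # the adjustment to make

RULES = [
    Rule("returning cart abandoner",
         lambda cookies: cookies.get("cart_status") == "abandoned",
         "free-shipping-reminder"),
    Rule("loyalty member",
         lambda cookies: cookies.get("loyalty_tier") in {"gold", "platinum"},
         "early-access-offer"),
]
DEFAULT_CONTENT = "standard-home-page"

def choose_content(cookies: dict) -> str:
    for rule in RULES:
        if rule.matches(cookies):
            return rule.content
    return DEFAULT_CONTENT

print(choose_content({"cart_status": "abandoned"}))   # free-shipping-reminder
print(choose_content({}))                             # standard-home-page
```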

Two steps down the road, we have self-optimizing approaches where the results are fed back into the rule set automatically.  The Big Machine says,  "When we saw these patterns and executed these marketing activities, we saw these results; crunching a big statistical model / linear program suggests we should modify our marketing responses for these patterns in the following ways..."  At this level, the human intervention is about how to optimize -- not what factors to consider, but which tools to use to consider them.
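
Here's a toy version of that feedback loop.  In place of the "big statistical model / linear program," it uses the simplest possible stand-in -- an epsilon-greedy test-and-learn rule per pattern -- just to make the observe-update-adjust cycle visible end to end; it isn't drawn from any particular vendor's product.

```python
# A toy version of the self-optimizing level: results are fed back into the
# choice of marketing response automatically, via an epsilon-greedy rule.
import random
from collections import defaultdict

EPSILON = 0.1  # fraction of traffic reserved for exploration

class SelfOptimizingRules:
    def __init__(self, responses):
        self.responses = responses                       # candidate marketing responses
        self.trials = defaultdict(lambda: defaultdict(int))
        self.wins = defaultdict(lambda: defaultdict(int))

    def choose(self, pattern):
        if random.random() < EPSILON:
            return random.choice(self.responses)         # explore
        # exploit: best observed conversion rate for this pattern so far
        def rate(r):
            t = self.trials[pattern][r]
            return self.wins[pattern][r] / t if t else 0.0
        return max(self.responses, key=rate)

    def record(self, pattern, response, converted):
        self.trials[pattern][response] += 1
        if converted:
            self.wins[pattern][response] += 1

# Usage: the "pattern" could be a recognized segment; the "response" a creative.
engine = SelfOptimizingRules(["offer_a", "offer_b", "offer_c"])
for _ in range(1000):
    choice = engine.choose("cart_abandoner")
    # Simulated outcome; in practice this would be an observed conversion.
    converted = random.random() < {"offer_a": 0.02, "offer_b": 0.05, "offer_c": 0.03}[choice]
    engine.record("cart_abandoner", choice, converted)
print(engine.wins["cart_abandoner"])   # offer_b should accumulate the most wins
```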

I'm not clear yet about what's beyond that.  Maybe Skynet.  Or, maybe I get a Kurzweil-brand math co-processor implant, so I can keep up with the machines.

The next question you ask yourself is, "How far down this road does it make sense for me to go, by when?"  Up until recently, I thought about this with the fairly simplistic idea that a single pair of curves describes exponentially decreasing returns and exponentially increasing complexity.  The reality is that there are different relationships between complexity and returns at different points -- what my old boss George Bennett used to call "step-function" change.

For me, the practical question-within-a-question this raises is, for each of these "step-functions", is there a version of the algorithm that's only 20% as complex, that gets me 80% of the benefit?  My experience has been that the answer is usually "yes".  But even if that weren't the case, my approach in jumping into the uncharted territory of a "step-function" change in process, with new supporting technology and people roles, would be to start simple and see where that goes.

At minimum, given the "step-function" economics demonstrated by the Demand Medias of the world, I think senior marketing executives should be asking themselves, "What does the next 'step-function' look like?", and "What's the simplest version of it we should be exploring?" (Naturally, marketing efforts in different channels might proceed down this road at different paces, depending on a variety of factors, including the volume of business through that channel, the maturity of the technology involved, and the quality of the available data.  I've pushed the roadmap idea further to help organizations make decisions based on this richer set of considerations.)

So, what are your plans for Fly-By-Wire Marketing?

Postscript: Check out "The Value Of The New Machine", by Steve Smith in Mediapost's "Behavioral Insider" e-newsletter today.  Clearly things are well down the road -- or should be -- at most firms doing online display and search buys and campaigns.  Email's probably a good candidate for some algorithmic abstraction.

October 05, 2009

Personalization Is A Process #MITXMT

Last week I went to a MITX panel in Cambridge titled, "Get Relevant:  The Next Generation of Website Personalization".  I asked a question: "In between broadcast-to-the-masses and one-to-one, where in your practical experience is the crossover point (in number of customer segments you define and design for) where returns from finer-grained personalization are exceeded by the complexity of supporting an expanding number of them?" 

I got three good answers:

  • Brett Zucker (CTO at Bridgeline Software) and Joe Henriques (Regional Director, Sitecore) manned up ;-) and each gave me a number for where they see many organizations today -- Brett said 6-10, and Joe put it at 10-20.  Gross average, but crudely useful nonetheless!
  • Andrew Hally (VP Product Marketing and Strategy at Unica) offered that it really depends on the industry, and on the mass-customizability of the product and other elements of the marketing mix. In the credit card biz, for example, you see firms executing against thousands of segments continuously redefined through real-time testing, because it's really easy to change terms and assess response.
  • Scott Brinker (Co-founder/CTO at ion Interactive) noted that there are several answers to the question, because the process changes with each order of magnitude of the number of segments you set up to support. Scott described how in his experience, the hardest jump to make is from a handful of segments, supported by a highly manual process for creating offers, to the next order of magnitude, which at minimum requires modularization of marketing mix elements so they can be re-combined easily.  Scott further noted that traditional, primary-research-based segmentation approaches are being replaced by emergent (my word, blame me) segmentations based on testing. 

(Gents, apologies if I misquoted you -- please let me know.)

There's another question embedded in all this, which is: personalization (segmentation) with respect to what?  In other words, you can usefully define different segments for different elements of the marketing mix.  An upscale leisure traveler and a business traveler may get the same luxury hotel room.  But, they may behave differently when it comes to price.  Price-sensitive travelers may have different preferences for using marketing channels to find the best price.  So, in theory, you could have a single segment for product (luxury buyer, for example), two for pricing ("service-focused" vs. "price-focused"), and yet another two for channel preferences (e.g., "online-dominant" vs. "offline-dominant").  This can make segmentation much more operationally relevant, but it then also puts a premium on coordinating the outputs of these segmentation efforts (for example, if someone's really price-sensitive, you may want to steer them toward lower-cost digital channels for conversion).
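
A small sketch of what carrying a separate segment label per marketing mix element might look like, with one coordination rule reading across them; the segment names and the steering rule just restate the hotel example above.

```python
# "Segmentation with respect to what": one customer carries a label per
# marketing-mix element, and a coordination rule reads across them.
# Segment names and the steering rule are illustrative only.
from dataclasses import dataclass

@dataclass
class CustomerSegments:
    product: str    # e.g. "luxury"
    price: str      # "service-focused" or "price-focused"
    channel: str    # "online-dominant" or "offline-dominant"

def preferred_conversion_path(seg: CustomerSegments) -> str:
    # Coordination rule from the example: very price-sensitive customers get
    # steered toward lower-cost digital channels for conversion.
    if seg.price == "price-focused":
        return "email + web booking (low cost to serve)"
    if seg.channel == "offline-dominant":
        return "call center / travel agent"
    return "web booking"

leisure = CustomerSegments(product="luxury", price="price-focused", channel="offline-dominant")
business = CustomerSegments(product="luxury", price="service-focused", channel="online-dominant")
print(preferred_conversion_path(leisure))    # email + web booking (low cost to serve)
print(preferred_conversion_path(business))   # web booking
```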

Finally, another take-away from all this is to match the technology to the degree and kind (e.g., implicit vs. explicit) of personalization you intend to design a process for.  There's no point in buying a "Hemi-size" personalization engine if you've got a "Yugo-size" gas tank for marketing mix execution.  On the other hand, when the technology for testing an order-of-magnitude jump in segments becomes affordable, maybe it's time to optimize for that capability and flex / redesign the execution capability?

September 17, 2009

#Adobe + #Omniture: Further Thoughts ( #Analytics )

I've been following the Web Analytics Forum on Yahoo! -- David Simmons' ideas here are especially thoughtful -- and I listened to the Q3 Adobe con call.  Plus last night at Web Analytics Wednesday in Cambridge I had a chance to talk about it with VisualIQ CTO Anto Chittilappilly and Visible Measures' Tara Chang.  Here are some further ideas based on what I've read and heard so far:

  • Rich media -- video, interactive ads, etc. -- are a growing piece of the Internet experience. (Google's rumored acquisition of Brightcove reflects this.)  (The flip side: Semphonic CEO Gary Angel writes that "It's taking a long time, but HTML is dying.")
  • Since they're growing, tracking user interactions with them effectively is increasingly important, not just on the Internet but across platforms, and as the CIMM challenge to Nielsen suggests, not yet well addressed.
  • User tracking in rich media platforms like Flash is more granular and more persistent than cookie-based tracking (David Simmons explains how and recommends Eric Peterson's Web Site Measurement Hacks for more).
  • But, support for event-based tracking of users' interactions in rich internet media in existing web analytics platforms is in its infancy (though vendors say otherwise).
  • So, publishers want tighter, simpler integration of event-based tracking.
  • In the con call, Adobe CEO Shantanu Narayen mentions the framework "Create - Deploy - Optimize" as a way to understand their overall product roadmap vision.
  • Adobe has the "Create" (e.g., Photoshop, this part of their business is ~60% of their revenues) and "Deploy" (e.g., Flash Server / Flash Player) pieces covered.  But "Optimize" was still uncovered, up until this announcement.
That's the product strategy logic.  The financial logic:
  • Adobe is too dependent on the "Create" leg of their stool, and hasn't been able to monetize the "Deploy" piece as much as they might have hoped -- removing  license fees from the Open Screen Project is one recent example of this limitation.  So they're betting that "Optimize" has legs, and that buying Omniture in this economic climate at ~5x revenues is good timing.
  • Adobe's traditional software-licensing business model has gotten crushed year over year.  Omniture's revenues are >90% recurring subscriptions based on a SaaS model.  Adobe's revenues (~$3B) are 10x Omniture's, but the enhanced value proposition of the A-O integration and cross-selling through Adobe's sales force will accelerate O's growth.  Over the next 2-3 years, this will help to reduce the volatility of A's revenues / revenue growth.

What's ahead?  One direction, as I've previously discussed, is that the workflows associated with "Create-Deploy-Optimize" are increasingly complex, and that platforms that support these workflows in a simple, more integrated way will become important to have.  Managing these processes through hairballs built with Excel spreadsheets scattered across file servers just won't cut it.

Postscript: Eric Peterson's take.  And more from him.  The gist -- not so high on Omniture's relative value to customers and the experience of working with them, and not sure why Adobe did this.  Anil Batra has some interesting ideas for product directions that emerge from the combination.

September 15, 2009

Adobe + Omniture: Pragmalytically Perfect Sense

Big news (via, in my case, Eric Peterson's "Web Analytics Forum" on Yahoo!): Adobe's buying Omniture.

Simple logic:

  • Adobe makes great tools for developing custom dashboards and other data visualization apps.  I know because in one engagement this summer, we've worked with a terrific client and another (also terrific) engineering firm to build a Flex-based prototype of an advanced predictive analytics application.  But prototyping is easy; tying a front end to a working, real-world analytics data model is much harder.
  • Omniture leads the pack of web analytics platform vendors, who all have more features and capabilities in their left pinkies than many of us could dream of in six lifetimes.  But exposing mere mortals to the interfaces these leading firms provide is like showing kryptonite to  Joe / Jane Superexecutive.  As analytics get more complex, it's even more important to focus on key questions and expose only the data / views on that data that illuminate those key questions.
  • So if you believe that this web / digital / multichannel analytics thing has legs, then putting these two firms together and working both ends to the middle faster than might otherwise have happened is a smart thing to do.
  • The other reason to do this is to anticipate the trend in "custom reporting" and "advanced segmentation" capabilities in the "lower-end" analytics offerings (e.g., GA) from folks like Google.  I've been using these capabilities recently, and they get you a meaningful part of the way there, eroding the value of higher-end offerings on both the front (Adobe) and middle-back (Omniture) ends.

July 31, 2009

Clunkalytics

This afternoon I listened to an NPR segment on the government's "Cash for Clunkers" program.  It sounds like quite the  goat rodeo.  

A senior researcher at JD Power & Associates in Detroit, interviewed for the segment, noted that although the program provided incentives sufficient to fund sales of 250,000 cars ($1Bn in incentives at ~$4k per car traded in / retired), his firm estimates that only 40,000 of those sales will be incremental -- over and above what would otherwise have been sold anyway.  Hmm.  If they're right, that's a billion dollars to lift sales by 40k cars -- $25k for each new, fuel-efficient car sold!  (In fairness, that's actually a lot less porky than a lot of things we hear about.)
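
For the record, the arithmetic behind that $25k figure, using only the numbers cited in the segment:

```python
# The arithmetic behind the "$25k per incremental car" figure, using the
# numbers cited in the NPR segment.
program_dollars = 1_000_000_000                 # total incentive funding
incentive_per_car = 4_000                       # ~$4k per traded-in clunker
funded_sales = program_dollars / incentive_per_car        # 250,000 cars
incremental_sales = 40_000                      # JD Power's estimate of truly new sales
cost_per_incremental_car = program_dollars / incremental_sales
print(f"{funded_sales:,.0f} funded sales; ${cost_per_incremental_car:,.0f} per incremental car")
# -> 250,000 funded sales; $25,000 per incremental car
```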

Would the government have done better to simply buy 40,000 cars, perhaps for a little less than $25k apiece?  Then it could have run a contest where Americans could enter their most egregious gas guzzlers (via online video, natch) in the hope of winning a replacement, which would have been more fun, and bought the government lots more -- and more positive -- coverage of the program (perhaps giving a whole new meaning to "cap and trade").  

Of course this ignores the benefit of getting the other ~200k gas guzzlers off the road.  But treated as an independent objective, surely there would have been a better mechanism for encouraging drivers who were going to buy anyway to buy north of 20 MPG?

Cynically, one might say that the real beneficiaries of this program aren't auto workers, but the dealers whose glutted lots get cleared (especially since the program can be used to buy "foreign" as well as "domestic" cars -- ironically the folks interviewed on NPR sounded a common refrain: "Trading in my old GMC / Ford for a new Toyota pickup!").  It's hard to believe that under current conditions the dealers will order replacement inventories from the plants sufficient to replace what they sold.  

It's easy to understand why this program was so popular in Congress, since there are dealers in every state.  But if you wanted to make the most of a billion dollar stimulus for the present and the future of the nation, would putting it into the pockets of auto salesmen nationwide have been the best way to go? (It's a serious question, since dealers are often at the center of their communities and do spread a lot of money around, so maybe the multiplier is significant.)

Notwithstanding, I'm not coming at this from an ideological position.  I get the need for a stimulus to ameliorate the recession.  I'm thinking about this in the context of other retail promotion programs we see, many of which have the same inefficient dynamics -- subsidizing sales that would have happened anyway, motivating sales of the wrong products, and making the channel happy for a little while rather than encouraging more lasting customer loyalty.  And since "you manage what you measure", I'm also thinking about how I might have set up an analytic framework to execute the program more effectively.

There is a web site for this program (FWIW, it's using Google Analytics).  To measure how well the program attracts possible customers, perhaps the government could have channeled prospective users of the program through it, to gather information (e.g., pre-existing purchase intent, perhaps in combination with data from a BT network) in exchange for the "coupon".  To measure engagement and conversion, surely -- I can imagine a number of options -- consumers could have been tracked through to dealers.

Then, providing open access to the data and crowd-sourcing suggestions for improving the program would have been cool, and good practice for aspiring web analytics professionals (free job training in a growth category!).  Sadly, but more likely, we'd be filing FOIA requests.  Oh well.

July 21, 2009

Facebook at 250 (Million): What's Next? And What Will Your Share Be?

Facebook announced last week that it had passed 250 million members.  Since no social network grows to the sky (as MySpace experienced before it), it's useful to reflect on the enablers and constraints to that growth, and on the challenges and opportunities those constraints present to other major media franchises (old and new) that are groping for a way ahead.

"Structured Collaboration" principles say social media empires survive and thrive based on how well they support value, affinity, and simplicity.  That is,

  • how useful (rationally and emotionally) are the exchanges of information they support?
  • how well do they support group structures that maximize trust and lower information vetting costs for members? 
  • how easy do they make it for users to contribute and consume information? 
(There are of course additional, necessary "means to these ends" factors, like "liquidity" -- the seed content and membership necessary to prime the pump -- and "extensibility" -- the degree to which members can adapt the service to their needs -- but that's for another post.)

My own experience with Facebook as a user, as well as my professional experience with it in client marketing efforts, has been:
  • Facebook focuses on broad, mostly generic emotional exchanges -- pictures, birthday reminders, pokes.  I get the first two, and I admire the economy of meaning in the third.  The service leaves it to you to figure out what else to share or swap.  As a result, it is (for me anyway) <linkbait> only sometimes  relevant as an element in a B2C campaign, and rarely relevant in a B2B campaign </linkbait>
  • Facebook roared past MySpace because it got affinity right -- initially.  That is, Facebook's structure was originally constrained -- you had to have an email address from the school whose Facebook group you sought to join.  Essentially, there had to be some pre-existing basis for affinity, and Facebook just helped (re-)build this connective tissue.  Then, Facebook allowed anyone to join, and made identifying the nature of relationships established or reinforced there optional.  Since most of us including me are some combination of busy and lazy, we haven't used this feature consistently to describe the origins and nature  of these relationships.  And, it's cumbersome and awkward to have to go back and re-categorize "friends". (An expedient hack on this might be to allow you to organize your friends into groups, and then ask you which groups you want to publish items to, as you go.)
  • Facebook is a mixed bag as a UI.  On one hand, by allowing folks to syndicate blogs and tweets into Facebook, they've made our lives easier.  On the other, the popular unstructured communications vehicles -- like the "Wall" -- have created real problems for some marketers.  Structured forms of interaction that would have created less risky choices for marketers, like polls, have come later than they should have and are still problematic (for example, you can't add polls to groups yet, which would be killer).  And, interacting with Facebook through my email client -- on my PC and on my smartphone -- is still painful.  To their credit, Facebook opened up a great API to enable others to build specialized forms of structured interaction on its social graph.  But in doing so it's ceded an opportunity to own the data associated with potentially promising ones.  (Like prediction markets; Inkling Markets, for example, lets you syndicate notices of your trades to Facebook, but the cupboard's still pretty bare for prediction-market apps running against Facebook directly.)
The big picture: Facebook is optimizing for size of the pie over share of the pie.  It can't be all things to all people, so it's let others extend it, share in the revenue, and create streams of their own.  Estimates of the revenues to be earned this year by the ecosystem of third-party app developers running on Facebook and MySpace run to $300-500 million, growing at 35% annually.
Them's not "digital dimes", especially in the context of steep declines in September ad pages at leading magazine franchises, as well as stalled television network upfronts.  But, folks might argue, "Do I want to live in thrall to the fickle Facebook API, and rent their social graph at a premium?"  The answer isn't binary -- how much of an app's functionality lives in Facebook, versus living on a publisher's own server, is a choice.  Plus, there are ways to keep Facebook honest, like getting behind projects like OpenSocial, as other social networks have done.  (OpenSocial is trying to be to Facebook's social graph what Linux is to Windows.  Engineer friends, I know -- only sort of.)  And, for Old Media types who don't feel they are up to the engineering tasks necessary, there are modern-day Levi Strausses out there selling jeans to the miners -- like Ning, which just today raised more money at a high valuation.  Still too risky?  Old Media could farm out app development to their own third-party developer networks, improving viral prospects by branding and promoting (to their subscriber lists) the ones they like in exchange for a cut of any revenues.  In this scenario, content gets added as an ingredient, not the whole main course.

What is true in the new environment is that reach-based ad network plays surfing on aggregated content won't pay any more.  Rather we have to think about services that would generate more revenue from narrower audiences.  The third-party games created by Facebook app developers referenced above demonstrate how those revenues might stem from value through entertainment.  As we speak, Apple and its developers are earning non-trivial sums from apps.  Phonetag has its hands in folks' pockets (mine included) for $10/month for its superuseful -- albeit non-social -- transcription service.  Filtering for relevant content is a big challenge and opportunity.  Might someone aggregate audiences with similar interests and offer a retail version sourced at wholesale from filtering service firms like Crimson Hexagon?  Looks like Porter Novelli may already be thinking down these lines...

Let's push the math: a winner service by anyone's measure would earn, say, $50M a year.  Four bucks a month from each person is roughly $50/year, so you'd need a million folks -- 1/250th of Facebook's user base -- to sign up.  Reasonability check: consider the US circulation of some major magazine titles.

If your application service is especially useful, maybe you can get $2/month directly from each person.  Maybe you can make the rest up in ecommerce affiliate commissions (a 10% commission on $125 in annual purchases by each person gets you ~$1/month) and ad revenue (the remaining nut -- roughly $12 million a year -- works out to about a dollar per member per month; at a $10 CPM, that means each of your million users has to account for roughly a hundred ad impressions a month, a few page views a day more or less, to cover it).
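
Spelling that back-of-the-envelope math out, with every input just an assumption restated from the two paragraphs above:

```python
# The back-of-the-envelope math from the last two paragraphs, spelled out.
members = 1_000_000                     # 1/250th of Facebook's 250M users
per_member_year = 4 * 12                # ~$4/month ~= $50M/year across a million members

direct_fees = 2 * 12                    # $2/month subscription = $24/year
affiliate = 0.10 * 125                  # 10% of $125 in annual purchases ~= $12.50/year
ad_nut = per_member_year - direct_fees - affiliate   # what's left for ads, per member per year
ad_nut_total = ad_nut * members                      # ~$12M/year

cpm = 10.0                              # $10 per 1,000 ad impressions
impressions_per_member_month = ad_nut / (cpm / 1000) / 12
print(f"Ad nut: ${ad_nut_total / 1e6:.0f}M/year; "
      f"~{impressions_per_member_month:.0f} ad impressions per member per month at a $10 CPM")
# -> Ad nut: $12M/year; ~96 ad impressions per member per month at a $10 CPM
```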

We also have to be prepared to live in a world where the premiums such services earn are as evanescent as mayflies, especially if we build them on open social graphs.  But that's ok -- just as Old Media winners built empires on excellent, timely editorial taste in content, New Media winners will build their franchises on "editorial noses" for function-du-jour, and function-based insights relevant to their advertisers.  And last time I checked, function and elegance were not mutually exclusive.

So, even as we salute the Facebook juggernaut as it steams past Media Beach, it's time to light some design workshop campfires, and think application services that have "Value, Affinity, Simplicity."

July 07, 2009

Testing Across Your Web Presence: A Conversation With SiteSpect's Eric Hansen

A recurring theme in our work with clients is to cast the analytic net across their web presences, not just their web sites.  For example, let's say you're a retailer and you run a "discounted shipping" promotion, with different levels.  The campaign for this should be evaluated not just in terms of conversion on your web site, but in terms of how different levels and creative versions performed in terms of attracting, engaging, and converting customers in all the places you published them -- display ads, SEM units, emails, your site itself, affiliates' sites, etc.
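
Concretely, the kind of cross-channel rollup I have in mind looks something like this; the event rows, channel names, and offer levels are invented for the example.

```python
# Illustrative only: a rollup of one promotion's performance by channel and
# offer level, using the attract / engage / convert framing above.
from collections import defaultdict

events = [
    # (channel, offer_level, stage) -- stage is "impression", "engagement", or "conversion"
    ("display", "free_shipping_50", "impression"),
    ("display", "free_shipping_50", "engagement"),
    ("email",   "free_shipping_25", "impression"),
    ("email",   "free_shipping_25", "engagement"),
    ("email",   "free_shipping_25", "conversion"),
    ("sem",     "free_shipping_50", "impression"),
]

funnel = defaultdict(lambda: {"impression": 0, "engagement": 0, "conversion": 0})
for channel, offer, stage in events:
    funnel[(channel, offer)][stage] += 1

for (channel, offer), counts in sorted(funnel.items()):
    imp = counts["impression"] or 1
    print(f"{channel:>8} | {offer:<16} | attract {counts['impression']:>3} "
          f"| engage rate {counts['engagement'] / imp:.0%} "
          f"| convert rate {counts['conversion'] / imp:.0%}")
```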

Implementing a test strategy like this has been hard to date, because you may not control all of the different properties involved in the campaign; even if you do, each channel may have its own testing tool; and, even if you've consolidated things somewhat, the tagging job is still a bear.

So, I was really interested to see this recent announcement of SiteSpect's "URL Tunnel" capability, which allows users of this service to implement integrated tests across multiple properties in their digital ecosystem.  I met  SiteSpect CEO Eric Hansen and CMO Kim King a couple of weeks ago at Web Analytics Wednesday, and had a chance to talk with Eric last week by phone about how this works and about how their customers are getting valuable new insights from this capability.

Here's a simple example of how this works.  Let's say I'm a retailer and I have a gift finder on my site that enables parametric search across my catalog (search by gender, age, price range, etc.).  I build a widget version of this gift finder that can be embedded on affiliates' sites, or published as a display ad unit.  I set up "mywidgets.myretailsite.com" and serve the syndicated widget's calls from there.  Then I direct this traffic through SiteSpect's solution (either their "SiteSpect ASP" service or their "Enterprise" appliance running in my data center).  Now I can implement and track multivariate tests on widget variants through SiteSpect.
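
To be clear about the mechanics as I understand them, here's a sketch of the general pattern -- my own illustration, not SiteSpect's implementation: traffic to the dedicated subdomain passes through a layer that assigns each visitor a stable variant, applies that variant's change to the widget, and logs an exposure event that downstream conversions can be joined to.

```python
# Not SiteSpect's implementation -- just the general pattern the subdomain /
# "URL tunnel" idea implies: stable variant assignment, variant applied to
# the served widget, exposure event logged for later conversion joins.
import hashlib
import json

VARIANTS = {
    "control":    lambda html: html,
    "big_button": lambda html: html.replace('class="cta"', 'class="cta cta-large"'),
}

def assign_variant(visitor_id: str, test_name: str = "giftfinder_cta") -> str:
    # Hash so the same visitor always sees the same variant, with no server state.
    digest = hashlib.sha256(f"{test_name}:{visitor_id}".encode()).hexdigest()
    names = sorted(VARIANTS)
    return names[int(digest, 16) % len(names)]

def serve_widget(visitor_id: str, origin_html: str) -> str:
    variant = assign_variant(visitor_id)
    # Exposure event; in practice this would go to the testing/analytics store.
    print(json.dumps({"event": "exposure", "visitor": visitor_id, "variant": variant}))
    return VARIANTS[variant](origin_html)

print(serve_widget("abc123", '<button class="cta">Find a gift</button>'))
```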

Eric described how a SiteSpect travel customer has used this.  Before, the travel portal acted as a catalog of destination packages, but booking happened on the private-labeled booking engine / travel shopping cart that another travel industry partner provided to them.  From a user experience perspective, when a customer pressed "Book it!", he or she was taken off the travel site to the partner's site, e.g., "booking.partner.com", and the travel operator lost visibility into how different variants of those booking pages performed.  Now, the booking engine is still run by the partner, but it's embedded in pages served from a distinct "partner" subdomain off of the travel operator's site, and routed through a SiteSpect server where test variants are defined, applied, and tracked.

How does the partner feel about this?  In principle, you might expect them to be concerned about the loss of control.  But in practice, since they weren't going to tweak the travel portal's marketing mix elements themselves, they are thrilled to give the travel portal the opportunity for insights that will raise conversion rates, and therefore earn more booking fees for the partner.  Plus, they may learn things they can generalize for the benefit of other relationships.