About

I'm a partner in the advanced analytics group at Bain & Company, the global management consulting firm. My primary focus is on marketing analytics (bio). I've been writing here (views my own) about marketing, technology, e-business, and analytics since 2003 (blog name explained).

19 posts categorized "Application Design"

January 26, 2010

What's NYT.com Worth To You, Part II

OK, with the response curve for my survey tailing off, I'm calling it.  Here, dear readers, is what you said (click on the image to enlarge it):

[Chart: Octavianworld NYT.com paid-content survey results]

(First, stats: with ~40 responses -- there are fewer points because of some duplicate answers -- you can be 95% sure that answers from the rest of the ~20M people who read the NYT online would be within +/- 16% of what's here.)
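
For the curious, here's a quick back-of-the-envelope check of that margin of error -- a minimal Python sketch; the worst-case p = 0.5 assumption is mine:

    import math

    n = 40     # approximate number of survey responses
    z = 1.96   # z-score for 95% confidence
    p = 0.5    # worst-case proportion (maximizes the margin of error)

    # Margin of error for a simple random sample; the finite-population
    # correction is negligible against ~20M online readers.
    moe = z * math.sqrt(p * (1 - p) / n)
    print(f"95% margin of error: +/- {moe:.1%}")  # ~ +/- 15.5%, i.e. roughly the 16% above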

90% of respondents would pay at least $1/month, and several would pay as much as $10/month. And, folks are ready to start paying after only ~2 articles a day.  Pretty interesting!  More latent value than I would have guessed.  At the same time, it's also interesting to note that no one went as high as the $14 / month Amazon wants to deliver the Times on the Kindle. (I wonder how many Kindle NYT subs are also paper subs getting the Kindle as a freebie tossed in?)

Only a few online publishers aiming at "the general public" will be able to charge for content on the web as we have known it, or through other newer channels.  Aside from highly-focused publishers whose readers can charge subscriptions to expense accounts, the rest of the world will scrape by on pennies from AdSense et al.

But, you say, what about the Apple Tablet (announcement tomorrow! details yesterday), and certain publishers' plans for it?  I see several issues:

  • First, there's the wrestling match to be had over who controls the customer relationship in Tabletmediaworld. 
  • Second, I expect the rich, chocolatey content (see also this description of what's going on in R&D at the Times) planned for this platform and others like it to be more expensive to produce than what we see on the web today, both because a) a greater proportion of it will be interactive (it must be, to be worth paying for), and because b) producing for multiple proprietary platforms will drive costs up (see for example today's good article in Ad Age by Josh Bernoff on the "Splinternet"). 
  • Third, driving content behind pay walls lowers traffic, and advertising dollars with it, raising the break-even point for subscription-based business models. 
  • Fourth, last time I checked, the economy isn't so great. 
The most creative argument I've seen "for" so far is that pushing today's print readers/ subscribers to tablets will save so much in printing costs that it's almost worth giving readers tablets (well, Kindles anyway) for free -- yet another edition of the razor-and-blade strategy, in "green" wrapping perhaps.

The future of paid content is in filtering information and increasing its utility.  Media firms that deliver superior filtering and utility at fair prices will survive and thrive.  The Times' innovations in visual displays of information (creative, though I'd guess of limited monetization impact) include evidence that it agrees with this, at least in part (from the article on Times R&D linked to above):

When Bilton swipes his Times key card, the screen pulls up a personalized version of the paper, his interests highlighted. He clicks a button, opens the kiosk door, and inside I see an ordinary office printer, which releases a physical printout with just the articles he wants. As it prints, a second copy is sent to his phone.

The futuristic kiosk may be a plaything, but it captures the essence of R&D’s vision, in which the New York Times is less a newspaper and more an informative virus—hopping from host to host, personalizing itself to any environment.

Aside from my curiosity about the answers to the survey questions themselves, I had another reason for doing this survey.  All the articles I saw on the Times' announcement that it would start charging had the usual free-text commenting going.  Sprinkled through the comments were occasional suggestions from readers about what they might pay, but it was virtually impossible to take any sort of quantified pulse on this issue in this format.  Following "structured collaboration" principles, I took five minutes to throw up the survey to make it easy to contribute and consume answers.  Hopefully I've made it easier for readers to filter / process the Times' announcement, and made the analysis useful as well -- for example, feel free to stick the chart in your business plan for a subscription-based online content business ;-)  If anyone can point me to other, larger, more rigorous surveys on the topic, I'd be much obliged.

The broader utility of structuring the data capture this way is perhaps greatest to media firms themselves: indirectly, for ad and content targeting value, and because once you have lots of simple databases like this, it becomes possible to weave more complex queries across them -- and, out of those queries, some interesting, original editorial possibilities.

Briefly considered, then rejected for its avarice and stupidity: personalized pricing offers to subscribe to the NYT online based on how you respond to the survey :-)

Postscript: via my friend Thomas Macauley, NY (Long Island) Newsday is up to 35 paid online subs.

December 26, 2009

A Springier #iPhone Springboard: Why, When, and How

Once again it's the Year Of Mobile.  Let's put aside for the moment whether you think this is still another macromyopic projection.  Assuming you buy the premise, there's no denying the iPhone's leadership position in the mobile ecosystem.  If mobile's important to you, the iPhone desktop is "strategic ground" whose evolution you should care about.

A frequent beef about the iPhone is that all apps are accessed from a single-level desktop, and that you have to swipe across several screens to get to the app you want.  (Sometimes, this can be life-threatening, as when a friend launches PhoneSaber, and you're slow on the draw.)  Today we're mostly stuck with this AFAIK, since my cursory research (browsing plus buttonholing some Apple Store folks) didn't reveal any immediate plans to upgrade the iPhone OS to address this.

It's interesting to see how what tribe you're from influences how you'd solve this.  If Microsoft (of yore, anyway) made the iPhone, the solution would likely be some sort of Windows Explorer-type hierarchical folders.  If Google made the iPhone, the answer might be Gmail-style labels / tags.  If you come from the Apple / Adobe RIA world, Exposé might appeal to you.

From the business side, my mind runs to the "Why" that will shape the "When" and "How".  Here's a 2010 prediction: big firms will stop thinking in terms of having one iPhone app, and start thinking in terms of fielding "branded suites" of iPhone apps.

Let's say you're a media firm, with multiple media properties.  These properties might share a similar functional need solved by a common app, like a reader.  Or, a single media property (say, a men's lifestyle one) might want a collection of lighter-weight, function-specific apps like a wine-chooser, a tie-chooser (take pictures of your ties, then have the app suggest -- via expert opinion, crowdsourcing, or an API for your significant other to code to -- which of your ties might go well with a shirt you see / snap a picture of at the store), and so on.

Without more dimensionality to Springboard, the BigCo app developer has two choices:

  • Lard up a single app to do more within the "brand experience" it creates.  But monolithic apps are slower and less reliable, presumably even if you're using the Tab Bar framework.  Plus, monolithic apps don't expand BigCo's share of the iPhone Springboard desktop, presumably a desirable strategic objective.
  • Build multiple apps that get scattered across the Springboard, compromising the "critical mass" feel of the "branded suite".  (Apps that appear together make more of a brand impression than apps appearing separately, on different screens.  I don't have any data to support this, and you could argue the opposite -- that apps scattered across screens provide more frequent brand reminders -- but I think folks might find it mnemonically easier to remember "five swipes to the men's lifestyle screen".  Anybody got data?)

The BigCo marketing department has a choice not available to the lowly app developer, however, and that's to write Apple a check.  It's reasonable to expect that we won't all get access to the new "MDS" (Multi-Dimensional Springboard) API BigCo gets.  Today, Apple already price-discriminates among iPhone developers: the Standard enrollment charge is $99, while the Enterprise is $299.  As this platform becomes even more important, and as BigCos want to do more with it, it's reasonable to expect that Apple will get even more creative with its pricing, privately or publicly.

So that's the "Why".  As for "When", I'm guessing no earlier than 2011, given Apple's Cathedral-style approach to iPhone development (this might provide an opportunity for Android, BTW). (Thanks for re-tweeting this, @perryhewitt.)

And "How"? I'm betting on an Expose-style interface.  Swipe down to "zoom in" to a single screen, swipe up to move to a "higher altitude" and view multiple screens at once, perhaps with a subtle label or background (brand-appropriate, natch) for each one.

Who's closer to this?  What do you know?

November 18, 2009

@Chartbeat: Biofeedback For Your Web Presence

Via an introduction by my friend Perry Hewitt, I had a chance yesterday to learn more about Chartbeat, the real-time web analytics product, from its GM Tony Haile.

Chartbeat provides a tag-based tracking mechanism, dashboard, and API for understanding your site's users in real time.  So, you say, GA and others are only slightly lagged in their reporting.  What makes Chartbeat differentially useful?

I recently wrote a post titled "Fly-By-Wire Marketing" that reacted to an article in Wired on Demand Media's business model, and suggested a roadmap for firms interested in using analytics to automate web publishing processes. 

After listening to Tony (partly with "Fly-By-Wire Marketing" notions in mind), it occurred to me that perhaps the most interesting possibilities lie in tying a tool like Chartbeat into a web site's CMS, or more ambitiously into a firm's marketing automation / CRM platform, to adjust on the fly what's published / sent to users.

Have a look at their live dashboard demo, which tracks user interactions with Fred Wilson's blog, avc.com.  Here's a question: how would avc.com evolve during the day if Fred -- and Fred's readers -- could see this information live on the site, perhaps via a widget that allowed toggling through different views?  Here are some ideas:

1. If I saw a disproportionate share of visitors coming through from a particular location, I might push stories tagged with that location to a "featured stories" section / widget, on the theory that local friends tell local friends, who might then visit the home page URL directly.

2. If I saw that a particular story was proving unusually popular, I might (as above) feature "related content", both on the home page and on the story page itself (see the sketch after this list).

3. If I saw that traffic was being driven disproportionately by a particular keyword, I might try to wire a threshold / trigger into my AdWords account (or SEM generally) to boost spending on that keyword, and I might ask relevant friends for some link-love (though this obviously is slowed by how frequently search engines re-index you). 

(Note: pushing this further, as we discussed with Tony, we'd subscribe to a service that would give us a sense for how much of the total traffic being driven to Chartbeat users by that keyword is coming our way, and use that as a metric for optimizing our traffic-driving efforts in real time.  Of course such a service would have to anonymize competitor information, be further aggregated to protect privacy, and be offered on an opt-in basis, but could be valuable even at low opt-in rates, since what we're after is relative improvement indications, and not absolute shares.)

4. If you saw lots of traffic from a particular place, or keyword, or on a particular product, you might connect this information to your email marketing system and have it influence what goes out that day.  Or, you might adjust prices, or promotions, dynamically based on some of this information.
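
To make idea 2 concrete, here's a minimal sketch of the kind of trigger rule a CMS hook might implement.  It's a sketch under assumptions: the endpoint, response shape, and threshold are hypothetical stand-ins, not Chartbeat's actual API.

    import json
    import time
    import urllib.request

    DASHBOARD_URL = "https://example.com/realtime/toppages.json"  # hypothetical endpoint
    SPIKE_THRESHOLD = 3.0  # feature a story when it draws 3x its trailing average

    baseline = {}  # path -> trailing average of concurrent readers

    def feature_related_content(path):
        print(f"promoting related content for {path}")  # your CMS hook goes here

    def poll_and_feature():
        with urllib.request.urlopen(DASHBOARD_URL) as resp:
            pages = json.load(resp)  # assumed shape: [{"path": ..., "concurrent": ...}]
        for page in pages:
            path, now = page["path"], page["concurrent"]
            avg = baseline.get(path, now)
            if now > SPIKE_THRESHOLD * avg:
                feature_related_content(path)
            baseline[path] = 0.9 * avg + 0.1 * now  # exponential moving average

    while True:
        poll_and_feature()
        time.sleep(60)  # re-check every minute

The same loop, pointed at keyword-level referrer data instead of page-level traffic, would serve idea 3's AdWords trigger.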

Some of you will wonder how these ideas relate to personalization, which is already a big, if imperfectly implemented, piece of many web publishers' and e-retailers' capabilities.  I say personalization is great for recognizing and adjusting to each of you, but not to all of you.  Pushing this further, I wonder about the potential for "analytics as content".  NYT's "most-emailed" list is a good example of this, albeit in a graphically unexciting form.  What if you had a widget that plotted visitors on a map (which exists today, of course) but also color-coded them according to their source, momentarily flashing the site or keyword that referred them?  At minimum it would be entertaining, but it would also hold a mirror up to the site's users, showing them who they are (their locations and interests) in a way that would reinforce the sense of community the site may be trying to foster otherwise.

Reminds me a bit of Spinvision, and by proxy of this old post.

October 06, 2009

Twitter Idea Of The Day

I just read Clive Owen's piece on wired.com describing the rise of search engines focused on real-time trend monitoring, as opposed to indexing based on authority.  It's good, short, and I recommend it.

It provoked an idea, building on ones I had a while back, for a web service that would allow a group sponsor to register Twitter feeds (or, for that matter, any kind of feed) from members of the group, do a word-frequency analysis on those feeds (with appropriate filters, of course), and then display snapshots (perhaps with a word cloud) of popularity, plus trend analysis (fastest-rising, fastest-falling).  You could also have specialized content variants: most popular URLs, most popular tags.  Clicking through from any particular word (or URL or tag), you could do a network analysis: which member of the group first mentioned the item, and who re-tweeted him or her, either with attribution or without.
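
The core word-frequency step is simple enough to sketch in a few lines of Python (the stopword list and sample feed text are stand-ins):

    import re
    from collections import Counter

    STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "is", "for", "on", "rt"}

    def word_counts(tweets):
        """Count word frequencies across a group's feeds, with basic filtering."""
        counts = Counter()
        for text in tweets:
            for word in re.findall(r"[#@]?\w+", text.lower()):
                if word not in STOPWORDS and len(word) > 2:
                    counts[word] += 1
        return counts

    def fastest_rising(current, previous, k=10):
        """Biggest frequency gains between two snapshots."""
        gains = {w: current[w] - previous.get(w, 0) for w in current}
        return sorted(gains.items(), key=lambda kv: kv[1], reverse=True)[:k]

    snapshot_9am = word_counts(["Loving the new analytics dashboard", "analytics FTW #data"])
    snapshot_8am = word_counts(["slow news day"])
    print(snapshot_9am.most_common(5))                 # popularity snapshot (word-cloud input)
    print(fastest_rising(snapshot_9am, snapshot_8am))  # trend analysis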

The builder of a service like this would construct it as a platform that would allow group sponsors to set up individual accounts with one or more groups, and that would let those sponsors roll groups up, or drill down from an aggregate cross-group view to individual ones, perhaps with some comparative analysis -- "show me the relative popularity of any given word / content item across my groups", for example.

Twitter already has trending topics, as do others, but the lack of grouping for folks relevant to me makes it (judging by the typical results) barely interesting and generally useless to me.  There are visual views of news, like Newsmap, but they pre-filter content by focusing on published news stories.

An additional layer of sophistication based on semantic analysis technology like, say, Crimson Hexagon's would translate individual keywords into broader categories of meaning, so you could see, at a glance, in what ways and proportions your group members were feeling about different things: "Well, it's Monday morning, and 2/3 of my users are feeling 'anxious' about work, while 1/3 are feeling 'inspired' on vacation."
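
A naive version of that semantic layer is a keyword-to-category lookup (the lexicon here is made up; a real engine like Crimson Hexagon's infers categories statistically rather than from a hand-built table):

    from collections import Counter

    # Hypothetical keyword-to-mood lexicon.
    MOOD_LEXICON = {
        "deadline": "anxious", "monday": "anxious", "overloaded": "anxious",
        "beach": "inspired", "vacation": "inspired", "idea": "inspired",
    }

    def mood_mix(words):
        """Proportion of each mood category among recognized keywords."""
        moods = Counter(MOOD_LEXICON[w] for w in words if w in MOOD_LEXICON)
        total = sum(moods.values()) or 1
        return {mood: count / total for mood, count in moods.items()}

    print(mood_mix(["monday", "deadline", "overloaded", "deadline", "beach", "vacation"]))
    # -> roughly {'anxious': 0.67, 'inspired': 0.33}: the Monday-morning readout above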

As for making money, buzz-tracking services are already bought by / licensed by / subscribed to by a number of organizations.  I could see a two-stage model here where group sponsors who aggregate and process their members' feeds could then re-syndicate fine-grained analysis of those feeds to media and other organizations to whom that aggregated view would be useful. "What are alumni of university X / readers of magazine Y focused on right now?"  The high-level cuts would be free, perhaps used to drive traffic.

July 21, 2009

Facebook at 250 (Million): What's Next? And What Will Your Share Be?

Facebook announced last week that it had passed 250 million members.  Since no social network grows to the sky (as MySpace experienced before it), it's useful to reflect on the enablers and constraints to that growth, and on the challenges and opportunities those constraints present to other major media franchises (old and new) that are groping for a way ahead.

"Structured Collaboration" principles say social media empires survive and thrive based on how well they support value, affinity, and simplicity.  That is,

  • how useful (rationally and emotionally) are the exchanges of information they support?
  • how well do they support group structures that maximize trust and lower information vetting costs for members? 
  • how easy do they make it for users to contribute and consume information? 
(There are of course additional, necessary "means to these ends" factors, like "liquidity" -- the seed content and membership necessary to prime the pump -- and "extensibility" -- the degree to which members can adapt the service to their needs -- but that's for another post.)

My own experience with Facebook as a user, as well as my professional experience with it in client marketing efforts, has been:
  • Facebook focuses on broad, mostly generic emotional exchanges -- pictures, birthday reminders, pokes.  I get the first two, and I admire the economy of meaning in the third.  The service leaves it to you to figure out what else to share or swap.  As a result, it is (for me anyway) <linkbait> only sometimes  relevant as an element in a B2C campaign, and rarely relevant in a B2B campaign </linkbait>
  • Facebook roared past MySpace because it got affinity right -- initially.  That is, Facebook's structure was originally constrained -- you had to have an email address from the school whose Facebook group you sought to join.  Essentially, there had to be some pre-existing basis for affinity, and Facebook just helped (re-)build this connective tissue.  Then Facebook allowed anyone to join, and made identifying the nature of relationships established or reinforced there optional.  Since most of us, including me, are some combination of busy and lazy, we haven't used this feature consistently to describe the origins and nature of these relationships.  And it's cumbersome and awkward to have to go back and re-categorize "friends".  (An expedient hack would be to allow you to organize your friends into groups, and then ask you which groups you want to publish items to, as you go.)
  • Facebook is a mixed bag as a UI.  On one hand, by allowing folks to syndicate blogs and tweets into Facebook, they've made our lives easier.  On the other, the popular unstructured communications vehicles -- like the "Wall" -- have created real problems for some marketers.  Structured forms of interaction that would have created less risky choices for marketers, like polls, have come later than they should have and are still problematic (for example, you can't add polls to groups yet, which would be killer).  And interacting with Facebook through my email client -- on my PC and on my smartphone -- is still painful.  To their credit, Facebook opened up a great API to enable others to build specialized forms of structured interaction on its social graph.  But in doing so it's ceded an opportunity to own the data associated with potentially promising ones.  (Like prediction markets; Inkling Markets, for example, lets you syndicate notices of your trades to Facebook, but the cupboard's pretty bare still for pm apps running against Facebook directly.)
The big picture: Facebook's optimizing size of the pie versus share of the pie.  It can't be all things to all people, so it's let others extend it and share in the revenue and create streams of their own.  Estimates of the revenues to be earned this year by the ecosystem of third party app developers running on Facebook and MySpace run to $300-500 million, growing at 35% annually.  
Them's not "digital dimes", especially in the context of steep declines in September ad pages at, say, leading magazine franchises, as well as stalled television network upfronts.  But, folks might argue, "Do I want to live in thrall to the fickle Facebook API, and rent their social graph at a premium?"  The answer isn't binary -- how much of an app's functionality lives in Facebook, versus living on a publisher's own server, is a choice.  Plus, there are ways to keep Facebook honest, like getting behind projects like OpenSocial, as other social networks have done.  (OpenSocial is trying to become to Facebook's social graph what Linux is to Windows.  Engineer friends, I know -- only sort of.)  And, for Old Media types who don't feel they are up to the engineering tasks necessary, there are modern-day Levi Strausses out there selling jeans to the miners -- like Ning, which just today raised more money at a high valuation.  Still too risky?  Old Media could farm out app development to their own third-party developer networks, improving viral prospects by branding and promoting (to their subscriber lists) the ones they like, in exchange for a cut of any revenues.  In this scenario, content gets added as an ingredient, not the whole main course.

What is true in the new environment is that reach-based ad network plays surfing on aggregated content won't pay any more.  Rather we have to think about services that would generate more revenue from narrower audiences.  The third-party games created by Facebook app developers referenced above demonstrate how those revenues might stem from value through entertainment.  As we speak, Apple and its developers are earning non-trivial sums from apps.  Phonetag has its hands in folks' pockets (mine included) for $10/month for its superuseful -- albeit non-social -- transcription service.  Filtering for relevant content is a big challenge and opportunity.  Might someone aggregate audiences with similar interests and offer a retail version sourced at wholesale from filtering service firms like Crimson Hexagon?  Looks like Porter Novelli may already be thinking down these lines...

Let's push the math: a winner service by anyone's measure would earn, say, $50M a year.  Four bucks a month from each person is roughly $50/year.  You'd then need a million folks -- 1/250th of Facebook's user base -- to sign up.  Reasonability check: consider the US circulation of some major magazine titles.

If your application service is especially useful, maybe you can get $2/month directly from each person.  Maybe you can make the rest up in ecommerce affiliate commissions (a 10% commission on $125 in annual purchases by each person gets you ~$1/month) and ad revenue (the remaining nut of roughly $12-14 million/year works out to about a dollar per member per month; at a $10 CPM, that means getting each of your million users to account for roughly 100 impressions a month -- a few page views a day, more or less -- to cover that nut).
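
Checking that arithmetic in a few lines of Python (all the assumptions -- prices, commission rate, CPM -- are the ones above):

    members = 1_000_000
    target = 50_000_000                 # "winner service" annual revenue, $

    subscription = 2 * 12               # $2/month direct, per member per year
    affiliate = 0.10 * 125              # 10% commission on $125/yr in purchases
    ad_gap = target / members - (subscription + affiliate)  # $/member/yr from ads

    cpm = 10.0                          # $ per 1,000 impressions
    impressions_per_month = ad_gap / (cpm / 1000) / 12
    print(f"ad revenue gap: ${ad_gap:.2f}/member/yr")
    print(f"needs ~{impressions_per_month:.0f} impressions/member/month at a ${cpm:.0f} CPM")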

We also have to be prepared to live in a world where the premiums such services earn are as evanescent as mayflies, especially if we build them on open social graphs.  But that's ok -- just as Old Media winners built empires on excellent, timely editorial taste in content, New Media winners will build their franchises on "editorial noses" for function-du-jour, and function-based insights relevant to their advertisers.  And last time I checked, function and elegance were not mutually exclusive.

So, even as we salute the Facebook juggernaut as it steams past Media Beach, it's time to light some design workshop campfires, and think about application services that have "Value, Affinity, Simplicity."

Structured Collaboration in Social Media: Design For Analytics (IBM Research Talk Recap)

At Kate Ehrlich's invitation, I gave a one-hour talk yesterday at IBM Research and the Center for Social Software titled "Structured Collaboration in Social Media: Design for Analytics".

The presentation made three main points:
  • Social marketing today is limited by the (un-)structure of social media. (And, most marketers accept the constraints of these media as they exist today, and "reasonably" adapt themselves to them.)
  • Applying "Structured Collaboration" principles to social media design ("unreasonably" adapting the medium) can expand and improve user engagement.
  • These same principles can inform the design of "engagement" for marketing analytics, to make segmentation and targeting easier and more effective.
The presentation included a survey and evaluation of the social marketing landscape today, and described two important shortcomings that limit what marketers can do with it:
  • "Off-the-rack" collaboration structures that don't align -- and often work against -- what marketers are trying to accomplish 
  • Sample bias given the "1-10-90" nature of participation 
The presentation then described and illustrated "Structured Collaboration" principles, and talked about how marketers like Nike are realizing significant results through initiatives that reflect them.  Then, we discussed how the design of these initiatives can reveal segmentation and targeting insights much more readily than the typical application of marketing analytics to conventional social media.

Finally, the talk offered some existing examples for how to exploit these principles cheaply, as well as some ideas for how several firms could apply them to realize their own versions of what Nike, and more recently others like Fiat, have done.

Thanks once again to Kate, Ethan, and Irene, and to all their colleagues for participating!

July 02, 2009

From The "Stop Whining About Obscurity And Start Rethinking Content For Usability" Dept.: What's your iCalendar Strategy?

At  a number of events I've been to recently, one common refrain has been "Gee, how is it that no one knows about all the cool stuff that's going on in sector X around here?  We've been working hard to get the word out, but we're still flying under the radar."

Around the same time, I was fiddling with my Google Calendars, trying to integrate them and add additional calendars (like weather, holidays, etc.).  I was really surprised by the scarcity of things already registered with Google that I could add.  There's an option for adding a URL for different organizations' iCalendar feeds, but no readily apparent way to search for / discover such feeds, either on Google Calendar itself or via a logical search.

I went to the web sites of some organizations whose event series I like to get to, to see if they offered iCalendar feeds that would place those events on my calendar, so I could have them there as reminders on the chance I could make it.  So far, no joy.  Reassuringly, I'm not alone in this idea, or in my curiosity at the scarcity of such feeds and their discoverability, as this post by Jon Udell suggests.

If you are a publisher with a series of events you'd like people to come to, how can you publish an iCalendar feed?  Here are a number of options for tools to publish calendars on your site that also support publishing an associated iCalendar feed.
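
If your tools won't do it for you, rolling a minimal feed by hand isn't hard -- an iCalendar feed is just a text file served over HTTP.  Here's a sketch in Python; the event details and domain are made up, and per RFC 5545 a real feed should set UID and DTSTAMP as shown:

    from datetime import datetime

    def ical_feed(events, calendar_name):
        """Render a list of (start, title) tuples as a minimal iCalendar feed."""
        lines = [
            "BEGIN:VCALENDAR",
            "VERSION:2.0",
            "PRODID:-//Example Org//Event Series//EN",
            f"X-WR-CALNAME:{calendar_name}",
        ]
        for start, title in events:
            lines += [
                "BEGIN:VEVENT",
                f"UID:{start:%Y%m%dT%H%M%S}@example.org",
                f"DTSTAMP:{datetime.utcnow():%Y%m%dT%H%M%SZ}",
                f"DTSTART:{start:%Y%m%dT%H%M%S}",
                f"SUMMARY:{title}",
                "END:VEVENT",
            ]
        lines.append("END:VCALENDAR")
        return "\r\n".join(lines)  # iCalendar requires CRLF line endings

    # Serve the output at, say, /events.ics with Content-Type: text/calendar,
    # and subscribers' calendars stay current as you add events.
    talk = datetime(2009, 7, 15, 18, 30)
    print(ical_feed([(talk, "Speaker Series: Structured Collaboration")], "Example Events"))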

I also checked out services like Eventbrite  and Eventful, thinking that these intermediaries logically would publish iCalendar feeds for organizations that publicize and manage individual events through them.  Again, no luck -- maybe I'm missing something?


The broader point:  Democratization of publishing tools means content exposed as content will find it harder and harder to get its day in the warm sunlight of user attention.  Smart publishers will think about how they can expose their content in formats and applications that are more tightly tied to how users employ that information.  Many things can be turned into event series -- if you are a recipe site, how about a "meal-of-the-week" series?  If you are the US Government -- or Forbes.com -- where's my iCalendar feed for economics statistics announcements?  Other examples: interest rates integrated with interest rate calculators, apartment rental listings integrated with mapping applications that support getting directions...  what can you think of?

May 04, 2009

Going Green and UI Design: The Parable of the Prius

This year I was a judge in the MITX Technology Awards Competition, in the Business Intelligence and Devices categories, along with Charles Berman (SVP at Fidelity) and James LiVigni (VP Sales at Kronos).  The devices category had a number of green tech entrants.  As we opened our discussion, Charles shared  an observation that struck me as pretty profound.

"I've been driving a Prius for a while," he told us.  "It dawned on me that it represented a pretty interesting experiment in psychology. I mean, the interior's just ok and the performance is only one step up from a Yugo.  So why are people so rabid about it?  Sure, there's the green angle generally.  But my personal experience has been that it's about the mpg meter.  The Prius has this gauge that goes from zero to a hundred mpgs. It becomes a game to see if you can keep the average above your target -- mine's 50.  You get this constant feedback.  It becomes the focal point for how you drive.  You get hooked, not just rationally, but biochemically."

His broader point of course was to suggest that we should consider the degree to which the entrants in the category had addressed the need for behavior modification in the design of their solutions.  It isn't enough to provide reports on energy savings.  You have to put this information in very visible places, integrate targets, and make a game of it if you want people to change their ways.  If we want kids to remember to turn off the lights, why are the electric meters only outdoors?

Take it one step further, whether in business or consumer applications: BTUs are really BTU$.  Why not displays that integrate actual versus target usage with energy prices updated wirelessly?  Then, in any given period where actual usage is below target, pay the relevant users a 10% (for example) "commission" on the savings.  Or, take it two steps further.  Network the measurement devices, and provide competitive feedback as part of a new MMPG: instead of slaying green monsters in "World of WarCraft", create green heroes in "Globe of GreenCraft".  Three steps beyond: track relative usage across the network, and target users with usage tips and ads for relevant products and services (reminds me of the Binge-o-Matic).  Four steps beyond: create town-level teams, and enroll your neighbors in friendly competitions.  (No cheating by turning the local park's trees into firewood.)
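
The "commission" mechanic in that first step is only a few lines of code -- a minimal sketch, with made-up numbers:

    def savings_commission(target_kwh, actual_kwh, price_per_kwh, rate=0.10):
        """Pay users a commission on energy saved below target ("BTU$")."""
        saved_kwh = max(target_kwh - actual_kwh, 0)  # upside only, no penalty
        return saved_kwh * price_per_kwh * rate

    # A household beats its 900 kWh monthly target by 100 kWh at $0.15/kWh:
    print(f"${savings_commission(900, 800, 0.15):.2f} commission this month")  # $1.50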

So how does your application re-wire the user's brain in addictive ways?

Postscript May 11, 2009: AKQA wins One Show Interactive for Fiat ecoDrive.  Nice, but somewhat derivative?

March 31, 2002

Brain Stem Software Marketing Strategy

(I originally published this post here.)

Brain-Stem Strategy Questions

  • What activities or problems do many people find expensive or painful enough to be on their radar screens?
  • What technologies will soon perform well enough, or become cheap enough to help?
  • How can these technologies be applied to lower costs
    and ease pain? What cool new things become possible that will create
    new revenue opportunities for customers? Who can afford them? What else
    becomes a “must-have” at the same time?
  • Who else sees this? What are they doing about it? Are they ahead, or do they have enough money and talent that we should worry?
  • What’s our plan for beating them to the punch, and making money at it?

Let’s consider an example and then generalize from there.

Markets Worth Serving: A Case Study of Web-Based CRM

Take customer service. It has both implicit and explicit costs.
Try to skimp on it, and you experience higher returns, warranty
expenses, and customer defections — and a damaged reputation that
keeps new customers from coming on board. As customer expectations for
quality and service have risen over the last couple of decades, so has
spending on customer service. Building and running call centers got to
be a big expense in some industries. Finding ways to cut this cost
while improving service became a pretty worthwhile thing to work on.

Along comes the web. At first, not many people were hooked up
to it, and those who were connected mostly had slow connections. While that
didn’t make the browsing experience so great, it was perfect for email.
Compared with wading through an IVR’s (interactive voice response)
menu, giving up and then holding for one of the handful of operators
left after the rest were fired to pay for the IVR, email is a great
substitute for 80% of customer service transactions.

Next thing, managers come into work to find the
[email protected]” mailbox full. Sorting through messages,
to figure out what each complaint is about and to whom to route it,
becomes painful and expensive itself. Into the market rush new ISV’s
with “Bayesian Inference Engines” to read and classify messages, and
workflow applications to route and track their resolution. CRM is born.
At one point, Kana, one of the early leaders, reaches a market
capitalization approaching $25 billion (roughly two and a half
times GM's at the time).

But the bloom comes off the CRM rose quickly. Email’s easy for
customers, so they send lots of it and get mad when it isn’t answered
quickly. Unfortunately, CRM vendors’ technologies and applications are
hyped ahead of what they can actually support. It turns out the
automated email readers aren’t very accurate, so loads of messages
still have to be read, sorted, and answered manually. And early
workflow capabilities are insufficiently flexible to fit target
business processes and handle exceptions. From a competitive angle, big
ERP vendors slap web UI’s on their old client-server CRM versions and
push their way into the market, crushing prices as they turn a product
category into a feature of their broader suites.

As web connections proliferate and speed up, attention turns
to “self-service”. Search-engine-based dot-coms figure this out and
start licensing their software to corporations for their customer
service “portals”. For a while, search is huge. But soon disappointment
sets in. Search isn’t very accurate, and less than half the answers are
documented and accessible. To address the latter problem, content and
document management vendors move out of their respective early niches
and become ubiquitous tools for employees corporate-wide. As they
publish and reveal themselves to be experts, intranet applications that
help them interact with each other become popular. But these are
initially deployed with an “if you build it, they will come” mentality.
ROI sucks, because in an empty bboard no one can hear your screams.
This is where ArsDigita came in (which is why I picked this example).
But that’s a story for a different paper.

As these waves play themselves out, CIO’s ramp up the pace at
which they pull their hair out. Remember, somebody has to make all of
these things talk to each other. Somebody’s got to fix them when they
break (almost never, of course). Somebody’s got to customize them to
fit what the business needs to do. And somebody has to train people to
use them. A handy rule of thumb, borne out in my experience, is that
lifetime costs (that is, over 3-4 years) of deploying and supporting an
application can run to 10x original license costs.

Hindsight’s surely 20-20. But there is a certain Homer-Simpsonesque doh!
to all of this, that can be vaguely guessed at in advance by thinking a
couple of steps ahead (without, IMHO, any of that fancy scenario
modelling software people hawk — remember, if you can’t put it in your
brain-stem…) Of course, not all of this happens perfectly
sequentially. These categories emerged in parallel in many cases.
However, their relative “hotness” does seem to have followed a pattern
that the above questions can help to puzzle out. Use caution though --
it hasn’t made me rich yet.

Ok, let’s generalize a bit.

First, enterprise software can help businesses realize value in four different ways:

  • present information
  • support analysis
  • enable coordination
  • provide automation

To identify these opportunities, you can ask questions like:

  • What’s going on that’s worth knowing about? (How could software help the user find out, or find out faster?)
  • What does this information mean? (How could software help a user interpret information more easily and usefully?)
  • What should the user do in consequence? (How could software
    help get the right people working on a problem, in the right order, and
    at the right times?)
  • Must a user get involved at all? (Can software perform a task better, cheaper, faster?)

Second, as we discussed, CIO’s have to make all of this work, and so they have needs as well:

  • easy integration
  • easy maintenance
  • easy training
  • easy customization

Again, some clarifying questions:

  • What systems are worth hooking up? (And consequently, what integration solutions make sense?)
  • Who’s got to look after things? What corollary requirements
    emerge? (e.g., security becomes a big deal in a web-based world) (What
    does the profile of the maintainer suggest would be valuable technical
    aids?)
  • Who’s going to use the solution? (What do they need to do, what do they already know?)
  • What extensions are likely, and who will build them? (What API’s make sense?)

Depending on where we are in a particular technology cycle, answers
to all these questions may come more or less easily. A few years ago,
when the Web was young (at least as most business types would think of
it), everyone had an opinion and a business plan for it, but no one
really had a clue (remember public B2B exchanges?). In the midst of
this confusion, vendors hatched schemes and placed bets based on their
own vested interests and hunches about how technologies would evolve
(remember HP’s e-Speak?). As experience has sifted the more valuable
business ideas from the turn-of-the-millennium chaff, so have
technologies been winnowed to a few remaining options, and along with
them, to a still-consolidating roster of vendors.

Let’s turn to assessing competition.

Fear and Greed: Competition and the Role of Standards

“The ground means the location, the place of pitched
battle — gain the advantage and you live, lose the advantage and you
die…”

– Sun Tzu, “The Art Of War”, Chapter One, page 1.

The software playing field is most usefully thought of as a
“technology stack”, with each layer using the services of the layer
beneath it and enabling the services of the layer above it:

Applications
Middleware
Databases
Operating Systems
Hardware

(Note: vaguely inspired by the “ArsDigita Layer Cake”)

When the technologies in each layer aren’t changing much, firms
that commercialize those technologies focus on increasingly narrow
niches of problems to solve with them in a bid to distinguish
themselves from each other.

Periodically, new technologies sweep through and shake up the
competitive landscape in their respective layers (for example, what
optical technologies are doing to networking gear, as shipping plain
old light waves down fiber is eclipsed by “wave-division-multiplexing”,
and now “dense wave-division-multiplexing”). Sometimes the feature,
cost, and performance improvements are so great that they revolutionize
adjacent layers as well (as the advent of browser applications
ultimately did for everything in the layers below them).

When these waves wash through, vendors jockey for advantage (and, more often, survival). They do this in one of three ways:

  • they develop and promote the disruptive technology (or alternative variants) themselves
  • they “surf” the new technology by extending it with enabling
    tools. Sometimes this is symbiotic, and sometimes it’s an attempt to
    “embrace and extend” (see below)
  • they acquiesce, shifting their business focus from being a
    vendor of the penultimate technology to being more of an “agnostic”
    solutions provider for the new sheriff in town.

(A good example of the first approach is what BEA did with
application servers, and what Microsoft is doing with web services. The
last approach is exemplified by what IBM has done over the last decade
with Global Services.)

As technology waves pass, and once-whizzy must-haves attract
competition, price points collapse — often by a factor of ten on a
price/performance basis, in as little as a year or two (just ask any
app server vendor’s sales guys). To continue to grow, the latest
leaders at each layer tend to creep their product offerings “up the
stack” into the next layer. This desire to move up creates significant
opportunities for smaller players in the next layer up. By tailoring
their products to work especially well with those of the big guys
coming up, they position themselves at minimum as attractive to the
encroaching firms’ sales forces, as well for possible acquisition.

Since revolutionizing a layer can lead to huge profits, there
are frequently multiple aspirants, each with its own flavor of the core
technology, competing to lead the revolution. But since a fractured
landscape in a given layer diminishes the appeal of the new technology
to the layer above, these aspirants have to agree on standards for how
their technology will connect with adjoining layers. Of course,
depending on how these standards evolve, they can significantly favor
one player over another.

Competitors are constantly engaged in a tense game of
“co-opetition”, where they balance efforts to establish their
individual technologies and associated standards tweaks as dominant,
against the need to agree on a single scheme so that adoption by the
layer above is accelerated. In short, it’s the age old
size-versus-share-of-the-pie, on steroids. High fixed costs make for
very high marginal returns, so the macro-level brinksmanship and
field-level competition for customers, talent, and buzz are
extraordinarily aggressive.

If all of this seems pointy-headed to you, just look at the
way web application development tools have unfolded. We started out
with scripting languages. Remember Javascript applets? John Gage, Sun’s
Chief researcher, once told me the story of how Tcl (a server-side
web-scripting language) was nearly selected and advanced in
Javascript’s place. Early versions of the ArsDigita Community System,
along with other vendors’ products like Vignette’s, were built with
Tcl. We got hammered over the fact that Tcl was a dead language, and
our transition to Java was tortuous, expensive, and lengthy enough that
it cost us most of an important market window. But for the fateful
decisions in the backroom of a technical advisory board, there might be
a lot more Tcl developers in the world today.

We weren’t the only ones to lose at the standards game.
Consider once-white-hot Art Technology Group (ATG). ATG soared
initially because of its early support of Java for web development. But
ATG rode its non-J2EE-compliant Dynamo Server product too long and got
eclipsed by BEA, which evangelized EJB’s running on their WebLogic
product. Now the J2EE camp is struggling to catch up with Microsoft,
which has established a very likely insurmountable early lead in web
services. Standards are the playing fields on which the hand-to-hand
battles of modern software wars are fought. Companies are made and
crushed there. You ignore their evolution at your peril.

Standards become and stay popular because their sponsors make
them really useful. Let’s say I speak English really well. It’s in my
interest to get everyone else to speak English, because I’ll then have
a communications advantage over my less-articulate peers. Being
Shakespeare himself won’t help me in a world that speaks Swahili. If I
want the world to speak English, I’ll publish primers, and write lots
of appealing stories that force people to learn English to fully
understand them.

J2EE is popular not because of “write-once-run-anywhere” (an
empty promise in practice), but because Sun stimulated and supported a
lot of really useful software development using the Java language, and rolled that into the J2EE specification (for example, the Pet Store reference implementation). Similarly, the vendors behind the W3C
have been sufficiently supportive (some far-sightedly, most because
they know they have no choice) that they have stimulated the
contribution of a lot of really useful free software.

For vendors this is a double-edged sword. While supporting and
conforming to a popular standard can ease market entry, it can be
expensive, erode differentiation, and make it harder to lock up a
customer base. Big players sometimes have the clout to try to “embrace
and extend” standards, as Sun did with Unix in the early 1980’s and
Microsoft did with Java in the mid-1990's. But small guys fear --
correctly -- that they have no choice but to hew to the lingua franca
of the day, no matter how marginally valuable that may render what
they’ve done on top of the standards. For customers, this is actually
healthy, because it means vendors must distinguish themselves on how
well they have solved a business problem, and not on how tightly they
have locked up their customers with proprietary technology.

(“Embrace and extend” means supporting a popular
standard to reap the benefit of having people think that your products
will run software written on that standard, but extending the standard
in a way that allows software written to your variant to work only with
your products. So for example, applications written with Microsoft’s
version of Java only run on Windows. The big J2EE application server
vendors have done the same thing in a less public way: getting your
J2EE app to run on BEA’s WebLogic is no guarantee it will run on IBM’s
Websphere.)

Again, today’s software business is built on a model that can be
uniquely profitable for vendors that can establish large installed
bases. Marginal profit on software licensing can be over 90%, and speed
of technical evolution has numbed customers to accepting incomplete and
buggy products to start with and then paying for pricey “maintenance”
contracts so they can get (only some of) the bug fixes and new features
they need. Although there’s some grumbling about this, customers have yet to mount any serious response to this imbalance. If your software is a near de facto
standard, say, like Oracle among high-end RDBMS’s, it raises the cost
of switching even higher, and extends the window for above-average
profits.

Accordingly, razor-and-blade product structure and pricing is
a common tactic in the software business. Most vendors today either
give developer kits away for free or for a very modest charge to
encourage adoption. Bill Gates priced DOS licenses for ubiquity in the
early days. Getting DOS out there gave him a big market for Microsoft’s
applications, which themselves then became a standard (reinforced by
locking users into a unique interface and proprietary document types).
With their establishment as a standard, he could then raise prices on
Windows, since you now have to have the MS-proprietary OS to run the
apps you’re hooked on. Sun promoted Java to not only consolidate the
fractured Unix world against inroads from Microsoft VB-on-Windows, but
also to extend its reach into its Unix competitors’ market shares:
“Write the app for your big HPUX server in Java, and it’s much easier
to swap out for a shiny Sun/ Solaris box when the time comes.” It even
represented an offensive threat into Microsoft’s server customer base
for the same reason, which partially motivated Microsoft’s “embrace and
extend” response.

(Scott McNealy thought he had Bill Gates on the
run, but Bill cheated by breaking the Java license and playing
“rope-a-dope” in court. Microsoft eventually settled for a sum that
paled beside the benefit to Microsoft of blunting the Java threat. Sun,
with greatly eroded market power, can’t do much today to prevent the
fracturing of the J2EE camp as app server vendors try to lock up their
customer bases with proprietary extensions that need “tweaked” JVM’s to
work. And Microsoft has now mounted its own offensive, trying to
establish .Net as the de-facto standard for implementing web
services. Whether Microsoft’s nakedly self-serving Passport component
will erode .Net’s prospects as a standard enough for alternatives like
SunOne and the Liberty Alliance to gain ground remains to be seen, but
I’m not betting against the folks in Redmond.)

A constant tension in playing the game is how much of the goods
to give away to tilt the field in your favor and drive adoption, versus
how much to hold back until people start writing checks. Historically
software companies have found hype to be a cheap but effective
substitute for samples, but this may be changing as the stakes for
using substandard software go up and pressure emerges from newer
quarters, like the open-source movement. Microsoft’s relative
generosity with .Net tools and education to date may be one possible
example that illustrates this trend.

So What’s Your Plan?

Let’s review the options again:

You can try to lead a technical revolution. But it’s guaranteed
that “if you build it, they won’t come” on their own. You’ve got to
bridge people’s adoption to your new widget. This may possibly mean
razor-and-blade product structuring and pricing. It will certainly mean
major usability investments. Finally, be prepared to work with
competitors on common gateways — API’s for example — to each of your
respective products.

If you can’t have or be a standard yourself, there are two other less profitable but also less risky ways to make money.

“Plan A”: do a great job of solving a particular set of customer
problems, and be standards-agnostic. At ArsDigita, we used to say that
ACS, our software, was “data-model-plus-page-flow”, and that the tools
and standards on which it was built were just artifacts of what was
convenient to use for our circumstances at a given point in time. What
really counted was our experience in building highly-scalable
collaborative applications to support online communities. This was
great and we made a lot of money for a while (we grew to $20 million a
year in revenues in one year on internal profits alone). We managed
this even with open-source distribution of our software — designed, of
course, to establish us as a standard.

But after a while, it was clear a bus had left the station,
and it wasn’t the Tcl/AolServer one we were sitting in, even if we did
run on the Oracle line. For a time, sales pitches I did had a
depressingly similar pattern. The business folks loved us, but the tech
folks stopped us dead in our tracks when we said “Tcl” instead of
“Java”. Their bottom line: “we support Java, and only do business with
vendors with the same commitment.”

So “Plan B” is to build a
product or offer a service that may not be a direct solution for end
customers, but does a great job of making a standard technology more
usable. Examples here are visual development environments for the
popular languages, or more esoterically some of the caching
enhancements Oracle has made to Apache in its 9iAS product. At
ArsDigita, ACS for Java took a while to build, and by the time we did
others had stolen a march on our relative position as leaders in
solving the end customer’s problem, relegating us to a high-end niche.
But as a very sophisticated application development framework for the
Java world, it had significant appeal for all of the major J2EE
application server vendors. Even Microsoft approached us early on (in
.Net’s evolution) to encourage us to port it to .Net to speed adoption
of that infrastructure.

Software technologies evolve in predictable patterns which can
help you figure out which of these strategies might make more sense for
you. When technologies are young, people still are figuring out what
solutions that use these technologies will end up being valuable. So on
the margin, they prefer toolkits and frameworks that permit them to
explore over packaged applications that box them in. Vendors of young
technologies will be inordinately attracted to other complementary
vendors that make their toolkits more usable and higher-performing.
Later, as the technologies mature, and better ways of solving problems
with them become apparent, there's a premium on being first with packages
that ship a good solution to as many end users as possible. The degree
of consensus and support for a given technology’s standards can be a
good indicator for which way to go and where the specific opportunities
might lie.