About

I'm a partner in the advanced analytics group at Bain & Company, the global management consulting firm. My primary focus is on marketing analytics (bio). I've been writing here (views my own) about marketing, technology, e-business, and analytics since 2003 (blog name explained).

Email or follow me:


60 posts categorized "Social Software"

January 04, 2011

Facebook at Fifty (Billion)

Is Facebook worth $50 billion?  Some caveman thoughts on this valuation:

1. It's worth $50 billion because Goldman Sachs says so, and they make the rules.

2. It's worth $50 billion because for an evanescent moment, some people are willing to trade a few shares at that price. (Always a dangerous way to value a firm.)

3.  Google's valuation provides an interesting benchmark:

a. Google's market cap is close to $200 billion.  Google makes (annualizing Q3 2010) $30 billion a year in revenue and $8 billion a year in profit (wow), for a price-to-earnings ratio of approximately 25x.

b. Facebook claims $2 billion a year in revenue for 2010, a number that's likely higher if we annualize latest quarters (I'm guessing, I haven't seen the books).   Google's clearing close to 30% of its revenue to the bottom line.  Let's assume Facebook's getting similar results, and let's say that annualized, they're at $3 billion in revenues, yielding a $1 billion annual profit (which they're re-investing in the business, but ignore that for the moment).  That means a "P/E" of about 50x, roughly twice Google's.  Facebook has half Google's uniques, but has passed Google in visits.  So, maybe this growth, and potential for more, justifies double the multiple.  Judge for yourself; here's a little data on historical P/E ratios (and interest rates, which are very low today, BTW), to give you some context.  Granted, these are for the market as a whole, and Facebook is a unique high-growth tech firm, but not every tree grows to the sky.
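The back-of-envelope math above fits in a few lines (figures in $B, per my guesses above; these are not audited numbers):

```python
# Rough multiples from the figures in this post (all $B)
google_cap, google_profit = 200, 8       # market cap, annualized profit
facebook_cap, facebook_profit = 50, 1    # valuation, assumed profit on ~$3B revenue

google_pe = google_cap / google_profit        # ~25x
facebook_pe = facebook_cap / facebook_profit  # ~50x, roughly twice Google's
print(f"Google ~{google_pe:.0f}x, Facebook ~{facebook_pe:.0f}x")
```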

c. One factor to consider in favor of this valuation for Facebook is that its revenues are better diversified than Google's.  Google of course gets 99% of its revenue from search marketing. Facebook gets a piece of the action on all those Zynga et al. games, in addition to its core display ad business.  You might argue that these game revenues are stable and recurring, and point the way to monetizing the Facebook API to very attractive, utility-like economics (high fixed costs, but super-high marginal profits once revenues pass those, with equally high barriers to entry).

d. Further, since viral / referral marketing is every advertiser's holy grail, and Facebook effectively owns the Web's social graph at the moment, it should get some credit for the potential value of owning a better mousetrap.  (Though, despite Facebook's best attempts -- see Beacon -- to Hoover value out of your and my relationship networks, the jury's still out on whether and how they will do that.  For perspective, consider that a $50 billion valuation for Facebook means investors are counting on each of today's 500 million users to be good for $100, ignoring future user growth.)

e. On the other hand, Facebook's dominant source of revenue (about 2/3 of it) is display ad revenue, and it doesn't dominate this market the way Google dominates the search ad market (market dominance sustains higher profit margins -- see Microsoft circa 1995 -- beyond their natural life).  Also, display ads are more focused on brand-building, and are more vulnerable in economic downturns.

4. In conclusion: if Facebook doubles revenues and profits off the numbers I suggested above, Facebook's valuation will more or less track Google's on a relative basis (~25x P/E).  If you think this scenario is a slam dunk, then the current price being paid for Facebook is "fair", using Google's as a benchmark.  If you think there's further upside beyond this doubling, with virtually no risk associated with this scenario, then Facebook begins to look cheap in comparison to Google.

Your move.

Who's got a better take?

Postscript:  my brother, the successful professional investor, does; see his comment below (click "Comments").

July 16, 2010

Analytics Commons Project

With inspiration and encouragement from @perryhewitt, New Circle Consulting and Force Five Partners have launched the Analytics Commons Project (http://analyticscommons.com).  Here's the pitch:

Web analytics is a relatively new field that is evolving very quickly. Fortunately, it's been our experience that the community of web analysts is welcoming, vibrant, and very willing to share. The Web Analytics forum on Yahoo! is a wonderful example of this.

Analytics Commons is an effort to improve on this sharing by structuring it a bit. With structure, we can make relevant knowledge a little easier to find, and we can also make it easier to vet the expertise and reliability of the source of that knowledge. (The new Web Analytics Association Certification program is another good step in this direction.)

In designing Analytics Commons, we also decided to start by focusing on a specific form of analytics knowledge, rather than trying now to architect some general information architecture about the field that could capture all its (quickly changing) variety. In particular, we noticed:

  • Google Analytics is ubiquitous.
  • We're happy users of it.
  • It recently added the ability to share Advanced Segments and Custom Reports.
  • While GA has an Apps Gallery that features third-party creations, there is currently no public registry of such shared reports that we're aware of.
  • But, there does seem to be pent-up demand for sharing reports.
  • And, we had a specific itch we needed to scratch ("Target Towns", more on that below) that would help us Keep It Real.

We also figured we would start with something that would be within our ability to actually get done. Our ambition for this initiative doesn't stop here, however. So, the service also provides a way for visitors and users to offer feedback to shape the vision and the path for getting there.

So how does it work? If we've done our job well, it's hopefully self-evident:

  • You register on the Analytics Commons site and tell us a little about yourself, ideally through links to places where you keep your description up to date (e.g., LinkedIn, Twitter, etc.).
  • If you've got a report to contribute, you get the URL for it by clicking the "Share" button in Google Analytics' Custom Reports or Advanced Segments sections, from a GA profile in which you have access to them. Then you add the URL to our service and tag and describe what you've shared.
  • If you need a report, you search for it on our service. If you find and try a report, all we ask is that you rate and comment on it to tell us how well it matched what you needed.
  • Hopefully, discussions about each report will happen on our service, but if you want to connect privately with a report contributor, we've made room in our registered user profiles for folks to provide contact information if they wish.
  • If you don't find what you were looking for, we let you store the search on our service, and if something matches in the future, we'll send you an email with the search results.
  • If you want, you can subscribe to a weekly email listing new reports that have been added to our service, or get an RSS feed of the same.

The service is free to its users. Our privacy policy is simple: everything here is public, except your registration email if you choose not to share that. We won't share that with anyone, period. If you share a report, we assume you have the authority to do that. If you comment on a report, please be polite and constructive. We reserve the right to moderate comments, and to ban anyone who posts material we deem to be inappropriate or offensive.

We saved some space on our pages for advertising / sponsorship, to help cover the server bills. If you're interested, please contact us.

Questions? Suggestions? contact us if you wish at help@analyticscommons.com.

About "Target Towns"

In our work for a client, we observed the following:

  • They target wealthy customers.
  • Wealth is highly concentrated in the US.
  • Wealthy people are highly concentrated in a few towns.

Therefore, we thought it would be useful to track traffic and behavior from these "Target Towns".

We tried to construct an Advanced Segment for "Target Towns" through the GA UI. It didn't appear to support what we had in mind. So we asked for help. Avinash Kaushik, Nick Mihailovski, Judah Phillips, and Justin Cutroni all helped us with a piece of the puzzle (Thank You all!). In the end, the answer turned out to be that we needed to use the GA API. But the API also had limits on how much information you could hit it with in a single query. So we figured we needed a service that would pass the towns ("Dimensions") about which you wanted information ("Metrics") to the API sequentially, and then would aggregate and present the results in a usable form.
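The pattern we landed on is simple enough to sketch. Here `fetch_metrics` is a stand-in for a real GA API call (the actual API and its query limits are outside the scope of this sketch); the point is only the chunk-then-aggregate loop:

```python
from typing import Callable

def target_town_report(towns: list[str],
                       fetch_metrics: Callable[[list[str]], dict[str, int]],
                       chunk_size: int = 25) -> dict[str, int]:
    """Query metrics for each chunk of towns, then merge the results.

    The API caps how much you can ask for in one query, so we page
    through the town list in chunks and aggregate as we go.
    """
    results: dict[str, int] = {}
    for i in range(0, len(towns), chunk_size):
        chunk = towns[i:i + chunk_size]
        results.update(fetch_metrics(chunk))
    return results
```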

Then we thought: "This is a report many people are likely to need!" So, the "Target Towns" service seemed like it would be a good candidate to help seed our Analytics Commons initiative.

April 13, 2010

MITX Panel: "Integrating Cross-Channel Customer Experiences" (April 29, 2010)

On the morning of April 29 I'll be moderating a MITX panel discussion titled "Integrating Cross-Channel Customer Experiences", in Cambridge, MA (Kendall Square).  More here, more posts to follow.  Hope to see you there!

March 13, 2010

Fly-By-Wire Marketing, Part II: The Limits Of Real Time Personalization

A few months ago I posted on what I called "Fly-By-Wire Marketing", or the emergence of the automation of marketing decisions -- and sometimes the automation of the development of rules for guiding those decisions.

More recently Brian Stein introduced me to Hunch, the new recommendation service founded by Caterina Fake of Flickr fame.  (Here's their description of how it works.  Here's my profile, I'm just getting going.)  When you register, you answer questions to help the system get to know you.  When you ask for a recommendation on a topic, the system not only considers what others have recommended under different conditions, but also what you've told it about you, and how you compare with others who have sought advice on the subject.

It's an ambitious service, both in terms of its potential business value (as an affiliate on steroids), but also in terms of its technical approach to "real time personalization".  Via Sim Simeonov's blog, I read this GigaOm post by Tom Pinckney, a Hunch co-founder and their VP of Engineering.  Sim's comment sparked an interesting comment thread on Tom's post.  They're useful to read to get a feel for the balance between pre-computation and on-the-fly computation, as well as the advantages of and limits to large pre-existing data sets about user preferences and behavior, that go into these services today.

One thing neither post mentions is that there may be diminishing returns to increasingly powerful recommendation logic if the set of things from which a recommendation can ultimately be selected is limited at a generic level.  For example, take a look at Hunch's recommendations for housewarming gifts.  The results more or less break down into wine, plants, media, and housewares.  Beyond this level, I'm not sure the answer is improved by "the wisdom of Hunch's crowd" or "Hunch's wisdom about me", as much as my specific wisdom about the person for whom I'm getting the gift, or maybe by what's available at a good price. (Perhaps this particular Hunch "topic" could be further improved by crossing recommendations against the intended beneficiary's Amazon wish list?)

My point isn't that Hunch isn't an interesting or potentially useful service.  Rather, as I argued several months ago,

The [next] question you ask yourself is, "How far down this road does it make sense for me to go, by when?"  Up until recently, I thought about this with the fairly simplistic idea that there are single curves that describe exponentially decreasing returns and exponentially increasing complexity.  The reality is that there are different relationships between complexity and returns at different points -- what my old boss George Bennett used to call "step-function" change.

For me, the practical question-within-a-question this raises is, for each of these "step-functions", is there a version of the algorithm that's only 20% as complex, that gets me 80% of the benefit?  My experience has been that the answer is usually "yes".  But even if that weren't the case, my approach in jumping into the uncharted territory of a "step-function" change in process, with new supporting technology and people roles, would be to start simple and see where that goes.

At minimum, given the "step-function" economics demonstrated by the Demand Medias of the world, I think senior marketing executives should be asking themselves, "What does the next 'step-function' look like?", and "What's the simplest version of it we should be exploring?" (Naturally, marketing efforts in different channels might proceed down this road at different paces, depending on a variety of factors, including the volume of business through that channel, the maturity of the technology involved, and the quality of the available data...)

Hunch is an interesting specific example of the increasingly broad RTP trend.  The NYT had an interesting article on real time bidding for display ads yesterday, for example.  The deeper issue in the trend I find interesting is the shift in power and profit toward specialized third parties who develop the capability to match the right cookie to the right ad unit (or, for humans, the right user to the right advertiser), and away from publishers with audiences.  In the case of Hunch, they're one and the same, but they're the exception.  How much of the increased value advertisers are willing to pay for better targeting goes to the specialized provider with the algorithm and the computing power, versus the publisher with the audience and the data about its members' behavior?  And for that matter, how can advertisers better optimize their investments across the continuum of targeting granularity?  Given the dollars now flooding into digital marketing, these questions aren't trivial.

February 09, 2010

Google Buzz: Right On Schedule

As reported in Mediapost today.  Here's my New Year's Day post on Google Wave, predicting what they're calling Buzz.  Interesting, but not surprising -- no FB integration.

January 01, 2010

Grokking Google Wave: The Homeland Security Use Case (And Why You Should Care)

A few people asked me recently what I thought of Google Wave.  Like others, I've struggled to answer this.

In the past few days I've been following the news about the failed attempt to blow up Northwest 253 on Christmas Day, and the finger-pointing among various agencies that's followed it.  More particularly, I've been thinking less about whose fault it is and more about how social media / collaboration tools might be applied to reduce the chance of a Missed Connection like this.

A lot of the comments by folks in these agencies went something like, "Well, they didn't tell us that they knew X," or "We didn't think we needed to pass this information on."  What most of these comments have in common is that they're rooted in a model of person-to-person (or point-to-point) communication, which creates the possibility that one might "be left out of the loop" or "not get the memo".

For me, this created a helpful context for understanding how Google Wave is different from email and IM, and why the difference is important.  Google Wave's issue isn't that the fundamental concept's not a good idea.  It is.  Rather, its problem is that it's paradigmatically foreign to how most people (excepting the wikifringe) still think.

Put simply, Google Wave makes conversations ("Waves") primary, and who's participating secondary.  Email, in contrast, makes participants primary, and the subjects of conversations secondary.  In Google Wave, with the right permissions, folks can opt into reading and participating in conversations, and they can invite others.  The onus for awareness shifts from the initiator of a conversation to folks who have the permission and responsibility to be aware of the conversation.  (Here's a good video from the Wave team that explains the difference right up front.)  If the conversation about Mr. Abdulmutallab's activities had been primary, the focus today would be about who read the memo, rather than who got it.  That would be good.  I'd rather we had a filtering problem than an information access / integration problem.
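To make the inversion concrete, here's a toy model (my own illustration, not Wave's actual data structures). In email, the sender fixes the recipient list up front; in a Wave-style model, participation is a mutable property of the conversation itself, so cleared readers can opt in after the fact:

```python
from dataclasses import dataclass, field

# Email model: participants are primary; the recipient list is frozen
# by the sender, so whoever isn't listed never sees the message.
@dataclass
class Email:
    sender: str
    recipients: list[str]
    subject: str
    body: str

# Wave-style model: the conversation is primary; who's participating
# is secondary and can change over the conversation's life.
@dataclass
class Wave:
    topic: str
    messages: list[str] = field(default_factory=list)
    participants: set[str] = field(default_factory=set)

    def join(self, who: str) -> None:
        self.participants.add(who)   # the onus shifts to the reader

    def post(self, who: str, text: str) -> None:
        self.join(who)
        self.messages.append(f"{who}: {text}")
```

With this model, "who read the memo" is a query against the conversation, not a field the initiator had to fill in correctly.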

You may well ask, "Isn't the emperor scantily clad -- how is this different from a threaded bboard?"  Great question.   One answer might be that "Bboards typically exist either independently, or as features of separate purpose-specific web sites.  Google Wave is to threaded bboard discussions as Google Reader is to RSS feeds -- a site-independent conversation aggregator, just as Google Reader is a site-independent content aggregator."   Nice!  Almost: one problem of course is that Google Wave today only supports conversations that start natively in Google Wave.  Another is that you can (sometimes) already follow bboard discussions by subscribing to RSS feeds of their posts, as in Google Groups, or by subscribing to RSS feeds for Twitter hashtags.  Another question: "How is Google Wave different from chat rooms?"  In general, most chats are more evanescent, while Waves appear (to me) to support both synchronous chat and asynchronous exchanges equally well.

Now the Big Question: "Why should I care?  No one is using Google Wave anyway."  True (only 1 million invitation-only beta accounts as of mid-November, active number unknown) -- but at least 146 million people use Gmail.  Others already expect Google Wave eventually will be introduced as a feature for Gmail: instead of / in addition to sending a message, you'll be able to start a "Wave".  It's one of the top requests for the Wave team.  (Gmail already approximates Wave by organizing its list of messages into threads, and by supporting labeling and filtering.)  Facebook, with groups and fan pages, appears to have stolen a march on Google for now, but for the vast bulk of the world that still lives in email, it's clunky to switch back and forth.  The killer social media / collaboration app is one that tightly integrates conversations and collaboration with messaging, and the prospect of Google-Wave-in-Gmail is the closest solution with any realistic adoption prospects that I can imagine right now.

So while it's absurdly early, marketers, you read it here first: Sponsored Google Waves :-)  And for you developers, it's not too early to get started hacking the Google Wave API and planning how to monetize your apps.

Oh, and Happy New Year!

Postscript: It was the software's fault...

Postscript #2: Beware the echo chamber

December 10, 2009

#Foursquare: So Very 2006

All this fuss generally about 2010 as (finally) The Year Of Mobile and specifically about Foursquare reminded me of a post my former Marketspace colleague Michael Fedor wrote in 2006 about the social possibilities of early location-based services technologies like Kmaps (for the Treo 650, which was an early coal-powered smartphone for those of you born after 2007).  Re-reading the post made me (again) proud of Michael, and proud to have worked with him and our compatriots.

November 18, 2009

@Chartbeat: Biofeedback For Your Web Presence

Via an introduction by my friend Perry Hewitt, I had a chance yesterday to learn more about Chartbeat, the real-time web analytics product, from its GM Tony Haile.

Chartbeat provides a tag-based tracking mechanism, dashboard, and API for understanding your site's users in real time.  So, you say, GA and others are only slightly lagged in their reporting.  What makes Chartbeat differentially useful?

I recently wrote a post titled "Fly-By-Wire Marketing" that reacted to an article in Wired on Demand Media's business model, and suggested a roadmap for firms interested in using analytics to automate web publishing processes. 

After listening to Tony (partly with "Fly-By-Wire Marketing" notions in mind), it occurred to me that perhaps the most interesting possibilities lay in tying a tool like Chartbeat into a web site's CMS, or more ambitiously into a firm's marketing automation / CRM platform, to adjust on the fly what's published / sent to users.

Have a look at their live dashboard demo, which tracks user interactions with Fred Wilson's blog, avc.com.  Here's a question: if you were Fred -- and Fred's readers -- how would avc.com evolve during the day if you (as Fred or one of Fred's readers) could see this information live on the site, perhaps via a widget that allowed you to toggle through different views?  Here are some ideas:

1. If I saw a disproportionate share of visitors coming through from a particular location, I might push stories tagged with that location to a "featured stories" section / widget, on the theory that local friends tell local friends, who might then visit direct to the home page url.

2. If I saw that a particular story was proving unusually popular, I might (as above) feature "related content", both on a home page and on the story page itself.

3. If I saw that traffic was being driven disproportionately by a particular keyword, I might try to wire a threshold / trigger into my AdWords account (or SEM generally) to boost spending on that keyword, and I might ask relevant friends for some link-love (though this obviously is slowed by how frequently search engines re-index you). 

(Note: pushing this further, as we discussed with Tony, we'd subscribe to a service that would give us a sense for how much of the total traffic being driven to Chartbeat users by that keyword is coming our way, and use that as a metric for optimizing our traffic-driving efforts in real time.  Of course such a service would have to anonymize competitor information, be further aggregated to protect privacy, and be offered on an opt-in basis, but could be valuable even at low opt-in rates, since what we're after is relative improvement indications, and not absolute shares.)

4. If I saw lots of traffic from a particular place, or keyword, or on a particular product, I might connect this information to my email marketing system and have it influence what goes out that day.  Or, I might adjust prices, or promotions, dynamically based on some of this information.
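Several of these ideas reduce to simple threshold rules. Here's a sketch of #3; `boost_keyword_bid` stands in for a hypothetical hook into your SEM platform, and the share inputs would come from your analytics feed -- only the trigger logic is the point:

```python
def maybe_boost(keyword: str,
                current_share: float,
                baseline_share: float,
                boost_keyword_bid,
                factor: float = 2.0) -> bool:
    """Boost spend when a keyword drives an unusually large share of traffic.

    Fires when the keyword's current traffic share is at least `factor`
    times its historical baseline.
    """
    if current_share >= factor * baseline_share:
        boost_keyword_bid(keyword)
        return True
    return False
```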

Some of you will wonder how these ideas relate to personalization, which is already a big if imperfectly implemented piece of many web publishers' and e-retailers' capabilities.  I say personalization is great for recognizing and adjusting to each of you, but not to all of you.  For example, pushing this further, I wonder about the potential for "analytics as content".  NYT's "most-emailed" list is a good example of this, albeit in a graphically unexciting form.  What if you had a widget that plotted visitors on a map (which exists today of course) but also color-coded them according to their source, momentarily flashing the site or keyword that referred them?  At minimum it would be entertaining, but it would also hold a mirror up to the site's users showing them who they are (their locations and interests), in a way that would reinforce the sense of community that the site may be trying to foster otherwise. 

Reminds me a bit of Spinvision, and by proxy of this old post.

October 24, 2009

Activating Latent Social Networks

This morning via TechCrunch  I read Sean Parker's Web 2.0 Summit presentation materials, in which he says that the future belongs to "network services" that connect people, like Facebook, and not to "information services" that connect us to data, like Google.  My experiences at Contact Networks taught me to think of email patterns as proxies for social networks.  So, the following idea occurred to me.

Google has Gmail.  Google allows people to publish profiles.  What if Gmail had a button that allowed me to "recognize" a recipient by linking to his / her public profile when I send an email to him / her?

If I have a public profile and the recipient has one too, by pressing this "recognize" button I would make our relationship "provisionally acknowledged" (like a "friend request"); the link would become "acknowledged" if the recipient agreed.  Further, either side (with mutual agreement) could choose to "publish" this relationship in multiple social nets they participate in: Facebook, LinkedIn, Orkut, or they could even make it fully public.

The more two-way email traffic there is between the two users, the stronger the link is assumed by the service to be.  Note that this wouldn't be scored in a linear way.  Probably some sort of recency and frequency considerations would be involved, just as we had at Contact Networks.
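A minimal sketch of that kind of recency-and-frequency scoring (my own illustration, not Contact Networks' actual algorithm): each exchange contributes weight that decays with age, and the total is dampened so strength grows sublinearly with volume.

```python
import math

def link_strength(exchange_ages_days: list[float],
                  half_life_days: float = 90.0) -> float:
    """Score a relationship from the ages of two-way exchanges.

    Recent exchanges count nearly fully; older ones decay with a
    configurable half-life. log1p keeps the score from growing
    linearly with sheer message volume.
    """
    decay = math.log(2) / half_life_days
    raw = sum(math.exp(-decay * age) for age in exchange_ages_days)
    return math.log1p(raw)
```

So three exchanges in the past week outscore a burst of traffic from a year ago, which matches the intuition above.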

Taking a page out of PageRank (pun partially intended), the scoring algorithm could also consider the popularity of the URLs I associated with my Google profile to consider the "centrality of my node" in the uber-network, and therefore the "value" of my "acknowledgements", when given.  Link-love could be configured by each user to be given by-the-message or by default to different email recipients.  Recipients could also "transfer" this link-love, with permission, to their other web presences (e.g., blogs).

The idea isn't limited to the major mail platforms, either.  Any media firm with an online community has a latent social network that could be defined by the response patterns in forum posts.  Users wouldn't experience the pain and inconvenience of joining YASNS, just a minor modification -- perhaps a welcome one, if accompanied by a little extra valuable information -- to how they interact already in the communities they belong to.  "Activating" such social networks through mechanisms similar to the ones described above would enhance the viral marketing potential of the communities, which would appeal to advertisers.

Since basically everyone uses email, doing this would also "democratize the social graph".  What I mean is that today there are two kinds of networks.  Either they are private -- owned and run by Facebook, LinkedIn, etc. -- or they are "public-but-elite", defined by the link structure of the Web.  In the former case, if amigo ergo sum ("I friend therefore I am"), I exist at Facebook's whim.  In the latter case, only folks who take the time to establish a public web presence and get linked to (say, through a blog, or a social net public profile) exist.  (Reminds me of Steve Martin's excitement at making it into the phone book in The Jerk.)  An open, more inclusive social graph mechanism than either of these currently provides would help bridge the digital divide, among other benefits.

Who's doing this?  The idea isn't entirely original.  Partially relevant: Facebook has just updated its News Feed to consider interactions between users as inputs for how to filter items to each user.  I'm sure this must have occurred to the major portals with email services.  Seems like a natural feature for Google Wave, for example, though I haven't seen it.  Surely (as with Contact Networks) it's also valuable to large organizations to establish "enterprise social networks", inside and beyond. 

Postscript: Gather.com CEO Tom Gerace commented they are working on a patent-pending capability they call PeopleRank that will do what I describe above in the online community section of this post. Google's been thinking about this for at least a year -- how come we haven't heard more yet?

October 06, 2009

Twitter Idea Of The Day

I just read Clive Thompson's piece on wired.com describing the rise of search engines focused on real-time trend monitoring, as opposed to indexing based on authority.  It's good, short, and I recommend it.

It provoked an idea, building on ones I had a while back, for a web service that would allow a group sponsor to register Twitter feeds (or, for that matter, any kind of feed) from members of the group, do a word-frequency analysis on those feeds (with appropriate filters of course), and then display snapshots (perhaps with a word cloud) of popularity, and trend analysis (fastest-rising, fastest-falling).  You could also have specialized content variants: most popular URLs, most popular tags.  Clicking through from any particular word (or url or tag) you could do a network analysis: which member of the group first mentioned the item, who re-tweeted him or her, either with attribution or without.
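The core of the word-frequency and trend analysis is straightforward; here's a sketch (tokenization and the stopword list are deliberately simplified):

```python
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "is", "on"}

def word_counts(tweets: list[str]) -> Counter:
    """Word frequencies across a batch of feed items, minus stopwords."""
    return Counter(w for t in tweets for w in t.lower().split()
                   if w.isalpha() and w not in STOPWORDS)

def fastest_rising(previous: Counter, current: Counter, top: int = 5) -> list[str]:
    """Words with the biggest count increase between two snapshots."""
    deltas = {w: current[w] - previous.get(w, 0) for w in current}
    return sorted(deltas, key=deltas.get, reverse=True)[:top]
```

Run `word_counts` per time window, diff consecutive windows with `fastest_rising` (or its mirror image for fastest-falling), and you have the trend snapshot; the URL and tag variants are the same loop over different token types.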

The builder of a service like this would construct it as a platform that would allow group sponsors to set up individual accounts with one or more groups, and it would allow these sponsors to aggregate groups up or drill down from an aggregate cross-group view down to individual ones, perhaps with some comparative analysis -- "show me the relative popularity of any given word / content item across my groups", for example.

Twitter already has trending topics, as do others, but the lack of grouping for folks relevant to me makes it (judging by the typical results) barely interesting and generally useless to me.  There are visual views of news, like Newsmap, but they pre-filter content by focusing on published news stories.

An additional layer of sophistication based on semantic analysis technology like, say, Crimson Hexagon's, would translate individual key words into broader categories of meaning, so you could see, at a glance, in what ways and proportions your group members were feeling about different things: "Well, it's Monday morning, and 2/3 of my users are feeling 'anxious' about work, while 1/3 are feeling 'inspired' on vacation."

As for making money, buzz-tracking services are already bought by / licensed by / subscribed to by a number of organizations.  I could see a two-stage model here where group sponsors who aggregate and process their members' feeds could then re-syndicate fine-grained analysis of those feeds to media and other organizations to whom that aggregated view would be useful. "What are alumni of university X / readers of magazine Y focused on right now?"  The high-level cuts would be free, perhaps used to drive traffic.