I just finished reading Converge, the new book on integrating technology, creativity, and media by Razorfish CEO Bob Lord and his colleague Ray Velez, the firm’s CTO. (Full disclosure: I’ve known Bob as a colleague, former boss, and friend for more than twenty years and I’m a proud Razorfish alum from a decade ago.)
Reflecting on the book I’m reminded of the novelist William Gibson’s famous comment in a 2003 Economist interview that “The future’s already here, it’s just not evenly distributed.” In this case, the near-perfect perch that two already-smart guys have on the Digital Revolution and its impact on global brands has provided them a view of a new reality most of the rest of us perceive only dimly.
So what is this emerging reality? Somewhere along the line in my business education I heard the phrase, “A brand is a promise.” Bob and Ray now say, “The brand is a service.” In virtually all businesses that touch end consumers, and extending well into relevant supply chains, information technology has now made it possible to turn what used to be communication media into elements of the actual fulfillment of whatever product or service the firm provides.
One example they point to is Tesco’s virtual store format, in which images of stocked store shelves are projected on the wall of, say, a train station, and commuters can snap the QR codes on the yogurt or quarts of milk displayed and have their order delivered to their homes by the time they arrive there: Tesco’s turned the billboard into your cupboard. Another example they cite is Audi City, the Kinnect-powered configurator experience through which you can explore and order the Audi of your dreams. As the authors say, “marketing is commerce, and commerce is marketing.”
But Bob and Ray don’t just describe, they also prescribe. I’ll leave you to read the specific suggestions, which aren’t necessarily new. What is fresh here is the compelling case they make for them; for example, their point-by-point case for leveraging the public cloud is very persuasive, even for the most security-conscious CIO. Also useful is their summary of the Agile method, and of how they’ve applied it for their clients.
Looking more deeply, the book isn’t just another surf on the zeitgeist, but is theoretically well-grounded. At one point early on, they say, “The villain in this book is the silo.” On reading this (nicely turned phrase), I was reminded of the “experience curve” business strategy concept I learned at Bain & Company many years ago. The experience curve, based on the idea that the more you make and sell of something, the better you (should) get at it, describes a fairly predictable mathematical relationship between experience and cost, and therefore between relative market share and profit margins. One of the ways you can maximize experience is through functional specialization, which of course has the side effect of encouraging the development of organizational silos. A hidden assumption in this strategy is that customer needs and associated attention spans stay pinned down and stable long enough to achieve experience-driven profitable ways to serve them. But in today’s super-fragmented, hyper-connected, kaleidoscopic marketplace, this assumption breaks down, and the way to compete shifts from capturing experience through specialization, to generating experience “at-bats” through speedy iteration, innovation, and execution. And this latter competitive mode relies more on the kind of cross-disciplinary integration that Bob and Ray describe so richly.
The book is a quick, engaging read, full of good stories drawn from their extensive experiences with blue-chip brands and interesting upstarts, and with some useful bits of historical analysis that frame their arguments well (in particular, I Iiked their exposition of the television upfront). But maybe the best thing I can say about it is that it encouraged me to push harder and faster to stay in front of the future that’s already here. Or, as a friend says, “We gotta get with the ‘90’s, they’re almost over!”
A while back I worked at a free software firm (ArsDigita, where early versions of the ArsDigita Community System were licensed under GPL) and was deeply involved in developing an "open source" license that balanced our needs, interests, and objectives with our clients' (the ArsDigita Public License, or ADPL, which was closely based on the Mozilla Public License, or MPL). I've been to O'Reilly's conferences (<shameless> I remember a ~20-person 2001 Birds-of-a-Feather session in San Diego with Mitch Kapor and pre-Google Eric Schmidt on commercializing open source </shameless>). Also, I'm a user of O'Reilly's books (currently have Charles Severance's Using Google App Engine in my bag). So I figured I should read this carefully and have a point of view about the essay. And despite having recently read Nicholas Carr's excellent and disturbing 2011 book The Shallows about how dumb the Internet has made me, I thought nonetheless that I should brave at least a superficial review of Morozov's sixteen-thousand-word piece.
To summarize: Morozov describes O'Reilly as a self-promoting manipulator who wraps and justifies his evangelizing of Internet-centered open innovation in software, and more recently government, in a Randian cloak sequined with Silicon Valley rhinestones. My main reaction: "So, your point would be...?" More closely:
First, there's what Theodore Roosevelt had to say about critics. (Accordingly, I fully cop to the recursive hypocrisy of this post.) If, as Morozov says of O'Reilly, "For all his economistic outlook, he was not one to talk externalities..." then Morozov (as most of my fellow liberals do) ignores the utility of motivation. I accept and embrace that with self-interest and the energy to pursue it, more (ahem, taxable) wealth is created. So when O'Reilly says something, I don't reflexively reject it because it might be self-promoting; rather, I first try to make sure I understand how that benefits him, so I can better filter for what might benefit me. For example, Morozov writes:
In his 2007 bestseller Words That Work, the Republican operative Frank Luntz lists ten rules of effective communication: simplicity, brevity, credibility, consistency, novelty, sound, aspiration, visualization, questioning, and context. O’Reilly, while employing most of them, has a few unique rules of his own. Clever use of visualization, for example, helps him craft his message in a way that is both sharp and open-ended. Thus, O’Reilly’s meme-engineering efforts usually result in “meme maps,” where the meme to be defined—whether it’s “open source” or “Web 2.0”—is put at the center, while other blob-like terms are drawn as connected to it.Where Morozov offers a warning, I see a manual! I just have to remember my obligation to apply it honestly and ethically.
Second, Morozov chooses not to observe that if O'Reilly and others hadn't broadened the free software movement into an "open source" one that ultimately offered more options for balancing the needs and rights of software developers with those of users (who themselves might also be developers), we might all still be in deeper thrall to proprietary vendors. I know from first-hand experience that the world simply was not and is still not ready to accept GPL as the only option.
Nonetheless, good on Morozov for offering this critique of O'Reilly. Essays like this help keep guys like O'Reilly honest, as far as that's necessary. They also force us to think hard about what O'Reilly's peddling -- a responsibility that should be ours. I used to get frustrated by folks who slapped the 2.0 label on everything, to the point of meaninglessness, until I appreciated that the meme and its overuse drove me to think and presented me with an opportunity to riff on it. I think O'Reilly and others like him do us a great service when they try to boil down complexities into memes. The trick for us is to make sure the memes are the start of our understanding, not the end of it.
Paul Simon wrote, "Every generation throws a hero at the pop charts." Now it's Marissa Mayer's turn to try to make Yahoo!'s chart pop. This will be hard because few tech companies are able to sustain value creation much past their IPOs.
What strategic path for Yahoo! satisfies the following important requirements?
Yahoo!'s company profile is a little buzzwordy but offers a potential point of departure. What Yahoo! says:
"Our vision is to deliver your world, your way. We do that by using technology, insights, and intuition to create deeply personal digital experiences that keep more than half a billion people connected to what matters the most to them – across devices, on every continent, in more than 30 languages. And we connect advertisers to the consumers who matter to them most – the ones who will build their businesses – through our unique combination of Science + Art + Scale."
What Cesar infers:
Yahoo! is a filter.
Here are some big things the Internet helps us do:
Every one of these functions has an 800 lb. gorilla, and a few aspirants, attached to it:
Um, filter... Filter. There's a flood of information out there. Who's doing a great job of filtering it for me? Google alerts? Useful but very crude. Twitter? I browse my followings for nuggets, but sometimes these are hard to parse from the droppings. Facebook? Sorry friends, but my inner sociopath complains it has to work too hard to sift the news I can use from the River of Life.
Filtering is still a tough, unsolved problem, arguably the problem of the age (or at least it was last year when I said so). The best tool I've found for helping me build filters is Yahoo! Pipes. (Example)
As far as I can tell, Pipes has remained this slightly wonky tool in Yahoo's bazaar suite of products. Nerds like me get a lot of leverage from the service, but it's a bit hard to explain the concept, and the semi-programmatic interface is powerful but definitely not for the general public.
Now, what if Yahoo! were to embrace filtering as its core proposition, and build off the Pipes idea and experience under the guidance of Google's own UI guru -- the very same Ms. Mayer, hopefully applying the lessons of iGoogle's rise and fall -- to make it possible for its users to filter their worlds more effectively? If you think about it, there are various services out there that tackle individual aspects of the filtering challenge: professional (e.g. NY Times, Vogue, Car and Driver), social (Facebook, subReddits), tribal (online communities extending from often offline affinities), algorithmic (Amazon-style collaborative filtering), sponsored (e.g., coupon sites). No one is doing a good job of pulling these all together and allowing me to tailor their spews to my life. Right now it's up to me to follow Gina Trapani's Lifehacker suggestion, which is to use Pipes.
OK so let's review:
Well, let's look at this a bit. I'd argue that a good filter is effectively a "passive search engine". Basically through the filters people construct -- effectively "stored searches" -- they tell you what it is they are really interested in, and in what context and time they want it. With cookie-based targeting under pressure on multiple fronts, advertisers will be looking for impression inventories that provide search-like value propositions without the tracking headaches. Whoever can do this well could make major bank from advertisers looking for an alternative to the online ad biz Hydra (aka Google, Facebook, Apple, plus assorted minor others).
Savvy advertisers and publishers will pooh-pooh the idea that individual Pipemakers would be numerous enough or consistent enough on their own to provide the reach that is the reason Yahoo! is still in business. But I think there's lots of ways around this. For one, there's already plenty of precedent at other media companies for suggesting proto-Pipes -- usually called "channels", Yahoo! calls them "sites" (example), and they have RSS feeds. Portals like Yahoo!, major media like the NYT, and universities like Harvard suggest categories, offer pre-packaged RSS feeds, and even give you the ability to roll your own feed out of their content. The problem is that it's still marketed as RSS, which even in this day and age is still a bit beyond for most folks. But if you find a more user-friendly way to "clone and extend" suggested Pipes, friends' Pipes, sponsored Pipes, etc., you've got a start.
Check? Lots of hand-waving, I know. But what's true is that Yahoo! has suffered from a loss of a clear identity. And the path to re-growing its value starts with fixing that problem.
Good luck Marissa!
In May 2007, Microsoft paid $6 billion to buy aQuantive. Today, only five years later, they wrote off the whole investment. Since I wrote about this a lot five years ago (here, here and here), it prompted me to think about what happened, and what I might learn. Here are a few observations:
1. 2006 / 2007 was a frothy time in the ad network market, both for ads and for the firms themselves, reflecting the economy in general.
2. Microsoft came late to the party, chasing aQuantive (desperately) after Google had taken DoubleClick off the table.
3. So, Microsoft paid a 100% premium to aQuantive's market cap to get the firm.
4. Here's the way Microsoft might have been seeing things at the time:
a. "Thick client OS and productivity applications business in decline -- the future is in the cloud."
b. "Cloud business model uncertain, but certainly lower price point than our desktop franchise; must explore all options; maybe an ad-supported version of a cloud-based productivity suite?"
c. "We have MSN. Why should someone else sit between us and our MSN advertisers and collect a toll on our non-premium, non-direct inventory? In fact, if we had an ad network, we could sit between advertisers and other publishers and collect a toll!"
5. Here's the way things played out:
a. The economy crashed a year later.
b. When budgets came back, they went first to the most accountable digital ad spend: search.
c. Microsoft had a new horse in that race: Bing (launched June 2009). Discretionary investment naturally flowed there.
d. Meanwhile, "display" evolved: video display, social display (aka Facebook), mobile display (Dadgurnit! Google bought AdMob, Apple has iAd! Scraps again for the rest of us...). (Good recent eMarketer presentation on trends here.)
e. Whatever's left of "traditional" display: Google / DoubleClick, as the category leader, eats first.
f. Specialized players do continue to grow in "traditional" display, through better targeting technologies (BT) and through facilitating more efficient buys (for example, DataXu, which I wrote about here). But to grow you have to invest and innovate, and at Microsoft, by this point, as noted above, the money was going elsewhere.
g. So, if you're Microsoft, and you're getting left behind, what do you do? Take 'em with you! "Do not track by default" in IE 10 as of June 2012. That's old school medieval, dressed up in hipster specs and a porkpie hat. Steve Ballmer may be struggling strategically, but he's still as brutal as ever.
a. $6 Big Ones is only 2% of MSFT's market cap. aQuantive may have come at a 2x premium, but it was worth the hedge. The rich are different from you and me.
b. The bigger issue though is how does MSFT steal a march on Google, Apple, Facebook? Hmmm. video's hot. Still bandwidth constrained, but that'll get better. And there's interactive video. Folks will eventually spend lots of time there, and ads will follow them. Google's got Hangouts, Facebook's got Facetime, Apple's got iChat... and now MSFT has Skype, for $8B. Hmm.
a. Some of the smartest business guys I worked with at Bain in the late 90's (including Torrence Boone and Jason Trevisan) ended up at aQuantive and helped to build it into the success it was. An interesting alumni diaspora to follow.
b. Some of the smartest folks I worked with at Razorfish in the early 2000's (including Bob Lord) ended up at aQuantive. The best part is that Microsoft may have gotten more value from buying and selling Razorfish (to Publicis) than from buying and writing off the rest of aQuantive. Sweet, that.
c. Why not open-source Atlas?
So Facebook's finally filed to do an IPO. Should you like? A year ago, I posted about how a $50 billion valuation might make sense. Today, the target value floated by folks is ~$85 billion. One way to look at it then, and now, is to ask whether each Facebook user (500 million of them last January, 845 million of them today) has a net present value to Facebook's shareholders of $100. This ignores future users, but then also excludes hoped-for appreciation in the firm's value.
One way to get your arms around a $100/ user NPV is to simply discount a perpetuity: divide an annual $10 per user cash flow (assumed = to profit here, for simplicity) by a 10% discount rate. Granted, this is more of a bond-than-growth-stock approach to valuation, but Facebook's already pretty big, and Google's making up ground, plus under these economic conditions it's probably OK to be a bit conservative.
Facebook's filing indicated they earned $1 billion in profit on just under $4 billion in revenue in 2011. This means they're running at about $1.20 per user in profit. To bridge this gap between $1.20 and $10, you have to believe there's lots more per-user profit still to come.
Today, 85% of Facebook's revenues come from advertising. So Facebook needs to make each of us users more valuable to its advertisers, perhaps 4x so to bridge half the gap. That would mean getting 4x better at targeting us and/or influencing our behavior on advertisers' behalf. What would that look like?
The other half of the gap gets bridged by a large increase in the share of Facebook's revenues that comes from its cut of what app builders running on the FB platform, like Zynga, get from you. At Facebook's current margin of 25%, $5 in incremental profit would require $20 in incremental net revenue. Assume Facebook's cut from its third party app providers is 50%, and that means an incremental $40/year each user would have to kick in at retail. Are each of us good for another $40/year to Facebook? If so, where would it come from?
My guess is that Facebook will further cultivate, through third-party developers most likely, some combination of paid content and productivity app subscription businesses. It's possible that doing so would not only raise revenues directly but also have a synergistic positive effect on ad rates the firm can command, with more of our time and activity under the firm's gaze.
Saw the news (though missed the show) that IBM's Watson won on Jeopardy. Interesting to see this and other articles call out Watson's "stumble" -- as though they expected perfection, which is a milestone in itself. Here's a great explanation of "what went wrong".
There are two notable things to me about this development / achievement.
The first is to ask whether this puts us ahead or behind Ray Kurzweil's schedule for 2019 (as predicted in 1999). (Really worth reading his predictions, since we're within shouting distance! What would you "keep / change / drop / add"?)
The second is a little closer in. Given the pace of this development, what does it mean for us as humans / users / consumers / citizens on the one hand, and as marketers / investors, etc. on the other -- from "now" to, say, "two years out"?
Imagine for example that in two years, IBM provides access to a more generalized form of Watson as a cloud-based API. What might you, as a person or as a business or other organization, do with a service that can understand speech, parse meanings, and optimize spending and investment recommendations based on how sure it is of the answer?
Cesar: "Watson, our lease is up soon, can you suggest some available space options nearby that would make sense for a business like Force Five Partners?"
Watson: "Cesar, here are five choices, with suggestions for what you should be paying for each, based on what I can find out right now..."
A stretch? Apple's integrated Wolfram Alpha - based support into the Siri app for the iPhone now. Try asking Siri, out loud: "What is the market capitalization of Goldman Sachs, divided by the US population?" Answer back to me, in three seconds (iPhone 3GS / AT&T):
(FWIW, this hits 3 of 4 criteria in a prediction framework I suggested nearly six years ago.)
Wow. We had barely figured out SEO, when we got slammed with SNO -- Social Network Optimization (as well as the frozen kind)! Now we have to figure out Computational Engine Optimization? (Confusingly, natch, "CEO" -- you read it here first!) How do I optimize for "What inexpensive steakhouses are nearby?" How do we even think about that?
(Possible direction: Semantic Web Optimization -- "SWO", of course. Make sure you are well tagged-for, and indexed-by, the data stores and services where the terms "inexpensive", "steakhouse", and "nearby" would be judged. Or, in plain English: if Wolfram Alpha looks to Yelp to help answer this question, make sure your restaurant's entry there is labeled as a steakhouse, has an accurate address, and is accurately price-rated as "$". Whatever gaming ensues, just don't blame IBM / Apple / Wolfram /(Google too) for going for the mega-cheddar.)
It's trite to say that change is accelerating as technology develops. ("We're only in the second inning!") Some dismiss this (as Arthur C. Clarke said, we always overestimate the impact of technology in the short term, but underestimate it in the long term). But, if you doubt, this chart is worth a look. And then think about the degree to which "social" and "mobile" are now reinforcing, amplifying, and accelerating each other...
(Insert shameless commercial:) What are you doing to help your organization keep up?
I broke my own rule earlier today and twitched (that's tweeted+*itched -- you read it here first) an impulsive complaint about how Google does not allow you to opt out of having it consider your location as a relevance factor in the search results it offers you:
I don't take it back. But, I do think I owe a constructive suggestion for how this could be done, in a way that doesn't compromise the business logic I infer behind this regrettable choice. Plus, I'll lay out what I infer this logic to be, and the drivers for it, in the hope that someone can improve my understanding. Finally, I'll lay out some possible options for SEO in an ever-more-local digital business context.
OK, first, here's the problem. In one client situation I'm involved with, we're designing an online strategy with SEO as a central objective. There are a number of themes we're trying to optimize for. One way you improve SEO is to identify the folks who rank / index highly on terms you care about, and cultivate a mutually valuable relationship in which they eventually may link to relevant content you have on a target theme. To get a clean look at who indexes well on a particular theme and related terms, you can de-personalize your search. You do this with a little url surgery:
Start with the search query:
Then graft on a little string to depersonalize the query:
Now, when I did this, I noticed that Google was still showing me local results. These usually seem less intrusive. But now, like some invasive weed, they'd choked off my results, ranging all the way to the third position and clogging up most of the rest of the first page, for a relatively innocuous term ("law"; lots of local law firms, I guess).
Then I realized that "&pws=0" tells Google to stop rummaging around in the cookies it's set on my browser, plus other information in my http requests, and won't help me prevent Google guessing / using my location, since that's based on the location of the ISP's router between my computer and the Google cloud.
Annoyed, I poked around to see what else I could do about it. Midway down the left-hand margin of the search results page, I noticed this:
So naturally, my first thought was to specify "none", or "null", to see if I could turn this off. No joy.
Next, some homework to see if there's some way to configure my way out of this. That led me to Rishi's post (see the third answer, dated 12/2/2010, to the question).
Unbelieving that an organization with as fantastic a UI aesthetic -- that is to say, functional / usable in the extreme -- as Google would do this, I probed further.
First stop: Web Search Help. The critical part:
Q. Can I turn off location-based customization?
A. The customization of search results based on location is an important component of a consistent, high-quality search experience. Therefore, we haven't provided a way to turn off location customization, although we've made it easy for you to set your own location or to customize using a general location as broad as the country that matches your local domain...
Ah, so, "It's a feature, not a bug." :-)
...If you find that your results for a particular search are more local than what you're looking for, you can set your location to a broader geographical area (such as a country instead of a city, zip code, or street address). Please note that this will greatly reduce the amount of locally relevant results that you’ll see. [emphasis mine]
Exactly! So I tried to game the system:
Drat! Foiled again. Ironic, this "Location not recognized" -- from the people who bring us Google Earth!
Surely, I thought, some careful consideration must have gone into turning the Greatest Tool The World Has Ever Known into the local Yellow Pages. So, I checked the Google blog. A quick search there for "location", and presto, this. Note that at this point, February 26, 2010, it was still something you could add.
Later, on October 18, 2010 -- where I have I been? -- this, which effectively makes "search nearby" non-optional:
We’ve always focused on offering people the most relevant results. Location is one important factor we’ve used for many years to customize the information that you find. For example, if you’re searching for great restaurants, you probably want to find ones near you, so we use location information to show you places nearby.
Today we’re moving your location setting to the left-hand panel of the results page to make it easier for you to see and control your preferences. With this new display you’re still getting the same locally relevant results as before, but now it’s much easier for you to see your location setting and make changes to it.
(BTW, is it just me, or is every Google product manager a farmer's-market-shopping, restaurant-hopping foodie? Just sayin'... but I seriously wonder how much designers' own demographic biases end up influencing assumptions about users' needs and product execution.)
Now, why would Google care so much about "local" all of a sudden? Is it because Marissa Mayer now carries a torch for location (and Foursquare especially)? Maybe. But it's also a pretty good bet that it's at least partly about the Benjamins. From the February Google post, a link to a helpful post on SocialBeat, with some interesting snippets:
Google has factored location into search results for awhile without explicitly telling the user that the company knows their whereabouts. It recently launched ‘Nearby’ search in February, returning results from local venues overlaid on top of a map.
Other companies also use your IP address to send you location-specific content. Facebook has long served location-sensitive advertising on its website while Twitter recently launched a feature letting users geotag where they are directly from the site. [emphasis mine]
Facebook's stolen a march on Google in the social realm (everywhere but Orkut-crazed Brazil; go figure). Twitter's done the same to Google on the real-time front. Now, Groupon's pay-only-for-real-sales-and-then-only-if-the-volumes-justify-the-discount threatens the down-market end of Google's pay-per-click business with a better mousetrap, from the small biz perspective. (BTW, that's why Groupon's worth $6 billion all of a sudden.) All of these have increasingly (and in Groupon's case, dominantly) local angles where the value to both advertiser and publisher (Facebook / Twitter / Groupon) are presumably highest.
Ergo, Google gets more local. But that's just playing defense, and Eric, Sergey, Larry, and Marissa are too smart (and, with $33 billion in cash on hand, too rich) to do just that.
Enter Android. Hmm. Just passed Apple's iOS and now is running the table in the mobile operating system market share game. Why wouldn't I tune my search engine to emphasize local search results, if more and more of the searches are coming from mobile devices, and especially ones running my OS? Yes, it's an open system, but surely dominating it at multiple layers means I can squeeze out more "rent", as the economists say?
Now, back to my little problem. What could Google do that would still serve its objective of global domination through local search optimization, while satisfying my nerdy need for "de-localized" results? The answer's already outlined above -- just let me type in "world", and recognize it for the pathetic niche plea that it is. Most folks will never do this, and this blog's not a bully-enough pulpit to change that. Yet.
The bigger question, though, is how to do SEO in a world where it's all location, location, location, or as SEOmoz writes
Location-based results raise political debates, such as "this candidate is great" showing up as the result in one location while "this candidate is evil" in another. Location-based queries may increase this debate. I need only type in a candidate's name and Instant will tell me what is the prevailing opinion in my area. I may not know if that area is the size of a city block or the entire world, but if I am easily influenced then the effect of the popular opinion has taken one step closer (from search result to search query) to the root of thought. The philosphers among you can debate whether or not the words change the very nature of ideas.
OK, never leave without a recommendation. Here are two:
First, consider that for any given theme, some keywords might be more "local" than others. Under the theme "Law", the keyword "law" will dredge up a bunch of local law firms. But another keyword, say "legal theory", is less likely to have that effect (until discussing that topic in local indie coffee shops becomes popular, anyway). So you might explore re-optimizing for these less-local alternatives. (Here's an idea: some enterprising young SEO expert might build a web service that would, for any "richly local" keyword, suggest less-local alternatives from a crowd-sourced database compiled by angry folks like me. Sort of a "de-localization thesaurus". Then, eventually, sell it to a big ad agency holding company.)
Second, as location kudzu crawls its way up Google's search results, there's another phenomenon happening in parallel. These days, for virtually any major topic, the Wikipedia entry for it sits at or near the top of Google's results. So, if as with politics, now too search and SEO are local, and much harder therefore to play, why not shift your optimization efforts to the place that the odds-on top Google result will take you, if theme leadership is a strategic objective?
PS Google I still love you. Especially because you know where I am.
Is Facebook worth $50 billion? Some caveman thoughts on this valuation:
1. It's worth $50 billion because Goldman Sachs says so, and they make the rules.
2. It's worth $50 billion because for an evanescent moment, some people are willing to trade a few shares at that price. (Always a dangerous way to value a firm.)
3. Google's valuation provides an interesting benchmark:
a. Google's market cap is close to $200 billion. Google makes (annualizing Q32010) $30 billion a year in revenue and $8 billion a year in profit (wow), for a price to earnings ratio of approximately 25x.
b. Facebook claims $2 billion a year in revenue for 2010, a number that's likely higher if we annualize latest quarters (I'm guessing, I haven't seen the books). Google's clearing close to 30% of its revenue to the bottom line. Let's assume Facebook's getting similar results, and let's say that annualized, they're at $3 billion in revenues, yielding a $1 billion annual profit (which they're re-investing in the business, but ignore that for the moment). That means a "P/E" of about 50x, roughly twice Google's. Facebook has half Google's uniques, but has passed Google in visits. So, maybe this growth, and potential for more, justifies double the multiple. Judge for yourself; here's a little data on historical P/E ratios (and interest rates, which are very low today, BTW), to give you some context. Granted, these are for the market as a whole, and Facebook is a unique high-growth tech firm, but not every tree grows to the sky.
c. One factor in favor of this valuation for Facebook is that its revenues are better diversified than Google's. Google, of course, gets 99% of its revenue from search marketing. Facebook gets a piece of the action on all those Zynga et al. games, in addition to its core display ad business. You might argue that these game revenues are stable and recurring, and point the way to monetizing the Facebook API into attractive, utility-like economics (high fixed costs, but super-high marginal profits once revenues pass those, with equally high barriers to entry).
d. Further, since viral / referral marketing is every advertiser's holy grail, and Facebook effectively owns the Web's social graph at the moment, it should get some credit for the potential value of owning a better mousetrap. (Though, despite Facebook's best attempts -- see Beacon -- to Hoover value out of your and my relationship networks, the jury's still out on whether and how they will do that. For perspective, consider that a $50 billion valuation for Facebook means investors are counting on each of today's 500 million users to be good for $100, ignoring future user growth.)
e. On the other hand, Facebook's dominant source of revenue (about 2/3 of it) is display advertising, and it doesn't dominate that market the way Google dominates the search ad market (market dominance sustains higher profit margins beyond their natural life -- see Microsoft circa 1995). Also, display ads are more focused on brand-building, and are more vulnerable in economic downturns.
4. In conclusion: if Facebook doubles revenues and profits off the numbers I suggested above, Facebook's valuation will more or less track Google's on a relative basis (~25x P/E). If you think this scenario is a slam dunk, then the current price being paid for Facebook is "fair", using Google's as a benchmark. If you think there's further upside beyond this doubling, with virtually no risk associated with this scenario, then Facebook begins to look cheap in comparison to Google.
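For the spreadsheet-inclined, the back-of-envelope arithmetic above can be sketched in a few lines of Python. The figures are the rough 2010-era estimates from this post (the $3 billion revenue and ~30% margin are my guesses, not reported numbers):

```python
# Back-of-envelope valuation comparison using the post's estimates.

# Google: ~$200B market cap, ~$8B annualized profit (Q3 2010 run rate)
google_market_cap = 200e9
google_profit = 8e9
google_pe = google_market_cap / google_profit  # ~25x

# Facebook: $50B implied valuation; assume ~$3B annualized revenue
# at a Google-like ~30% net margin, i.e. ~$1B profit
facebook_valuation = 50e9
facebook_profit = 1e9
facebook_pe = facebook_valuation / facebook_profit  # ~50x

# Implied value per current user, ignoring future user growth
facebook_users = 500e6
value_per_user = facebook_valuation / facebook_users  # ~$100

print(f"Google P/E:            {google_pe:.0f}x")
print(f"Facebook implied P/E:  {facebook_pe:.0f}x")
print(f"Value per FB user:     ${value_per_user:.0f}")
```

Doubling the assumed Facebook profit to $2 billion in this sketch brings the implied multiple down to Google's ~25x, which is the "fair price" scenario described above.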
Who's got a better take?
Postscript: my brother, the successful professional investor, does; see his comment below (click "Comments")
I'll be moderating a panel at the OMMA Metrics & Measurement Conference in San Francisco on July 22.
The topic of the panel is, "Modeling Attribution: Practitioner Perspectives on the Media Mix". Here's the conference agenda page.
The panel description:
How do you determine the channels that influence offline and online behavior and marketing performance?
How should you allocate your budget across CRM emails, display ads, print advertising, television and radio commercials, direct mail, and other marketing sources?
What models, techniques, and technologies should you use to develop attribution and predictive models that can drive your business?
Do you need SAS, SPSS, and a PhD in Statistics?
Does first click, last click, direct, indirect, or appropriate attribution matter – which is best?
What about multiple logistic regression?
What is the impact of survey and voice-of-the-customer data on attribution?
Hear from experts who have to answer these questions and tackle these tough issues as they work hard in the field every day for their consultancies, agencies, and brands.
So far, Manu Mathew, CEO of VisualIQ, and Todd Cunningham, SVP Research at MTV Networks, will be participating on the panel as well.
Hope to see you there. Meanwhile, please suggest questions you'd like to ask the panelists by commenting here. Thanks!