About

I'm a partner in the advanced analytics group at Bain & Company, the global management consulting firm. My primary focus is on marketing analytics (bio). I've been writing here (views my own) about marketing, technology, e-business, and analytics since 2003 (blog name explained).

Email or follow me:


63 posts categorized "Advertising"

July 03, 2012

#Microsoft Writes Off #aQuantive. What Can We Learn?

In May 2007, Microsoft paid $6 billion to buy aQuantive.  Today, only five years later, they wrote off the whole investment.  Since I wrote about this a lot five years ago (here, here, and here), the news prompted me to think about what happened, and what I might learn.  Here are a few observations:

1. 2006 / 2007 was a frothy time in the ad network market, both for ads and for the firms themselves, reflecting the economy in general.

2. Microsoft came late to the party, chasing aQuantive (desperately) after Google had taken DoubleClick off the table.

3. So, Microsoft paid a 100% premium to aQuantive's market cap to get the firm.

4. Here's the way Microsoft might have been seeing things at the time:

a. "Thick client OS and productivity applications business in decline -- the future is in the cloud."

b. "Cloud business model uncertain, but certainly lower price point than our desktop franchise; must explore all options; maybe an ad-supported version of a cloud-based productivity suite?"

c. "We have MSN.  Why should someone else sit between us and our MSN advertisers and collect a toll on our non-premium, non-direct inventory?  In fact, if we had an ad network, we could sit between advertisers and other publishers and collect a toll!"

5. Here's the way things played out:

a. The economy crashed a year later.

b. When budgets came back, they went first to the most accountable digital ad spend: search.  

c. Microsoft had a new horse in that race: Bing (launched June 2009).  Discretionary investment naturally flowed there.

d. Meanwhile, "display" evolved:  video display, social display (aka Facebook), mobile display (Dadgurnit!  Google bought AdMob, Apple has iAd!  Scraps again for the rest of us...). (Good recent eMarketer presentation on trends here.)

e. Whatever's left of "traditional" display: Google / DoubleClick, as the category leader, eats first.

f. Specialized players do continue to grow in "traditional" display, through better targeting technologies (behavioral targeting, or "BT") and through facilitating more efficient buys (for example, DataXu, which I wrote about here).  But to grow you have to invest and innovate, and at Microsoft, by this point, as noted above, the money was going elsewhere.

g. So, if you're Microsoft, and you're getting left behind, what do you do?  Take 'em with you!  "Do not track by default" in IE 10 as of June 2012.  That's old school medieval, dressed up in hipster specs and a porkpie hat.  Steve Ballmer may be struggling strategically, but he's still as brutal as ever. 

6. Perspective

a. $6 Big Ones is only 2% of MSFT's market cap.  aQuantive may have come at  a 2x premium, but it was worth the hedge.  The rich are different from you and me.  

b. The bigger issue though is how does MSFT steal a march on Google, Apple, Facebook? Hmmm. Video's hot.  Still bandwidth-constrained, but that'll get better.  And there's interactive video. Folks will eventually spend lots of time there, and ads will follow them. Google's got Hangouts, Facebook's got video calling, Apple's got FaceTime and iChat... and now MSFT has Skype, for $8.5B.   Hmm.

7. Postscripts:

a. Some of the smartest business guys I worked with at Bain in the late 90's (including Torrence Boone and Jason Trevisan) ended up at aQuantive and helped to build it into the success it was.  An interesting alumni diaspora to follow.

b. Some of the smartest folks I worked with at Razorfish in the early 2000's (including Bob Lord) ended up at aQuantive. The best part is that Microsoft may have gotten more value from buying and selling Razorfish (to Publicis) than from buying and writing off the rest of aQuantive.  Sweet, that.

c. Why not open-source Atlas?

March 20, 2012

Organic Data Modeling in the Age of the Extrabase #analytics

Sorry for the buzzwordy title of this post, but hopefully you'll agree that sometimes buzzwords can be useful for communicating an important Zeitgeist.

I'm working with one of our clients right now to develop a new, advanced business intelligence capability that uses state-of-the-art in-memory data visualization tools like Tableau and Spotfire, and that will ultimately connect multiple data sets to answer a range of important questions.  I've also been involved recently in a major analysis of advertising effectiveness that included a number of data sources that were either external to the organization, or non-traditional, or both.  In both cases, these efforts are likely to evolve toward predictive models of behavior to help prioritize efforts and allocate scarce resources.

Simultaneously, today's NYT carried an article about Clear Story, a Silicon Valley startup that aggregates APIs to public data sources about folks, and provides a highly simplified interface to those APIs for analysts and business execs.  I haven't yet tried their service, but I'll save that for a separate post.  The point here is that the emergence of services like this represents an important step in the evolution of Web 2.0 -- call it Web 2.2 -- that's very relevant for marketing analytics in enterprise contexts.

So, what's significant about these experiences?

Readers of Ralph Kimball's classic Data Warehouse Toolkit will appreciate both the wisdom of his advice and how, today, the context for it has changed.  Kimball is absolutely an advocate for starting with a clear idea of the questions you'd like to answer and for making pragmatic choices about how to organize information to answer them.  However, the major editions of the book were written in a time when three things were true:

  • You needed to organize information more thoughtfully up front, because computing resources to compensate for poor initial organization were less capable and more expensive
  • The number of data sources you could integrate was far more limited, allowing you to be more definitive up front about the data structures you defined to answer your target questions
  • The questions themselves, or the range of possible answers to them, were more limited and less dynamic, because the market context was so as well

Together, these things made for business intelligence / data warehouse / data management efforts that were longer, and a bit more "waterfall" and episodic in execution.  However, over the past decade, many have critiqued such efforts for high failure rates, mostly cases in which they collapse under their own weight: too much investment, too much complexity, too few results.  Call this Planned Data Modeling.

Now back to the first experience I described above.  We're using the tools I mentioned to simultaneously hunt for valuable insights that will help pay the freight of the effort, define useful interfaces for users to keep using, and through these efforts, also determine the optimal data structures we need underneath to scale from the few million rows in one big flat file we've started with to something that will no doubt be larger, more multi-faceted, and thus more complex.  In particular, we're using the ability of these tools to calculate synthetic variables on the fly out of the raw data to point the way toward summaries and indices we'll eventually have to develop in our data repository.  This will improve the likelihood that the way we architect that repository will directly support real reporting and analysis requirements, prioritized based on actual usage in initial pilots, rather than speculative requirements obtained through more conventional means.  Call this Organic Data Modeling.
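To make that concrete, here's a minimal sketch (Python / pandas, with purely hypothetical file and column names) of the kind of thing the in-memory tools do with calculated fields: derive a synthetic variable on the fly, then promote the cuts analysts keep asking for into a candidate summary table for the eventual repository.

    import pandas as pd

    # Hypothetical flat file standing in for the "few million rows" starting point;
    # file and column names are illustrative only.
    df = pd.read_csv("marketing_flat_file.csv")

    # A synthetic variable computed on the fly, as a Tableau / Spotfire
    # calculated field would do it.
    df["revenue_per_visit"] = df["revenue"] / df["visits"].clip(lower=1)

    # If analysts keep reaching for this cut, it becomes a candidate summary
    # table (or index) to materialize in the data repository.
    candidate_summary = (
        df.groupby(["region", "campaign", "week"], as_index=False)
          .agg(visits=("visits", "sum"),
               revenue=("revenue", "sum"),
               revenue_per_visit=("revenue_per_visit", "mean"))
    )
    print(candidate_summary.head())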

Further, the work we've done anticipates that we will be weaving together a number of new sources of data, many of them externally provided, and that we'll likely swap sources in and out as we find that some are more useful than others.  It occurred to me that this large, heterogeneous, and dynamic collection of data sources would have characteristics sufficiently different in terms of their analytic and administrative implications that a different name altogether might be in order for the sum of the pieces.  Hence, the Extrabase.

These terms are not meant to cover for a cop-out.  In other words, some might say that mashing up a bunch of files in an in-memory visualization tool could reflect and further contribute to a lack of intellectual discipline and wherewithal to get it right.  In our case, we're hedging that risk by having the data modelers responsible for figuring out the optimal data repository structure work extremely closely with the "front-end" analysts, so that as potential data structure implications flow out of the rubber-meets-the-road analysis, we're able to sift them and decide which should stick and which we can ignore.

But, as they say sometimes in software, "that's a feature, not a bug."  Meaning, mashing up files in these tools and seeing what's useful is a way of paying for and disciplining the back end data management process more rigorously, so that what gets built is based on what folks actually need, and gets delivered faster to boot.

February 02, 2012

Please Help Me Get Listed On The #Google #Currents Catalog. And Please ReTweet!

Hi folks, I need a favor.  I need 200 subscribers to this blog via Google Currents to get Octavianworld listed in the Currents catalog.  If you're reading this on an iPhone, iPad, or Android device, follow this link:

http://www.google.com/producer/editions/CAow75wQ/octavianworld

If you are looking at this on a PC, just snap this QR code with your iPhone or Android phone after getting the Currents app.

 

[Image: QR code linking to the Octavianworld edition on Google Currents]

Here's what I look like on Currents:

[Image: screenshot of Octavianworld in Google Currents]

What is Currents?  If you've used Flipboard or Zite, this is Google's entry. If you've used an RSS reader, but haven't used any of these yet, you're probably a nerdy holdout (it takes one to know one).  If you've used none of these, and have no idea what I'm talking about, apps like these help folks like me (and big media firms too) publish online magazines that make screen-scrollable content page-flippable and still-clickable.  Yet another distribution channel to help reach new audiences.  

Thank you!

#Facebook at 100 (Almost)

So Facebook's finally filed to do an IPO.  Should you like?  A year ago, I posted about how a $50 billion valuation might make sense.  Today, the target value floated by folks is ~$85 billion.  One way to look at it then, and now, is to ask whether each Facebook user (500 million of them last January, 845 million of them today) has a net present value to Facebook's shareholders of $100. This ignores future users, but then also excludes hoped-for appreciation in the firm's value.  

One way to get your arms around a $100/user NPV is to simply discount a perpetuity:  divide an annual $10 per user cash flow (assumed equal to profit here, for simplicity) by a 10% discount rate.  Granted, this is more of a bond-than-growth-stock approach to valuation, but Facebook's already pretty big, and Google's making up ground, plus under these economic conditions it's probably OK to be a bit conservative.

Facebook's filing indicated they earned $1 billion in profit on just under $4 billion in revenue in 2011.  This means they're running at about $1.20 per user in profit.  To bridge this gap between $1.20 and $10, you have to believe there's lots more per-user profit still to come.  

Today, 85% of Facebook's revenues come from advertising.  So Facebook needs to make each of us users more valuable to its advertisers, perhaps 4x so to bridge half the gap.  That would mean getting 4x better at targeting us and/or influencing our behavior on advertisers' behalf.  What would that look like?

The other half of the gap gets bridged by a large increase in the share of Facebook's revenues that comes from its cut of what app builders running on the FB platform, like Zynga, get from you.  At Facebook's current margin of 25%, $5 in incremental profit would require $20 in incremental net revenue.  Assume Facebook's cut from its third party app providers is 50%, and that means an incremental $40/year each user would have to kick in at retail.  Are each of us good for another $40/year to Facebook?  If so, where would it come from?  
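For the arithmetically inclined, here's the same per-user bridge as a quick back-of-envelope sketch (Python; the inputs are the figures quoted above, and the 50% platform cut is this post's stated assumption, not a reported number):

    # Back-of-envelope sketch of the per-user math above; illustration, not a model.
    users = 845e6                      # Facebook users today
    profit_2011 = 1e9                  # reported 2011 profit
    revenue_2011 = 4e9                 # just under $4B in 2011 revenue

    profit_per_user = profit_2011 / users          # ~$1.20
    target_cash_flow = 10.0                        # $10/user/year...
    npv_per_user = target_cash_flow / 0.10         # ...as a perpetuity at 10% = $100

    gap = target_cash_flow - profit_per_user       # ~$8.80 of per-user profit to find

    # The "other half of the gap" via the platform cut, per the assumptions above:
    margin = profit_2011 / revenue_2011            # ~25%
    incremental_profit = 5.0                       # the second half of the gap
    incremental_net_revenue = incremental_profit / margin        # $20 to Facebook
    fb_cut_of_app_revenue = 0.50                   # assumption stated above
    incremental_retail_spend = incremental_net_revenue / fb_cut_of_app_revenue  # $40/user/year

    print(round(profit_per_user, 2), round(npv_per_user), round(incremental_retail_spend))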

My guess is that Facebook will further cultivate, through third-party developers most likely, some combination of paid content and productivity app subscription businesses.  It's possible that doing so would not only raise revenues directly but also have a synergistic positive effect on ad rates the firm can command, with more of our time and activity under the firm's gaze.


January 26, 2012

Controlling for Impression Volatility in Digital Ad Spend Tests @DataXu

I've recently been involved in evaluating the results of a matched market test that looked at the impact of changes in digital advertising spend by comparing test vs. control markets, and by comparing differential lift in these markets over prior periods (e.g., year on year).  One of the challenges involved in such tests is significant "impression volatility" across time periods -- basically, each dollar can buy you very different volumes of impressions from year to year.  
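For concreteness, the comparison at the heart of such a test boils down to something like the difference-in-differences sketch below (Python; the market groups and numbers are made up for illustration, and the real analysis involves many more controls).

    # Minimal sketch of the "differential lift" comparison described above.
    # All figures are hypothetical.
    sales = {
        # (market group, period): sales index
        ("test",    "prior"):   100.0,
        ("test",    "current"): 118.0,   # digital spend was changed in test markets
        ("control", "prior"):   100.0,
        ("control", "current"): 106.0,   # spend held steady in control markets
    }

    test_lift    = sales[("test", "current")]    / sales[("test", "prior")]    - 1   # 18%
    control_lift = sales[("control", "current")] / sales[("control", "prior")] - 1   # 6%

    # The differential lift is what the test attributes to the spend change.
    differential_lift = test_lift - control_lift   # ~12 points
    print(f"test {test_lift:.0%}, control {control_lift:.0%}, differential {differential_lift:.0%}")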

You can unpack this volatility into at least three components:  

  • changes in overall macro-economic conditions that drive target audiences' attention;
  • changes in the buying approach you took / networks you bought through, due to network-specific structural drivers (like which publishers are included) and supply-demand drivers (like the relative effectiveness of the network's targeting approach); and
  • changes in "buy-specific" parameters (like audiences and placements sought).

Let's assume that you handle the first with your test / control market structure.  Let's also assume that the third is to be held constant as much as possible, for the purposes of the test (that is, buying the same properties / audiences, and using the same ad positions / placements for the tests).   So my question was, how much volatility does the second factor contribute, and what can be done to control for that in a test?

Surfing around, I came upon DataXu's March 2011 Market Pulse study.  DataXu is a service that allows you to buy across networks more efficiently in real time, sort of like what Kayak would be to travel if it were a fully automated agent and you flew every day.  The firm noted a year-on-year drop in average daily CPM volatility from 102% to 42% from May 2010 to February 2011 (meaning, I think, the average day-to-day change in price across all networks in each of the two months compared).  They attributed this to "dramatically increased volume of impressions bought and sold as well as maturation of trading systems".  Notwithstanding, the study still pointed to a 342% difference in average indexed CPMs across networks during February 2011.

A number this big naturally piqued my interest, and so I read into the report to understand it better.  The top of page 2 of the report summary presents a nice graph that shows average monthly indexed CPMs across 11 networks, and indeed shows the difference between the highest-priced and the lowest-priced network to be 342%.  Applying "Olympic scoring" (tossing out highest- and lowest-priced exchanges) cuts that difference to about 180%, or roughly by half -- still a significant discrepancy of course.  Looking further, one standard deviation in the whole sample (including the top and bottom values) is about 44%.  Again, though perhaps a bit less dramatic for marketers' tastes, still lots.
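Here's a quick sketch of those calculations on made-up indexed CPMs (Python; the eleven numbers below are illustrative only, not DataXu's figures, though they're chosen to land in the same ballpark).

    import statistics

    # Hypothetical indexed CPMs for 11 networks in one month (illustrative only).
    indexed_cpms = [45, 50, 60, 68, 75, 80, 88, 95, 110, 140, 199]

    def spread(values):
        """Difference between highest- and lowest-priced, as a % of the lowest."""
        return (max(values) - min(values)) / min(values) * 100

    full_spread = spread(indexed_cpms)                     # ~340%

    # "Olympic scoring": toss out the highest- and lowest-priced networks.
    olympic_spread = spread(sorted(indexed_cpms)[1:-1])    # ~180%

    # One standard deviation across the whole sample, as a % of the mean.
    sd_pct = statistics.stdev(indexed_cpms) / statistics.mean(indexed_cpms) * 100

    print(f"full spread {full_spread:.0f}%, olympic spread {olympic_spread:.0f}%, "
          f"stdev {sd_pct:.0f}% of mean")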

(It's hard to know how "equivalent" the buys compared were, in terms of volumes, contextual consistency, and audience consistency, since the summary doesn't address these.  But let's assume they were, roughly.)

So what? If your (display) ad buys are not so property-specific / audience-targeted that run-of-network buys in contextual or audience categories are ruled out, future tests might channel buys through services like DataXu and declare the buys "fully price-optimized" across the periods and markets compared, allowing you to ignore +/- ~50% "impression volatility" swings, assuming the Feb 2011 spreads hold.

However, if what you're buying is very specific -- and only available through direct purchase, or one or two specialized networks at most -- then you ignore factor 2, trust the laws of supply and demand, and assume that you've bought essentially the same "attention" regardless of the difference in impressions.

I've asked some knowledgeable friends to suggest some perspectives on this, and will pass along their ideas.  Other feedback welcome, especially from digital advertising / testing pros!  Oh and if you're really interested, check out the DataXu TC50 2009 pitch video.

November 11, 2011

Sponsored Occupations :-)

NYT.com had an interactive poll / visualization today taking readers' pulse on the Occupy protests.  Browsing through the usual left-right snarks and screeds, and on the heels of a recent stroll through one of the protest sites, it occurred to me that we're missing a chance to think beyond the politics of the movement, to the economic opportunity it represents.

Think of the protest sites as outdoor ad inventory.  This inventory is in great locations -- in the hearts of the world's financial districts, with lots of people with very high disposable incomes to see your ads every day, all day, right outside their windows -- the same people that fancy watchmakers pay the WSJ big bucks to reach.  

[Photo: an Occupy protest site]

Yet currently, this valuable inventory is filled with PSAs...

[Photo: protest signs at the site]

...Or it goes begging altogether:

[Photo: "Agenda"]

So it dawned on me: "Sponsored Occupations" -- the outdoor ad network that monetizes protest movements!  This concept meets several needs simultaneously:

  • One stated objective of the movement is to "Make Them Pay".  The concept creates a practical mechanism for realizing this goal.


    [Photo: "Make Them Pay" protest sign]

 

  • Events and guerilla marketing in premium locations without a permitting process -- an advertiser's dream!
  • Plus, sponsors could negotiate special perks, like keeping the protesters from "Going all Oakland" (just heard that term) on their retail stores.
  • Cash-strapped municipalities can muscle a cut of the publishers' share, turning what's today a drag on public resources (police, etc.) into a money-maker.

[Photo]

There's another important benefit.  This idea is a job creator.  After all, the network needs people to pitch the "publishers" at each location, and sales folks to recruit the advertisers, and staff to traffic the ads, keep the books, etc.  Politicians right and left could fold this into their platforms immediately.

Finally, for the entrepreneur who starts it all, there's the chance to Sell Out To The Man -- at a very attractive premium!  And, for the protesters who back the venture, and get options working for it, a chance to cash out too, just like the guys they're protesting.

[Image: "Porky"]

After all, Don't Get Mad, Get Even.

 

Postscript in Rolling Stone: "How I Learned to Stop Worrying and Love the OWS Protests". Plus some thoughtful suggestions here.


April 16, 2011

The Marketing Funnel is a Brita Water Filter #OMMAMetrics @judah @schmults

Something you hear a lot these days is, "The 'Marketing Funnel' concept's dead.  It's just not clear what's replaced it."  At the recent OMMA Metrics conference in NY, IBM/Unica's Yuchun Lee described its successor as some sort of "spaghetti".  Judah Phillips had an excellent article in yesterday's Online Metrics Insider (Thanks to Rob Schmults for pointing me to it!) in which he suggested a "tumbler" metaphor and "seeking-shopping-sharing" structure for what we now do.

Here's my entry into the rename-the-funnel sweepstakes: The Brita Water Filter.

Let's consider our requirements for a metaphor to succeed the "funnel":

1. We still have a "current" whose power combines the "push" of customer needs and desires with the "pull" of companies with products that could satisfy those. ("I have testosterone, Porsche makes 911s.")  To me, that still makes "linear" metaphors useful.

2. "Attract", "Engage", and "Convert", plus sometimes "Retain" -- or variants thereof, like Judah's -- still work for me as basic stage descriptors.  What's changed is that channels have exploded in number, and audiences have fragmented as they use different ones.  So a good metaphor will describe a journey that is less predictable in both flow path and rate. (It's probably also useful to use stage descriptors that reflect the customer's perspective, not the marketer's -- "Awareness", "Consideration", "Purchase", plus sometimes hopefully "Loyalty" and "Advocacy" -- but they're not quite as punchy.  Nonetheless, you get the point.)

3. The channel system that lies between nascent demand and final purchase is, these days, much more replete with advice that educates folks as they flow through, qualifying and intensifying final demand.  The metaphor we use has to describe the "chemistry" that happens in these intermediate spaces and interactions.  And, the behaviors we observe in these stages should inform, as Judah suggests in his article, how we market afterward.

4. Over time, however, the system in between gets clogged with a lot of junky information, or becomes technologically obsolete, so you need to refresh or replace your presences in this system on a regular basis.  The metaphor has to anticipate this need as well.

5. The operational and analytic processes for marketing within this flow are higher-tech than a simple funnel.

OK, so let's look at the Brita filter:

1. Water still generally flows through in a linear fashion; and, not all of it flows through at once.

2. As water flows through the charcoal, however, it breaks up into much smaller droplets that flow through at different rates and along less predictable paths.

3. The interaction of the water with activated charcoal only got rid of some impurities; Brita's engineers learned to add an ionic coating to get some minerals out of the water too.

4. Ideally, you change it now and then to keep its performance up.

5. It's higher-tech than a funnel (and more expensive too, but the benefits are sometimes worth it).

So, you say, "Cute but esoteric -- how is this conceptualization useful?"

1. For me, it helps me to validate frameworks I'm familiar with, like the Channel Pathways (TM) framework we used years ago at Monitor / Marketspace, or the similar "Wiggly Line" chart Akin Arikan and Don Peppers describe in Multichannel Marketing.  

2. It also helps me to extend them, by considering differential flow rates through discrete paths, and to focus on what we can learn from interactions at one stage and channel that might inform what we do in subsequent, different ones.

3. It helps to remind me that when populating and managing any such framework, when it gets too complex or overly-"excepted" with rules, I might be better off replacing it.  

4. It helps me to remember that at best I can only be probabilistic and not deterministic in my understanding of "customer flow dynamics", and that what's important is to be explicit about probability levels so that group decisions can be helped by a shared understanding of these.

5. Finally, it's oddly memorable (to me anyway), and the filter's specific properties help me remember the requirements better.

So, what do you think? Please answer the poll below, and comment with questions / alternatives.


January 07, 2011

Great Moments in 4G Smartphone Marketing: "These Go To 11"

Announced yesterday:

I've long felt that Nigel Tufnel would make a great pitchman. Maybe his day has come?

 

January 06, 2011

#Google Search and The Limits of #Location

I broke my own rule earlier today and twitched (that's tweeted+*itched -- you read it here first) an impulsive complaint about how Google does not allow you to opt out of having it consider your location as a relevance factor in the search results it offers you:

[Screenshot: the offending tweet]

I don't take it back.  But, I do think I owe a constructive suggestion for how this could be done, in a way that doesn't compromise the business logic I infer behind this regrettable choice.  Plus, I'll lay out what I infer this logic to be, and the drivers for it, in the hope that someone can improve my understanding.  Finally, I'll lay out some possible options for SEO in an ever-more-local digital business context.

OK, first, here's the problem.  In one client situation I'm involved with, we're designing an online strategy with SEO as a central objective.  There are a number of themes we're trying to optimize for.  One way you improve SEO is to identify the folks who rank / index highly on terms you care about, and cultivate a mutually valuable relationship in which they eventually may link to relevant content you have on a target theme.  To get a clean look at who indexes well on a particular theme and related terms, you can de-personalize your search.  You do this with a little url surgery:

Start with the search query:

http://www.google.com/search?q=[theme]

Then graft on a little string to depersonalize the query:

http://www.google.com/search?q=[theme]&pws=0
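If you're doing this for a list of themes, a tiny helper (Python; nothing here beyond the same &pws=0 surgery shown above) saves the hand-editing:

    from urllib.parse import urlencode

    def depersonalized_search_url(theme: str) -> str:
        """Build a Google query URL with personalized results turned off (&pws=0).
        As discussed below, this only disables cookie/history-based personalization;
        it does not stop Google from inferring location from your connection."""
        return "http://www.google.com/search?" + urlencode({"q": theme, "pws": 0})

    print(depersonalized_search_url("law"))
    # -> http://www.google.com/search?q=law&pws=0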

Now, when I did this, I noticed that Google was still showing me local results.  These usually seem less intrusive.  But now, like some invasive weed, they'd choked off my results, ranging all the way to the third position and clogging up most of the rest of the first page, for a relatively innocuous term ("law"; lots of local law firms, I guess).  

Then I realized that "&pws=0" only tells Google to stop rummaging around in the cookies it's set on my browser, plus other information in my http requests; it won't help me prevent Google from guessing / using my location, since that's based on the location of the ISP's router between my computer and the Google cloud.

 Annoyed, I poked around to see what else I could do about it.  Midway down the left-hand margin of the search results page, I noticed this:

[Screenshot: Google's location setting in the search results margin]

 

So naturally, my first thought was to specify "none", or "null", to see if I could turn this off.  No joy. 

Next, some homework to see if there's some way to configure my way out of this.  That led me to Rishi's post (see the third answer, dated 12/2/2010, to the question).  

Not believing that an organization with as fantastic a UI aesthetic -- that is to say, functional / usable in the extreme -- as Google would do this, I probed further.

First stop: Web Search Help.  The critical part:

Q. Can I turn off location-based customization?

A. The customization of search results based on location is an important component of a consistent, high-quality search experience. Therefore, we haven't provided a way to turn off location customization, although we've made it easy for you to set your own location or to customize using a general location as broad as the country that matches your local domain...

Ah, so, "It's a feature, not a bug." :-)

...If you find that your results for a particular search are more local than what you're looking for, you can set your location to a broader geographical area (such as a country instead of a city, zip code, or street address). Please note that this will greatly reduce the amount of locally relevant results that you’ll see. [emphasis mine]

 Exactly!  So I tried to game the system:

[Screenshot: location set to "world" -- "Location not recognized"]

Drat!  Foiled again.  Ironic, this "Location not recognized" -- from the people who bring us Google Earth!

Surely, I thought, some careful consideration must have gone into turning the Greatest Tool The World Has Ever Known into the local Yellow Pages.  So, I checked the Google blog.  A quick search there for "location", and presto, this. Note that at this point, February 26, 2010, it was still something you could add.  

Later, on October 18, 2010 -- where have I been? -- this, which effectively makes "search nearby" non-optional:

We’ve always focused on offering people the most relevant results. Location is one important factor we’ve used for many years to customize the information that you find. For example, if you’re searching for great restaurants, you probably want to find ones near you, so we use location information to show you places nearby.

Today we’re moving your location setting to the left-hand panel of the results page to make it easier for you to see and control your preferences. With this new display you’re still getting the same locally relevant results as before, but now it’s much easier for you to see your location setting and make changes to it.

(BTW, is it just me, or is every Google product manager a farmer's-market-shopping, restaurant-hopping foodie?  Just sayin'... but I seriously wonder how much designers' own demographic biases end up influencing assumptions about users' needs and product execution.)

Now, why would Google care so much about "local" all of a sudden?  Is it because Marissa Mayer now carries a torch for location (and Foursquare especially)?  Maybe.  But it's also a pretty good bet that it's at least partly about the Benjamins.  From the February Google post, a link to a helpful post on SocialBeat, with some interesting snippets: 

"Location may get a central place in Google’s web search redesign"

Google has factored location into search results for awhile without explicitly telling the user that the company knows their whereabouts. It recently launched ‘Nearby’ search in February, returning results from local venues overlaid on top of a map.

Other companies also use your IP address to send you location-specific content. Facebook has long served location-sensitive advertising on its website while Twitter recently launched a feature letting users geotag where they are directly from the site. [emphasis mine]

Facebook's stolen a march on Google in the social realm (everywhere but Orkut-crazed Brazil; go figure).  Twitter's done the same to Google on the real-time front.  Now, Groupon's pay-only-for-real-sales-and-then-only-if-the-volumes-justify-the-discount model threatens the down-market end of Google's pay-per-click business with a better mousetrap, from the small biz perspective.  (BTW, that's why Groupon's worth $6 billion all of a sudden.)  All of these have increasingly (and in Groupon's case, dominantly) local angles where the value to both advertiser and publisher (Facebook / Twitter / Groupon) is presumably highest.

Ergo, Google gets more local.  But that's just playing defense, and Eric, Sergey, Larry, and Marissa are too smart (and, with $33 billion in cash on hand, too rich) to do just that.

Enter Android.  Hmm.  Just passed Apple's iOS and now is running the table in the mobile operating system market share game.  Why wouldn't I tune my search engine to emphasize local search results, if more and more of the searches are coming from mobile devices, and especially ones running my OS?  Yes, it's an open system, but surely dominating it at multiple layers means I can squeeze out more "rent", as the economists say?

The transcript of Google's Q3 earnings call is worth a read.

Now, back to my little problem.  What could Google do that would still serve its objective of global domination through local search optimization, while satisfying my nerdy need for "de-localized" results?  The answer's already outlined above -- just let me type in "world", and recognize it for the pathetic niche plea that it is.  Most folks will never do this, and this blog's not a bully-enough pulpit to change that. Yet.

The bigger question, though, is how to do SEO in a world where it's all location, location, location, or as SEOmoz writes:

"Is Every Query Local Now?" 

Location-based results raise political debates, such as "this candidate is great" showing up as the result in one location while "this candidate is evil" in another.  Location-based queries may increase this debate.  I need only type in a candidate's name and Instant will tell me what is the prevailing opinion in my area.  I may not know if that area is the size of a city block or the entire world, but if I am easily influenced then the effect of the popular opinion has taken one step closer (from search result to search query) to the root of thought.   The philosophers among you can debate whether or not the words change the very nature of ideas.

Heavy.

OK, never leave without a recommendation.  Here are two:

First, consider that for any given theme, some keywords might be more "local" than others.  Under the theme "Law", the keyword "law" will dredge up a bunch of local law firms.  But another keyword, say "legal theory", is less likely to have that effect (until discussing that topic in local indie coffee shops becomes popular, anyway).  So you might explore re-optimizing for these less-local alternatives.  (Here's an idea: some enterprising young SEO expert might build a web service that would, for any "richly local" keyword, suggest less-local alternatives from a crowd-sourced database compiled by angry folks like me.  Sort of a "de-localization thesaurus".  Then, eventually, sell it to a big ad agency holding company.)

Second, as location kudzu crawls its way up Google's search results, there's another phenomenon happening in parallel.  These days, for virtually any major topic, the Wikipedia entry for it sits at or near the top of Google's results.  So if, as with politics, search and SEO are now local too -- and therefore much harder to play -- why not shift your optimization efforts to the place the odds-on top Google result will take you, if theme leadership is a strategic objective?

 

PS Google I still love you.  Especially because you know where I am. 

 

January 04, 2011

Facebook at Fifty (Billion)

Is Facebook worth $50 billion?  Some caveman thoughts on this valuation:

1. It's worth $50 billion because Goldman Sachs says so, and they make the rules.

2. It's worth $50 billion because for an evanescent moment, some people are willing to trade a few shares at that price. (Always a dangerous way to value a firm.)

3.  Google's valuation provides an interesting benchmark:

a. Google's market cap is close to $200 billion.  Google makes (annualizing Q3 2010) $30 billion a year in revenue and $8 billion a year in profit (wow), for a price-to-earnings ratio of approximately 25x.

b. Facebook claims $2 billion a year in revenue for 2010, a number that's likely higher if we annualize latest quarters (I'm guessing, I haven't seen the books).   Google's clearing close to 30% of its revenue to the bottom line.  Let's assume Facebook's getting similar results, and let's say that annualized, they're at $3 billion in revenues, yielding a $1 billion annual profit (which they're re-investing in the business, but ignore that for the moment).  That means a "P/E" of about 50x, roughly twice Google's.  Facebook has half Google's uniques, but has passed Google in visits.  So, maybe this growth, and potential for more, justifies double the multiple.  Judge for yourself; here's a little data on historical P/E ratios (and interest rates, which are very low today, BTW), to give you some context.  Granted, these are for the market as a whole, and Facebook is a unique high-growth tech firm, but not every tree grows to the sky.

c. One factor to consider in favor of this valuation for Facebook is that its revenues are better diversified than Google's.  Google of course gets 99% of its revenue from search marketing. Facebook gets a piece of the action on all those Zynga et al. games, in addition to its core display ad business.  You might argue that these game revenues are stable and recurring, and point the way to monetizing the Facebook API to very attractive utility-like economic levels (high fixed costs, but super-high marginal profits once revenues pass those, with equally high barriers to entry).

d. Further, since viral / referral marketing is every advertiser's holy grail, and Facebook effectively owns the Web's social graph at the moment, it should get some credit for the potential value of owning a better mousetrap.  (Though, despite Facebook's best attempts -- see Beacon -- to Hoover value out of your and my relationship networks, the jury's still out on whether and how they will do that.  For perspective, consider that a $50 billion valuation for Facebook means investors are counting on each of today's 500 million users to be good for $100, ignoring future user growth.)

e. On the other hand,  Facebook's dominant source of revenue (about 2/3 of it) is display ad revenue, and it doesn't dominate this market the way Google dominates the search ad market (market dominance means higher profit margins -- see Microsoft circa 1995 -- beyond their natural life).  Also, display ads are more focused on brand-building, and are more vulnerable in economic downturns.

4. In conclusion: if Facebook doubles revenues and profits off the numbers I suggested above, Facebook's valuation will more or less track Google's on a relative basis (~25x P/E).  If you think this scenario is a slam dunk, then the current price being paid for Facebook is "fair", using Google's as a benchmark.  If you think there's further upside beyond this doubling, with virtually no risk associated with this scenario, then Facebook begins to look cheap in comparison to Google.
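Here's the benchmark arithmetic above as a quick sketch (Python; every input is one of the estimates in this post, not a reported financial):

    # Quick sketch of the benchmark math above; all inputs are the post's estimates.
    google_market_cap = 200e9
    google_profit = 8e9
    google_pe = google_market_cap / google_profit          # ~25x

    fb_valuation = 50e9
    fb_profit = 1e9                                        # ~30% margin on ~$3B annualized revenue
    fb_pe = fb_valuation / fb_profit                       # ~50x

    # Point 4: if Facebook doubles profit at today's price, its multiple
    # converges on Google's.
    fb_pe_after_doubling = fb_valuation / (2 * fb_profit)  # ~25x

    print(round(google_pe), round(fb_pe), round(fb_pe_after_doubling))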

Your move.

Who's got a better take?

Postscript:  my brother, the successful professional investor, does; see his comment below (click "Comments").