I lead Force Five Partners, a marketing analytics consulting firm (bio). I've been writing here about marketing, technology, e-business, and analytics since 2003 (blog name explained).
The May 20th 2013 edition of The New Yorker has an article by Vogue writer Nathan Heller on Massive Open Online Courses (MOOCs) titled "Laptop U: Has the future of college moved online?" The author explores, or at least raises, a number of related questions. How (well) does the traditional offline learning experience transfer online? Is the online learning experience more or less effective than the traditional one? (By what standard? For what material? What is gained and lost?) What will MOOCs mean for different colleges and universities, and their faculties? How will the MOOC revolution be funded? (In particular, what revenue model will emerge?)
Having worked a lot in the sector, for both public and private university clients, developing everything from technology, to online-enabled programs themselves, to analytic approaches, and even marketing and promotion, I found the article a good prompt to try to boil out some ways to think about answering these questions.
The article focuses almost exclusively on Harvard and EdX, the 12-school joint venture through which it's pursuing MOOCs. Obviously this skews the evaluation. Heller writes:
Education is a curiously alchemical process. Its vicissitudes are hard to isolate. Why do some students retain what they learned in a course for years, while others lose it through the other ear over their summer breaks? Is the fact that Bill Gates and Mark Zuckerberg dropped out of Harvard to revolutionize the tech industry a sign that their Harvard educations worked, or that they failed? The answer matters, because the mechanism by which conveyed knowledge blooms into an education is the standard by which MOOCs will either enrich teaching in this country or deplete it.
For me, the first step to boiling things out is to define what we mean by -- and want from -- an "education". So, let's try to unpack why people go to college. In most cases, Reason One is that you need a degree to get any sort of decent job. Reason Two is to plug into a network of people -- fellow students, alumni, faculty -- that provides you a life-long community. Of course you need a professional community for that Job thing, but also because in an otherwise anomic society you need an archipelago on which to seed friendships, companionships, and self-definition (or at least scaffolding for your personal brand: as one junior I heard on a recent college visit put it memorably, "Being here is part of the personal narrative I'm building"). Reason Three -- firmly third -- is to get an "education" in the sense that Heller describes. (Apropos: check this recording of David Foster Wallace's 2005 commencement address at Kenyon College.)
This hierarchy of needs then gives us a way to evaluate the prospects for MOOCs.
If organization X can produce graduates demonstrably better qualified (through objective testing, portfolios of work, and experience) to do job Y, at a lower cost, then it will thrive. If organization X can do this better and cheaper by offering and/or curating/aggregating MOOCs, then MOOCs will thrive. If a MOOC can demonstrate an adequately superior result/contribution to the end outcome, and do it inexpensively enough to hold its place in the curriculum, and do it often enough that its edge becomes a self-fulfilling prophecy -- a brand, in other words -- then it will crowd out its competitors, as surely as one plant shuts out the sunlight to another. Anyone care to bet against Georgia Tech's new $7K Master's in Computer Science?
If a MOOC-mediated social experience can connect you to a Club You Want To Be A Member Of, you will pay for that. And if a Club That Would Have You As A Member can attract you to its clubhouse with MOOCs, then MOOCs will line the shelves of its bar. The winning MOOC cocktails will be the ones that best produce the desired social outcomes, with the greatest number of satisfying connections.
Finally, learning is as much about the frame of mind of the student as it is about the quality of the teacher. If through the MOOC the student is able to choose a better time to engage, and can manage better the pace of the delivery of the subject matter, then the MOOC wins.
Beyond general prospects, as you consider these principles, it becomes clear that the question is less whether MOOCs win than which ones, for what and for whom, and how.
The more objective and standardized -- and thus measurable and comparable -- the learning outcome and the standard of achievement, the greater the potential for a MOOC to dominate. My program either works, or it doesn't.
If a MOOC facilitates the kinds of content exchanges that seed and stimulate offline social gatherings -- pitches to VCs, or mock interviewing, or poetry, or dance routines, or photography, or music, or historical tours, or bird-watching trips, or snowblower-maintenance workshops -- then it has a better chance of fulfilling the longings of its students for connection and belonging.
And, the more well-developed the surrounding Internet ecosystem (Wikipedia, discussion groups, Quora forums, and beyond) is around a topic, the less I need a Harvard professor, or even a Harvard grad student, to help me, however nuanced and alchemical the experience I miss might otherwise have been. The prospect of schlepping to class or office hours on a cold, rainy November night has a way of diluting the urge to be there live in case something serendipitous happens.
Understanding how MOOCs win then also becomes a clue to understanding potential revenue models.
If you can get accredited to offer a degree based in part or whole on MOOCs, you can charge for that degree, and get students or the government to pay for it (Exhibit A: University of Phoenix). That's hard, but as a variant of this, you can get hired by an organization, or a syndicate of organizations you organize, to produce tailored degree programs -- think corporate training programs on steroids -- that use MOOCs to filter and train students. (Think "You, Student, pay for the 101-level stuff; if you pass you get a certificate and an invitation to attend the 201-level stuff that we fund; if you pass that we give you a job.")
Funding can come directly, or be subsidized by sponsors and advertisers, or both.
You can try to charge for content: if you produce a MOOC that someone else wants to include in a degree-based program, you can try to license it, in part or in whole.
You can make money via the service angle, the way self-publishing firms support authors, with a variety of best-practice based production services. Delivery might be offered via a freemium model -- the content might be free, but access to premium groups, with teaching assistant support, might come at a price. You can also promote MOOCs -- build awareness, drive distribution, even simply brand -- for a cut of the action, the way publishers and event promoters do.
Perhaps in the not-too-distant future we'll get the Academic Upfront, in which universities front a semester's worth of classes in a MOOC, then pitch the class to sponsors, the way TV networks do today. Or, maybe the retail industry also offers a window into how MOOCs will be monetized. Today's retail environment is dominated by global brands (think professors as fashion designers) and big-box (plus Amazon) firms that dominate supply chains and distribution networks. Together, Brands and Retailers effectively act as filters: we make assumptions that the products on their shelves are safe, effective, reasonably priced, acceptably stylish, well-supported. In exchange, we'll pay their markup. This logic sounds a cautionary note for many schools: boutiques can survive as part of or at the edges of the mega-retailers' ecosystems, but small-to-mid-size firms reselling commodities get crushed.
Of course, these are all generic, unoriginal (see Ecclesiastes 1:9) speculations. Successful revenue models will blend careful attention to segmenting target markets and working back from their needs, resources, and processes (certain models might be friendlier to budgets and purchasing mechanisms than others) with thoughtful in-the-wild testing of the ideas. Monolithic executions with Neolithic measurement plans ("Gee, the focus group loved it, I can't understand why no one's signing up for the paid version!") are unlikely to get very far. Instead, be sure to design with testability in mind (make content modular enough to package or offer a la carte, for example). Maybe even use Kickstarter as a lab for different models!
I just finished reading Converge, the new book on integrating technology, creativity, and media by Razorfish CEO Bob Lord and his colleague Ray Velez, the firm’s CTO. (Full disclosure: I’ve known Bob as a colleague, former boss, and friend for more than twenty years and I’m a proud Razorfish alum from a decade ago.)
Reflecting on the book I’m reminded of the novelist William Gibson’s famous comment in a 2003 Economist interview that “The future’s already here, it’s just not evenly distributed.” In this case, the near-perfect perch that two already-smart guys have on the Digital Revolution and its impact on global brands has provided them a view of a new reality most of the rest of us perceive only dimly.
So what is this emerging reality? Somewhere along the line in my business education I heard the phrase, “A brand is a promise.” Bob and Ray now say, “The brand is a service.” In virtually all businesses that touch end consumers, and extending well into relevant supply chains, information technology has now made it possible to turn what used to be communication media into elements of the actual fulfillment of whatever product or service the firm provides.
One example they point to is Tesco’s virtual store format, in which images of stocked store shelves are projected on the wall of, say, a train station, and commuters can snap the QR codes on the yogurt or quarts of milk displayed and have their order delivered to their homes by the time they arrive there: Tesco’s turned the billboard into your cupboard. Another example they cite is Audi City, the Kinect-powered configurator experience through which you can explore and order the Audi of your dreams. As the authors say, “marketing is commerce, and commerce is marketing.”
But Bob and Ray don’t just describe, they also prescribe. I’ll leave you to read the specific suggestions, which aren’t necessarily new. What is fresh here is the compelling case they make for them; for example, their point-by-point case for leveraging the public cloud is very persuasive, even for the most security-conscious CIO. Also useful is their summary of the Agile method, and of how they’ve applied it for their clients.
Looking more deeply, the book isn’t just another surf on the zeitgeist, but is theoretically well-grounded. At one point early on, they say, “The villain in this book is the silo.” On reading this (nicely turned phrase), I was reminded of the “experience curve” business strategy concept I learned at Bain & Company many years ago. The experience curve, based on the idea that the more you make and sell of something, the better you (should) get at it, describes a fairly predictable mathematical relationship between experience and cost, and therefore between relative market share and profit margins. One of the ways you can maximize experience is through functional specialization, which of course has the side effect of encouraging the development of organizational silos. A hidden assumption in this strategy is that customer needs and associated attention spans stay pinned down and stable long enough to achieve experience-driven profitable ways to serve them. But in today’s super-fragmented, hyper-connected, kaleidoscopic marketplace, this assumption breaks down, and the way to compete shifts from capturing experience through specialization, to generating experience “at-bats” through speedy iteration, innovation, and execution. And this latter competitive mode relies more on the kind of cross-disciplinary integration that Bob and Ray describe so richly.
The book is a quick, engaging read, full of good stories drawn from their extensive experiences with blue-chip brands and interesting upstarts, and with some useful bits of historical analysis that frame their arguments well (in particular, I liked their exposition of the television upfront). But maybe the best thing I can say about it is that it encouraged me to push harder and faster to stay in front of the future that’s already here. Or, as a friend says, “We gotta get with the ‘90’s, they’re almost over!”
A while back I worked at a free software firm (ArsDigita, where early versions of the ArsDigita Community System were licensed under GPL) and was deeply involved in developing an "open source" license that balanced our needs, interests, and objectives with our clients' (the ArsDigita Public License, or ADPL, which was closely based on the Mozilla Public License, or MPL). I've been to O'Reilly's conferences (<shameless> I remember a ~20-person 2001 Birds-of-a-Feather session in San Diego with Mitch Kapor and pre-Google Eric Schmidt on commercializing open source </shameless>). Also, I'm a user of O'Reilly's books (currently have Charles Severance's Using Google App Engine in my bag). So I figured I should read this carefully and have a point of view about the essay. And despite having recently read Nicholas Carr's excellent and disturbing 2011 book The Shallows about how dumb the Internet has made me, I thought nonetheless that I should brave at least a superficial review of Morozov's sixteen-thousand-word piece.
To summarize: Morozov describes O'Reilly as a self-promoting manipulator who wraps and justifies his evangelizing of Internet-centered open innovation in software, and more recently government, in a Randian cloak sequined with Silicon Valley rhinestones. My main reaction: "So, your point would be...?" More closely:
First, there's what Theodore Roosevelt had to say about critics. (Accordingly, I fully cop to the recursive hypocrisy of this post.) If, as Morozov says of O'Reilly, "For all his economistic outlook, he was not one to talk externalities..." then Morozov (as most of my fellow liberals do) ignores the utility of motivation. I accept and embrace that with self-interest and the energy to pursue it, more (ahem, taxable) wealth is created. So when O'Reilly says something, I don't reflexively reject it because it might be self-promoting; rather, I first try to make sure I understand how that benefits him, so I can better filter for what might benefit me. For example, Morozov writes:
In his 2007 bestseller Words That Work, the Republican operative Frank Luntz lists ten rules of effective communication: simplicity, brevity, credibility, consistency, novelty, sound, aspiration, visualization, questioning, and context. O’Reilly, while employing most of them, has a few unique rules of his own. Clever use of visualization, for example, helps him craft his message in a way that is both sharp and open-ended. Thus, O’Reilly’s meme-engineering efforts usually result in “meme maps,” where the meme to be defined—whether it’s “open source” or “Web 2.0”—is put at the center, while other blob-like terms are drawn as connected to it.

Where Morozov offers a warning, I see a manual! I just have to remember my obligation to apply it honestly and ethically.
Second, Morozov chooses not to observe that if O'Reilly and others hadn't broadened the free software movement into an "open source" one that ultimately offered more options for balancing the needs and rights of software developers with those of users (who themselves might also be developers), we might all still be in deeper thrall to proprietary vendors. I know from first-hand experience that the world simply was not and is still not ready to accept GPL as the only option.
Nonetheless, good on Morozov for offering this critique of O'Reilly. Essays like this help keep guys like O'Reilly honest, as far as that's necessary. They also force us to think hard about what O'Reilly's peddling -- a responsibility that should be ours. I used to get frustrated by folks who slapped the 2.0 label on everything, to the point of meaninglessness, until I appreciated that the meme and its overuse drove me to think and presented me with an opportunity to riff on it. I think O'Reilly and others like him do us a great service when they try to boil down complexities into memes. The trick for us is to make sure the memes are the start of our understanding, not the end of it.
Facebook's Sponsored Stories feature is one of the ad targeting horses the firm's counting on to pull it out of its current valuation morass (read this, via @bussgang).
Sponsored Stories is a virality-enhancing mechanism (no jokes please, that was an "a" not an "i") that allows Facebook advertisers to increase the reach of Facebook users' interactions with the advertisers' brands on Facebook (Likes, Check-ins, etc.). It does this by increasing the number of a user's Facebook friends who see such engagements with the advertisers' brands beyond the limited number who would, under normal application of the Facebook news feed algorithm, see those endorsements.
Many users are outraged that this unholy Son-Of-Beacon feature violates their privacy, to the point that they sue-and-settle (or try to, oops).
What they are missing perhaps is the opportunity to "surf" an advertiser's Sponsored Stories investment to amplify their own self-promotion or mere narcissism.
Consider the following simple example. Starbucks is / has been using this ad program. Let's say I go to Starbucks and "check in" on Facebook. Juiced by Sponsored Stories (within the additional impressions Starbucks has paid for), all of my Facebook friends browsing their news feeds will see I've checked in at Starbucks (and presumably feel all verklempt about a brand that could attract such a valued friend).
Now, what if I, savvy small business person, comment in my check in that I'm "at Starbucks, discussing my <link>NEW BOOK</link> with friends!" I've pulled off the social media equivalent of pasting my bumper sticker on Starbucks' billboard.
I need to look more closely into this, but as I understand it, the Sponsored Stories feature can't today prevent users from including negative feedback in their brand engagements, where such flexibility is provided for. And if Facebook can't yet filter out the negative, it likely can't prevent more general forms of off-brand messaging either.
I'm sure others have considered this and other possibilities. Comments very welcome! Meanwhile, I'm off to Starbucks to discuss my upcoming NEW BOOK.
Paul Simon wrote, "Every generation throws a hero at the pop charts." Now it's Marissa Mayer's turn to try to make Yahoo!'s chart pop. This will be hard because few tech companies are able to sustain value creation much past their IPOs.
What strategic path for Yahoo! satisfies the following important requirements?
Yahoo!'s company profile is a little buzzwordy but offers a potential point of departure. What Yahoo! says:
"Our vision is to deliver your world, your way. We do that by using technology, insights, and intuition to create deeply personal digital experiences that keep more than half a billion people connected to what matters the most to them – across devices, on every continent, in more than 30 languages. And we connect advertisers to the consumers who matter to them most – the ones who will build their businesses – through our unique combination of Science + Art + Scale."
What Cesar infers:
Yahoo! is a filter.
Here are some big things the Internet helps us do:
Every one of these functions has an 800 lb. gorilla, and a few aspirants, attached to it:
Um, filter... Filter. There's a flood of information out there. Who's doing a great job of filtering it for me? Google alerts? Useful but very crude. Twitter? I browse my followings for nuggets, but sometimes these are hard to parse from the droppings. Facebook? Sorry friends, but my inner sociopath complains it has to work too hard to sift the news I can use from the River of Life.
Filtering is still a tough, unsolved problem, arguably the problem of the age (or at least it was last year when I said so). The best tool I've found for helping me build filters is Yahoo! Pipes. (Example)
As far as I can tell, Pipes has remained this slightly wonky tool in Yahoo's bazaar suite of products. Nerds like me get a lot of leverage from the service, but it's a bit hard to explain the concept, and the semi-programmatic interface is powerful but definitely not for the general public.
Now, what if Yahoo! were to embrace filtering as its core proposition, and build off the Pipes idea and experience under the guidance of Google's own UI guru -- the very same Ms. Mayer, hopefully applying the lessons of iGoogle's rise and fall -- to make it possible for its users to filter their worlds more effectively? If you think about it, there are various services out there that tackle individual aspects of the filtering challenge: professional (e.g. NY Times, Vogue, Car and Driver), social (Facebook, subReddits), tribal (online communities extending from often offline affinities), algorithmic (Amazon-style collaborative filtering), sponsored (e.g., coupon sites). No one is doing a good job of pulling these all together and allowing me to tailor their spews to my life. Right now it's up to me to follow Gina Trapani's Lifehacker suggestion, which is to use Pipes.
OK so let's review:
Well, let's look at this a bit. I'd argue that a good filter is effectively a "passive search engine". Basically through the filters people construct -- effectively "stored searches" -- they tell you what it is they are really interested in, and in what context and time they want it. With cookie-based targeting under pressure on multiple fronts, advertisers will be looking for impression inventories that provide search-like value propositions without the tracking headaches. Whoever can do this well could make major bank from advertisers looking for an alternative to the online ad biz Hydra (aka Google, Facebook, Apple, plus assorted minor others).
Savvy advertisers and publishers will pooh-pooh the idea that individual Pipemakers would be numerous enough or consistent enough on their own to provide the reach that is the reason Yahoo! is still in business. But I think there are lots of ways around this. For one, there's already plenty of precedent at other media companies for suggesting proto-Pipes -- usually called "channels"; Yahoo! calls them "sites" (example), and they have RSS feeds. Portals like Yahoo!, major media like the NYT, and universities like Harvard suggest categories, offer pre-packaged RSS feeds, and even give you the ability to roll your own feed out of their content. The problem is that it's still marketed as RSS, which even in this day and age is still a bit beyond most folks. But if you find a more user-friendly way to "clone and extend" suggested Pipes, friends' Pipes, sponsored Pipes, etc., you've got a start.
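The "clone and extend" idea behind a Pipes-style stored search can be sketched in a few lines. This is purely illustrative: the feed items and rule names below are hypothetical, not any actual Yahoo! Pipes API.

```python
# A minimal sketch of a "stored search": a saved predicate that passively
# filters incoming feed items, which a user could clone and extend rather
# than build from scratch. All data here is made up for illustration.

def make_filter(include_terms, exclude_terms=()):
    """Return a stored search: a predicate remembering what the user wants."""
    def matches(item):
        text = (item["title"] + " " + item["summary"]).lower()
        if any(term in text for term in exclude_terms):
            return False
        return any(term in text for term in include_terms)
    return matches

# A "suggested Pipe" a portal might offer...
base_filter = make_filter(include_terms=("analytics", "marketing"))

feed = [
    {"title": "Web analytics 101", "summary": "Measuring what matters"},
    {"title": "Cooking tips", "summary": "Weeknight dinners"},
    {"title": "Marketing mix models", "summary": "Attribution basics"},
]

kept = [item["title"] for item in feed if base_filter(item)]
print(kept)  # ['Web analytics 101', 'Marketing mix models']
```

The point of the sketch is the shape, not the code: the filter is declared once and applied passively, which is what makes it a "stored search" an advertiser could target against.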
Check? Lots of hand-waving, I know. But what's true is that Yahoo! has suffered from a loss of a clear identity. And the path to re-growing its value starts with fixing that problem.
Good luck Marissa!
(See here for Part 1)
Here's one summary of the experience that's making the rounds:
I wasn't able to be there all that long, but my impression was different. Men of all colors (especially if you count tattoos), and lots more women (many tattooed also, and extensively). I had a chance to talk with Doc Searls (I'm a huge Cluetrain fan) briefly at the Digital Harvard reception at The Parish; he suggested (my words) the increased ratio of women is a good barometer for the evolution of the festival from narcissistic nerdiness toward more sensible substance. Nonetheless, on the surface, it does remain a sweaty mosh pit of digital love and frenzied networking. Picture Dumbo on spring break on 6th and San Jacinto. With light sabers:
Sight that will haunt my dreams for a while: VC-looking guy, blazer and dress shirt, in a pedicab piloted by skinny grungy student (?) Dude, learn Linux, and your next tip from The Man at SXSW might just be a term sheet.
So whom did I meet, and what did I learn:
I had a great time listening to PRX.org's John Barth. The Public Radio Exchange aggregates independent content suitable for radio (think The Moth), adds valuable services like consistent content metadata and rights management, and then acts as a distribution hub for stations that want to use it. We talked about how they're planning to analyze listenership patterns with that metadata and other stuff (maybe gleaning audience demographics via Quantcast) for shaping content and targeting listeners. He related for example that stations seem to prefer either 1 hour programs they can use to fill standard-sized holes, or two- to seven- minute segments they can weave into pre-existing programs. Documentary-style shows that weave music and informed commentary together are especially popular. We explored whether production templates ("structured collaboration": think "Mad Libs" for digital media) might make sense. Maybe later.
Paul Payack explained his Global Language Monitor service to me, and we explored its potential application as a complement if not a replacement for episodic brand trackers. Think of it as a more sophisticated and source-ecumenical version of Google Insights for Search.
Kara Oehler's presentation on her Mapping Main Street project was great, and it made me want to try her Zeega.org service (a Harvard metaLAB project) as soon as it's available, to see how close I can get to replicating The Yellow Submarine for my son, with other family members spliced in for The Beatles. Add it to my list of other cool projects I like, such as mrpicassohead.
Finally, congrats to Perry Hewitt (here with Anne Cushing) and all her Harvard colleagues on a great evening!
So Facebook's finally filed to do an IPO. Should you like? A year ago, I posted about how a $50 billion valuation might make sense. Today, the target value floated by folks is ~$85 billion. One way to look at it then, and now, is to ask whether each Facebook user (500 million of them last January, 845 million of them today) has a net present value to Facebook's shareholders of $100. This ignores future users, but then also excludes hoped-for appreciation in the firm's value.
One way to get your arms around a $100/ user NPV is to simply discount a perpetuity: divide an annual $10 per user cash flow (assumed = to profit here, for simplicity) by a 10% discount rate. Granted, this is more of a bond-than-growth-stock approach to valuation, but Facebook's already pretty big, and Google's making up ground, plus under these economic conditions it's probably OK to be a bit conservative.
Facebook's filing indicated they earned $1 billion in profit on just under $4 billion in revenue in 2011. This means they're running at about $1.20 per user in profit. To bridge this gap between $1.20 and $10, you have to believe there's lots more per-user profit still to come.
Today, 85% of Facebook's revenues come from advertising. So Facebook needs to make each of us users more valuable to its advertisers, perhaps 4x so to bridge half the gap. That would mean getting 4x better at targeting us and/or influencing our behavior on advertisers' behalf. What would that look like?
The other half of the gap gets bridged by a large increase in the share of Facebook's revenues that comes from its cut of what app builders running on the FB platform, like Zynga, get from you. At Facebook's current margin of 25%, $5 in incremental profit would require $20 in incremental net revenue. Assume Facebook's cut from its third party app providers is 50%, and that means an incremental $40/year each user would have to kick in at retail. Are each of us good for another $40/year to Facebook? If so, where would it come from?
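The per-user arithmetic above can be checked on the back of an envelope. All inputs come from the post itself (user count, 2011 profit, the 10% discount rate, the 25% margin, and an assumed 50% cut of app developers' retail take); this is a sanity check, not a valuation model.

```python
# Sanity-checking the per-user math: inputs are the figures cited in the post.

users = 845e6            # Facebook users at filing
profit_2011 = 1e9        # reported 2011 profit
discount_rate = 0.10     # perpetuity discount rate assumed above

# NPV of a $10/user/year perpetuity: 10 / 0.10 = $100 per user
npv_per_user = 10 / discount_rate

# Current per-user profit: ~$1.18, i.e. roughly the $1.20 cited
profit_per_user = profit_2011 / users

# Bridging half the remaining gap via the app platform:
incremental_profit = 5.0                    # $5/user of new profit
margin = 0.25                               # Facebook's overall margin
fb_cut = 0.50                               # assumed share of developers' retail take
revenue_to_fb = incremental_profit / margin # $20/user of net revenue to Facebook
retail_spend = revenue_to_fb / fb_cut       # $40/user each of us would spend at retail

print(npv_per_user, round(profit_per_user, 2), revenue_to_fb, retail_spend)
```

Running the numbers confirms the chain: $100 NPV per user, about $1.18 of current profit per user, and a $40/year retail spend needed to deliver the platform half of the bridge.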
My guess is that Facebook will further cultivate, through third-party developers most likely, some combination of paid content and productivity app subscription businesses. It's possible that doing so would not only raise revenues directly but also have a synergistic positive effect on ad rates the firm can command, with more of our time and activity under the firm's gaze.
Why? (And, why now?) Relational databases and SQL have been around for forty years. Yet, no reasonable business person would disagree that:
1. it's useful to know how to use spreadsheet software, both to DIY and manage others who do;
2. there's much more information out there today;
3. harnessing this information is not only advantageous but essential;
4. more powerful tools like database management systems are necessary for this.
Therefore, business people should know a little bit about these more powerful tools, to continue to be considered reasonable.
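To make the "more powerful tools" point concrete, here is the kind of thing a database does in one declarative line that a spreadsheet does with a pivot table. The table and data are hypothetical; the sketch uses SQLite, which ships with Python, so there is nothing to install.

```python
# The same per-customer rollup a spreadsheet pivot table would do,
# expressed as SQL against an in-memory SQLite database.
# Table name and rows are made up for illustration.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("Acme", 120.0), ("Acme", 80.0), ("Bolt", 45.0)],
)

# Total spend per customer, in one declarative statement
rows = conn.execute(
    "SELECT customer, SUM(amount) FROM orders GROUP BY customer ORDER BY customer"
).fetchall()
print(rows)  # [('Acme', 200.0), ('Bolt', 45.0)]
```

The payoff isn't this toy query; it's that the same statement works unchanged whether the table holds three rows or three hundred million, which is exactly the "much more information out there" problem spreadsheets can't scale to.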
I broke my own rule earlier today and twitched (that's tweeted+*itched -- you read it here first) an impulsive complaint about how Google does not allow you to opt out of having it consider your location as a relevance factor in the search results it offers you:
I don't take it back. But, I do think I owe a constructive suggestion for how this could be done, in a way that doesn't compromise the business logic I infer behind this regrettable choice. Plus, I'll lay out what I infer this logic to be, and the drivers for it, in the hope that someone can improve my understanding. Finally, I'll lay out some possible options for SEO in an ever-more-local digital business context.
OK, first, here's the problem. In one client situation I'm involved with, we're designing an online strategy with SEO as a central objective. There are a number of themes we're trying to optimize for. One way you improve SEO is to identify the folks who rank / index highly on terms you care about, and cultivate a mutually valuable relationship in which they eventually may link to relevant content you have on a target theme. To get a clean look at who indexes well on a particular theme and related terms, you can de-personalize your search. You do this with a little url surgery:
Start with the search query:
Then graft on a little string to depersonalize the query:
Now, when I did this, I noticed that Google was still showing me local results. These usually seem less intrusive. But now, like some invasive weed, they'd choked off my results, reaching all the way up to the third position and clogging up most of the rest of the first page, for a relatively innocuous term ("law"; lots of local law firms, I guess).
Then I realized that "&pws=0" tells Google to stop rummaging around in the cookies it's set on my browser, plus other information in my http requests, but won't help me prevent Google from guessing at and using my location, since that's based on the location of the ISP's router between my computer and the Google cloud.
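The "url surgery" described above amounts to appending one parameter to the search URL. A minimal sketch (the `pws=0` parameter is the one named above; as noted, it only disables cookie-based personalization, not network-side location inference):

```python
# Build a depersonalized Google search URL by grafting on pws=0,
# the parameter discussed above. Illustrative helper, not an official API.

from urllib.parse import urlencode

def depersonalized_search_url(query):
    params = urlencode({"q": query, "pws": "0"})  # pws=0 turns off personalized results
    return "https://www.google.com/search?" + params

print(depersonalized_search_url("law"))
# https://www.google.com/search?q=law&pws=0
```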
Annoyed, I poked around to see what else I could do about it. Midway down the left-hand margin of the search results page, I noticed this:
So naturally, my first thought was to specify "none", or "null", to see if I could turn this off. No joy.
Next, some homework to see if there's some way to configure my way out of this. That led me to Rishi's post (see the third answer, dated 12/2/2010, to the question).
Unwilling to believe that an organization with as fantastic a UI aesthetic -- that is to say, functional and usable in the extreme -- as Google would do this, I probed further.
First stop: Web Search Help. The critical part:
Q. Can I turn off location-based customization?
A. The customization of search results based on location is an important component of a consistent, high-quality search experience. Therefore, we haven't provided a way to turn off location customization, although we've made it easy for you to set your own location or to customize using a general location as broad as the country that matches your local domain...
Ah, so, "It's a feature, not a bug." :-)
...If you find that your results for a particular search are more local than what you're looking for, you can set your location to a broader geographical area (such as a country instead of a city, zip code, or street address). Please note that this will greatly reduce the amount of locally relevant results that you’ll see. [emphasis mine]
Exactly! So I tried to game the system:
Drat! Foiled again. Ironic, this "Location not recognized" -- from the people who bring us Google Earth!
Surely, I thought, some careful consideration must have gone into turning the Greatest Tool The World Has Ever Known into the local Yellow Pages. So, I checked the Google blog. A quick search there for "location", and presto, this. Note that at this point, February 26, 2010, it was still something you could add.
Later, on October 18, 2010 -- where have I been? -- this, which effectively makes "search nearby" non-optional:
We’ve always focused on offering people the most relevant results. Location is one important factor we’ve used for many years to customize the information that you find. For example, if you’re searching for great restaurants, you probably want to find ones near you, so we use location information to show you places nearby.
Today we’re moving your location setting to the left-hand panel of the results page to make it easier for you to see and control your preferences. With this new display you’re still getting the same locally relevant results as before, but now it’s much easier for you to see your location setting and make changes to it.
(BTW, is it just me, or is every Google product manager a farmer's-market-shopping, restaurant-hopping foodie? Just sayin'... but I seriously wonder how much designers' own demographic biases end up influencing assumptions about users' needs and product execution.)
Now, why would Google care so much about "local" all of a sudden? Is it because Marissa Mayer now carries a torch for location (and Foursquare especially)? Maybe. But it's also a pretty good bet that it's at least partly about the Benjamins. From the February Google post, a link to a helpful post on SocialBeat, with some interesting snippets:
Google has factored location into search results for awhile without explicitly telling the user that the company knows their whereabouts. It recently launched ‘Nearby’ search in February, returning results from local venues overlaid on top of a map.
Other companies also use your IP address to send you location-specific content. Facebook has long served location-sensitive advertising on its website while Twitter recently launched a feature letting users geotag where they are directly from the site. [emphasis mine]
Facebook's stolen a march on Google in the social realm (everywhere but Orkut-crazed Brazil; go figure). Twitter's done the same to Google on the real-time front. Now, Groupon's pay-only-for-real-sales-and-then-only-if-the-volumes-justify-the-discount threatens the down-market end of Google's pay-per-click business with a better mousetrap, from the small biz perspective. (BTW, that's why Groupon's worth $6 billion all of a sudden.) All of these have increasingly (and in Groupon's case, dominantly) local angles, where the value to both advertiser and publisher (Facebook / Twitter / Groupon) is presumably highest.
Ergo, Google gets more local. But that's just playing defense, and Eric, Sergey, Larry, and Marissa are too smart (and, with $33 billion in cash on hand, too rich) to do just that.
Enter Android. Hmm. Just passed Apple's iOS and now is running the table in the mobile operating system market share game. Why wouldn't I tune my search engine to emphasize local search results, if more and more of the searches are coming from mobile devices, and especially ones running my OS? Yes, it's an open system, but surely dominating it at multiple layers means I can squeeze out more "rent", as the economists say?
Now, back to my little problem. What could Google do that would still serve its objective of global domination through local search optimization, while satisfying my nerdy need for "de-localized" results? The answer's already outlined above -- just let me type in "world", and recognize it for the pathetic niche plea that it is. Most folks will never do this, and this blog's not a bully-enough pulpit to change that. Yet.
The bigger question, though, is how to do SEO in a world where it's all location, location, location, or as SEOmoz writes:
Location-based results raise political debates, such as "this candidate is great" showing up as the result in one location while "this candidate is evil" in another. Location-based queries may increase this debate. I need only type in a candidate's name and Instant will tell me what is the prevailing opinion in my area. I may not know if that area is the size of a city block or the entire world, but if I am easily influenced then the effect of the popular opinion has taken one step closer (from search result to search query) to the root of thought. The philosophers among you can debate whether or not the words change the very nature of ideas.
OK, never leave without a recommendation. Here are two:
First, consider that for any given theme, some keywords might be more "local" than others. Under the theme "Law", the keyword "law" will dredge up a bunch of local law firms. But another keyword, say "legal theory", is less likely to have that effect (until discussing that topic in local indie coffee shops becomes popular, anyway). So you might explore re-optimizing for these less-local alternatives. (Here's an idea: some enterprising young SEO expert might build a web service that would, for any "richly local" keyword, suggest less-local alternatives from a crowd-sourced database compiled by angry folks like me. Sort of a "de-localization thesaurus". Then, eventually, sell it to a big ad agency holding company.)
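To make the thesaurus idea concrete, here's a minimal sketch of what that hypothetical web service's core lookup might look like. Everything here is made up for illustration: the function name, and the hard-coded dictionary standing in for the crowd-sourced database (the "law" -> "legal theory" pairing is the example from this post; the "restaurants" entry is invented):

```python
# Hypothetical "de-localization thesaurus": map keywords that drag in
# local results to broader, less-local alternatives. In the imagined
# service this mapping would be crowd-sourced; this dict is a stand-in.
LESS_LOCAL_ALTERNATIVES = {
    "law": ["legal theory", "jurisprudence"],
    "restaurants": ["culinary criticism", "food writing"],
}

def delocalize(keyword: str) -> list[str]:
    """Suggest less-local keywords for a 'richly local' one.

    Falls back to returning the keyword itself when the database
    has no suggestions for it.
    """
    return LESS_LOCAL_ALTERNATIVES.get(keyword.lower(), [keyword])

print(delocalize("law"))  # ['legal theory', 'jurisprudence']
```

A real version would sit behind an API, rank suggestions by how "local" their search results actually skew, and grow its database from user submissions, but the lookup above is the kernel of it.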
Second, as location kudzu crawls its way up Google's search results, there's another phenomenon happening in parallel: these days, for virtually any major topic, the Wikipedia entry sits at or near the top of Google's results. So if, as with politics, search and SEO are now local too, and therefore much harder to play, why not shift your optimization efforts to the place that the odds-on top Google result will take you, if theme leadership is a strategic objective?
PS Google I still love you. Especially because you know where I am.