19 posts categorized "Application Design"

May 10, 2013

Book Review: Converge by @rwlord and @rvelez #convergebook

I just finished reading Converge, the new book on integrating technology, creativity, and media by Razorfish CEO Bob Lord and his colleague Ray Velez, the firm’s CTO.  (Full disclosure: I’ve known Bob as a colleague, former boss, and friend for more than twenty years and I’m a proud Razorfish alum from a decade ago.)

Reflecting on the book I’m reminded of the novelist William Gibson’s famous comment in a 2003 Economist interview that “The future’s already here, it’s just not evenly distributed.”  In this case, the near-perfect perch that two already-smart guys have on the Digital Revolution and its impact on global brands has provided them a view of a new reality most of the rest of us perceive only dimly.

So what is this emerging reality?  Somewhere along the line in my business education I heard the phrase, “A brand is a promise.”  Bob and Ray now say, “The brand is a service.”  In virtually all businesses that touch end consumers, and extending well into relevant supply chains, information technology has now made it possible to turn what used to be communication media into elements of the actual fulfillment of whatever product or service the firm provides.  

One example they point to is Tesco's virtual store format, in which images of stocked store shelves are projected on the wall of, say, a train station, and commuters can snap the QR codes on the yogurt or quarts of milk displayed and have their order delivered to their homes by the time they arrive there: Tesco's turned the billboard into your cupboard.  Another example they cite is Audi City, the Kinect-powered configurator experience through which you can explore and order the Audi of your dreams.  As the authors say, "marketing is commerce, and commerce is marketing."

But Bob and Ray don’t just describe, they also prescribe.  I’ll leave you to read the specific suggestions, which aren’t necessarily new.  What is fresh here is the compelling case they make for them; for example, their point-by-point case for leveraging the public cloud is very persuasive, even for the most security-conscious CIO.  Also useful is their summary of the Agile method, and of how they’ve applied it for their clients.

Looking more deeply, the book isn’t just another surf on the zeitgeist, but is theoretically well-grounded.  At one point early on, they say, “The villain in this book is the silo.”  On reading this (nicely turned phrase), I was reminded of the “experience curve” business strategy concept I learned at Bain & Company many years ago.  The experience curve, based on the idea that the more you make and sell of something, the better you (should) get at it, describes a fairly predictable mathematical relationship between experience and cost, and therefore between relative market share and profit margins.  One of the ways you can maximize experience is through functional specialization, which of course has the side effect of encouraging the development of organizational silos.  A hidden assumption in this strategy is that customer needs and associated attention spans stay pinned down and stable long enough to achieve experience-driven profitable ways to serve them.  But in today’s super-fragmented, hyper-connected, kaleidoscopic marketplace, this assumption breaks down, and the way to compete shifts from capturing experience through specialization, to generating experience “at-bats” through speedy iteration, innovation, and execution.  And this latter competitive mode relies more on the kind of cross-disciplinary integration that Bob and Ray describe so richly.

The book is a quick, engaging read, full of good stories drawn from their extensive experiences with blue-chip brands and interesting upstarts, and with some useful bits of historical analysis that frame their arguments well (in particular, I liked their exposition of the television upfront).  But maybe the best thing I can say about it is that it encouraged me to push harder and faster to stay in front of the future that's already here.  Or, as a friend says, "We gotta get with the '90's, they're almost over!"

(See this review and buy the book on Amazon.com)


April 10, 2013

Fooling Around With Google App Engine @googlecloud

A simple experiment: the "Influence Reach Factor" Calculator. (Um, it just multiplies two numbers together.  But that's beside the point, which was to sort out what it's like to build and deploy an app to Google's App Engine, their cloud computing service.)

Answer: pretty easy.  Download the App Engine SDK.  Write your program (mine's in Python, code here, be kind, props and thanks to Bukhantsov.org for a good model to work from).  Deploy to GAE with a single click.
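For the curious, here's a minimal sketch of what such an app looks like on the classic GAE Python runtime with its bundled webapp2 framework.  (This is my own illustration, not the code linked above; the handler and parameter names are made up.)

    # calculator.py -- a toy App Engine handler that multiplies two numbers,
    # in the spirit of the "Influence Reach Factor" calculator described above.
    import webapp2

    class ReachFactorHandler(webapp2.RequestHandler):
        def get(self):
            try:
                # Hypothetical query-string parameters, e.g. /?reach=1000&influence=0.3
                reach = float(self.request.get('reach', '0'))
                influence = float(self.request.get('influence', '0'))
            except ValueError:
                self.response.write('Please supply numeric values for reach and influence.')
                return
            # The whole "calculation": multiply the two inputs.
            self.response.write('Influence Reach Factor: %g' % (reach * influence))

    app = webapp2.WSGIApplication([('/', ReachFactorHandler)], debug=True)

An app.yaml that routes requests to the app object is the only other piece; the SDK handles deployment from there.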

By contrast, let's go back to 1999.  As part of getting up to speed at ArsDigita, I wanted to install the ArsDigita Community System (ACS), an open-source application toolkit and collection of modules for online communities.  So I dredged up an old PC from my basement, installed Linux, then Postgres, then AOLServer, then configured all of them so they'd welcome ACS when I spooled it up (oh so many hours RTFM-ing to get various drivers to work).  Then once I had it at "Hello World!" on localhost, I had to get it networked to the Web so I could show it to friends elsewhere (this being back in the days before the cable company shut down home-served websites).  

At which point, cue the Dawn Of Man.

Later, I rented servers from co-los. But I still had to worry about whether they were up, whether I had configured the stack properly, whether I was virus-free or enrolled as a bot in some army of darkness, or whether demand from the adoring masses was going to blow the capacity I'd signed up for. (Real Soon Now, surely!)

Now, Real Engineers will say that all of this served to educate me about how it all works, and they'd be right.  But unfortunately it also crowded out the time I had to learn about how to program at the top of the stack, to make things that people would actually use.  Now Google's given me that time back.

Why should you care?  Well, isn't it the case that you read everywhere about how you, or at least certainly your kids, need to learn to program to be literate and effective in the Digital Age?  And yet, like Kubrick's monolith, it all seems so opaque and impenetrable.  Where do you start?  One of the great gifts I received in the last 15 years was to work with engineers who taught me to peel it back one layer at a time.  My weak effort to pay it forward is this small, unoriginal advice: start by learning to program using a high-level interpreted language like Python, and by letting Google take care of the underlying "stack" of technology needed to show your work to your friends via the Web.  Then, as your functional or performance needs demand (which for most of us will be rarely), you can push to lower-level "more powerful" (flexible but harder to learn) languages, and deeper into the stack.

August 06, 2012

Zen and the Art of IT Planning #cio

It's been on my reading list forever, but this year I finally got around to Robert Pirsig's Zen and the Art of Motorcycle Maintenance.  It was heavy going in spots, but it didn't disappoint. So many wonderful ideas to think about and do something with. Among a thousand other things, I was taken with Pirsig's exposition of "gumption".  He describes it as a variable property developed in someone when he or she "connects with Quality" (the principal object of his inquiry).  He associates it with "enthusiasm", and writes:

A person filled with gumption doesn't sit around dissipating and stewing about things.  He's at the front of the train of his own awareness, watching to see what's up the track and meeting it when it comes.  That's gumption. (emphasis mine; Pirsig, Zen, p. 310, First Harper Perennial Modern Classics edition 2005)

In recent years I've tested my gumption limits in trivial and meaningful ways: built a treehouse, fixed an old snowblower, serviced sailboat winches, messed around in SQL and Python, started a business. For me, gumption was the "Well, here goes..." evanescent sense of that moment when preparation ends and experimentation begins, an amplified mix of anxiety and anticipation at the edge of the sort-of-known and the TBD.  Or, like the joy of catching a wave,  it's feeling for a short time what it's like to have your brain light up an order of magnitude more brightly than it manages on average, and watching your productivity soar.

So what's this got to do with IT planning?

For a while now I've been working with both big and small companies, and I've seen two types of IT planning happen in both settings. In one case there's endless talk of 3-year end-state architectures that seem to recede and disappear like mirages as you Gantt-crawl toward them.  In the other, there are endless hacks that "scratch itches" and make you feel like you're among the tribe of Real Men Who Ship, but which toast you six months later with security holes or scaling limits.

Getting access to data and having enough operational flexibility to act on the insights we help produce with this data are crucial to the success we try to help our clients achieve, and hold ourselves accountable for. So, (sticking with the motorcycle metaphor) a big part of my job is to be able to read what "gear" an IT organization is in, and to help it shift into the right one if needed -- in other words, to find a proper balance of planning and execution, or "the right amount of gumption".  One crude measure I've learned to apply is what I'm calling the "slide-to-screen" ratio (aka the ".ppt-to-.php" score for nerdier friends).

It's a simple calculation.  Take the number of components yet to be delivered in an IT architecture chart or slide, and divide them by the number of components or applications delivered over the same time period looking backward.  For example, if the chart says 24 components will be delivered over the next three years, and the same number of comparable items have been delivered over the prior three years, you're running at "1".
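In code, the whole "metric" is one line (a trivial sketch, just to pin the arithmetic down):

    def slide_to_screen_ratio(components_planned, components_delivered):
        # Components promised on the forward-looking architecture chart, divided by
        # comparable components actually shipped over the same trailing period.
        if components_delivered == 0:
            return float('inf')  # all slides, no screens
        return components_planned / float(components_delivered)

    print(slide_to_screen_ratio(24, 24))  # the example above: running at 1.0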

Admittedly, the standard's arbitrary, and hard to compare across situations. It's the question that's valuable.  In one situation, there's lots of coding, but little clear sense of where it needs to go, tantamount to trying to drive fast in first gear.  In the other, there's lots of ambition, but not much seems to happen -- like trying to leave the driveway in fifth gear.  When I'm listening to an IT plan, I'm not only looking at the slides and the demos, I'm also feeling for the "gumption" of the authors, and where they are with respect to the "wave".  The best plans always seem to say something like, "Well, here's what we learned -- very specifically -- from the last 24 months' deployments, and here's what we think we need to do (and not) in the next 24 months as a result." They're simultaneously thoughtful and action-oriented.  Conversely, when I don't see this specifics-laden reflection, and instead get a generic look forward, and a squishy, over-hedged, non-committal roadmap for getting there, warning bells go off.

Pushing for the implications of the answer -- to downshift, or upshift, and how -- is incredibly valuable.  Above "1", pushing might sound like, "OK, so what pieces of this vision will you ship in each of the next 4 quarters, and what critical assumptions and dependencies are embedded in your answers?"  Below "1", the question might be, "So, what complementary capabilities, and security / usability / scalability enhancements do you anticipate needing to make these innovations commercially viable?"  The answers you get in that moment -- a "Blink"-style gumption test -- are more useful than any six-figure IT process or organizational audit will yield.

 

July 16, 2012

Congratulations @marissamayer on your new #Yahoo gig. Now what? Some ideas

Paul Simon wrote, "Every generation throws a hero at the pop charts."  Now it's Marissa Mayer's turn to try to make Yahoo!'s chart pop.  This will be hard because few tech companies are able to sustain value creation much past their IPOs.  

What strategic path for Yahoo! satisfies the following important requirements?

  • Solves a keenly felt customer / user / audience / human problem?
  • Fits within but doesn't totally overlap what other competitors provide?
  • Builds off things Yahoo! has / does well?
  • Fits Ms. Mayer's experiences, so she's playing from a position of strength and confidence?
  • As a consequence of all this, brings advertisers back at premium prices?

Yahoo!'s company profile is a little buzzwordy but offers a potential point of departure.  What Yahoo! says:

"Our vision is to deliver your world, your way. We do that by using technology, insights, and intuition to create deeply personal digital experiences that keep more than half a billion people connected to what matters the most to them – across devices, on every continent, in more than 30 languages. And we connect advertisers to the consumers who matter to them most – the ones who will build their businesses – through our unique combination of Science + Art + Scale."

What Cesar infers:

Yahoo! is a filter.

Here are some big things the Internet helps us do:

  • Find
  • Connect
  • Share
  • Shop
  • Work
  • Learn
  • Argue
  • Relax
  • Filter

Every one of these functions has an 800 lb. gorilla, and a few aspirants, attached to it:

  • Find -- Google
  • Connect -- Facebook, LinkedIn
  • Share -- Facebook, Twitter, Yahoo!/Flickr (well, for the moment...)
  • Shop -- Amazon, eBay
  • Work -- Microsoft, Google, GitHub
  • Learn -- Wikipedia, Khan Academy
  • Argue -- Wordpress, Typepad, [insert major MSM digital presence here]
  • Relax -- Netflix, Hulu, Pandora, Spotify
  • Filter -- ...

Um, filter...  Filter.   There's a flood of information out there.  Who's doing a great job of filtering it for me?  Google alerts?  Useful but very crude.  Twitter?  I browse my followings for nuggets, but sometimes these are hard to parse from the droppings.  Facebook?  Sorry friends, but my inner sociopath complains it has to work too hard to sift the news I can use from the River of Life.

Filtering is still a tough, unsolved problem, arguably the problem of the age (or at least it was last year when I said so).  The best tool I've found for helping me build filters is Yahoo! Pipes.  (Example)

As far as I can tell, Pipes has remained this slightly wonky tool in Yahoo's bazaar suite of products.  Nerds like me get a lot of leverage from the service, but it's a bit hard to explain the concept, and the semi-programmatic interface is powerful but definitely not for the general public.

Now, what if Yahoo! were to embrace filtering as its core proposition, and build off the Pipes idea and experience under the guidance of Google's own UI guru -- the very same Ms. Mayer, hopefully applying the lessons of iGoogle's rise and fall -- to make it possible for its users to filter their worlds more effectively?  If you think about it, there are various services out there that tackle individual aspects of the filtering challenge: professional (e.g. NY Times, Vogue, Car and Driver), social (Facebook, subReddits), tribal (online communities extending from often offline affinities), algorithmic (Amazon-style collaborative filtering), sponsored (e.g., coupon sites).  No one is doing a good job of pulling these all together and allowing me to tailor their spews to my life.  Right now it's up to me to follow Gina Trapani's Lifehacker suggestion, which is to use Pipes.
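To make the "filter" idea concrete: at its core, a Pipe pulls in one or more feeds, applies some rules, and emits a narrower feed.  Here's a toy sketch of that operation in Python using the third-party feedparser library (my own illustration; the URLs and keywords are placeholders, and this is obviously a tiny fraction of what Pipes does):

    # A toy "pipe": merge a few feeds and keep only the items that match my interests.
    # Requires the third-party feedparser library (pip install feedparser).
    import feedparser

    FEEDS = [
        'http://example.com/news/rss',   # placeholder feed URLs
        'http://example.org/blog/feed',
    ]
    KEYWORDS = ['analytics', 'marketing', 'yahoo']

    def filtered_items(feed_urls, keywords):
        for url in feed_urls:
            for entry in feedparser.parse(url).entries:
                text = (entry.get('title', '') + ' ' + entry.get('summary', '')).lower()
                if any(k in text for k in keywords):
                    yield entry.get('title', ''), entry.get('link', '')

    for title, link in filtered_items(FEEDS, KEYWORDS):
        print(title, '->', link)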

OK so let's review:

  • Valuable unsolved problem for customers / users: check.
  • Fragmented, undominated competitive space: check.
  • Yahoo! has credible assets / experience: check.
  • Marissa Mayer plays from position of strength and experience: check.
  • Advertisers willing to pay premium prices, in droves: ...

Well, let's look at this a bit.  I'd argue that a good filter is effectively a "passive search engine".  Basically through the filters people construct -- effectively "stored searches" -- they tell you what it is they are really interested in, and in what context and time they want it.  With cookie-based targeting under pressure on multiple fronts, advertisers will be looking for impression inventories that provide search-like value propositions without the tracking headaches.  Whoever can do this well could make major bank from advertisers looking for an alternative to the online ad biz Hydra (aka Google, Facebook, Apple, plus assorted minor others).

Savvy advertisers and publishers will pooh-pooh the idea that individual Pipemakers would be numerous enough or consistent enough on their own to provide the reach that is the reason Yahoo! is still in business.  But I think there are lots of ways around this.  For one, there's already plenty of precedent at other media companies for suggesting proto-Pipes -- usually called "channels"; Yahoo! calls them "sites" (example), and they have RSS feeds.  Portals like Yahoo!, major media like the NYT, and universities like Harvard suggest categories, offer pre-packaged RSS feeds, and even give you the ability to roll your own feed out of their content.  The problem is that it's all still marketed as RSS, which even in this day and age is a bit beyond most folks.  But if you find a more user-friendly way to "clone and extend" suggested Pipes, friends' Pipes, sponsored Pipes, etc., you've got a start.

Check?  Lots of hand-waving, I know.  But what's true is that Yahoo! has suffered from a loss of a clear identity.  And the path to re-growing its value starts with fixing that problem.

Good luck Marissa!

 

 

 

March 20, 2012

Organic Data Modeling in the Age of the Extrabase #analytics

Sorry for the buzzwordy title of this post, but hopefully you'll agree that sometimes they can be useful for communicating an important Zeitgeist.

I'm working with one of our clients right now to develop a new, advanced business intelligence capability that uses state-of-the-art in-memory data visualization tools like Tableau and Spotfire, and that will ultimately connect multiple data sets to answer a range of important questions.  I've also been involved recently in a major analysis of advertising effectiveness that included a number of data sources that were either external to the organization, or non-traditional, or both.  In both cases, these efforts are likely to evolve toward predictive models of behavior to help prioritize efforts and allocate scarce resources.

Simultaneously, today's NYT carried an article about Clear Story, a Silicon Valley startup that aggregates APIs to public data sources about folks, and provides a highly simplified interface to those APIs for analysts and business execs.  I haven't yet tried their service, so I'll save that for a separate post.  The point here is that the emergence of services like this represents an important step in the evolution of Web 2.0 -- call it Web 2.2 -- that's very relevant for marketing analytics in enterprise contexts.

So, what's significant about these experiences?

Readers of Ralph Kimball's classic Data Warehouse Toolkit will appreciate both the wisdom of his advice and how, today, the context for it has changed.  Kimball is absolutely an advocate for starting with a clear idea of the questions you'd like to answer and for making pragmatic choices about how to organize information to answer them.  However, the major editions of the book were written in a time when three things were true:

  • You needed to organize information more thoughtfully up front, because computing resources to compensate for poor initial organization were less capable and more expensive
  • The number of data sources you could integrate was far more limited, allowing you to be more definitive up front about the data structures you defined to answer your target questions
  • The questions themselves, or the range of possible answers to them, were more limited and less dynamic, because the market context was so as well

Together, these things made for business intelligence / data warehouse / data management efforts that were longer, and a bit more "waterfall" and episodic in execution.  However, over the past decade, many have critiqued such efforts for high failure rates, mostly cases in which they collapse under their own weight: too much investment, too much complexity, too few results.  Call this Planned Data Modeling.

Now back to the first experience I described above.  We're using the tools I mentioned to simultaneously hunt for valuable insights that will help pay the freight of the effort, define useful interfaces for users to keep using, and, through these efforts, also determine the optimal data structures we need underneath to scale from the few million rows in one big flat file we've started with to something that will no doubt be larger, more multi-faceted, and thus more complex.  In particular, we're using the ability of these tools to calculate synthetic variables on the fly out of the raw data to point the way toward summaries and indices we'll eventually have to develop in our data repository.  This improves the likelihood that the way we architect that repository will directly support real reporting and analysis requirements, prioritized based on actual usage in initial pilots, rather than speculative requirements obtained through more conventional means.  Call this Organic Data Modeling.
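A much-simplified sketch of the pattern, assuming pandas and made-up column names rather than our client's actual tools and schema:

    # Organic data modeling, in miniature: derive a synthetic variable on the fly
    # against the flat file, and only promote it to a materialized summary table
    # (and supporting indices) once pilot usage shows it earns its keep.
    import pandas as pd

    df = pd.read_csv('transactions_flat.csv')   # the "one big flat file" starting point

    # A synthetic variable computed on the fly, as the in-memory tools allow:
    df['revenue_per_touch'] = df['revenue'] / df['marketing_touches'].clip(lower=1)

    # If analysts keep coming back to this cut, it becomes a candidate summary
    # structure in the eventual data repository:
    summary = (df.groupby('customer_segment')['revenue_per_touch']
                 .agg(['mean', 'count'])
                 .sort_values('mean', ascending=False))
    print(summary)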

Further, the work we've done anticipates that we will be weaving together a number of new sources of data, many of them externally provided, and that we'll likely swap sources in and out as we find that some are more useful than others.  It occurred to me that this large, heterogeneous, and dynamic collection of data sources would have characteristics sufficiently different in terms of their analytic and administrative implications that a different name altogether might be in order for the sum of the pieces.  Hence, the Extrabase.

These terms aren't meant to dress up a cop-out.  In other words, some might say that mashing up a bunch of files in an in-memory visualization tool could reflect, and further contribute to, a lack of intellectual discipline and wherewithal to get it right.  In our case, we're hedging that risk by having the data modelers responsible for figuring out the optimal data repository structure work extremely closely with the "front-end" analysts, so that as potential data structure implications flow out of the rubber-meets-the-road analysis, we're able to sift them and decide which should stick and which we can ignore.

But, as they say sometimes in software, "that's a feature, not a bug."  Meaning, mashing up files in these tools and seeing what's useful is a way of paying for and disciplining the back end data management process more rigorously, so that what gets built is based on what folks actually need, and gets delivered faster to boot.

March 12, 2012

#SXSW Trip Report Part 2: Being There

(See here for Part 1)

Here's one summary of the experience that's making the rounds:

 

[Image: Missing sxsw]

 

I wasn't able to be there all that long, but my impression was different.  Men of all colors (especially if you count tattoos), and lots more women (many tattooed also, and extensively).   I had a chance to talk with Doc Searls (I'm a huge Cluetrain fan) briefly at the Digital Harvard reception at The Parish; he suggested (my words) the increased ratio of women is a good barometer for the evolution of the festival from narcissistic nerdiness toward more sensible substance.  Nonetheless, on the surface, it does remain a sweaty mosh pit of digital love and frenzied networking.  Picture Dumbo on spring break on 6th and San Jacinto.  With light sabers:

 

[Image: SXSW light sabers]

 

Sight that will haunt my dreams for a while: a VC-looking guy, blazer and dress shirt, in a pedicab piloted by a skinny, grungy student (?).  Dude, learn Linux, and your next tip from The Man at SXSW might just be a term sheet.

So whom did I meet, and what did I learn?

I had a great time listening to PRX.org's John Barth.  The Public Radio Exchange aggregates independent content suitable for radio (think The Moth), adds valuable services like consistent content metadata and rights management, and then acts as a distribution hub for stations that want to use it.  We talked about how they're planning to analyze listenership patterns with that metadata and other stuff (maybe gleaning audience demographics via Quantcast) for shaping content and targeting listeners.  He related, for example, that stations seem to prefer either one-hour programs they can use to fill standard-sized holes, or two- to seven-minute segments they can weave into pre-existing programs.  Documentary-style shows that weave music and informed commentary together are especially popular.  We explored whether production templates ("structured collaboration": think "Mad Libs" for digital media) might make sense.  Maybe later.

Paul Payack explained his Global Language Monitor service to me, and we explored its potential application as a complement if not a replacement for episodic brand trackers.  Think of it as a more sophisticated and source-ecumenical version of Google Insights for Search.

Kara Oehler's presentation on her Mapping Main Street project was great, and it made me want to try her Zeega.org service (a Harvard metaLAB project) as soon as it's available, to see how close I can get to replicating The Yellow Submarine for my son, with other family members spliced in for The Beatles.  Add it to my list of other cool projects I like, such as mrpicassohead.

Peter Boyce and Zach Hamed from Hack Harvard, nice to meet you. Here's a book that grew out of the class at MIT I mentioned -- maybe you guys could cobble together an O'Reilly deal out of your work!

Finally,  congrats to Perry Hewitt (here with Anne Cushing) and all her Harvard colleagues on a great evening!

 

[Image: Perry Hewitt and Anne Cushing]

 

 

January 15, 2011

Lifetime Learning

A lovely Saturday:

[Image: Snow]

A perfect day for some refreshment:

[Image: Howispentmyweekend2]

Studying http://philip.greenspun.com/teaching/rdbms-iap-2011

Why?  (And, why now?)  Relational databases and SQL have been around for forty years.  Yet, no reasonable business person would disagree that:

1. it's useful to know how to use spreadsheet software, both to DIY and manage others who do;

2. there's much more information out there today;

3. harnessing this information is not only advantageous but essential;

4. more powerful tools like database management systems are necessary for this.

Therefore, business people should know a little bit about these more powerful tools, to continue to be considered reasonable.
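To make the point concrete, here's a toy example of my own (not from the course): the kind of question that gets painful in a spreadsheet as the data grows, asked in SQL via Python's built-in sqlite3 module.

    # A toy example of the step up from spreadsheet to database:
    # group-and-summarize a table with SQL, using Python's built-in sqlite3 module.
    import sqlite3

    conn = sqlite3.connect(':memory:')
    conn.execute('CREATE TABLE orders (customer TEXT, region TEXT, amount REAL)')
    conn.executemany('INSERT INTO orders VALUES (?, ?, ?)', [
        ('Acme', 'East', 1200.0),
        ('Acme', 'West', 800.0),
        ('Birch', 'East', 450.0),
        ('Cedar', 'West', 2300.0),
    ])

    # Total and average order size by region, largest total first.
    query = ('SELECT region, SUM(amount), AVG(amount) '
             'FROM orders GROUP BY region ORDER BY SUM(amount) DESC')
    for region, total, average in conn.execute(query):
        print(region, total, average)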

January 06, 2011

#Google Search and The Limits of #Location

I broke my own rule earlier today and twitched (that's tweeted+*itched -- you read it here first) an impulsive complaint about how Google does not allow you to opt out of having it consider your location as a relevance factor in the search results it offers you:

[Image: Epic fail]

I don't take it back.  But, I do think I owe a constructive suggestion for how this could be done, in a way that doesn't compromise the business logic I infer behind this regrettable choice.  Plus, I'll lay out what I infer this logic to be, and the drivers for it, in the hope that someone can improve my understanding.  Finally, I'll lay out some possible options for SEO in an ever-more-local digital business context.

OK, first, here's the problem.  In one client situation I'm involved with, we're designing an online strategy with SEO as a central objective.  There are a number of themes we're trying to optimize for.  One way you improve SEO is to identify the folks who rank / index highly on terms you care about, and cultivate a mutually valuable relationship in which they eventually may link to relevant content you have on a target theme.  To get a clean look at who indexes well on a particular theme and related terms, you can de-personalize your search.  You do this with a little URL surgery:

Start with the search query:

http://www.google.com/search?q=[theme]

Then graft on a little string to depersonalize the query:

http://www.google.com/search?q=[theme]&pws=0
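Scripted, the same surgery looks like this (a trivial sketch of my own; pws=0 does exactly what's described here and nothing more):

    # Build a de-personalized Google query URL by appending pws=0.
    from urllib.parse import urlencode   # urllib.urlencode on Python 2

    def depersonalized_search_url(theme):
        return 'http://www.google.com/search?' + urlencode({'q': theme, 'pws': 0})

    print(depersonalized_search_url('law'))
    # http://www.google.com/search?q=law&pws=0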

Now, when I did this, I noticed that Google was still showing me local results.  These usually seem less intrusive.  But now, like some invasive weed, they'd choked off my results, reaching as high as the third position and clogging up most of the rest of the first page, for a relatively innocuous term ("law"; lots of local law firms, I guess).

Then I realized that "&pws=0" tells Google to stop rummaging around in the cookies it's set on my browser, plus other information in my HTTP requests, but won't help me prevent Google from guessing / using my location, since that's based on the location of the ISP's router between my computer and the Google cloud.

 Annoyed, I poked around to see what else I could do about it.  Midway down the left-hand margin of the search results page, I noticed this:

[Image: Google Search Location Control]

 

So naturally, my first thought was to specify "none", or "null", to see if I could turn this off.  No joy. 

Next, some homework to see if there's some way to configure my way out of this.  That led me to Rishi's post (see the third answer, dated 12/2/2010, to the question).  

Unable to believe that an organization with as fantastic a UI aesthetic -- that is to say, functional / usable in the extreme -- as Google would do this, I probed further.

First stop: Web Search Help.  The critical part:

Q. Can I turn off location-based customization?

A. The customization of search results based on location is an important component of a consistent, high-quality search experience. Therefore, we haven't provided a way to turn off location customization, although we've made it easy for you to set your own location or to customize using a general location as broad as the country that matches your local domain...

Ah, so, "It's a feature, not a bug." :-)

...If you find that your results for a particular search are more local than what you're looking for, you can set your location to a broader geographical area (such as a country instead of a city, zip code, or street address). Please note that this will greatly reduce the amount of locally relevant results that you’ll see. [emphasis mine]

 Exactly!  So I tried to game the system:

[Image: Google Search Location Control, set to "world"]

Drat!  Foiled again.  Ironic, this "Location not recognized" -- from the people who bring us Google Earth!

Surely, I thought, some careful consideration must have gone into turning the Greatest Tool The World Has Ever Known into the local Yellow Pages.  So, I checked the Google blog.  A quick search there for "location", and presto, this. Note that at this point, February 26, 2010, it was still something you could add.  

Later, on October 18, 2010 -- where I have I been? -- this, which effectively makes "search nearby" non-optional:

We’ve always focused on offering people the most relevant results. Location is one important factor we’ve used for many years to customize the information that you find. For example, if you’re searching for great restaurants, you probably want to find ones near you, so we use location information to show you places nearby.

Today we’re moving your location setting to the left-hand panel of the results page to make it easier for you to see and control your preferences. With this new display you’re still getting the same locally relevant results as before, but now it’s much easier for you to see your location setting and make changes to it.

(BTW, is it just me, or is every Google product manager a farmer's-market-shopping, restaurant-hopping foodie?  Just sayin'... but I seriously wonder how much designers' own demographic biases end up influencing assumptions about users' needs and product execution.)

Now, why would Google care so much about "local" all of a sudden?  Is it because Marissa Mayer now carries a torch for location (and Foursquare especially)?  Maybe.  But it's also a pretty good bet that it's at least partly about the Benjamins.  From the February Google post, a link to a helpful post on SocialBeat, with some interesting snippets: 

"Location may get a central place in Google’s web search redesign"

Google has factored location into search results for awhile without explicitly telling the user that the company knows their whereabouts. It recently launched ‘Nearby’ search in February, returning results from local venues overlaid on top of a map.

Other companies also use your IP address to send you location-specific content. Facebook has long served location-sensitive advertising on its website while Twitter recently launched a feature letting users geotag where they are directly from the site. [emphasis mine]

Facebook's stolen a march on Google in the social realm (everywhere but Orkut-crazed Brazil; go figure).  Twitter's done the same to Google on the real-time front.  Now, Groupon's pay-only-for-real-sales-and-then-only-if-the-volumes-justify-the-discount model threatens the down-market end of Google's pay-per-click business with a better mousetrap, from the small-biz perspective.  (BTW, that's why Groupon's worth $6 billion all of a sudden.)  All of these have increasingly (and in Groupon's case, dominantly) local angles, where the value to both advertiser and publisher (Facebook / Twitter / Groupon) is presumably highest.

Ergo, Google gets more local.  But that's just playing defense, and Eric, Sergey, Larry, and Marissa are too smart (and, with $33 billion in cash on hand, too rich) to do just that.

Enter Android.  Hmm.  Just passed Apple's iOS and now is running the table in the mobile operating system market share game.  Why wouldn't I tune my search engine to emphasize local search results, if more and more of the searches are coming from mobile devices, and especially ones running my OS?  Yes, it's an open system, but surely dominating it at multiple layers means I can squeeze out more "rent", as the economists say?

The transcript of Google's Q3 earnings call is worth a read.

Now, back to my little problem.  What could Google do that would still serve its objective of global domination through local search optimization, while satisfying my nerdy need for "de-localized" results?  The answer's already outlined above -- just let me type in "world", and recognize it for the pathetic niche plea that it is.  Most folks will never do this, and this blog's not a bully-enough pulpit to change that. Yet.

The bigger question, though, is how to do SEO in a world where it's all location, location, location, or as SEOmoz writes

"Is Every Query Local Now?" 

Location-based results raise political debates, such as "this candidate is great" showing up as the result in one location while "this candidate is evil" in another.  Location-based queries may increase this debate.  I need only type in a candidate's name and Instant will tell me what is the prevailing opinion in my area.  I may not know if that area is the size of a city block or the entire world, but if I am easily influenced then the effect of the popular opinion has taken one step closer (from search result to search query) to the root of thought.  The philosophers among you can debate whether or not the words change the very nature of ideas.

Heavy.

OK, never leave without a recommendation.  Here are two:

First, consider that for any given theme, some keywords might be more "local" than others.  Under the theme "Law", the keyword "law" will dredge up a bunch of local law firms.  But another keyword, say "legal theory", is less likely to have that effect (until discussing that topic in local indie coffee shops becomes popular, anyway).  So you might explore re-optimizing for these less-local alternatives.  (Here's an idea: some enterprising young SEO expert might build a web service that would, for any "richly local" keyword, suggest less-local alternatives from a crowd-sourced database compiled by angry folks like me.  Sort of a "de-localization thesaurus".  Then, eventually, sell it to a big ad agency holding company.)
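Purely to illustrate that last idea, here's a toy sketch; the hard-coded mapping stands in for the crowd-sourced database, which of course is the part that would actually be worth something:

    # A toy "de-localization thesaurus": for a keyword that drags in local results,
    # suggest broader alternatives. The mapping is hard-coded for illustration only;
    # the imagined service would build it from crowd-sourced submissions.
    LESS_LOCAL_ALTERNATIVES = {
        'law': ['legal theory', 'jurisprudence'],
        'restaurants': ['culinary trends', 'restaurant industry'],
    }

    def delocalize(keyword):
        # Fall back to the original keyword if we have no broader suggestion.
        return LESS_LOCAL_ALTERNATIVES.get(keyword.lower(), [keyword])

    print(delocalize('law'))   # ['legal theory', 'jurisprudence']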

Second, as location kudzu crawls its way up Google's search results, there's another phenomenon happening in parallel.  These days, for virtually any major topic, the Wikipedia entry for it sits at or near the top of Google's results.  So, if, as with politics, search and SEO are now local too, and therefore much harder to play, why not shift your optimization efforts to the place the odds-on top Google result will take you, if theme leadership is a strategic objective?

 

PS: Google, I still love you.  Especially because you know where I am.

 

March 13, 2010

Fly-By-Wire Marketing, Part II: The Limits Of Real Time Personalization

A few months ago I posted on what I called "Fly-By-Wire Marketing", or the emergence of the automation of marketing decisions -- and sometimes the automation of the development of rules for guiding those decisions.

More recently Brian Stein introduced me to Hunch, the new recommendation service founded by Caterina Fake of Flickr fame.  (Here's their description of how it works.  Here's my profile; I'm just getting going.)  When you register, you answer questions to help the system get to know you.  When you ask for a recommendation on a topic, the system not only considers what others have recommended under different conditions, but also what you've told it about you, and how you compare with others who have sought advice on the subject.

It's an ambitious service, both in terms of its potential business value (as an affiliate on steroids) and in terms of its technical approach to "real time personalization".  Via Sim Simeonov's blog, I read this GigaOm post by Tom Pinckney, a Hunch co-founder and their VP of Engineering.  Sim's comment sparked an interesting comment thread on Tom's post.  They're useful to read to get a feel for the balance between pre-computation and on-the-fly computation, as well as the advantages of and limits to the large pre-existing data sets about user preferences and behavior, that go into these services today.

One thing neither post mentions is that there may be diminishing returns to increasingly powerful recommendation logic if the set of things from which a recommendation can ultimately be selected is limited at a generic level.  For example, take a look at Hunch's recommendations for housewarming gifts.  The results more or less break down into wine, plants, media, and housewares.  Beyond this level, I'm not sure the answer is improved by "the wisdom of Hunch's crowd" or "Hunch's wisdom about me", as much as my specific wisdom about the person for whom I'm getting the gift, or maybe by what's available at a good price. (Perhaps this particular Hunch "topic" could be further improved by crossing recommendations against the intended beneficiary's Amazon wish list?)

My point isn't that Hunch isn't an interesting or potentially useful service.  Rather, as I argued several months ago,

The [next] question you ask yourself is, "How far down this road does it make sense for me to go, by when?"  Up until recently, I thought about this with the fairly simplistic idea that there are single curves that describe exponentially decreasing returns and exponentially increasing complexity.  The reality is that there are different relationships between complexity and returns at different points -- what my old boss George Bennett used to call "step-function" change.

For me, the practical question-within-a-question this raises is, for each of these "step-functions", is there a version of the algorithm that's only 20% as complex, that gets me 80% of the benefit?  My experience has been that the answer is usually "yes".  But even if that weren't the case, my approach in jumping into the uncharted territory of a "step-function" change in process, with new supporting technology and people roles, would be to start simple and see where that goes.

At minimum, given the "step-function" economics demonstrated by the Demand Medias of the world, I think senior marketing executives should be asking themselves, "What does the next 'step-function' look like?", and "What's the simplest version of it we should be exploring?" (Naturally, marketing efforts in different channels might proceed down this road at different paces, depending on a variety of factors, including the volume of business through that channel, the maturity of the technology involved, and the quality of the available data...)

Hunch is an interesting specific example of the increasingly broad RTP trend.  The NYT had an interesting article on real time bidding for display ads yesterday, for example.  The deeper issue in the trend I find interesting is the shift in power and profit toward specialized third parties who develop the capability to match the right cookie to the right ad unit (or, for humans, the right user to the right advertiser), and away from publishers with audiences.  In the case of Hunch, they're one and the same, but they're the exception.  How much of the increased value advertisers are willing to pay for better targeting goes to the specialized provider with the algorithm and the computing power, versus the publisher with the audience and the data about its members' behavior?  And for that matter, how can advertisers better optimize their investments across the continuum of targeting granularity?  Given the dollars now flooding into digital marketing, these questions aren't trivial.

March 12, 2010

#Adobe: Duct Tape for the "Splinternet"

(Previously titled: "Adobe: Up In The Air")

As folks line up for the iPad, SXSW rages, and the Splinternet splinters, if you own a smartphone or plan to own one, or a tablet, or if you're about to commission an app for one of these platforms, this post is for you.

A couple of years ago, Adobe seemed to have positioned itself smartly for global domination.  The simple logic:

  • Online experiences becoming richer
  • Adobe makes tools for rich experiences (Flash, Flex, Air)
  • Ergo, Adobe becomes richer

Or for you Mondrian fans, the visual version of Adobe's "All Mine!"

[Image: A1]

Oh that it were that simple.  So, Apple, also vaguely interested in rich immersive experiences as its path out of the hip hardware niche toward intergalactic domination, plays the digital Soup Nazi: "No Flash support for you!"  Again, for the Visualistas:

[Image: A2]
The nerve!  As if that weren't bad enough, there are those pesky evolving standards to stay ahead of.  HTML 5 now rides into town to save the Internet garden from the weedy assault of proprietary browser plugins (Flash, Gears, Silverlight) for supporting rich experiences (read as: experiences that need more client-side processing and storage than HTML 4-era browsers could offer).  Like any abstraction, it has performance compromises.  But, with powerful friends behind it sharing an interest in taking down the de facto rich-experience standard -- Flash is on basically every non-mobile browser out there -- HTML 5 will get better, if, like any standard, slowly.  The picture:

[Image: A3]
For you conspiracy theorists, a Smoking Gun:

[Image: A5]

Now, those Adobe folks are pretty smart too, and they aren't sitting still.  Basically, their strategy amounts to two things:

  • "Duct tape for the Splinternet", aka "Son Of Java" -- the ability to develop your app in their tools, and then compile them for whatever platform you'd like to publish to, including Apple's.  Remember "Write once, run anywhere?"  Of course you give a few things up -- some features, some performance.  But if you're a publisher, pretty tempting! 
  • Do what Microsoft did a few years ago -- "Embrace and Extend".  Basically, agree to play nicely with the non-threatening parts of the HTML 5 spec while continuing to extend the feature set and performance of Flash so it's preferred on the margin as the environment for the coolest rich experiences. For example, one way -- now that Adobe owns Omniture -- to extend the feature set might be to embed analytic tracking into the application layer.

Here's a good interview Rob Scoble did with the Adobe guys where they explain all this in 22 minutes.  Here's my graphic translation of the interview:

[Image: A4]
 
A while ago I wrote a post on strategy in the software business that forms the frame for how I try to understand what's happening.  I think it still makes sense, but I'm eager to hear suggestions for improving it!

So what?  What does this mean for the publishers who are trying to figure out how to respond to the Splinternet?  I think it makes sense, as always, to start with The User.  Is what you are trying to do for him or her sufficiently exotic (and rewardably so) that you need the unique capabilities of each smartphone's / tablet's native OS / SDK?  Or is the idea sufficiently "genius" that you don't need to tart it up with whizziness, and can accept certain limitations in exchange for "Write once, run anywhere?"

I'd predict that Adobe will make common cause with some hardware manufacturer(s) -- HP, anyone?  It will be interesting to see what Adobe's willing to trade off for that support.

Where's Microsoft in all this?