16 posts categorized "Current Affairs"

October 13, 2013

Unpacking Healthcare.gov

So healthcare.gov launched, with problems.  I'm trying to understand why, so I can apply some lessons in my professional life.  Here are some ideas.

First, I think it helps to define some levels of the problem.  I can think of four:

1. Strategic / policy level -- what challenges do the goals we set create?  In this case, the objective is basically two-fold: first, reduce the costs of late-stage, high-cost uncompensated care by enrolling the people who ultimately use that care (middle-aged poor folks and other unfortunates) in health insurance that will get them care earlier and reduce stress / improve outcomes (for them and for society) later; second, reduce the cost of this insurance through exchanges that drive competition.  So, basically, bring a bunch of folks from, in many cases, the wrong side of the Digital Divide, and expose them to a bunch of eligibility- and choice-driven complexity (proof: the need for "Navigators"). Hmm.  (Cue the folks who say that's why we need a simple single-payor model, but the obvious response is that it simply wasn't politically feasible.  We need to play the cards we're dealt.)

2. Experience level -- In light of that need, let's examine what the government did do for each of the "Attract / Engage / Convert / Retain" phases of a Caveman User Experience.  It did promote ACA -- arguably insufficiently, or not creatively enough to distinguish itself from the opposing signals it should have anticipated (one take here).  But more problematically, from what I can tell, the program skips "Engage" and emphasizes "Convert": Healthcare.gov immediately asks you to "Apply Now" (see screenshot below, where "Apply Now" is prominently featured over "Learn More", even on the "Learn" tab of the site). This is technically problematic (see #3 below), but it's also a lot to ask, experientially, when you don't yet know what's behind the curtain.

[Screenshot: healthcare.gov home page, with "Apply Now" featured over "Learn More"]
3. Technical level -- Excellent piece in the Washington Post by Timothy B. Lee. Basically, the system tries to do an eligibility check (for participation and subsidies) before sending you on to enrollment.  Doing this requires checking a bunch of other government systems.  The flowchart explains very clearly why this could be problematic (a quick arithmetic illustration follows this list).  There are some front-end problems as well, described in rawest form by some of the chatter on Reddit, but from what I've seen these are more superficial, a function of poor process / time management, and fixable.

4. Organizational level -- Great article here in Slate by David Auerbach. Basically, poor coordination structure and execution by HHS of the front and back ends.
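To make #3 concrete: when eligibility requires live, serial checks against several upstream systems, their availabilities multiply, so even good per-system uptime erodes quickly end to end. A toy illustration in Python (the uptime figure and system count are made up for the example):

```python
# Why chained, synchronous eligibility checks are fragile: per-system
# availabilities multiply across the chain.
per_system_uptime = 0.99      # assumed for illustration
systems_checked = 5           # illustrative count of upstream systems (IRS, SSA, DHS, ...)
end_to_end = per_system_uptime ** systems_checked
print(round(end_to_end, 3))   # ~0.951 -> roughly 1 request in 20 hits a failure somewhere
```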

Second, here are some things HHS might do differently:

1. Strategic level: Sounds like some segmentation of the potential user base would have suggested a much greater investment in explanation / education, in advance of registration.  Since any responsible design effort starts with users and use cases, I'm sure they did this.  But what came out the other end doesn't seem to reflect that.  What bureaucratic or political considerations got in the way, and what can be revisited, to improve the result? Or, instead of allowing political hacks to infiltrate and dominate the ranks of engineers trying to design a service that works, why not embed competent technologists, perhaps drawn from the ranks of Chief Digital Officers, into the senior political ranks, to advise them on how to get things right online?

2. Experience level: Perhaps the first couple of levels of experience on healthcare.gov should have been explanatory?  "Here's what to expect, here's how this works..." Maybe video (could have used YouTube!)? Maybe also ask a couple of quick anonymous questions to determine whether the eligibility / subsidy check would be relevant, to spare the load on that engine, before seeing what plans might be available, at what price?  You could always re-ask / confirm that data later once the user's past the shopping /evaluation stage, before formally enrolling them into a plan.  In ecommerce, we don't ask untargeted shoppers to enter discount codes until they're about to check out, right?

Or, why not pre-process and cache the answer to the eligibility question the system currently tries to calculate on the fly?  After all, the government already has all our social security numbers and green card numbers, and our tax returns.  So by the time any of us go to the site, it could have pre-determined the size of any subsidy we'd be eligible for, and it could have used this *estimated* subsidy to calculate a *projected* premium we might pay.  We'd need a little registration / security, maybe "enter your last name and social security number, and if they match we'll tell you your estimated subsidy". (I suppose returning a subsidy answer would confirm for a crook who knows my last name that he had my correct SSN, but maybe we could prevent the brute-force querying this requires with a CAPTCHA. Security friends, please advise.  Naturally, I'd make sure the pre-cached lookup file stays server-side, and isn't exposed as an array in a client-side JavaScript snippet!)  A rough sketch of this idea follows at the end of this list.

3. I see from viewing the page source that they have Google Tag Manager running, so perhaps they also have Google Analytics running, alongside whatever else...  Since they've open-sourced the front end code and their content on GitHub, maybe they could also share what they're learning via GA, so we could evaluate ideas for improving the site in the context of that data?

4. It appears they are using Optimizely to test / optimize their pages (javascript from page source here).  While the nice pictures of smiling people may be optimal, there's plenty of research suggesting that by pushing many of the links to site content below the fold, and forcing us to scroll to see them, they might be burying the very resources the "experience perspective" I've described suggests they need to highlight.  So maybe this layout is in fact what maximizes the result they're looking for -- pressing the "Apply Now" button -- but maybe that's the wrong question to be asking!
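And to make the pre-computed subsidy idea in #2 concrete, here's a minimal Python sketch, with hypothetical names and toy data; the real eligibility rules, data sources, and security requirements would obviously be far more involved:

```python
# Hypothetical sketch of a pre-computed, server-side subsidy lookup.
# A nightly batch job is assumed to have already joined the relevant data.
import hashlib

SALT = b"replace-with-a-real-secret"   # so the cache file isn't a raw PII dump
ESTIMATES = {}                          # hash(last name + SSN) -> estimated monthly subsidy ($)

def _key(last_name: str, ssn: str) -> str:
    raw = (last_name.strip().lower() + ssn.replace("-", "")).encode()
    return hashlib.sha256(SALT + raw).hexdigest()

def precompute(records):
    """Batch step: records is an iterable of (last_name, ssn, estimated_subsidy)."""
    for last_name, ssn, subsidy in records:
        ESTIMATES[_key(last_name, ssn)] = subsidy

def lookup(last_name: str, ssn: str):
    """Runtime step: one O(1) lookup, no live calls to other government systems.
    Rate-limit / CAPTCHA this endpoint to blunt brute-force SSN guessing,
    and keep ESTIMATES server-side -- never ship it to the browser."""
    return ESTIMATES.get(_key(last_name, ssn))   # None -> "no estimate available"

precompute([("Doe", "123-45-6789", 250.0)])      # toy data
print(lookup("doe", "123456789"))                # 250.0
```

The point is just that the expensive reconciliation against other agencies' data happens offline, so the user-facing path is a single cheap lookup.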

Postscript, November 1:

Food for thought (scroll to bottom).  How does this happen?  Software engineer friends, please weigh in!

 

June 12, 2013

Privacy vs. Security Survey Interim Results #prism #analytics

This week, one of the big news items is the disclosure of the NSA's Prism program that collects all sorts of our electronic communications, to help identify terrorists and prevent attacks.

I was struck by three things.  One is the recency bias in the outrage expressed by many people.  Not sixty days ago we were all horrified at the news of the Boston Marathon bombings.  Another is the polarization of the debate.  Consider the contrast the Hullabaloo blog draws between "insurrectionists" and "institutionalists".  The third was the superficial treatment of the tradeoffs folks would be willing to make.  Yesterday the New York Times Caucus blog published the results of a survey that suggested most folks are fence-sitters on the tradeoff between privacy and security, but left it more or less at that.  (The Onion wasn't far behind with a perfect send-up of the ambivalence we feel.)

In sum, biased decision-making based on excessively simplified choices using limited data.  Not helpful. Better would be a more nuanced examination of the tradeoff between the privacy you would be willing to give up and the potential lives saved.  I see this opportunity to improve decision making a lot, and I thought this would be an interesting example to illustrate how framing and informing an issue differently can help.  So I posted this survey: https://t.co/et0Bs0OrKF

Here are some early results from twelve folks who kindly took it (please feel free to add your answers, if I get enough more I'll update the results):

[Chart: privacy vs. security tradeoff survey responses]

(Each axis is a seven point scale, 1 at lowest and 7 at highest.  Bubble size = # of respondents who provided that tradeoff as their answer.  No bubble / just label = 1 respondent, biggest bubble at lower right = 3 respondents.)

Interesting distribution, tending slightly toward folks valuing (their own) privacy over (other people's) security.
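For anyone who wants to reproduce this kind of chart from their own survey export, here's a small matplotlib sketch; the response pairs below are invented stand-ins, not the actual survey data:

```python
# Illustrative bubble chart: 1-7 answers on each axis, bubble area ~ respondent count.
# The response pairs are invented stand-ins, not the actual survey data.
from collections import Counter
import matplotlib.pyplot as plt

responses = [(2, 6), (3, 5), (4, 4), (4, 4), (5, 3), (6, 2), (6, 2), (6, 2),
             (1, 7), (3, 3), (5, 5), (7, 1)]          # (privacy, security) pairs

counts = Counter(responses)
xs, ys = zip(*counts.keys())
sizes = [300 * n for n in counts.values()]            # bubble area scales with count

plt.scatter(xs, ys, s=sizes, alpha=0.5)
plt.xlim(0.5, 7.5); plt.ylim(0.5, 7.5)
plt.xlabel("Privacy you'd give up (1-7)")
plt.ylabel("Weight on others' security (1-7)")
plt.title("Privacy vs. security tradeoff (illustrative data)")
plt.show()
```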

Now my friend and business school classmate Sam Kinney suggested this tradeoff was a false choice.  I disagreed with him. But the exchange did get me to think a bit further.  More data isn't necessarily linear in its benefits.  It could have diminishing returns of course (as I argued in Pragmalytics) but it could also have increasing value as the incremental data might fill in a puzzle or help to make a connection.  While that relationship between data and safety is hard for me to process, the government might help its case by being less deceptive and more transparent about what it's collecting, and its relative benefits.  It might do this, if not for principle, then for the practical value of controlling the terms of the debate when, as David Brooks wrote so brilliantly this week, an increasingly anomic society cultivates Edward Snowdens at an accelerating clip.

I'm skeptical about the value of this data for identifying terrorists and preventing their attacks.  Any competent terrorist network will use burner phones, run its own email servers, and communicate in code.  But maybe the data surveillance program has value because it raises the bar to this level of infrastructure and process, and thus makes it harder for such networks to operate.

I'm not concerned about the use of my data for security purposes, especially not if it can save innocent boys and girls from losing limbs at the hands of sick whackos.  I am really concerned it might get reused for other purposes in ways I don't approve, or by folks whose motives I don't approve, so I'm sure we could improve oversight, not only for what data gets used how, but of the vast, outsourced, increasingly unaccountable government we have in place. But right now, against the broader backdrop of gridlock on essentially any important public issue, I just think the debate needs to get more utilitarian, and less political and ideological.  And, I think analytically-inclined folks can play a productive role in making this happen.

(Thanks to @zimbalist and @perryhewitt for steering me to some great links, and to Sam for pushing my thinking.)

May 19, 2013

@nathanheller #MOOCs in The New Yorker: You Don't Need A Weatherman

The May 20th 2013 edition of The New Yorker has an article by Vogue writer Nathan Heller on Massive Online Open Courses (MOOCs) titled "Laptop U: Has the future of college moved online?"  The author explores, or at least raises, a number of related questions.  How (well) does the traditional offline learning experience transfer online?  Is the online learning experience more or less effective than the traditional one? (By what standard? For what material?  What is gained and lost?)  What will MOOCs mean for different colleges and universities, and their faculties?  How will the MOOC revolution be funded?  (In particular, what revenue model will emerge?)

Having worked a lot in the sector, for both public and private university clients -- developing everything from technology, to online-enabled programs themselves, to analytic approaches, and even marketing and promotion -- I found the article a good prompt to try to boil out some ways to think about answering these questions.

The article focuses almost exclusively on Harvard and EdX, the 12-school joint venture through which it's pursuing MOOCs.  Obviously this skews the evaluation.  Heller writes:

Education is a curiously alchemical process. Its vicissitudes are hard to isolate.  Why do some students retain what they learned in a course for years, while others lose it through the other ear over their summer breaks?  Is the fact that Bill Gates and Mark Zuckerberg dropped out of Harvard to revolutionize the tech industry a sign that their Harvard educations worked, or that they failed?  The answer matters, because the mechanism by which conveyed knowledge blooms into an education is the standard by which MOOCs will either enrich teaching in this country or deplete it.

For me, the first step to boiling things out is to define what we mean by -- and want from -- an "education".  So, let's try to unpack why people go to college.  In most cases, Reason One is that you need a degree to get any sort of decent job.  Reason Two is to plug into a network of people -- fellow students, alumni, faculty -- that provide you a life-long community.  Of course you need a professional community for that Job thing, but also because in an otherwise anomic society you need an archipelago to seed friendships, companionships, and self-definition (or at least, as scaffolding for your personal brand: as one junior I heard on a recent college visit put it memorably, "Being here is part of the personal narrative I'm building.")  Reason Three -- firmly third -- is to get an "education" in the sense that Heller describes.  (Apropos: check this recording of David Foster Wallace's 2005 commencement address at Kenyon College.) 

This hierarchy of needs then gives us a way to evaluate the prospects for MOOCs.

If organization X can produce graduates demonstrably better qualified (through objective testing, portfolios of work, and experience) to do job Y, at a lower cost, then it will thrive.  If organization X can do this better and cheaper by offering and/or curating/ aggregating MOOCs, then MOOCs will thrive.  If a MOOC can demonstrate an adequately superior result / contribution to the end outcome, and do it inexpensively enough to hold its place in the curriculum, and do it often enough that its edge becomes a self-fulfilling prophecy -- a brand, in other words -- then it will crowd out its competitors, as surely as one plant shuts out the sunlight to another.  Anyone care to bet against Georgia Tech's new $7K Master's in Computer Science?

If a MOOC-mediated social experience can connect you to a Club You Want To Be A Member Of, you will pay for that.  And if a Club That Would Have You As A Member can attract you to its clubhouse with MOOCs, then MOOCs will line the shelves of its bar.  The winning MOOC cocktails will be the ones that best produce the desired social outcomes, with the greatest number of satisfying connections.

Finally, learning is as much about the frame of mind of the student as it is about the quality of the teacher.  If through the MOOC the student is able to choose a better time to engage, and can manage better the pace of the delivery of the subject matter, then the MOOC wins.

Beyond general prospects, as you consider these principles, it becomes clear that it's less about whether MOOCs win, but which ones, for what and for whom, and how.  

The more objective and standardized -- and thus measurable and comparable -- the learning outcome and the standard of achievement, the greater the potential for a MOOC to dominate. My program either works, or it doesn't.  

If a MOOC facilitates the kinds of content exchanges that seed and stimulate offline social gatherings -- pitches to VCs, or mock interviewing, or poetry, or dance routines, or photography, or music, or historical tours, or bird-watching trips, or snowblower-maintenance workshops -- then it has a better chance of fulfilling the longings of its students for connection and belonging.  

And, the more well-developed the surrounding Internet ecosystem (Wikipedia, discussion groups, Quora forums, and beyond) is around a topic, the less I need a Harvard professor, or even a Harvard grad student, to help me, however nuanced and alchemical the experience I miss might otherwise have been.  The prospect of schlepping to class or office hours on a cold, rainy November night has a way of diluting the urge to be there live in case something serendipitous happens.

Understanding how MOOCs win then also becomes a clue to understanding potential revenue models.  

If you can get accredited to offer a degree based in part or whole on MOOCs, you can charge for that degree, and get students or the government to pay for it (Exhibit A: University of Phoenix).  That's hard, but as a variant of this, you can get hired by an organization, or a syndicate of organizations you organize, to produce tailored degree programs -- think corporate training programs on steroids -- that use MOOCs to filter and train students.  (Think "You, Student, pay for the 101-level stuff; if you pass you get a certificate and an invitation to attend the 201-level stuff that we fund; if you pass that we give you a job.")  

Funding can come directly, or be subsidized by sponsors and advertisers, or both.  

You can try to charge for content: if you produce a MOOC that someone else wants to include in a degree-based program, you can try to license it, in part or in whole.  

You can make money via the service angle, the way self-publishing firms support authors, with a variety of best-practice based production services.  Delivery might be offered via a freemium model -- the content might be free, but access to premium groups, with teaching assistant support, might come at a price.  You can also promote MOOCs -- build awareness, drive distribution, even simply brand  -- for a cut of the action, the way publishers and event promoters do.  

Perhaps in the not-too-distant future we'll get the Academic Upfront, in which Universities front a semester's worth of classes in a MOOC, then pitch the class to sponsors, the way TV networks do today. Or, maybe the retail industry offers a window into how MOOCs will be monetized.  Today's retail environment is dominated by global brands (think professors as fashion designers) and big-box (plus Amazon) firms that dominate supply chains and distribution networks.  Together, Brands and Retailers effectively act as filters: we make assumptions that the products on their shelves are safe, effective, reasonably priced, acceptably stylish, well-supported.  In exchange, we'll pay their markup.  This logic sounds a cautionary note for many schools: boutiques can survive as part of or at the edges of the mega-retailers' ecosystems, but small-to-mid-size firms reselling commodities get crushed.

Of course, these are all generic, unoriginal (see Ecclesiastes 1:9) speculations.  Successful revenue models will blend careful attention to segmenting target markets and working back from their needs, resources, and processes (certain models might be friendlier to budgets and purchasing mechanisms than others) with thoughtful in-the-wild testing of the ideas.  Monolithic executions with Neolithic measurement plans ("Gee, the focus group loved it, I can't understand why no one's signing up for the paid version!") are unlikely to get very far.  Instead, be sure to design with testability in mind (make content modular enough to package or offer a la carte, for example).  Maybe even use Kickstarter as a lab for different models!

PS Heller's brilliant sendup of automated essay grading

Postscript:

The MOOC professor perspective, via the Chronicle, March 2013


October 31, 2012

Today's Data Exercise: The @fivethirtyeight / Intrade Presidential Election Arbitrage #Analytics

(Nerd alert!  You have been warned.)

Unoriginally, I'm a big fan of Nate Silver's fivethirtyeight blog.  I've learned a ton from him (currently also reading his book The Signal and the Noise).  For a little while now I've been puzzling over the relationship between his "Nowcast" on the presidential election and the price of Obama 2012 contracts at Intrade.  Take a look at this chart I made based on the data from each of these sources:

[Chart: Obama 2012 -- FiveThirtyEight Nowcast win probability vs. Intrade contract price, October 2012]

If we look past Obama's disastrous first debate, and look at the difference between the seven-day moving averages of the 538 Obama win probability and the Intrade Obama 2012 contract price, it fluctuates roughly around 10-15 points, call it 12.  Also, looking at the volumes, the heaviest trading happens roughly midweek, before Friday.  So if you trust Nate's projections, and unless you've got inside scoop about big negative surprises to come, the logical thing to do is to buy Obama 2012s tomorrow, with an expected gain of roughly $1.20 on each contract (about a 20% return).

Now for the nerdy part:

First, the easy job: Intrade lets you download historical prices on its contracts.

Next, the harder job: Nate doesn't provide a .csv of his data.  But if you "view source" on his page, you'll see a file called:

"http://graphics8.nytimes.com/packages/html/1min/elections/2012/fivethirtyeight/fivethirtyeight-ccol-top.js"

immediately after the label "Data URL".

If you take a look at this file, you'll notice it's Javascript-chart-friendly, but as far as for the kind of analysis above, not so much.  The first order of business was to cut out the stuff I didn't want, like the Senate race data, and the forecast part of the presidential polls.  Then, I further whacked out data before 10/1, because I thought examining trends in a more thinly-traded market would be less relevant.

For a little while I fiddled with the Stanford Visualization Group's Data Wrangler tool to reshape the remaining data into the .csv I needed.  It's a powerful tool, but it turned out to be easier in this case to wrangle the file structure I wanted manually:

"date","obama_votes","romney_votes","obama_win_pct","romney_win_pct","obama_pop_vote","romney_pop_vote"

"2012-10-30",298.8,239.2,79.5,20.5,50.4,48.6

"2012-10-29",294.4,243.6,75.2,24.8,50.2,48.8

etc.

Combining the Intrade and 538 data and then plotting the Intrade close and the "Obama win pct" series results in the chart above.
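For what it's worth, here's roughly how that combine-and-plot step looks in pandas, assuming the 538 data has already been wrangled into the CSV shape shown above and the Intrade history is a CSV with date and close columns (file and column names are placeholders):

```python
# Sketch: merge the wrangled 538 Nowcast CSV with Intrade's downloaded price history,
# compute 7-day moving averages, and look at the spread between them.
import pandas as pd
import matplotlib.pyplot as plt

nowcast = pd.read_csv("fivethirtyeight_nowcast.csv", parse_dates=["date"])   # shape shown above
intrade = pd.read_csv("intrade_obama_2012.csv", parse_dates=["date"])        # needs a 'close' column (0-100)

df = (pd.merge(nowcast[["date", "obama_win_pct"]],
               intrade[["date", "close"]], on="date", how="inner")
        .sort_values("date"))

df["win_ma7"] = df["obama_win_pct"].rolling(7).mean()
df["close_ma7"] = df["close"].rolling(7).mean()
df["spread"] = df["win_ma7"] - df["close_ma7"]   # the ~10-15 point gap discussed above

df.plot(x="date", y=["win_ma7", "close_ma7", "spread"])
plt.ylabel("Win probability / contract price (points)")
plt.show()
```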

July 16, 2012

Congratulations @marissamayer on your new #Yahoo gig. Now what? Some ideas

Paul Simon wrote, "Every generation throws a hero up the pop charts."  Now it's Marissa Mayer's turn to try to make Yahoo!'s chart pop.  This will be hard because few tech companies are able to sustain value creation much past their IPOs.  

What strategic path for Yahoo! satisfies the following important requirements?

  • Solves a keenly felt customer / user / audience / human problem?
  • Fits within but doesn't totally overlap what other competitors provide?
  • Builds off things Yahoo! has / does well?
  • Fits Ms. Mayer's experiences, so she's playing from a position of strength and confidence?
  • As a consequence of all this, will bring advertisers back at premium prices?

Yahoo!'s company profile is a little buzzwordy but offers a potential point of departure.  What Yahoo! says:

"Our vision is to deliver your world, your way. We do that by using technology, insights, and intuition to create deeply personal digital experiences that keep more than half a billion people connected to what matters the most to them – across devices, on every continent, in more than 30 languages. And we connect advertisers to the consumers who matter to them most – the ones who will build their businesses – through our unique combination of Science + Art + Scale."

What Cesar infers:

Yahoo! is a filter.

Here are some big things the Internet helps us do:

  • Find
  • Connect
  • Share
  • Shop
  • Work
  • Learn
  • Argue
  • Relax
  • Filter

Every one of these functions has an 800 lb. gorilla, and a few aspirants, attached to it:

  • Find -- Google
  • Connect -- Facebook, LinkedIn
  • Share -- Facebook, Twitter, Yahoo!/Flickr (well, for the moment...)
  • Shop -- Amazon, eBay
  • Work -- Microsoft, Google, GitHub
  • Learn -- Wikipedia, Khan Academy
  • Argue -- Wordpress, Typepad, [insert major MSM digital presence here]
  • Relax -- Netflix, Hulu, Pandora, Spotify
  • Filter -- ...

Um, filter...  Filter.   There's a flood of information out there.  Who's doing a great job of filtering it for me?  Google alerts?  Useful but very crude.  Twitter?  I browse my followings for nuggets, but sometimes these are hard to parse from the droppings.  Facebook?  Sorry friends, but my inner sociopath complains it has to work too hard to sift the news I can use from the River of Life.

Filtering is still a tough, unsolved problem, arguably the problem of the age (or at least it was last year when I said so).  The best tool I've found for helping me build filters is Yahoo! Pipes.  (Example)

As far as I can tell, Pipes has remained this slightly wonky tool in Yahoo's bazaar suite of products.  Nerds like me get a lot of leverage from the service, but it's a bit hard to explain the concept, and the semi-programmatic interface is powerful but definitely not for the general public.

Now, what if Yahoo! were to embrace filtering as its core proposition, and build off the Pipes idea and experience under the guidance of Google's own UI guru -- the very same Ms. Mayer, hopefully applying the lessons of iGoogle's rise and fall -- to make it possible for its users to filter their worlds more effectively?  If you think about it, there are various services out there that tackle individual aspects of the filtering challenge: professional (e.g. NY Times, Vogue, Car and Driver), social (Facebook, subReddits), tribal (online communities extending from often offline affinities), algorithmic (Amazon-style collaborative filtering), sponsored (e.g., coupon sites).  No one is doing a good job of pulling these all together and allowing me to tailor their spews to my life.  Right now it's up to me to follow Gina Trapani's Lifehacker suggestion, which is to use Pipes.
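To make the "pipe" idea concrete for non-Pipes users, here's a tiny sketch of what such a filter does under the hood: pull a few feeds, keep only the items matching your keywords, and merge them newest-first. The feed URLs and keywords are placeholders, and a real Yahoo!-scale filtering service would obviously need far more than this:

```python
# Minimal "pipe": fetch feeds, keep items matching keywords, merge newest-first.
import feedparser                     # pip install feedparser
from time import mktime

FEEDS = ["https://example.com/a.xml", "https://example.com/b.xml"]   # placeholder URLs
KEYWORDS = {"analytics", "filtering"}                                 # placeholder interests

def matches(entry):
    text = (entry.get("title", "") + " " + entry.get("summary", "")).lower()
    return any(k in text for k in KEYWORDS)

items = []
for url in FEEDS:
    for entry in feedparser.parse(url).entries:
        if matches(entry):
            ts = mktime(entry.published_parsed) if entry.get("published_parsed") else 0
            items.append((ts, entry.get("title", ""), entry.get("link", "")))

for ts, title, link in sorted(items, reverse=True):
    print(title, "->", link)
```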

OK so let's review:

  • Valuable unsolved problem for customers / users: check.
  • Fragmented, undominated competitive space: check.
  • Yahoo! has credible assets / experience: check.
  • Marissa Mayer plays from position of strength and experience: check.
  • Advertisers willing to pay premium prices, in droves: ...

Well, let's look at this a bit.  I'd argue that a good filter is effectively a "passive search engine".  Basically through the filters people construct -- effectively "stored searches" -- they tell you what it is they are really interested in, and in what context and time they want it.  With cookie-based targeting under pressure on multiple fronts, advertisers will be looking for impression inventories that provide search-like value propositions without the tracking headaches.  Whoever can do this well could make major bank from advertisers looking for an alternative to the online ad biz Hydra (aka Google, Facebook, Apple, plus assorted minor others).

Savvy advertisers and publishers will pooh-pooh the idea that individual Pipemakers would be numerous enough or consistent enough on their own to provide the reach that is the reason Yahoo! is still in business.  But I think there's lots of ways around this.  For one, there's already plenty of precedent at other media companies for suggesting proto-Pipes -- usually called "channels", Yahoo! calls them "sites" (example), and they have RSS feeds.  Portals like Yahoo!, major media like the NYT, and universities like Harvard suggest categories, offer pre-packaged RSS feeds, and even give you the ability to roll your own feed out of their content.  The problem is that it's still marketed as RSS, which even in this day and age is still a bit beyond for most folks.  But if you find a more user-friendly way to "clone and extend" suggested Pipes, friends' Pipes, sponsored Pipes, etc., you've got a start.

Check?  Lots of hand-waving, I know.  But what's true is that Yahoo! has suffered from a loss of a clear identity.  And the path to re-growing its value starts with fixing that problem.

Good luck Marissa!


July 03, 2012

#Microsoft Writes Off #aQuantive. What Can We Learn?

In May 2007, Microsoft paid $6 billion to buy aQuantive.  Today, only five years later, they wrote off the whole investment.  Since I wrote about this a lot five years ago (here, here, and here), it prompted me to think about what happened, and what I might learn.  Here are a few observations:

1. 2006 / 2007 was a frothy time in the ad network market, both for ads and for the firms themselves, reflecting the economy in general.

2. Microsoft came late to the party, chasing aQuantive (desperately) after Google had taken DoubleClick off the table.

3. So, Microsoft paid a 100% premium to aQuantive's market cap to get the firm.

4. Here's the way Microsoft might have been seeing things at the time:

a. "Thick client OS and productivity applications business in decline -- the future is in the cloud."

b. "Cloud business model uncertain, but certainly lower price point than our desktop franchise; must explore all options; maybe an ad-supported version of a cloud-based productivity suite?"

c. "We have MSN.  Why should someone else sit between us and our MSN advertisers and collect a toll on our non-premium, non-direct inventory?  In fact, if we had an ad network, we could sit between advertisers and other publishers and collect a toll!"

5. Here's the way things played out:

a. The economy crashed a year later.

b. When budgets came back, they went first to the most accountable digital ad spend: search.  

c. Microsoft had a new horse in that race: Bing (launched June 2009).  Discretionary investment naturally flowed there.

d. Meanwhile, "display" evolved:  video display, social display (aka Facebook), mobile display (Dadgurnit!  Google bought AdMob, Apple has iAd!  Scraps again for the rest of us...). (Good recent eMarketer presentation on trends here.)

e. Whatever's left of "traditional" display: Google / DoubleClick, as the category leader, eats first.

f. Specialized players do continue to grow in "traditional" display, through better targeting technologies (BT) and through facilitating more efficient buys (for example, DataXu, which I wrote about here).  But to grow you have to invest and innovate, and at Microsoft, by this point, as noted above, the money was going elsewhere.

g. So, if you're Microsoft, and you're getting left behind, what do you do?  Take 'em with you!  "Do not track by default" in IE 10 as of June 2012.  That's old school medieval, dressed up in hipster specs and a porkpie hat.  Steve Ballmer may be struggling strategically, but he's still as brutal as ever. 

6. Perspective

a. $6 Big Ones is only 2% of MSFT's market cap.  aQuantive may have come at  a 2x premium, but it was worth the hedge.  The rich are different from you and me.  

b. The bigger issue, though, is how does MSFT steal a march on Google, Apple, Facebook? Hmmm. Video's hot.  Still bandwidth constrained, but that'll get better.  And there's interactive video. Folks will eventually spend lots of time there, and ads will follow them. Google's got Hangouts, Apple's got FaceTime and iChat, Facebook's got Skype-powered video calling... and now MSFT has Skype, for $8B.   Hmm.

7. Postscripts:

a. Some of the smartest business guys I worked with at Bain in the late 90's (including Torrence Boone and Jason Trevisan) ended up at aQuantive and helped to build it into the success it was.  An interesting alumni diaspora to follow.

b. Some of the smartest folks I worked with at Razorfish in the early 2000's (including Bob Lord) ended up at aQuantive. The best part is that Microsoft may have gotten more value from buying and selling Razorfish (to Publicis) than from buying and writing off the rest of aQuantive.  Sweet, that.

c. Why not open-source Atlas?

November 11, 2011

Sponsored Occupations :-)

NYT.com had an interactive poll / visualization today taking readers' pulse on the Occupy protests.  Browsing through the usual left-right snarks and screeds, and on the heels of a recent stroll through one of the protest sites, it occurred to me that we're missing a chance to think beyond the politics of the movement, to the economic opportunity it represents.

Think of the protest sites as outdoor ad inventory.  This inventory is in great locations -- in the hearts of the world's financial districts, with lots of people with very high disposable incomes to see your ads every day, all day, right outside their windows -- the same people that fancy watchmakers pay the WSJ big bucks to reach.  

[Photo]

Yet this valuable inventory is currently filled with PSAs...

[Photo]

...Or it goes begging altogether:

[Photo]

So it dawned on me: "Sponsored Occupations" -- the outdoor ad network that monetizes protest movements!  This concept meets several needs simultaneously:

  • One stated objective of the movement is to "Make Them Pay".  The concept creates a practical mechanism for realizing this goal.


[Photo: "Make Them Pay"]

 

  • Events and guerrilla marketing in premium locations without a permitting process -- an advertiser's dream!
  • Plus, sponsors could negotiate special perks, like keeping the protesters from "Going all Oakland" (just heard that term) on their retail stores.
  • Cash-strapped municipalities can muscle a cut of the publishers' share, turning what's today a drag on public resources (police, etc.) into a money-maker.

[Photo]

There's another important benefit.  This idea is a job creator.  After all, the network needs people to pitch the "publishers" at each location, and sales folks to recruit the advertisers, and staff to traffic the ads, keep the books, etc.  Politicians right and left could fold this into their platforms immediately.

Finally, for the entrepreneur who starts it all, there's the chance to Sell Out To The Man -- at a very attractive premium!  And, for the protesters who back the venture, and get options working for it, a chance to cash out too, just like the guys they're protesting.

[Image]

After all, Don't Get Mad, Get Even.

 

Postscript in Rolling Stone: "How I Learned to Stop Worrying and Love the OWS Protests". Plus some thoughtful suggestions here.


January 04, 2011

Facebook at Fifty (Billion)

Is Facebook worth $50 billion?  Some caveman thoughts on this valuation:

1. It's worth $50 billion because Goldman Sachs says so, and they make the rules.

2. It's worth $50 billion because for an evanescent moment, some people are willing to trade a few shares at that price. (Always a dangerous way to value a firm.)

3.  Google's valuation provides an interesting benchmark:

a. Google's market cap is close to $200 billion.  Google makes (annualizing Q3 2010) $30 billion a year in revenue and $8 billion a year in profit (wow), for a price-to-earnings ratio of approximately 25x.

b. Facebook claims $2 billion a year in revenue for 2010, a number that's likely higher if we annualize latest quarters (I'm guessing, I haven't seen the books).   Google's clearing close to 30% of its revenue to the bottom line.  Let's assume Facebook's getting similar results, and let's say that annualized, they're at $3 billion in revenues, yielding a $1 billion annual profit (which they're re-investing in the business, but ignore that for the moment).  That means a "P/E" of about 50x, roughly twice Google's.  Facebook has half Google's uniques, but has passed Google in visits.  So, maybe this growth, and potential for more, justifies double the multiple.  Judge for yourself; here's a little data on historical P/E ratios (and interest rates, which are very low today, BTW), to give you some context.  Granted, these are for the market as a whole, and Facebook is a unique high-growth tech firm, but not every tree grows to the sky.

c. One factor to consider in favor of this valuation for Facebook is that its revenues are better diversified than Google's.  Google of course gets 99% of its revenue from search marketing. Facebook gets a piece of the action on all those Zynga et al. games, in addition to its core display ad business.  You might argue that these game revenues are stable and recurring, and point the way to monetizing the Facebook API to very attractive utility-like economic levels (high fixed costs, but super-high marginal profits once revenues pass those, with equally high barriers to entry).

d. Further, since viral / referral marketing is every advertiser's holy grail, and Facebook effectively owns the Web's social graph at the moment, it should get some credit for the potential value of owning a better mousetrap.  (Though, despite Facebook's best attempts -- see Beacon -- to Hoover value out of your and my relationship networks, the jury's still out on whether and how they will do that.  For perspective, consider that a $50 billion valuation for Facebook means investors are counting on each of today's 500 million users to be good for $100, ignoring future user growth.)

e. On the other hand,  Facebook's dominant source of revenue (about 2/3 of it) is display ad revenue, and it doesn't dominate this market the way Google dominates the search ad market (market dominance means higher profit margins -- see Microsoft circa 1995 -- beyond their natural life).  Also, display ads are more focused on brand-building, and are more vulnerable in economic downturns.

4. In conclusion: if Facebook doubles revenues and profits off the numbers I suggested above, Facebook's valuation will more or less track Google's on a relative basis (~25x P/E).  If you think this scenario is a slam dunk, then the current price being paid for Facebook is "fair", using Google's as a benchmark.  If you think there's further upside beyond this doubling, with virtually no risk associated with this scenario, then Facebook begins to look cheap in comparison to Google.
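To keep the multiples comparison honest, here's the back-of-the-envelope arithmetic from 3(a)-(b) and the doubling scenario in one small script; the revenue and profit figures are the rough 2010 guesses quoted above, not reported numbers:

```python
# Back-of-the-envelope multiples, using the rough figures discussed above.
goog_cap, goog_profit = 200e9, 8e9      # ~$200B market cap, ~$8B annualized profit
fb_cap, fb_rev = 50e9, 3e9              # $50B implied value, ~$3B annualized revenue guess
fb_profit = 1e9                         # rounded guess: ~30% of ~$3B revenue

print("Google P/E         ~", round(goog_cap / goog_profit))     # ~25x
print("Facebook 'P/E'     ~", round(fb_cap / fb_profit))         # ~50x, twice Google's multiple
print("Value per user     ~ $", round(fb_cap / 500e6))           # ~$100 per current user

# If Facebook doubles revenue and profit, its multiple converges to Google's:
print("P/E after doubling ~", round(fb_cap / (2 * fb_profit)))   # ~25x
```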

Your move.

Who's got a better take?

Postscript:  my brother, the successful professional investor, does; see his comment below (click "Comments")

March 09, 2010

Filtering The Collective Preconscious: Darwin Ecosystem

More and more, people agree that filtering the flood of information that's coming at us is supplanting publishing, finding, and connecting as the problem of the Information Age.  Today, the state of the art for doing this includes several approaches:

  • Professional filters: we follow people whose jobs are to cover an area.  Tom Friedman covers international issues, Walt Mossberg covers personal technology.
  • Technical filters: we use services like Google Alerts to tell us when there's something new on a topic we're interested in
  • Social filters: we use services like Digg, Reddit, and Stumbleupon to point us to popular things
  • Tribal filters: we use Facebook, Twitter, LinkedIn, and (Google hopes) Buzz to get pointed to things folks we know and trust think are important

In addition to what gets through, there's how it's presented.  RSS readers, for example, offer a huge productivity boost to anyone trying to keep up with more than a few sources of information.  However, once you get several hundred items in your RSS reader, unsorted by anything other than "last in", it's back to information overload.  To solve this, innovative services like Newsmap provide multi-dimensional visual displays to try to push your information awareness productivity even further.  But so far, they've seen only modest adoption.

One limitation of today's filtering and productivity tools is that they pick items up either too early, before it's clear they represent something meaningful, or too late, once the advantages of recognizing a trend have passed.

Yesterday, I visited the team behind a new service called Darwin Ecosystem  that takes a different and potentially more powerful and useful approach to helping you "filter the collective preconscious"  -- that is, to identify emergent signals in the vast noise of the Internet (or any other body of information you might point to -- say, for example, customer service call logs).  Co-founder and CEO Thierry Hubert is a veteran of the knowledge management world going back to senior technical roles at Lotus and IBM; his partner Frederic Deriot shares similar experiences; and, my friend Bill Ives -- formerly head of Accenture's KM client practice -- is also involved as VP Marketing.

Briefly, the service presents a tag cloud of topics that it thinks represent emergent themes to pay attention to in the "corpus" filled by sources you point it to (in the demo, sources run to hundreds of news sources and social media).  The bigger the font, the more important the theme.  Hover your mouse over a theme, and it highlights other related themes to put them all into a collective context.  The service also provides a dynamic view of what's hot / not with a stock-ticker-style ribbon running at the top of the page.  You can view the cloud of emergent themes either in an "unfiltered view", or more usefully, filtered with "attractor" keywords you can specify.

This interface, while interesting, will likely not be the eventual "user/use-case" packaging of the service.  I can see this as a built-in "front page" for an RSS reader, for example, or, minus the tag cloud, as the basis for a more conventional looking email alert service.

The service is based on the math behind Chaos Theory.  This is the math that helps us understand how the proverbial beating of a butterfly's wings in China might become a massive storm.  (Math nerds will appreciate the Lorenz-attractor-plot-as-butterfly-wings logo.)  The service uses this math to tell you not only what individual topics are gaining or losing momentum, but also to highlight relationships between and among different topics to put them into context  -- like why "underwear" and "bomber" might be related. 
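Darwin's actual chaos-theory-based math is proprietary and far more sophisticated, but here's a deliberately crude sketch of the general idea -- surface terms whose frequency is accelerating between time windows, then show what they co-occur with:

```python
# Toy "emergent theme" detector: flag terms whose frequency is accelerating
# between time windows, and show what they co-occur with.  Nothing like Darwin's
# actual math -- just an illustration of the concept.
from collections import Counter

def term_counts(docs):
    c = Counter()
    for d in docs:
        c.update(set(d.lower().split()))
    return c

def emergent_terms(old_docs, new_docs, min_growth=2.0):
    old, new = term_counts(old_docs), term_counts(new_docs)
    return {t: new[t] / (old[t] + 1) for t in new if new[t] / (old[t] + 1) >= min_growth}

def related(docs, term, k=5):
    co = Counter()
    for d in docs:
        words = set(d.lower().split())
        if term in words:
            co.update(words - {term})
    return co.most_common(k)

old_window = ["markets quiet today", "mild weather in boston"]
new_window = ["underwear bomber arrested", "bomber plot foiled", "airport security tightened"]
for term, growth in emergent_terms(old_window, new_window).items():
    print(term, round(growth, 1), related(new_window, term))
```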

Now in beta, with a few large organizations (including large media firms) as early adopters, the service has had some early wins that demonstrate its potential.  It told users, for example, that Lou Dobbs might be on his way out at CNN a week before his departure was reported in the mainstream press.  It also picked up news of UCLA's planned tuition hikes 48 hours in advance of this getting reported in popular mainstream or social media.

It strikes me that a service like Darwin is complementary to that of  Crimson Hexagon, a sentiment analysis firm based on Prof. Gary King's work at Harvard (here's the software that came out of that work), with a variety of marketing, media, and customer support applications.  Darwin helps tell you what to pay attention to -- suggests emergent themes and their context; Crimson Hexagon can then tell you how people feel about these issues in a nuanced way, beyond simple positive / negative buzz.

The current business model has Darwin pursuing enterprise licensing deals with major firms, but depending on partners that emerge, that may not be the last stop on the adoption / monetization express.  For example, it seems to me that a user's interaction with a tool like Darwin represents highly intentional behavior that would be useful data for ad / offer targeting, or personalization of content generally.  This potential use as a marketing analytics input makes it especially interesting to me.

Bottom line: if you are responsible for syndicating and helping users usefully navigate a highly dynamic information set collected through a multitude of sources -- say, a news organization, a university, a large consumer products or services firm -- and are evaluating monitoring technologies, Darwin is worth a look.

January 26, 2010

What's NYT.com Worth To You, Part II

OK, with the response curve for my survey tailing off, I'm calling it.  Here, dear readers, is what you said (click on the image to enlarge it):

[Chart: NYT.com paid content survey results]

(First, the stats: with ~40 responses -- there are fewer points on the chart because of some duplicate answers -- you can be 95% sure that answers from the rest of the ~20 million people who read the NYT online would fall within +/- 16 points of what's here.)
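For the curious, that +/- 16 points is just the standard worst-case margin of error for a small sample, sketched below under the usual simple-random-sample assumption, which an informal blog survey obviously strains:

```python
# 95% margin of error for a proportion, worst case p = 0.5:
# MOE = z * sqrt(p * (1 - p) / n), with z = 1.96
from math import sqrt

n, z, p = 40, 1.96, 0.5
print(round(z * sqrt(p * (1 - p) / n) * 100, 1), "%")   # ~15.5, i.e. roughly +/- 16 points
```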

90% of respondents would pay at least $1/month, and several would pay as much as $10/month. And folks are ready to start paying after only ~2 articles a day.  Pretty interesting!  More latent value than I would have guessed.  At the same time, it's also interesting to note that no one went as high as the $14/month Amazon charges to deliver the Times on the Kindle. (I wonder how many Kindle NYT subs are also paper subs getting the Kindle edition tossed in as a freebie?)

Only a very few online publishers aiming at "the general public" will be able to charge for content on the web as we have known it, or through other newer channels.  Aside from highly-focused publishers whose readers can charge subscriptions to expense accounts, the rest of the world will scrape by on pennies from AdSense et al.

But, you say, what about the Apple Tablet (announcement tomorrow! details yesterday), and certain publishers' plans for it?  I see several issues:

  • First, there's the wrestling match to be had over who controls the customer relationship in Tabletmediaworld. 
  • Second, I expect the rich, chocolatey content (see also this description of what's going in R&D at the Times) planned for this platform and others like it to be more expensive to produce than what we see on the web today, both because a) a greater proportion of it will be interactive (must be, to be worth paying for), but also because b) producing for multiple proprietary platforms will also drive costs up (see for example today's good article in Ad Age by Josh Bernoff on the "Splinternet"). 
  • Third, driving content behind pay walls lowers traffic, and advertising dollars with it, raising the break-even point for subscription-based business models. 
  • Fourth, last time I checked, the economy isn't so great. 

The most creative argument I've seen "for" so far is that pushing today's print readers / subscribers to tablets will save so much in printing costs that it's almost worth giving readers tablets (well, Kindles anyway) for free -- yet another edition of the razor-and-blade strategy, in "green" wrapping perhaps.

The future of paid content is in filtering information and increasing its utility.  Media firms that deliver superior filtering and utility at fair prices will survive and thrive.  Alongside its innovations in visual displays of information (which, though creative, I'd guess have limited monetization impact), there's evidence that the Times agrees with this, at least in part (from the article on Times R&D linked above):

When Bilton swipes his Times key card, the screen pulls up a personalized version of the paper, his interests highlighted. He clicks a button, opens the kiosk door, and inside I see an ordinary office printer, which releases a physical printout with just the articles he wants. As it prints, a second copy is sent to his phone.

The futuristic kiosk may be a plaything, but it captures the essence of R&D’s vision, in which the New York Times is less a newspaper and more an informative virus—hopping from host to host, personalizing itself to any environment.

Aside from my curiosity about the answers to the survey questions themselves, I had another reason for doing this survey.  All the articles I saw on the Times' announcement that it would start charging had the usual free-text commenting going.  Sprinkled through the comments were occasional suggestions from readers about what they might pay, but it was virtually impossible to take any sort of quantified pulse on this issue in this format.  Following "structured collaboration" principles, I took five minutes to throw up the survey to make it easy to contribute and consume answers.  Hopefully I've made it easier for readers to filter / process the Times' announcement, and made the analysis useful as well -- for example, feel free to stick the chart in your business plan for a subscription-based online content business ;-)  If anyone can point me to other, larger, more rigorous surveys on the topic, I'd be much obliged.

The broader utility of structuring the data capture this way is perhaps greatest to media firms themselves:  indirectly for ad and content targeting value, and perhaps because once you have lots of simple databases like this, it becomes possible to weave more complex queries across them, and out of these queries, some interesting, original editorial possibilities.

Briefly considered, then rejected for its avarice and stupidity: personalized pricing offers to subscribe to the NYT online based on how you respond to the survey :-)

Postscript: via my friend Thomas Macauley, NY (Long Island) Newsday is up to 35 paid online subs.