About

Cesar A. Brea bio at Force Five Partners

     


11 posts categorized "Open Source"

May 19, 2013

@nathanheller #MOOCs in The New Yorker: You Don't Need A Weatherman

The May 20, 2013 edition of The New Yorker has an article by Vogue writer Nathan Heller on Massive Open Online Courses (MOOCs) titled "Laptop U: Has the future of college moved online?"  The author explores, or at least raises, a number of related questions.  How (well) does the traditional offline learning experience transfer online?  Is the online learning experience more or less effective than the traditional one?  (By what standard?  For what material?  What is gained and lost?)  What will MOOCs mean for different colleges and universities, and their faculties?  How will the MOOC revolution be funded?  (In particular, what revenue model will emerge?)

Having worked a lot in the sector, for both public and private university clients, developing everything from technology, to online-enabled programs themselves, to analytic approaches, and even marketing and promotion, I found the article a good prompt to try to boil out some ways to think about answering these questions.

The article focuses almost exclusively on Harvard and edX, the 12-school joint venture through which it's pursuing MOOCs.  Obviously this skews the evaluation.  Heller writes:

Education is a curiously alchemical process. Its vicissitudes are hard to isolate.  Why do some students retain what they learned in a course for years, while others lose it through the other ear over their summer breaks?  Is the fact that Bill Gates and Mark Zuckerberg dropped out of Harvard to revolutionize the tech industry a sign that their Harvard educations worked, or that they failed?  The answer matters, because the mechanism by which conveyed knowledge blooms into an education is the standard by which MOOCs will either enrich teaching in this country or deplete it.

For me, the first step to boiling things out is to define what we mean by -- and want from -- an "education".  So, let's try to unpack why people go to college.  In most cases, Reason One is that you need a degree to get any sort of decent job.  Reason Two is to plug into a network of people -- fellow students, alumni, faculty -- that provides you a life-long community.  Of course you need a professional community for that Job thing, but also because in an otherwise anomic society you need an archipelago to seed friendships, companionships, and self-definition (or at least, as scaffolding for your personal brand: as one junior I heard on a recent college visit put it memorably, "Being here is part of the personal narrative I'm building.")  Reason Three -- firmly third -- is to get an "education" in the sense that Heller describes.  (Apropos: check this recording of David Foster Wallace's 2005 commencement address at Kenyon College.)

This hierarchy of needs then gives us a way to evaluate the prospects for MOOCs.

If organization X can produce graduates demonstrably better qualified (through objective testing, portfolios of work, and experience) to do job Y, at a lower cost, then it will thrive.  If organization X can do this better and cheaper by offering and/or curating/aggregating MOOCs, then MOOCs will thrive.  If a MOOC can demonstrate an adequately superior result/contribution to the end outcome, and do it inexpensively enough to hold its place in the curriculum, and do it often enough that its edge becomes a self-fulfilling prophecy -- a brand, in other words -- then it will crowd out its competitors, as surely as one plant shuts out the sunlight to another.  Anyone care to bet against Georgia Tech's new $7K Master's in Computer Science?

If a MOOC-mediated social experience can connect you to a Club You Want To Be A Member Of, you will pay for that.  And if a Club That Would Have You As A Member can attract you to its clubhouse with MOOCs, then MOOCs will line the shelves of its bar.  The winning MOOC cocktails will be the ones that best produce the desired social outcomes, with the greatest number of satisfying connections.

Finally, learning is as much about the frame of mind of the student as it is about the quality of the teacher.  If through the MOOC the student is able to choose a better time to engage, and can manage better the pace of the delivery of the subject matter, then the MOOC wins.

Beyond general prospects, as you consider these principles, it becomes clear that the question is less whether MOOCs win than which ones, for what, for whom, and how.

The more objective and standardized -- and thus measurable and comparable -- the learning outcome and the standard of achievement, the greater the potential for a MOOC to dominate. My program either works, or it doesn't.  

If a MOOC facilitates the kinds of content exchanges that seed and stimulate offline social gatherings -- pitches to VCs, or mock interviewing, or poetry, or dance routines, or photography, or music, or historical tours, or bird-watching trips, or snowblower-maintenance workshops -- then it has a better chance of fulfilling the longings of its students for connection and belonging.  

And the better developed the Internet ecosystem surrounding a topic (Wikipedia, discussion groups, Quora forums, and beyond), the less I need a Harvard professor, or even a Harvard grad student, to help me, however nuanced and alchemical the experience I miss might otherwise have been.  The prospect of schlepping to class or office hours on a cold, rainy November night has a way of diluting the urge to be there live in case something serendipitous happens.

Understanding how MOOCs win then also becomes a clue to understanding potential revenue models.  

If you can get accredited to offer a degree based in part or whole on MOOCs, you can charge for that degree, and get students or the government to pay for it (Exhibit A: University of Phoenix).  That's hard, but as a variant of this, you can get hired by an organization, or a syndicate of organizations you organize, to produce tailored degree programs -- think corporate training programs on steroids -- that use MOOCs to filter and train students.  (Think "You, Student, pay for the 101-level stuff; if you pass you get a certificate and an invitation to attend the 201-level stuff that we fund; if you pass that we give you a job.")

Funding can come directly, or be subsidized by sponsors and advertisers, or both.  

You can try to charge for content: if you produce a MOOC that someone else wants to include in a degree-based program, you can try to license it, in part or in whole.  

You can make money via the service angle, the way self-publishing firms support authors, with a variety of best-practice based production services.  Delivery might be offered via a freemium model -- the content might be free, but access to premium groups, with teaching assistant support, might come at a price.  You can also promote MOOCs -- build awareness, drive distribution, even simply brand  -- for a cut of the action, the way publishers and event promoters do.  

Perhaps in the not-too-distant future we'll get the Academic Upfront, in which universities front a semester's worth of classes in a MOOC, then pitch the class to sponsors, the way TV networks do today.  Or maybe the retail industry offers a window into how MOOCs will be monetized.  Today's retail environment is dominated by global brands (think professors as fashion designers) and big-box (plus Amazon) firms that control supply chains and distribution networks.  Together, Brands and Retailers effectively act as filters: we assume the products on their shelves are safe, effective, reasonably priced, acceptably stylish, and well-supported.  In exchange, we'll pay their markup.  This logic sounds a cautionary note for many schools: boutiques can survive as part of, or at the edges of, the mega-retailers' ecosystems, but small-to-mid-size firms reselling commodities get crushed.

Of course, these are all generic, unoriginal (see Ecclesiastes 1:9) speculations.  Successful revenue models will blend careful attention to segmenting target markets and working back from their needs, resources, and processes (certain models might be friendlier to budgets and purchasing mechanisms than others) with thoughtful in-the-wild testing of the ideas.  Monolithic executions with Neolithic measurement plans ("Gee, the focus group loved it, I can't understand why no one's signing up for the paid version!") are unlikely to get very far.  Instead, be sure to design with testability in mind (make content modular enough to package or offer a la carte, for example).  Maybe even use Kickstarter as a lab for different models!

PS Heller's brilliant sendup of automated essay grading

Postscript:

The MOOC professor perspective, via the Chronicle, March 2013


May 10, 2013

Book Review: Converge by @rwlord and @rvelez #convergebook

I just finished reading Converge, the new book on integrating technology, creativity, and media by Razorfish CEO Bob Lord and his colleague Ray Velez, the firm’s CTO.  (Full disclosure: I’ve known Bob as a colleague, former boss, and friend for more than twenty years and I’m a proud Razorfish alum from a decade ago.)

Reflecting on the book I’m reminded of the novelist William Gibson’s famous comment in a 2003 Economist interview that “The future’s already here, it’s just not evenly distributed.”  In this case, the near-perfect perch that two already-smart guys have on the Digital Revolution and its impact on global brands has provided them a view of a new reality most of the rest of us perceive only dimly.

So what is this emerging reality?  Somewhere along the line in my business education I heard the phrase, “A brand is a promise.”  Bob and Ray now say, “The brand is a service.”  In virtually all businesses that touch end consumers, and extending well into relevant supply chains, information technology has now made it possible to turn what used to be communication media into elements of the actual fulfillment of whatever product or service the firm provides.  

One example they point to is Tesco's virtual store format, in which images of stocked store shelves are projected on the wall of, say, a train station, and commuters can snap the QR codes on the yogurt or quarts of milk displayed and have their order delivered to their homes by the time they arrive there: Tesco's turned the billboard into your cupboard.  Another example they cite is Audi City, the Kinect-powered configurator experience through which you can explore and order the Audi of your dreams.  As the authors say, "marketing is commerce, and commerce is marketing."

But Bob and Ray don’t just describe, they also prescribe.  I’ll leave you to read the specific suggestions, which aren’t necessarily new.  What is fresh here is the compelling case they make for them; for example, their point-by-point case for leveraging the public cloud is very persuasive, even for the most security-conscious CIO.  Also useful is their summary of the Agile method, and of how they’ve applied it for their clients.

Looking more deeply, the book isn’t just another surf on the zeitgeist, but is theoretically well-grounded.  At one point early on, they say, “The villain in this book is the silo.”  On reading this (nicely turned phrase), I was reminded of the “experience curve” business strategy concept I learned at Bain & Company many years ago.  The experience curve, based on the idea that the more you make and sell of something, the better you (should) get at it, describes a fairly predictable mathematical relationship between experience and cost, and therefore between relative market share and profit margins.  One of the ways you can maximize experience is through functional specialization, which of course has the side effect of encouraging the development of organizational silos.  A hidden assumption in this strategy is that customer needs and associated attention spans stay pinned down and stable long enough to achieve experience-driven profitable ways to serve them.  But in today’s super-fragmented, hyper-connected, kaleidoscopic marketplace, this assumption breaks down, and the way to compete shifts from capturing experience through specialization, to generating experience “at-bats” through speedy iteration, innovation, and execution.  And this latter competitive mode relies more on the kind of cross-disciplinary integration that Bob and Ray describe so richly.

The book is a quick, engaging read, full of good stories drawn from their extensive experiences with blue-chip brands and interesting upstarts, and with some useful bits of historical analysis that frame their arguments well (in particular, I liked their exposition of the television upfront).  But maybe the best thing I can say about it is that it encouraged me to push harder and faster to stay in front of the future that's already here.  Or, as a friend says, "We gotta get with the '90's, they're almost over!"

(See this review and buy the book on Amazon.com)


April 10, 2013

Fooling Around With Google App Engine @googlecloud

A simple experiment: the "Influence Reach Factor" Calculator. (Um, it just multiplies two numbers together.  But that's beside the point, which was to sort out what it's like to build and deploy an app to Google's App Engine, their cloud computing service.)

Answer: pretty easy.  Download the App Engine SDK.  Write your program (mine's in Python, code here, be kind, props and thanks to Bukhantsov.org for a good model to work from).  Deploy to GAE with a single click.
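
The post links to the actual source; purely as a hedged sketch (function and parameter names are my own guesses, not from the linked code), the calculator's core logic really is just this:

```python
def influence_reach_factor(reach, influence):
    """Hypothetical core of the 'Influence Reach Factor' Calculator:
    per the post, it just multiplies two numbers together."""
    return reach * influence

# e.g., 1000 followers at a 0.05 'influence' score
print(influence_reach_factor(1000, 0.05))  # -> 50.0
```

The point, of course, isn't the function; it's that everything else (serving, scaling, deploying) is GAE's problem, not yours.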

By contrast, let's go back to 1999.  As part of getting up to speed at ArsDigita, I wanted to install the ArsDigita Community System (ACS), an open-source application toolkit and collection of modules for online communities.  So I dredged up an old PC from my basement, installed Linux, then Postgres, then AOLServer, then configured all of them so they'd welcome ACS when I spooled it up (oh so many hours RTFM-ing to get various drivers to work).  Then once I had it at "Hello World!" on localhost, I had to get it networked to the Web so I could show it to friends elsewhere (this being back in the days before the cable company shut down home-served websites).  

At which point, cue the Dawn Of Man.

Later, I rented servers from co-los. But I still had to worry about whether they were up, whether I had configured the stack properly, whether I was virus-free or enrolled as a bot in some army of darkness, or whether demand from the adoring masses was going to blow the capacity I'd signed up for. (Real Soon Now, surely!)

Now, Real Engineers will say that all of this served to educate me about how it all works, and they'd be right.  But unfortunately it also crowded out the time I had to learn about how to program at the top of the stack, to make things that people would actually use.  Now Google's given me that time back.

Why should you care?  Well, isn't it the case that you read everywhere about how you, or at least certainly your kids, need to learn to program to be literate and effective in the Digital Age?  And yet, like Kubrick's monolith, it all seems so opaque and impenetrable.  Where do you start?  One of the great gifts I received in the last 15 years was to work with engineers who taught me to peel it back one layer at a time.  My weak effort to pay it forward is this small, unoriginal advice: start by learning to program using a high-level interpreted language like Python, and by letting Google take care of the underlying "stack" of technology needed to show your work to your friends via the Web.  Then, as your functional or performance needs demand (which for most of us will be rarely), you can push to lower-level "more powerful" (flexible but harder to learn) languages, and deeper into the stack.

April 06, 2013

Dazed and Confused #opensource @perryhewitt @oreillymedia @roughtype @thebafflermag @evgenymorozov

Earlier today, my friend Perry Hewitt pointed me to a very thoughtful essay by Evgeny Morozov in the latest issue of The Baffler, titled "The Meme Hustler: Tim O'Reilly's Crazy Talk".  

A while back I worked at a free software firm (ArsDigita, where early versions of the ArsDigita Community System were licensed under GPL) and was deeply involved in developing an "open source" license that balanced our needs, interests, and objectives with our clients' (the ArsDigita Public License, or ADPL, which was closely based on the Mozilla Public License, or MPL).  I've been to O'Reilly's conferences (<shameless> I remember a ~20-person 2001 Birds-of-a-Feather session in San Diego with Mitch Kapor and pre-Google Eric Schmidt on commercializing open source </shameless>).  Also, I'm a user of O'Reilly's books (currently have Charles Severance's Using Google App Engine in my bag).  So I figured I should read this carefully and have a point of view about the essay.  And despite having recently read Nicholas Carr's excellent and disturbing 2011 book The Shallows about how dumb the Internet has made me, I thought nonetheless that I should brave at least a superficial review of Morozov's sixteen-thousand-word piece.

To summarize: Morozov describes O'Reilly as a self-promoting manipulator who wraps and justifies his evangelizing of Internet-centered open innovation in software, and more recently government, in a Randian cloak sequined with Silicon Valley rhinestones.  My main reaction: "So, your point would be...?" More closely:

First, there's what Theodore Roosevelt had to say about critics. (Accordingly, I fully cop to the recursive hypocrisy of this post.) If, as Morozov says of O'Reilly, "For all his economistic outlook, he was not one to talk externalities..." then Morozov (as most of my fellow liberals do) ignores the utility of motivation.  I accept and embrace that with self-interest and the energy to pursue it, more (ahem, taxable) wealth is created.  So when O'Reilly says something, I don't reflexively reject it because it might be self-promoting; rather, I first try to make sure I understand how that benefits him, so I can better filter for what might benefit me. For example, Morozov writes:

In his 2007 bestseller Words That Work, the Republican operative Frank Luntz lists ten rules of effective communication: simplicity, brevity, credibility, consistency, novelty, sound, aspiration, visualization, questioning, and context. O’Reilly, while employing most of them, has a few unique rules of his own. Clever use of visualization, for example, helps him craft his message in a way that is both sharp and open-ended. Thus, O’Reilly’s meme-engineering efforts usually result in “meme maps,” where the meme to be defined—whether it’s “open source” or “Web 2.0”—is put at the center, while other blob-like terms are drawn as connected to it.

Where Morozov offers a warning, I see a manual! I just have to remember my obligation to apply it honestly and ethically.

Second, Morozov chooses not to observe that if O'Reilly and others hadn't broadened the free software movement into an "open source" one that ultimately offered more options for balancing the needs and rights of software developers with those of users (who themselves might also be developers), we might all still be in deeper thrall to proprietary vendors.  I know from first-hand experience that the world simply was not and is still not ready to accept GPL as the only option.

Nonetheless, good on Morozov for offering this critique of O'Reilly.  Essays like this help keep guys like O'Reilly honest, as far as that's necessary.  They also force us to think hard about what O'Reilly's peddling -- a responsibility that should be ours.  I used to get frustrated by folks who slapped the 2.0 label on everything, to the point of meaninglessness, until I appreciated that the meme and its overuse drove me to think and presented me with an opportunity to riff on it.  I think O'Reilly and others like him do us a great service when they try to boil down complexities into memes.  The trick for us is to make sure the memes are the start of our understanding, not the end of it.

July 03, 2012

#Microsoft Writes Off #aQuantive. What Can We Learn?

In May 2007, Microsoft paid $6 billion to buy aQuantive.  Today, only five years later, they wrote off the whole investment.  Since I wrote about this a lot five years ago (here, here, and here), it prompted me to think about what happened, and what I might learn.  Here are a few observations:

1. 2006 / 2007 was a frothy time in the ad network market, both for ads and for the firms themselves, reflecting the economy in general.

2. Microsoft came late to the party, chasing aQuantive (desperately) after Google had taken DoubleClick off the table.

3. So, Microsoft paid a 100% premium to aQuantive's market cap to get the firm.

4. Here's the way Microsoft might have been seeing things at the time:

a. "Thick client OS and productivity applications business in decline -- the future is in the cloud."

b. "Cloud business model uncertain, but certainly lower price point than our desktop franchise; must explore all options; maybe an ad-supported version of a cloud-based productivity suite?"

c. "We have MSN.  Why should someone else sit between us and our MSN advertisers and collect a toll on our non-premium, non-direct inventory?  In fact, if we had an ad network, we could sit between advertisers and other publishers and collect a toll!"

5. Here's the way things played out:

a. The economy crashed a year later.

b. When budgets came back, they went first to the most accountable digital ad spend: search.  

c. Microsoft had a new horse in that race: Bing (launched June 2009).  Discretionary investment naturally flowed there.

d. Meanwhile, "display" evolved:  video display, social display (aka Facebook), mobile display (Dadgurnit!  Google bought AdMob, Apple has iAd!  Scraps again for the rest of us...). (Good recent eMarketer presentation on trends here.)

e. Whatever's left of "traditional" display: Google / DoubleClick, as the category leader, eats first.

f. Specialized players do continue to grow in "traditional" display, through better targeting technologies (BT) and through facilitating more efficient buys (for example, DataXu, which I wrote about here).  But to grow you have to invest and innovate, and at Microsoft, by this point, as noted above, the money was going elsewhere.

g. So, if you're Microsoft, and you're getting left behind, what do you do?  Take 'em with you!  "Do not track by default" in IE 10 as of June 2012.  That's old school medieval, dressed up in hipster specs and a porkpie hat.  Steve Ballmer may be struggling strategically, but he's still as brutal as ever. 

6. Perspective

a. $6 Big Ones is only 2% of MSFT's market cap.  aQuantive may have come at a 2x premium, but it was worth the hedge.  The rich are different from you and me.

b. The bigger issue though is how does MSFT steal a march on Google, Apple, Facebook?  Hmmm.  Video's hot.  Still bandwidth constrained, but that'll get better.  And there's interactive video.  Folks will eventually spend lots of time there, and ads will follow them.  Google's got Hangouts, Apple's got FaceTime and iChat, Facebook's got Skype-powered video calling... and now MSFT has Skype, for $8B.  Hmm.

7. Postscripts:

a. Some of the smartest business guys I worked with at Bain in the late 90's (including Torrence Boone and Jason Trevisan) ended up at aQuantive and helped to build it into the success it was.  An interesting alumni diaspora to follow.

b. Some of the smartest folks I worked with at Razorfish in the early 2000's (including Bob Lord) ended up at aQuantive. The best part is that Microsoft may have gotten more value from buying and selling Razorfish (to Publicis) than from buying and writing off the rest of aQuantive.  Sweet, that.

c. Why not open-source Atlas?

March 12, 2012

#SXSW Trip Report Part 2: Being There

(See here for Part 1)

Here's one summary of the experience that's making the rounds:

 

[Image: "Missing SXSW" meme]

 

I wasn't able to be there all that long, but my impression was different.  Men of all colors (especially if you count tattoos), and lots more women (many tattooed also, and extensively).   I had a chance to talk with Doc Searls (I'm a huge Cluetrain fan) briefly at the Digital Harvard reception at The Parish; he suggested (my words) the increased ratio of women is a good barometer for the evolution of the festival from narcissistic nerdiness toward more sensible substance.  Nonetheless, on the surface, it does remain a sweaty mosh pit of digital love and frenzied networking.  Picture Dumbo on spring break on 6th and San Jacinto.  With light sabers:

 

[Image: SXSW light sabers]

 

Sight that will haunt my dreams for a while: VC-looking guy, blazer and dress shirt, in a pedicab piloted by a skinny, grungy student (?).  Dude, learn Linux, and your next tip from The Man at SXSW might just be a term sheet.

So whom did I meet, and what did I learn:

I had a great time listening to PRX.org's John Barth.  The Public Radio Exchange aggregates independent content suitable for radio (think The Moth), adds valuable services like consistent content metadata and rights management, and then acts as a distribution hub for stations that want to use it.  We talked about how they're planning to analyze listenership patterns with that metadata and other stuff (maybe gleaning audience demographics via Quantcast) for shaping content and targeting listeners.  He related for example that stations seem to prefer either one-hour programs they can use to fill standard-sized holes, or two- to seven-minute segments they can weave into pre-existing programs.  Documentary-style shows that weave music and informed commentary together are especially popular.  We explored whether production templates ("structured collaboration": think "Mad Libs" for digital media) might make sense.  Maybe later.

Paul Payack explained his Global Language Monitor service to me, and we explored its potential application as a complement if not a replacement for episodic brand trackers.  Think of it as a more sophisticated and source-ecumenical version of Google Insights for Search.

Kara Oehler's presentation on her Mapping Main Street project was great, and it made me want to try her Zeega.org service (a Harvard metaLAB project) as soon as it's available, to see how close I can get to replicating The Yellow Submarine for my son, with other family members spliced in for The Beatles.  Add it to my list of other cool projects I like, such as mrpicassohead.

Peter Boyce and Zach Hamed from Hack Harvard, nice to meet you. Here's a book that grew out of the class at MIT I mentioned -- maybe you guys could cobble together an O'Reilly deal out of your work!

Finally,  congrats to Perry Hewitt (here with Anne Cushing) and all her Harvard colleagues on a great evening!

 

[Image: Perry Hewitt and Anne Cushing]

 

 

October 18, 2010

Analytics Commons Post in Google Analytics Blog Today @analyticscommns @linchen @perryhewitt #analytics

Our Analytics Commons project (which I previously wrote about here) got written up in a post on the Google Analytics blog today.  (Thanks to Nick Mihailovski at Google, and to Perry Hewitt at Harvard!  And of course to my partners Lin and Kehan at New Circle Consulting!)

July 21, 2009

Facebook at 250 (Million): What's Next? And What Will Your Share Be?

Facebook announced last week that it had passed 250 million members.  Since no social network grows to the sky (as MySpace experienced before it), it's useful to reflect on the enablers and constraints to that growth, and on the challenges and opportunities those constraints present to other major media franchises (old and new) that are groping for a way ahead.

"Structured Collaboration" principles say social media empires survive and thrive based on how well they support value, affinity, and simplicity.  That is,

  • how useful (rationally and emotionally) are the exchanges of information they support?
  • how well do they support group structures that maximize trust and lower information vetting costs for members? 
  • how easy do they make it for users to contribute and consume information? 
(There are of course additional, necessary "means to these ends" factors, like "liquidity" -- the seed content and membership necessary to prime the pump -- and "extensibility" -- the degree to which members can adapt the service to their needs -- but that's for another post.)

My own experience with Facebook as a user, as well as my professional experience with it in client marketing efforts, has been:
  • Facebook focuses on broad, mostly generic emotional exchanges -- pictures, birthday reminders, pokes.  I get the first two, and I admire the economy of meaning in the third.  The service leaves it to you to figure out what else to share or swap.  As a result, it is (for me anyway) <linkbait> only sometimes  relevant as an element in a B2C campaign, and rarely relevant in a B2B campaign </linkbait>
  • Facebook roared past MySpace because it got affinity right -- initially.  That is, Facebook's structure was originally constrained -- you had to have an email address from the school whose Facebook group you sought to join.  Essentially, there had to be some pre-existing basis for affinity, and Facebook just helped (re-)build this connective tissue.  Then, Facebook allowed anyone to join, and made identifying the nature of relationships established or reinforced there optional.  Since most of us including me are some combination of busy and lazy, we haven't used this feature consistently to describe the origins and nature  of these relationships.  And, it's cumbersome and awkward to have to go back and re-categorize "friends". (An expedient hack on this might be to allow you to organize your friends into groups, and then ask you which groups you want to publish items to, as you go.)
  • Facebook is a mixed bag as a UI.  On one hand, by allowing folks to syndicate blogs and tweets into Facebook, they've made our life easier.  On the other, the popular unstructured communications vehicles -- like the "Wall" -- have created real problems for some marketers.  Structured forms of interaction that would have created less risky choices for marketers, like polls, have come later than they should have and are still problematic ( for example, you can't add polls to groups yet, which would be killer).  And, interacting with Facebook through my email client -- on my PC and on my smartphone -- is still painful.  To their credit, Facebook opened up a great API to enable others to build specialized forms of structured interaction on its social graph. But in doing so it's ceded an opportunity to own the data associated with potentially promising ones.  (Like prediction markets; Inkling Markets, for example, lets you syndicate notices of your trades to Facebook, but the cupboard's pretty bare still for pm apps running against Facebook directly.)
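
The "expedient hack" in the second bullet -- organize friends into groups up front, then pick the target groups per post -- is simple enough to sketch.  All names below are hypothetical illustrations, not any real Facebook API:

```python
from collections import defaultdict

class Profile:
    """Toy sketch of per-post audience selection by friend group."""
    def __init__(self):
        self.groups = defaultdict(set)   # group name -> set of friends
        self.feed = []                   # (post, audience) pairs

    def add_friend(self, friend, group):
        self.groups[group].add(friend)

    def publish(self, post, to_groups):
        # Audience is the union of the chosen groups, decided at post time,
        # so you never have to re-categorize old friendships retroactively.
        audience = set().union(*(self.groups[g] for g in to_groups))
        self.feed.append((post, audience))
        return audience

p = Profile()
p.add_friend("alice", "college")
p.add_friend("bob", "work")
p.add_friend("carol", "college")
print(p.publish("Reunion photos!", ["college"]))  # audience: alice and carol
```

The design point: tagging relationships once at creation, and choosing audiences at publish time, is much lower friction than the retroactive re-categorization the bullet complains about.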
The big picture: Facebook's optimizing size of the pie versus share of the pie.  It can't be all things to all people, so it's let others extend it and share in the revenue and create streams of their own.  Estimates of the revenues to be earned this year by the ecosystem of third party app developers running on Facebook and MySpace run to $300-500 million, growing at 35% annually.  
Them's not "digital dimes", especially in the context of steep September declines in, say, ad-page revenues of leading magazine franchises, as well as stalled television network upfronts.  But, folks might argue, "Do I want to live in thrall to the fickle Facebook API, and rent their social graph at a premium?"  The answer isn't binary -- how much of an app's functionality lives in Facebook, versus living on a publisher's own server, is a choice.  Plus, there are ways to keep Facebook honest, like getting behind projects like OpenSocial, as other social networks have done.  (OpenSocial is trying to become to Facebook's social graph as Linux is to Windows.  Engineer friends, I know -- only sort of.)  And, for Old Media types who don't feel they are up to the engineering tasks necessary, there are modern-day Levi Strausses out there selling jeans to the miners -- like Ning, which just today raised more money at a high valuation.  Still too risky?  Old Media could farm out app development to their own third-party developer networks, improving viral prospects by branding and promoting (to their subscriber lists) the ones they like in exchange for a cut of any revenues.  In this scenario, content gets added as an ingredient, not the whole main course.

What is true in the new environment is that reach-based ad network plays surfing on aggregated content won't pay any more.  Rather we have to think about services that would generate more revenue from narrower audiences.  The third-party games created by Facebook app developers referenced above demonstrate how those revenues might stem from value through entertainment.  As we speak, Apple and its developers are earning non-trivial sums from apps.  Phonetag has its hands in folks' pockets (mine included) for $10/month for its superuseful -- albeit non-social -- transcription service.  Filtering for relevant content is a big challenge and opportunity.  Might someone aggregate audiences with similar interests and offer a retail version sourced at wholesale from filtering service firms like Crimson Hexagon?  Looks like Porter Novelli may already be thinking down these lines...

Let's push the math: a winner service by anyone's measure would earn, say, $50M a year.  Four bucks a month from each person is roughly $50/year.  You'd then need a million folks, 1/250th of Facebook's user base, to sign up.  Reasonability check: consider the US circulation of some major magazine titles.

If your application service is especially useful, maybe you can get $2/month directly from each person.  Maybe you can make the rest up in ecommerce affiliate commissions (a 10% commission on $125 in annual purchases by each person gets you ~$1/month) and ad revenue (the $12 million/year nut would require one dollar per member per month; at a $10 CPM, that means each of your million users accounting for roughly 100 ad impressions a month -- a few a day -- to cover that nut).
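
For the curious, that arithmetic checks out in a few lines (figures taken straight from the paragraphs above; a back-of-the-envelope sketch, not a business model):

```python
members = 1_000_000  # 1/250th of Facebook's then-250M user base

# Annual revenue per member, per the figures above
subscription = 2.00 * 12     # $2/month paid directly        -> $24/year
affiliate    = 0.10 * 125    # 10% of $125 annual purchases  -> $12.50/year
ads          = 1.00 * 12     # $1/member/month ad target     -> $12/year

total = members * (subscription + affiliate + ads)
print(total)  # -> 48500000.0, near the $50M "winner service" benchmark

# How many ad impressions does $1/member/month take at a $10 CPM?
cpm = 10.00  # dollars per thousand impressions
impressions_per_member_per_month = (ads / 12) / (cpm / 1000.0)
print(impressions_per_member_per_month)  # -> 100.0, i.e., a few a day
```

Note how little slack there is: the ad line alone demands a hundred impressions per member per month, which is why the subscription and affiliate legs have to carry most of the weight.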

We also have to be prepared to live in a world where the premiums such services earn are as evanescent as mayflies, especially if we build them on open social graphs.  But that's ok -- just as Old Media winners built empires on excellent, timely editorial taste in content, New Media winners will build their franchises on "editorial noses" for function-du-jour, and function-based insights relevant to their advertisers.  And last time I checked, function and elegance were not mutually exclusive.

So, even as we salute the Facebook juggernaut as it steams past Media Beach, it's time to light some design workshop campfires, and think application services that have "Value, Affinity, Simplicity."

February 16, 2009

Facebook's New TOS: What About Syndicated Content?

Al Essa wonders how you can retain some control over your Facebook content given its new TOS.  


I license my posts here under the Creative Commons 3.0 license.  I syndicate these posts automatically to my Facebook profile through Facebook Notes.  

Which governs Facebook's rights to my syndicated content, CC3 or Facebook's TOS?

October 23, 2008

Social Media Talk at CMO Club of Boston Dinner November 11

My friend Perry Hewitt, who is the CMO at Crimson Hexagon, kindly asked me to join her in presenting a talk titled "From Communications to Conversations: Understanding, Managing, and Embracing UGC" at the CMO Club of Boston dinner on November 11, 2008.