About

I'm a partner in the advanced analytics group at Bain & Company, the global management consulting firm. My primary focus is on marketing analytics (bio). I've been writing here (views my own) about marketing, technology, e-business, and analytics since 2003 (blog name explained).


38 posts categorized "Structured Collaboration"

May 19, 2013

@nathanheller #MOOCs in The New Yorker: You Don't Need A Weatherman

The May 20th 2013 edition of The New Yorker has an article by Vogue writer Nathan Heller on Massive Online Open Courses (MOOCs) titled "Laptop U: Has the future of college moved online?"  The author explores, or at least raises, a number of related questions.  How (well) does the traditional offline learning experience transfer online?  Is the online learning experience more or less effective than the traditional one? (By what standard? For what material?  What is gained and lost?)  What will MOOCs mean for different colleges and universities, and their faculties?  How will the MOOC revolution be funded?  (In particular, what revenue model will emerge?)

Having worked a lot in the sector, for both public and private university clients, on everything from technology, to online-enabled programs themselves, to analytic approaches, and even marketing and promotion, I found the article a good prompt to try to boil out some ways to think about answering these questions.

The article focuses almost exclusively on Harvard and EdX, the 12-school joint venture through which it's pursuing MOOCs.  Obviously this skews the evaluation.  Heller writes:

Education is a curiously alchemical process. Its vicissitudes are hard to isolate.  Why do some students retain what they learned in a course for years, while others lose it through the other ear over their summer breaks?  Is the fact that Bill Gates and Mark Zuckerberg dropped out of Harvard to revolutionize the tech industry a sign that their Harvard educations worked, or that they failed?  The answer matters, because the mechanism by which conveyed knowledge blooms into an education is the standard by which MOOCs will either enrich teaching in this country or deplete it.

For me, the first step to boiling things out is to define what we mean by -- and want from -- an "education".  So, let's try to unpack why people go to college.  In most cases, Reason One is that you need a degree to get any sort of decent job.  Reason Two is to plug into a network of people -- fellow students, alumni, faculty -- that provides you a life-long community.  Of course you need a professional community for that Job thing, but also, in an otherwise anomic society, an archipelago to seed friendships, companionships, and self-definition (or at least scaffolding for your personal brand: as one junior I heard on a recent college visit put it memorably, "Being here is part of the personal narrative I'm building.").  Reason Three -- firmly third -- is to get an "education" in the sense that Heller describes.  (Apropos: check this recording of David Foster Wallace's 2005 commencement address at Kenyon College.) 

This hierarchy of needs then gives us a way to evaluate the prospects for MOOCs.

If organization X can produce graduates demonstrably better qualified (through objective testing, portfolios of work, and experience) to do job Y, at a lower cost, then it will thrive.  If organization X can do this better and cheaper by offering and/or curating/aggregating MOOCs, then MOOCs will thrive.  If a MOOC can demonstrate an adequately superior result/contribution to the end outcome, and do it inexpensively enough to hold its place in the curriculum, and do it often enough that its edge becomes a self-fulfilling prophecy -- a brand, in other words -- then it will crowd out its competitors, as surely as one plant shuts out the sunlight to another.  Anyone care to bet against Georgia Tech's new $7K Master's in Computer Science?

If a MOOC-mediated social experience can connect you to a Club You Want To Be A Member Of, you will pay for that.  And if a Club That Would Have You As A Member can attract you to its clubhouse with MOOCs, then MOOCs will line the shelves of its bar.  The winning MOOC cocktails will be the ones that best produce the desired social outcomes, with the greatest number of satisfying connections.

Finally, learning is as much about the frame of mind of the student as it is about the quality of the teacher.  If through the MOOC the student is able to choose a better time to engage, and can manage better the pace of the delivery of the subject matter, then the MOOC wins.

Beyond general prospects, as you consider these principles, it becomes clear that the question is less whether MOOCs win than which ones, for what and for whom, and how.  

The more objective and standardized -- and thus measurable and comparable -- the learning outcome and the standard of achievement, the greater the potential for a MOOC to dominate. My program either works, or it doesn't.  

If a MOOC facilitates the kinds of content exchanges that seed and stimulate offline social gatherings -- pitches to VCs, or mock interviewing, or poetry, or dance routines, or photography, or music, or historical tours, or bird-watching trips, or snowblower-maintenance workshops -- then it has a better chance of fulfilling the longings of its students for connection and belonging.  

And, the more well-developed the surrounding Internet ecosystem (Wikipedia, discussion groups, Quora forums, and beyond) is around a topic, the less I need a Harvard professor, or even a Harvard grad student, to help me, however nuanced and alchemical the experience I miss might otherwise have been.  The prospect of schlepping to class or office hours on a cold, rainy November night has a way of diluting the urge to be there live in case something serendipitous happens.

Understanding how MOOCs win then also becomes a clue to understanding potential revenue models.  

If you can get accredited to offer a degree based in part or in whole on MOOCs, you can charge for that degree, and get students or the government to pay for it (Exhibit A: University of Phoenix).  That's hard, but as a variant of this, you can get hired by an organization, or a syndicate of organizations you organize, to produce tailored degree programs -- think corporate training programs on steroids -- that use MOOCs to filter and train students.  (Think "You, Student, pay for the 101-level stuff; if you pass, you get a certificate and an invitation to attend the 201-level stuff that we fund; if you pass that, we give you a job.")  

Funding can come directly, or be subsidized by sponsors and advertisers, or both.  

You can try to charge for content: if you produce a MOOC that someone else wants to include in a degree-based program, you can try to license it, in part or in whole.  

You can make money via the service angle, the way self-publishing firms support authors, with a variety of best-practice-based production services.  Delivery might be offered via a freemium model -- the content might be free, but access to premium groups, with teaching assistant support, might come at a price.  You can also promote MOOCs -- build awareness, drive distribution, even simply brand -- for a cut of the action, the way publishers and event promoters do.  

Perhaps in the not-too-distant future we'll get the Academic Upfront, in which universities front a semester's worth of classes in a MOOC, then pitch the class to sponsors, the way TV networks do today. Or maybe the retail industry also offers a window into how MOOCs will be monetized.  Today's retail environment is dominated by global brands (think professors as fashion designers) and big-box (plus Amazon) firms that dominate supply chains and distribution networks.  Together, Brands and Retailers effectively act as filters: we assume that the products on their shelves are safe, effective, reasonably priced, acceptably stylish, well-supported.  In exchange, we'll pay their markup.  This logic sounds a cautionary note for many schools: boutiques can survive as part of or at the edges of the mega-retailers' ecosystems, but small-to-mid-size firms reselling commodities get crushed.

Of course, these are all generic, unoriginal (see Ecclesiastes 1:9) speculations.  Successful revenue models will blend careful attention to segmenting target markets and working back from their needs, resources, and processes (certain models might be friendlier to budgets and purchasing mechanisms than others) with thoughtful in-the-wild testing of the ideas.  Monolithic executions with Neolithic measurement plans ("Gee, the focus group loved it, I can't understand why no one's signing up for the paid version!") are unlikely to get very far.  Instead, be sure to design with testability in mind (make content modular enough to package or offer a la carte, for example).  Maybe even use Kickstarter as a lab for different models!

PS: Heller's brilliant sendup of automated essay grading

Postscript:

The MOOC professor perspective, via the Chronicle, March 2013


March 12, 2012

#SXSW Trip Report Part 2: Being There

(See here for Part 1)

Here's one summary of the experience that's making the rounds:

 

[Image: "Missing SXSW" graphic]

 

I wasn't able to be there all that long, but my impression was different.  Men of all colors (especially if you count tattoos), and lots more women (many tattooed also, and extensively).   I had a chance to talk with Doc Searls (I'm a huge Cluetrain fan) briefly at the Digital Harvard reception at The Parish; he suggested (my words) the increased ratio of women is a good barometer for the evolution of the festival from narcissistic nerdiness toward more sensible substance.  Nonetheless, on the surface, it does remain a sweaty mosh pit of digital love and frenzied networking.  Picture Dumbo on spring break on 6th and San Jacinto.  With light sabers:

 

[Image: SXSW light sabers]

 

Sight that will haunt my dreams for a while: VC-looking guy, blazer and dress shirt, in a pedicab piloted by a skinny, grungy student (?).  Dude, learn Linux, and your next tip from The Man at SXSW might just be a term sheet.

So whom did I meet, and what did I learn?

I had a great time listening to PRX.org's John Barth.  The Public Radio Exchange aggregates independent content suitable for radio (think The Moth), adds valuable services like consistent content metadata and rights management, and then acts as a distribution hub for stations that want to use it.  We talked about how they're planning to analyze listenership patterns with that metadata and other stuff (maybe gleaning audience demographics via Quantcast) for shaping content and targeting listeners.  He related for example that stations seem to prefer either one-hour programs they can use to fill standard-sized holes, or two- to seven-minute segments they can weave into pre-existing programs.  Documentary-style shows that weave music and informed commentary together are especially popular.  We explored whether production templates ("structured collaboration": think "Mad Libs" for digital media) might make sense.  Maybe later.

Paul Payack explained his Global Language Monitor service to me, and we explored its potential application as a complement if not a replacement for episodic brand trackers.  Think of it as a more sophisticated and source-ecumenical version of Google Insights for Search.

Kara Oehler's presentation on her Mapping Main Street project was great, and it made me want to try her Zeega.org service (a Harvard metaLAB project) as soon as it's available, to see how close I can get to replicating The Yellow Submarine for my son, with other family members spliced in for The Beatles.  Add it to my list of other cool projects I like, such as mrpicassohead.

Peter Boyce and Zach Hamed from Hack Harvard, nice to meet you. Here's a book that grew out of the class at MIT I mentioned -- maybe you guys could cobble together an O'Reilly deal out of your work!

Finally, congrats to Perry Hewitt (here with Anne Cushing) and all her Harvard colleagues on a great evening!

 

[Image: Perry Hewitt and Anne Cushing]

 

 

January 15, 2011

Lifetime Learning

A lovely Saturday:

[Image: snow]

A perfect day for some refreshment:

[Image: how I spent my weekend]

Studying http://philip.greenspun.com/teaching/rdbms-iap-2011

Why?  (And, why now?)  Relational databases and SQL have been around for forty years.  Yet, no reasonable business person would disagree that:

1. it's useful to know how to use spreadsheet software, both to DIY and to manage others who do;

2. there's much more information out there today;

3. harnessing this information is not only advantageous but essential;

4. more powerful tools like database management systems are necessary for this.

Therefore, business people should know a little bit about these more powerful tools, to continue to be considered reasonable.
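
To make that concrete, here's a minimal sketch, using Python's built-in sqlite3 module, of the kind of question that's clumsy in a spreadsheet but one short query in a relational database (the tables and numbers are invented for illustration):

```python
import sqlite3

# Toy schema: customers live in towns; orders belong to customers.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, town TEXT);
    CREATE TABLE orders (customer_id INTEGER, amount REAL);
    INSERT INTO customers VALUES (1, 'Greenwich'), (2, 'Wellesley'), (3, 'Greenwich');
    INSERT INTO orders VALUES (1, 120.0), (1, 80.0), (2, 45.0), (3, 200.0);
""")

# "Which towns drive the most revenue?" -- a join plus GROUP BY,
# versus a thicket of VLOOKUPs and pivot tables in a spreadsheet.
for town, revenue in conn.execute("""
    SELECT c.town, SUM(o.amount) AS revenue
    FROM customers c JOIN orders o ON o.customer_id = c.id
    GROUP BY c.town ORDER BY revenue DESC
"""):
    print(town, revenue)
```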

October 18, 2010

Analytics Commons Post in Google Analytics Blog Today @analyticscommns @linchen @perryhewitt #analytics

Our Analytics Commons project (which I previously wrote about here) got written up in a post on the Google Analytics blog today. (Thanks to Nick Mihailovski at Google, and to Perry Hewitt at Harvard!  And of course to my partners Lin and Kehan at New Circle Consulting!)

October 15, 2010

Extending Marketing Integration to Agencies: The "Agency API" @rwlord #rzcs

My friends at Razorfish kindly invited me to their client summit in Boston this week.  It was a great event; they and their clients are working on some pretty cool stuff.  Social is front and center.  Close behind: lots of interesting touch / surface computing innovations (Pranav Mistry from the MIT Media Lab really blew our minds). 

In his opening comments Wednesday, Razorfish CEO Bob Lord made the point that for modern marketing to be effective it has to be integrated across silos, of course; but, further, that this integration has to extend to agencies working together effectively on behalf of their clients, just as clients responsible for different channels and functions need to work together themselves.

I've been wondering about this as well recently, and Bob's comments prompted me to write up some notes.

One observation is that *if* marketers are addressing agency collaboration, they usually start with an *organizational* solution that brings agencies together from time to time.   While this is great, it's insufficient.  To make entities like these effective, it helps to lay a foundation that *registers* and *reconciles* different aspects of what agencies have and do for their clients.  This foundation, realized through a simple, cheap tool like Basecamp if not the marketer's own intranet system, could include:

  • a data registry.  Agencies executing campaigns on behalf of their clients end up controlling data sets (display ad impressions and clicks, email opens, TV ratings, focus group / panel results) that are crucial to understanding the performance of marketing investments, but are typically beyond the scope of what IT's "enterprise data architectures" encompass.  I'm not suggesting that agencies need to ship this data to their clients; rather, that they simply register what's collected, who's got it, and how a client can get it if needed.
  • an insight registry.  Agency folks crunch data into fancy powerpoints bearing the insights on which campaign decisions get made.  It would be very helpful if these decks were tagged and linked from the appropriately-permissioned online workspace.
  • a campaign registry.  Think http://adverblog.com, only for the marketer's own (and perhaps direct competitors') campaigns of any stripe.  A place to put creative briefs and link to campaign assets / executions that implement them.
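
To make the registry idea concrete, here's a minimal sketch of what a single data-registry entry might capture.  The field names are hypothetical, and in practice this could just be a row in a Basecamp list or a shared spreadsheet; the point is the metadata -- what exists, who holds it, how to get it -- not the data itself:

```python
from dataclasses import dataclass, field

# Hypothetical shape of one data-registry entry; fields are illustrative.
@dataclass
class DataRegistryEntry:
    dataset: str           # e.g., "Display ad impressions and clicks, Q3"
    agency: str            # which agency controls the data
    contact: str           # whom a client should ask
    how_to_get_it: str     # format, turnaround, cost, caveats
    tags: list = field(default_factory=list)

entry = DataRegistryEntry(
    dataset="Email opens, spring campaign",
    agency="Example Media Agency",
    contact="analytics-lead@agency.example",
    how_to_get_it="CSV extract on request; ~2 business days",
    tags=["email", "campaign-performance"],
)
print(entry.dataset, "->", entry.contact)
```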

These approaches are simple, cheap, and "many hands make light work".  Implementing them collectively as a marketer's "Agency API" would help marketers and agencies to "reconcile" their work, in the following ways:

  • discover campaign conflicts and integration opportunities
  • unpack possible disagreements into manageable bites to resolve -- data conflicts, insight conflicts, analytic technique conflicts, creative brief conflicts
  • better prepare in advance of "interagency council" meetings

Of course the registries are not a panacea for inter-agency "issues" that a marketer needs to sort through, but they help to make any problematic issues more transparent and straightforward to focus on.  

Please take the poll below.  Reactions / suggestions welcome from folks with relevant experience!

 

July 16, 2010

Analytic Commons Project

With inspiration and encouragement from @perryhewitt, New Circle Consulting and Force Five Partners have launched the Analytics Commons Project (http://analyticscommons.com).  Here's the pitch:

Web analytics is a relatively new field that is evolving very quickly. Fortunately, it's been our experience that the community of web analysts is welcoming, vibrant, and very willing to share. The Web Analytics forum on Yahoo! is a wonderful example of this.

Analytics Commons is an effort to improve on this sharing by structuring it a bit. With structure, we can make relevant knowledge a little easier to find, and we can also make it easier to vet the expertise and reliability of the source of that knowledge. (The new Web Analytics Association Certification program is another good step in this direction.)

In designing Analytics Commons, we also decided to start by focusing on a specific form of analytics knowledge, rather than trying at the outset to architect some general information architecture for the field that could capture all its (quickly changing) variety. In particular, we noticed:

  • Google Analytics is ubiquitous.
  • We're happy users of it.
  • It recently added the ability to share Advanced Segments and Custom Reports.
  • While GA has an Apps Gallery that features third-party creations, there is currently no public registry of such shared reports that we're aware of.
  • But, there does seem to be pent-up demand for sharing reports.
  • And, we had a specific itch we needed to scratch ("Target Towns", more on that below) that would help us Keep It Real.

We also figured we would start with something that would be within our ability to actually get done. Our ambition for this initiative doesn't stop here, however. So, the service also provides a way for visitors and users to offer feedback to shape the vision and the path for getting there.

So how does it work? If we've done our job well, it's hopefully self-evident. You register on the Analytics Commons site and tell us a little about yourself, ideally through links to places where you keep your description up to date (e.g., LinkedIn, Twitter, etc.).

If you've got a report to contribute, you get its url by clicking the "Share" button in the Custom Reports or Advanced Segments section of a GA profile to which you have access. Then you add the url to our service, and tag and describe what you've shared.

If you need a report, you search for it on our service. If you find and try a report, all we ask is that you rate and comment on it to tell us how well it matched what you needed. Hopefully, discussions about each report will happen on our service, but if you want to connect privately with a report contributor, we've made room in our registered user profiles for folks to provide contact information if they wish.

If you don't find what you were looking for, we let you store the search on our service, and if something matches in the future, we'll send you an email with the search results. If you want, you can subscribe to a weekly email listing new reports that have been added to our service, or get an RSS feed of the same.

The service is free to its users. Our privacy policy is simple: everything here is public, except your registration email if you choose not to share that. We won't share that with anyone, period. If you share a report, we assume you have the authority to do that. If you comment on a report, please be polite and constructive. We reserve the right to moderate comments, and to ban anyone who posts material we deem to be inappropriate or offensive.

We saved some space on our pages for advertising / sponsorship, to help cover the server bills. If you're interested, please contact us.

Questions? Suggestions? contact us if you wish at [email protected].

About "Target Towns"

In our work for a client, we observed the following:

  • They target wealthy customers.
  • Wealth is highly concentrated in the US.
  • Wealthy people are highly concentrated in a few towns.

Therefore, we thought it would be useful to track traffic and behavior from these "Target Towns".

We tried to construct an Advanced Segment for "Target Towns" through the GA UI. It didn't appear to support what we had in mind. So we asked for help. Avinash Kaushik, Nick Mihailovski, Judah Phillips, and Justin Cutroni all helped us with a piece of the puzzle (Thank You all!). In the end, it turned out that we needed to use the GA API. But the API also had limits on how much information you could hit it with in a single query. So we figured we needed a service that would pass the towns ("Dimensions") about which you wanted information ("Metrics") past the API sequentially, and then would aggregate and present the results in a usable form.
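
Here's a rough sketch of that sequential-query-and-aggregate pattern. The fetch_ga_visits helper is hypothetical -- a stand-in for a real call to the GA API with a city filter -- since the point here is the chunking and aggregation, not the API details:

```python
from collections import Counter

TARGET_TOWNS = ["Greenwich, CT", "Wellesley, MA", "Palo Alto, CA"]  # ...the full list
CHUNK = 25  # stay under the API's per-query limits

def fetch_ga_visits(towns):
    """One query's worth of {town: visits}. A real version would call the
    GA API with a city filter built from `towns`; stubbed with zeros here."""
    return {t: 0 for t in towns}

# Walk the town list past the API in chunks, then aggregate the results.
totals = Counter()
for i in range(0, len(TARGET_TOWNS), CHUNK):
    totals.update(fetch_ga_visits(TARGET_TOWNS[i : i + CHUNK]))

for town, visits in totals.most_common():
    print(town, visits)
```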

Then we thought: "This is a report many people are likely to need!" So, the "Target Towns" service seemed like it would be a good candidate to help seed our Analytic Commons initiative.

March 13, 2010

Fly-By-Wire Marketing, Part II: The Limits Of Real Time Personalization

A few months ago I posted on what I called "Fly-By-Wire Marketing", or the emergence of the automation of marketing decisions -- and sometimes the automation of the development of rules for guiding those decisions.

More recently Brian Stein introduced me to Hunch, the new recommendation service founded by Caterina Fake of Flickr fame.  (Here's their description of how it works.  Here's my profile, I'm just getting going.)  When you register, you answer questions to help the system get to know you.  When you ask for a recommendation on a topic, the system not only considers what others have recommended under different conditions, but also what you've told it about you, and how you compare with others who have sought advice on the subject.

It's an ambitious service, both in terms of its potential business value (as an affiliate on steroids), but also in terms of its technical approach to "real time personalization".  Via Sim Simeonov's blog, I read this GigaOm post by Tom Pinckney, a Hunch co-founder and their VP of Engineering.  Sim's comment sparked an interesting comment thread on Tom's post.  They're useful to read to get a feel for the balance between pre-computation and on-the-fly computation, as well as the advantages of and limits to large pre-existing data sets about user preferences and behavior, that go into these services today.

One thing neither post mentions is that there may be diminishing returns to increasingly powerful recommendation logic if the set of things from which a recommendation can ultimately be selected is limited at a generic level.  For example, take a look at Hunch's recommendations for housewarming gifts.  The results more or less break down into wine, plants, media, and housewares.  Beyond this level, I'm not sure the answer is improved by "the wisdom of Hunch's crowd" or "Hunch's wisdom about me", as much as my specific wisdom about the person for whom I'm getting the gift, or maybe by what's available at a good price. (Perhaps this particular Hunch "topic" could be further improved by crossing recommendations against the intended beneficiary's Amazon wish list?)

My point isn't that Hunch isn't an interesting or potentially useful service.  Rather, as I argued several months ago,

The [next] question you ask yourself is, "How far down this road does it make sense for me to go, by when?"  Up until recently, I thought about this with the fairly simplistic idea that there are single curves that describe exponentially decreasing returns and exponentially increasing complexity.  The reality is that there are different relationships between complexity and returns at different points -- what my old boss George Bennett used to call "step-function" change.

For me, the practical question-within-a-question this raises is, for each of these "step-functions", is there a version of the algorithm that's only 20% as complex, that gets me 80% of the benefit?  My experience has been that the answer is usually "yes".  But even if that weren't the case, my approach in jumping into the uncharted territory of a "step-function" change in process, with new supporting technology and people roles, would be to start simple and see where that goes.

At minimum, given the "step-function" economics demonstrated by the Demand Medias of the world, I think senior marketing executives should be asking themselves, "What does the next 'step-function' look like?", and "What's the simplest version of it we should be exploring?" (Naturally, marketing efforts in different channels might proceed down this road at different paces, depending on a variety of factors, including the volume of business through that channel, the maturity of the technology involved, and the quality of the available data...)

Hunch is an interesting specific example of the increasingly broad RTP trend.  The NYT ran an article on real time bidding for display ads just yesterday, for example.  The deeper issue in the trend, for me, is the shift in power and profit toward specialized third parties who develop the capability to match the right cookie to the right ad unit (or, for humans, the right user to the right advertiser), and away from publishers with audiences.  In the case of Hunch, they're one and the same, but they're the exception.  How much of the increased value advertisers are willing to pay for better targeting goes to the specialized provider with the algorithm and the computing power, versus the publisher with the audience and the data about its members' behavior?  And for that matter, how can advertisers better optimize their investments across the continuum of targeting granularity?  Given the dollars now flooding into digital marketing, these questions aren't trivial.

January 26, 2010

What's NYT.com Worth To You, Part II

OK, with the response curve for my survey tailing off, I'm calling it.  Here, dear readers, is what you said (click on the image to enlarge it):

[Image: NYT.com paid content survey results]

(First, stats: with ~40 responses -- there are fewer points because of some duplicate answers -- you can be 95% sure that answers from the rest of the ~20M people that read the NYT online would be within +/- 16% of what's here.)
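
For the curious, that +/- 16% is just the worst-case 95% margin of error for a simple random sample of about 40 (which a self-selected blog-reader survey isn't, strictly, but it's the standard back-of-envelope).  A quick check:

```python
from math import sqrt

# Worst-case (p = 0.5) 95% margin of error for n survey responses.
n = 40
moe = 1.96 * sqrt(0.5 * 0.5 / n)
print(f"+/- {moe:.1%}")  # +/- 15.5%, i.e., roughly 16 points
```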

90% of respondents would pay at least $1/month, and several would pay as much as $10/month. And, folks are ready to start paying after only ~2 articles a day.  Pretty interesting!  More latent value than I would have guessed.  At the same time, it's also interesting to note that no one went as high as the $14 / month Amazon wants to deliver the Times on the Kindle. (I wonder how many Kindle NYT subs are also paper subs getting the Kindle as a freebie tossed in?)

Only a very few online publishers aiming at "the general public" will be able to charge for content on the web as we have known it, or through other newer channels.  Aside from highly-focused publishers whose readers can charge subscriptions to expense accounts, the rest of the world will scrape by on pennies from AdSense et al.

But, you say, what about the Apple Tablet (announcement tomorrow! details yesterday), and certain publishers' plans for it?  I see several issues:

  • First, there's the wrestling match to be had over who controls the customer relationship in Tabletmediaworld. 
  • Second, I expect the rich, chocolatey content (see also this description of what's going on in R&D at the Times) planned for this platform and others like it to be more expensive to produce than what we see on the web today, both because a) a greater proportion of it will be interactive (it must be, to be worth paying for), and because b) producing for multiple proprietary platforms will drive costs up (see, for example, today's good article in Ad Age by Josh Bernoff on the "Splinternet"). 
  • Third, driving content behind pay walls lowers traffic, and advertising dollars with it, raising the break-even point for subscription-based business models. 
  • Fourth, last time I checked, the economy isn't so great. 
The most creative argument I've seen "for" so far is that pushing today's print readers/ subscribers to tablets will save so much in printing costs that it's almost worth giving readers tablets (well, Kindles anyway) for free -- yet another edition of the razor-and-blade strategy, in "green" wrapping perhaps.

The future of paid content is in filtering information and increasing its utility.  Media firms that deliver superior filtering and utility at fair prices will survive and thrive.  Amid the Times' innovations in visual displays of information (which, though creative, I'd guess have a limited monetization impact), there is evidence that it agrees with this, at least in part (from the article on Times R&D linked to above):

When Bilton swipes his Times key card, the screen pulls up a personalized version of the paper, his interests highlighted. He clicks a button, opens the kiosk door, and inside I see an ordinary office printer, which releases a physical printout with just the articles he wants. As it prints, a second copy is sent to his phone.

The futuristic kiosk may be a plaything, but it captures the essence of R&D’s vision, in which the New York Times is less a newspaper and more an informative virus—hopping from host to host, personalizing itself to any environment.

Aside from my curiosity about the answers to the survey questions themselves, I had another reason for doing this survey.  All the articles I saw on the Times' announcement that it would start charging had the usual free-text commenting going.  Sprinkled through the comments were occasional suggestions from readers about what they might pay, but it was virtually impossible to take any sort of quantified pulse on this issue in this format.  Following "structured collaboration" principles, I took five minutes to throw up the survey to make it easy to contribute and consume answers.  Hopefully I've made it easier for readers to filter / process the Times' announcement, and made the analysis useful as well -- for example, feel free to stick the chart in your business plan for a subscription-based online content business ;-)  If anyone can point me to other, larger, more rigorous surveys on the topic, I'd be much obliged.

The broader utility of structuring the data capture this way is perhaps greatest to media firms themselves:  indirectly for ad and content targeting value, and perhaps because once you have lots of simple databases like this, it becomes possible to weave more complex queries across them, and out of these queries, some interesting, original editorial possibilities.

Briefly considered, then rejected for its avarice and stupidity: personalized pricing offers to subscribe to the NYT online based on how you respond to the survey :-)

Postscript: via my friend Thomas Macauley, NY (Long Island) Newsday is up to 35 paid online subs.

January 01, 2010

Grokking Google Wave: The Homeland Security Use Case (And Why You Should Care)

A few people asked me recently what I thought of Google Wave.  Like others, I've struggled to answer this.

In the past few days I've been following the news about the failed attempt to blow up Northwest 253 on Christmas Day, and the finger-pointing among various agencies that's followed it.  More particularly, I've been thinking less about whose fault it is and more about how social media / collaboration tools might be applied to reduce the chance of a Missed Connection like this.

A lot of the comments by folks in these agencies went something like, "Well, they didn't tell us that they knew X," or "We didn't think we needed to pass this information on."  What most of these comments have in common is that they're rooted in a model of person-to-person (or point-to-point) communication, which creates the possibility that one might "be left out of the loop" or "not get the memo".

For me, this created a helpful context for understanding how Google Wave is different from email and IM, and why the difference is important.  Google Wave's issue isn't that the fundamental concept's not a good idea.  It is.  Rather, its problem is that it's paradigmatically foreign to how most people (excepting the wikifringe) still think.

Put simply, Google Wave makes conversations ("Waves") primary, and who's participating secondary.  Email, in contrast, makes participants primary, and the subjects of conversations secondary.  In Google Wave, with the right permissions, folks can opt into reading and participating in conversations, and they can invite others.  The onus for awareness shifts from the initiator of a conversation to folks who have the permission and responsibility to be aware of the conversation.  (Here's a good video from the Wave team that explains the difference right up front.)  If the conversation about Mr. Abdulmutallab's activities had been primary, the focus today would be about who read the memo, rather than who got it.  That would be good.  I'd rather we had a filtering problem than an information access / integration problem.

You may well ask, "Isn't the emperor scantily clad -- how is this different from a threaded bboard?"  Great question.   One answer might be that "Bboards typically exist either independently, or as features of separate purpose-specific web sites.  Google Wave is to threaded bboard discussions as Google Reader is to RSS feeds -- a site-independent conversation aggregator, just as Google Reader is a site-independent content aggregator."   Nice!  Almost: one problem of course is that Google Wave today only supports conversations that start natively in Google Wave.  And, of course, that you can (sometimes) subscribe to RSS feeds of bboard posts, as in Google Groups, or by following conversations by subscribing to RSS feeds for Twitter hashtags.  Another question: "How is Google Wave different from chat rooms?"  In general, most chats are more evanescent, while Waves appear (to me) to support both synchronous chat and asynchronous exchanges equally well.

Now the Big Question: "Why should I care?  No one is using Google Wave anyway."  True (only 1 million invitation-only beta accounts as of mid-November, active number unknown) -- but at least 146 million people use Gmail.  Others already expect Google Wave eventually will be introduced as a feature for Gmail: instead of / in addition to sending a message, you'll be able to start a "Wave".  It's one of the top requests for the Wave team.  (Gmail already approximates Wave by organizing its list of messages into threads, and by supporting labeling and filtering.)  Facebook, with groups and fan pages, appears to have stolen a march on Google for now, but for the vast bulk of the world that still lives in email, it's clunky to switch back and forth.  The killer social media / collaboration app is one that tightly integrates conversations and collaboration with messaging, and the prospect of Google-Wave-in-Gmail is the closest solution with any realistic adoption prospects that I can imagine right now.

So while it's absurdly early, marketers, you read it here first: Sponsored Google Waves :-)  And for you developers, it's not too early to get started hacking the Google Wave API and planning how to monetize your apps.

Oh, and Happy New Year!

Postscript: It was the software's fault...

Postscript #2: Beware the echo chamber

October 06, 2009

Twitter Idea Of The Day

I just read Clive Thompson's piece on wired.com describing the rise of search engines focused on real-time trend monitoring, as opposed to indexing based on authority.  It's good, short, and I recommend it.

Building on ideas I had a while back, the piece provoked an idea for a web service that would allow a group sponsor to register Twitter feeds (or, for that matter, any kind of feed) from members of the group, do a word-frequency analysis on those feeds (with appropriate filters of course), and then display snapshots (perhaps with a word cloud) of popularity, and trend analysis (fastest-rising, fastest-falling).  You could also have specialized content variants: most popular URLs, most popular tags.  Clicking through from any particular word (or url or tag) you could do a network analysis: which member of the group first mentioned the item, and who re-tweeted him or her, either with attribution or without.
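
The core of the idea fits in a few lines.  A minimal sketch, with toy data standing in for two weekly snapshots of a group's feeds (a real version would pull each member's feed via the Twitter API or RSS):

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "or", "to", "of", "in", "is", "rt"}

def word_counts(tweets):
    """Word-frequency snapshot for a batch of tweets, with a crude filter."""
    words = re.findall(r"[#@]?\w+", " ".join(tweets).lower())
    return Counter(w for w in words if w not in STOPWORDS)

def fastest_rising(previous, current, top=10):
    """Crude trend analysis: biggest count gains between two snapshots."""
    gains = Counter({w: current[w] - previous.get(w, 0) for w in current})
    return gains.most_common(top)

last_week = word_counts(["Heading to #sxsw next week", "any #sxsw panel picks?"])
this_week = word_counts(["#sxsw is a mosh pit of digital love", "great #sxsw panel on radio"])
print("popular:", this_week.most_common(5))
print("rising:", fastest_rising(last_week, this_week))
```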

The builder of a service like this would construct it as a platform that would allow group sponsors to set up individual accounts with one or more groups, and it would allow these sponsors to aggregate groups up or drill down from an aggregate cross-group view down to individual ones, perhaps with some comparative analysis -- "show me the relative popularity of any given word / content item across my groups", for example.

Twitter already has trending topics, as do others, but the lack of grouping for folks relevant to me makes it (judging by the typical results) barely interesting and generally useless to me.  There are visual views of news, like Newsmap, but they pre-filter content by focusing on published news stories.

An additional layer of sophistication based on semantic analysis technology like, say, Crimson Hexagon's would translate individual key words into broader categories of meaning, so you could see, at a glance, in what ways and proportions your group members were feeling about different things: "Well, it's Monday morning, and 2/3 of my users are feeling 'anxious' about work, while 1/3 are feeling 'inspired' on vacation."

As for making money, buzz-tracking services are already bought by / licensed by / subscribed to by a number of organizations.  I could see a two-stage model here where group sponsors who aggregate and process their members' feeds could then re-syndicate fine-grained analysis of those feeds to media and other organizations to whom that aggregated view would be useful. "What are alumni of university X / readers of magazine Y focused on right now?"  The high-level cuts would be free, perhaps used to drive traffic.