About

I'm a partner in the advanced analytics group at Bain & Company, the global management consulting firm. My primary focus is on marketing analytics (bio). I've been writing here (views my own) about marketing, technology, e-business, and analytics since 2003 (blog name explained).


June 22, 2014

Data and Disruption

In the June 23 edition of The New Yorker, in an article titled "The Disruption Machine", Harvard history professor Jill Lepore critiques the theories of Harvard Business School professor Clayton Christensen.  Her article is wonderfully written of course, but curiously sharp in tone (or is it me?). Re-reading it, I have the impression she's sick of the Manifest Destiny-like over-simplification and over-application of Christensen's theories:

Most big ideas have loud critics. Not disruption. Disruptive innovation as the explanation for how change happens has been subject to little serious criticism, partly because it’s headlong, while critical inquiry is unhurried; partly because disrupters ridicule doubters by charging them with fogyism, as if to criticize a theory of change were identical to decrying change; and partly because, in its modern usage, innovation is the idea of progress jammed into a criticism-proof jack-in-the-box.

But unfortunately the baby appears to go out with the bathwater. Lepore suggests -- accuses, really -- that Christensen cherry-picked his data, and describes a dynamic in business scholarship (as distinguished from practices in other fields) that over-promotes findings. In taking aim, fairly or not, at Christensen (who beyond his reputation as a scholar is even more respected for his integrity), Lepore goes for the jugular of the academic-industrial complex.

Professor Christensen replied at length to Lepore's piece in an interview he gave last Friday to Bloomberg Businessweek reporter Drake Bennett, taking significant issue with her points -- maybe too personally?

Well, in the first two or three pages, it seems that her motivation is to try to rein in this almost random use of the word “disruption.” The word is used to justify whatever anybody—an entrepreneur or a college student—wants to do. And as I read that, I was delighted that somebody with her standing would join me in trying to bring discipline and understanding around a very useful theory. I’ve been trying to do it for 20 years.

And then in a stunning reversal, she starts instead to try to discredit Clay Christensen, in a really mean way. And mean is fine, but in order to discredit me, Jill had to break all of the rules of scholarship that she accused me of breaking—in just egregious ways, truly egregious ways. In fact, every one—every one—of those points that she attempted to make [about The Innovator’s Dilemma] has been addressed in a subsequent book or article. Every one! And if she was truly a scholar as she pretends, she would have read [those]. I hope you can understand why I am mad that a woman of her stature could perform such a criminal act of dishonesty—at Harvard, of all places.

Why am I interested, and why should you care?

Christensen's research into innovation, and his books like The Innovator's Dilemma and its successors, have pretty much dominated the landscape on the subject for nearly twenty years. If you're a senior executive at a big established firm, he's made you paranoid.  If you're an entrepreneur at a small, ambitious firm, you're telling your investors, customers, and employees you're one of the "disruptive innovations" making those executives paranoid. There's even a slim chance you might be right! Lepore is a Pulitzer Prize and National Book Award winner (as well as a department chair at Harvard).  Scholars of her prominence usually don't take on other scholars, particularly outside their field, without good reason or care. Christensen's reply suggests she had neither, and prior critiques of her work suggest this might be the latest in a pattern of occasional trip-ups. In any event, the critique's attracted some attention. If you follow either of these folks for professional reasons (or simply are drawn to MMA-style clashes between academic titans), you might be interested in tracking this story too.

I'm interested because the controversy highlights a pattern I often see -- one I use a variety of techniques to compensate for, as I describe in my new book Marketing and Sales Analytics.

The pattern is to treat ideas and supporting evidence like Christensen's as the last word on a subject.  After all, he teaches at Harvard Business School, right? But even Christensen says (in his Bloomberg Businessweek interview) that models and supporting stories like his are at best partly predictive points of departure for any specific "truth" I need to track and hedge against.

The psychologist Daniel Kahneman, who wrote the best-seller Thinking, Fast and Slow about his research into cognitive bias, talks about the power of stories, and in particular about our susceptibility to them and to other biases when we're tired, frazzled, juggling a thousand things. I think Christensen's theory of disruptive innovations invites criticisms like Lepore's because it mirrors, purposefully or not, an especially powerful archetypal story, in this case of the hero who battles long odds but ultimately triumphs grandly.

Recent history and scholarship have reinforced the power of this particular theme.  The Internet revolution has created a number of poster children for it (I'm reminded of Paul Simon's line "Every generation throws a hero up the pop charts", and in our generation we've had several), and created conditions for accelerating the pace of disruption. Nassim Taleb's scholarship, in particular his own best-selling book The Black Swan, has sensitized us to the existence of outlier possibilities our models often miss (and even explicitly discard to improve fit).

So the combination of a story model we're especially tuned for, and recent examples that magnify its power, positions it uniquely in our consciousness. The implication for executives is to be mindful of the bias this creates, and to have appropriate mechanisms in place for managing it.

In my book I describe a portfolio-driven approach to managing analytic efforts that can help with this.  In brief, the idea is to manage analytic projects not just as a list of questions to be checked off as you answer them, but as a venture capitalist's portfolio of investments that need to generate target returns appropriate to their size and riskiness.  In the governance of such portfolios we set up in our work with clients, we manage to a rule we call "3-2-1".  In any given quarter, we aim to have the collection of analytic initiatives we're pursuing yield (notionally) three "news you can use" insights, two experiments based on those insights, and one outcome we're putting into production at scale to help pay the freight for the overall investment in the portfolio (including us).  The portfolios are constructed to reflect priorities across a grid that itself is a collection of different business opportunities ("Need X for Customer Y") we're targeting and different purchase funnel or customer journey stages for these opportunities.  This grid helps you judge how concentrated you are on existing versus new opportunities, and on whether your investments are appropriately focused on bottlenecks in the relevant funnels or journeys.
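If it helps to make the mechanics concrete, here's a minimal sketch in code of such a grid and the "3-2-1" tally; the opportunity names, journey stages, and pipeline labels are illustrative assumptions, not taken from the book or any client work.

```python
from collections import Counter
from dataclasses import dataclass

# Assumed stages of the "3-2-1" pipeline: an initiative yields a "news you
# can use" insight, graduates to an experiment, or runs in production at scale.
STAGES = ("insight", "experiment", "production")
TARGETS = {"insight": 3, "experiment": 2, "production": 1}  # per quarter

@dataclass
class Initiative:
    opportunity: str    # "Need X for Customer Y" -- a business opportunity
    journey_stage: str  # purchase funnel / customer journey stage
    stage: str          # where the initiative sits in the 3-2-1 pipeline

def three_two_one_gaps(portfolio):
    """Compare a quarter's yield against the notional 3-2-1 targets."""
    counts = Counter(i.stage for i in portfolio)
    return {s: TARGETS[s] - counts.get(s, 0) for s in STAGES}

portfolio = [
    Initiative("checking for new grads", "awareness", "insight"),
    Initiative("checking for new grads", "conversion", "experiment"),
    Initiative("refinancing for movers", "consideration", "insight"),
    Initiative("refinancing for movers", "retention", "production"),
]

print(three_two_one_gaps(portfolio))
# {'insight': 1, 'experiment': 1, 'production': 0} -- positive numbers
# flag where this quarter's portfolio is falling short.

# The grid view: how concentrated are we by opportunity and journey stage?
print(Counter((i.opportunity, i.journey_stage) for i in portfolio))
```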

The main point of using a grid like this, as opposed to just a list of individual projects, is that it forces you to think backwards in a data-driven way from the customer-defined strategies, goals, and objective performance of the business, and about whether you've got sufficient attention on the hot spots thereof. In the parallel world of innovation, I believe the opportunity for subscribers to Christensen's ideas is to neither accept nor reject "disruption dogma", but to ask themselves whether they've got sufficient "3-2-1" style attention and results on the hot spots in their business.  "What customers are important to us?"  "What needs are important to them?" "How well are we serving those needs?" "What's / who's out there trying to serve these same folks and their needs?" "What kind of progress are they making, and what's contributing to or impeding that progress?" "What research, analysis, or testing should we be doing to stay close to potentially meaningful threats?" "Are the hedges these efforts represent a good match, probabilistically -- in likely magnitude and yield -- for the risk the potential competition out there poses to our business?"

A "Clash Of The Titans" around business theories, while good sport for some, can cause a lot of anxiety for many others trying to build and run their businesses. Framing your approach specifically and objectively using the techniques for managing analytic efforts can help you get past these concerns in a practical, tailored - and maybe even heroic - way.

 

May 29, 2014

Mary Meeker's @KPCB #InternetTrends Report: Critiquing The "Share of Time, Share of Money" Analysis

Mary Meeker's annual Internet Trends report is out.  As ever, it's a very helpful survey and synthesis of what's going on, all 164 pages of it. But for the past few years it's contained a bit of analysis that's bugged me.

Page 15 of the report (embedded below) is titled "Remain Optimistic About Mobile Ad Spend Growth... Print Remains Way Over-Indexed."  The main chart on the page compares the percentage of time people spend in different media with the percentage of advertising budgets that are spent in those media.  The assumption is that percentage of time and percentage of budget should be roughly equal for each medium.  Thus Meeker concludes that if -- as is the case for mobile -- the percentage of user time spent in a medium is greater than the percentage of budget going there, then more ad dollars (as a percent of total) will flow to that medium, and vice versa (hence her point about print).

I can think of demand-side, supply-side, and market-maturity reasons that this equivalency thesis would break down, which also suggest directions for improving the analysis.

On the demand side, different media may have different mixes of people, with different demographic characteristics.  For financial services advertisers, print users skew older -- and thus have more money, on average -- making each minute of the average user's time there more valuable to advertisers.  Different media may also have different advertising engagement power.  For example, in mobile, in either highly task-focused use cases or in distracted, skimming/snacking ones, ads may be either invisible or intrusive, diminishing their relative impact (either in terms of direct interaction or view-through stimulation). By contrast, deeper lean-back-style engagement with TV, with more room for an ad to maneuver, might, if the ad is good, make a bigger impression. I wonder if there's also a reach premium at work.  Advertisers like to find the most efficient medium, but they also need to reach a large enough number of folks to execute campaigns effectively.  TV and print are more reach-oriented media, in general.

On the supply side, different media have different power distributions of the content they can offer, and different barriers to entry that can affect pricing.  On TV and in print, prime ad spots are more limited, so simple supply and demand dynamics drive up prices for the best spots beyond what the equivalency idea might suggest.  

In favor of Meeker's thesis, though representing another short-term brake on it, is a factor she doesn't speak to directly: the relative maturity of the markets and buying processes for different media, and the experience of the participants in those markets.  A more mature, well-trafficked market, with well-understood dynamics and lots of liquidity (think of the ability for agencies and media brokers to resell time in TV's spot markets, for example), will, at the margin, attract and retain dollars, in particular while the true value of different media remains elusive. (This of course is one reason why attribution analysis is so hot, as evidenced by Google's and AOL Platforms' recent acquisitions in this space.)  I say "in favor" because, as mobile ad markets mature over time, this disadvantage will erode.

So for advertisers, agency and media execs, entrepreneurs, and investors looking to play the arbitrage game at the edges of Meeker's observation, the question is, what adjustment factors for demand, supply, and market maturity would you apply this year and next?  It's not an idle question: tons of advertisers' media plans and publishers' business plans ride on these assumptions about how much money is going to come to or go away from them, and Meeker's report is an influential input into these plans in many cases.

A tactical limitation of Meeker's analysis is that while she suggests the overall potential shift in relative allocation of ad dollars (her slide suggests a "~$30B+" digital advertising growth opportunity in the USA alone -- up from $20B last year*), she doesn't suggest a timescale or trendline for the pace at which we'll get there. One way to come at this is to look at the last 3-4 annual presentations she's made, and see how the relationships she's observed have changed over time.  Interestingly, in her 2013 report using 2012 data, on page 5, 12% of time is spent on mobile devices, and 3% of ad dollars are going there, for a 4x difference in percentages. In the 2014 report using 2013 data, 20% of time is spent on mobile, and 5% of media dollars are going there -- again, a 4x relationship.
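Here's a quick sketch of that year-over-year comparison, using only the percentages cited above (translating the share-point gap into dollars would require a total-ad-spend figure I'm not asserting here):

```python
# Meeker's figures as cited above: (share of time, share of ad dollars)
mobile = {"2012 data": (0.12, 0.03), "2013 data": (0.20, 0.05)}

for year, (time_share, spend_share) in mobile.items():
    ratio = time_share / spend_share
    gap_points = (time_share - spend_share) * 100
    print(f"{year}: ratio = {ratio:.0f}x, gap = {gap_points:.0f} share points")

# 2012 data: ratio = 4x, gap = 9 share points
# 2013 data: ratio = 4x, gap = 15 share points
```

Note that while the ratio held at 4x, the absolute share-point gap widened, which is consistent with the growing dollar opportunity her slide suggests.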

So, if the equivalency zeitgeist is at work, for the moment it may be stuck in a phone booth. But in the end I'm reminded of the futurist Roy Amara's saying: "We tend to overestimate the effect of a technology in the short term and underestimate its effect in the long term."  Plus let's not forget new technologies, both portable (Glass) and large/immersive (Oculus Rift), that will further jumble relevant media categories in years to come.

(*eMarketer seems to think we'll hit the $30B mobile advertising run rate sometime during 2016-2017.)

 

April 16, 2014

Book Review: "Big Data @ Work", by Tom Davenport

I've just finished Big Data @ Work: Dispelling The Myths, Uncovering The Opportunities, by Tom Davenport, co-author of Competing On Analytics.

The book marks a watershed moment in the Big Data zeitgeist. Much of the literature on the topic to this point has been evangelical, telling us how analytics will make us all taller, smarter, and more handsome -- but the general sense for me has been of stories that are "way out there" for most organizations.  This latest book is much more about how to realize these visions, with tactical, practical prescriptions across a range of issues.

Perhaps the most important of these is having a clear idea of the challenges or opportunities for which Big Data might be a part of the solution.  In Chapter Two, Davenport presents a very helpful series of use cases for Big Data in several industry applications, including business travel, energy management, retail, and home education. He pushes further to examine the relative readiness of a number of different industries and business functions, including marketing and sales (which are the particular focus of my own upcoming book, Marketing and Sales Analytics). In Chapter Three he builds on these examples and sector assessments to offer a framework for shaping business strategies that leverage Big Data.  He suggests cost reduction, time reduction, new offerings, and decision support as broad objectives for focusing Big Data initiatives, and then further suggests a useful distinction between discovery-oriented applications of Big Data (say, sorting out emergent patterns of behavior to address) and production-oriented usage (applying Big Data to personalize experiences based on which emergent patterns might be worth the effort).

This "ends" focused approach to applying Big Data, in contrast to an "If I build it (my giant Hadoop Cluster) they will come" is an extremely valuable perspective to have introduced at this point in the evolution of this trend, and Davenport has wrapped it in a clean, well-organized package of specific advice executives interested in this space can profit from.

My New Book: #Marketing and #Sales #Analytics

I've written a second book.  It's called Marketing and Sales Analytics: Proven Techniques and Powerful Applications From Industry Leaders (so named for SEO purposes).  Pearson is publishing it (special thanks to Judah Phillips, author of Building A Digital Analytics Organization, for introducing me to Jeanne Glasser at Pearson).  The ebook version will be available on May 23, and the print version will come out June 23.

The book examines how to focus, build, and manage analytics capabilities related to sales and marketing.  It's aimed at C-level executives who are trying to take advantage of these capabilities, as well as other senior executives directly responsible for building and running these groups. It synthesizes interviews with 15 senior executives at a variety of firms across a number of industries, including Abbott, La-Z-Boy, HSN, Condé Nast, Harrah's, Aetna, The Hartford, Bed Bath & Beyond, Paramount Pictures, Wayfair, Harvard University, TIAA-CREF, Talbots, and Lenovo. My friend and former boss Bob Lord, author of Converge was kind enough to write the foreword.

I'm in the final editing stages. More to follow soon, including content, excerpts, nice things people have said about it, slideshows, articles, lunch talk...

January 17, 2014

Culturelytics

I'm working on a book. It will be titled Marketing and Sales Analytics: Powerful Lessons from Leading Practitioners. My first book, Pragmalytics, described some lessons I'd learned; this book extends those lessons with interviews with more than a dozen senior executives grappling with building and applying analytics capabilities in their companies. Pearson's agreed to publish it, and it will be out this spring. Right now I'm in the middle of the agony of writing it. Thank you Stephen Pressfield (and thanks to my wife Nan for introducing us).

A common denominator in the conversations I've been having is the importance of culture. Culture makes building an analytics capability possible. In some cases, pressure for culture change comes outside-in: external conditions become so dire that a firm must embrace data-driven objectivity. In others, the pressure comes top-down: senior leadership embodies it, leads by example, and is willing to re-staff the firm in its image. But what do you do when the wolf's not quite at the door, or when it makes more sense (hopefully, your situation) to try to build the capability largely within the team you have than to make wholesale changes?

There are a lot of models for understanding culture and how to change it. Here's a caveman version (informed by behavioral psychology principles, and small enough to remember). Culture is a collection of values -- beliefs -- about what works, and doesn't: what behaviors lead to good outcomes for customers, shareholders, and employees; and, what behaviors are either ignored or punished.


Values, in turn, are developed through chances individuals have to try target behaviors, the consequences of those experiences, and how effectively those chances and their consequences are communicated to other people working in the organization.


Chances are to culture change as reps (repetitions) are to sports. If you want to drive change, to get better, you need more of them. Remember that not all reps come in games. Test programs can support culture change the same way practices work for teams. Also, courage is a muscle: to bench press 500 pounds once, start with one pushup, then ten, and so on. If you want your marketing team to get comfortable conceiving and executing bigger and bolder bets, start by carving out, frequently, many small test cells in your programs. Then, add weight: define and bound dimensions and ranges for experimentation within those cells that don't just have limits, but also minimums for departure from the norm (sketched below). If you can't agree on exactly what part of your marketing mix needs the most attention, don't study it forever. A few pushups won't hurt, even if it's your belly that needs the attention. A habit is easier to re-focus than it is to start.
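Here's a minimal sketch of that "limits plus minimums" idea in code; the dimensions, control values, and ranges are purely illustrative assumptions:

```python
import random

# Each test cell must depart from the control value by at least min_delta
# (a minimum departure from the norm) but by no more than max_delta (a limit).
DIMENSIONS = {
    "discount_pct":       {"control": 10.0, "min_delta": 2.0, "max_delta": 10.0},
    "subject_line_words": {"control": 8.0,  "min_delta": 2.0, "max_delta": 6.0},
}

def carve_cell():
    """Define one small test cell that departs meaningfully from the norm."""
    cell = {}
    for dim, spec in DIMENSIONS.items():
        delta = random.uniform(spec["min_delta"], spec["max_delta"])
        cell[dim] = spec["control"] + random.choice([-1, 1]) * delta
    return cell

cells = [carve_cell() for _ in range(8)]  # many small cells, frequently
```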

Consequences need to be both visible and meaningful. Visible means good feedback loops to understand the outcome of the chance taken. Meaningful can run to more pay and promotion of course, but also to opportunity and recognition. And don't forget: a sense of impact and accomplishment -- of making a difference -- can be the most powerful reinforcer of all. For this reason, a high density of chances with short, visible feedback loops becomes really important to your change strategy.

Communication magnifies and sustains the impact of chances taken and their consequences. If you speak up at a sales meeting, the client says Good Point, and I later praise you for that, the culture change impact is X. If I then relate that story to everyone at the next sales team meeting, the impact is X * 10 others there. If we write down that behavior in the firm's sales training program as a good model to follow, the impact is X * 100 others who will go through that program.

Summing up, here's a simple set of questions to ask for managing culture change:

  • What specific values does our culture consist of?
  • How strongly held are these values: how well-reinforced have they been by chances, consequences, and communication?
  • What values do I need to keep / change / drop / add?
  • In light of the pre-existing value topology -- fancy way of saying, the values already out there and their relative strength -- what specific chances, consequences, communication program will I need to effect the necessary keeps / changes / drops / adds to the value set?
  • How can my marketing and sales programs incorporate a greater number of formal and informal tests? How quickly and frequently can we execute them?
  • What dimensions (for example, pricing, visual design, messaging style and content, etc.) and "min-max" ranges on those dimensions should I set? 
  • How clearly and quickly can we see the results of these tests?
  • What pay, promotion, opportunity, and recognition implications can I associate with each test?
  • What mechanisms are available / should I use to communicate tests and results?

Ask these questions daily, tote up the score -- chances taken, consequences realized, communications executed -- weekly or monthly. Track the trend, slice the numbers by the behaviors and people you're trying to influence, and the consequences and communications that apply. Don't forget to keep culture change in context: frame it with the business results culture is supposed to serve. Re-focus, then wash, rinse, repeat.  Very soon you'll have a clear view of and strong grip on culture change in your organization.
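For those who want a mechanical assist with the toting-up, here's a minimal sketch; the log format and field names are assumptions for illustration, not a prescribed system:

```python
import csv
from collections import defaultdict

# Assumed log: one row per event, with columns date, person, behavior, event,
# where event is "chance", "consequence", or "communication".
def scorecard(path):
    """Tally culture-change reps by the behavior you're trying to influence."""
    score = defaultdict(lambda: {"chance": 0, "consequence": 0, "communication": 0})
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            score[row["behavior"]][row["event"]] += 1
    return dict(score)

# e.g. scorecard("culture_log.csv") might return:
# {"speaks up with data": {"chance": 7, "consequence": 3, "communication": 1}}
```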

November 23, 2013

Book Review: "The Human Brand"

October 13, 2013

Unpacking Healthcare.gov

So healthcare.gov launched, with problems.  I'm trying to understand why, so I can apply some lessons in my professional life.  Here are some ideas.

First, I think it helps to define some levels of the problem.  I can think of four:

1. Strategic / policy level -- what challenges do the goals we set create?  In this case, the objective, basically, is two-fold: first, reduce the costs of late-stage, high-cost uncompensated care by enrolling the people who ultimately use that care (middle-aged poor folks and other unfortunates) in health insurance that will get them care earlier and reduce stress / improve outcomes (for them and for society) later; second, reduce the cost of this insurance through exchanges that drive competition.  So, basically: bring a bunch of folks from, in many cases, the wrong side of the Digital Divide, and expose them to a bunch of eligibility- and choice-driven complexity (proof: the need for "Navigators").  Hmm.  (Cue the folks who say that's why we need a simple single-payor model, but the obvious response would be that it simply wasn't politically feasible.  We need to play the cards we're dealt.)

2. Experience level -- In light of that need, let's examine what the government did do for each of the "Attract / Engage / Convert / Retain" phases of a Caveman User Experience.  It did promote ACA -- arguably insufficiently, or not creatively enough to distinguish itself from the opposing signal levels it should have anticipated (one take here).  But more problematically, from what I can tell, the program skips "Engage" and emphasizes "Convert": healthcare.gov immediately asks you to "Apply Now" (see screenshot below, where "Apply Now" is prominently featured over "Learn More", even on the "Learn" tab of the site). This is technically problematic (see #3 below), but it's also, experientially, a lot to ask when you don't yet know what's behind the curtain.

[Screenshot: healthcare.gov "Learn" tab, with "Apply Now" featured over "Learn More"]
3. Technical level -- Excellent piece in the Washington Post by Timothy B. Lee. Basically, the system tries to do an eligibility check (for participation and subsidies) before sending you on to enrollment.  Doing this requires checking a bunch of other government systems.  The flowchart explains very clearly why this could be problematic.  There are some front-end problems as well, described in rawest form by some of the chatter on Reddit, but from what I've seen these are more superficial -- a function of poor process / time management -- and fixable.

4. Organizational level -- Great article here in Slate by David Auerbach. Basically, poor coordination structure and execution by HHS of the front and back ends.

Second, here are some things HHS might do differently:

1. Strategic level: Sounds like some segmentation of the potential user base would have suggested a much greater investment in explanation / education, in advance of registration.  Since any responsible design effort starts with users and use cases, I'm sure they did this.  But what came out the other end doesn't seem to reflect that.  What bureaucratic or political considerations got in the way, and what can be revisited, to improve the result? Or, instead of allowing political hacks to infiltrate and dominate the ranks of engineers trying to design a service that works, why not embed competent technologists, perhaps drawn from the ranks of Chief Digital Officers, into the senior political ranks, to advise them on how to get things right online?

2. Experience level: Perhaps the first couple of levels of experience on healthcare.gov should have been explanatory?  "Here's what to expect, here's how this works..." Maybe video (could have used YouTube!)? Maybe also ask a couple of quick anonymous questions to determine whether the eligibility / subsidy check would be relevant, to spare the load on that engine, before seeing what plans might be available, at what price?  You could always re-ask / confirm that data later once the user's past the shopping / evaluation stage, before formally enrolling them into a plan.  In ecommerce, we don't ask untargeted shoppers to enter discount codes until they're about to check out, right?

Or, why not pre-process and cache the answer to the eligibility question the system currently tries to calculate on the fly?  After all, the government already has all our social security numbers and green card numbers, and our tax returns.  So by the time any of us go to the site, it could have pre-determined the size of any potential subsidy we'd be eligible for, and it could have used this *estimated* subsidy to calculate a *projected* premium we might pay.  We'd need a little registration / security, maybe "enter your last name and social security number, and if they match we'll tell you your estimated subsidy". (I suppose returning a subsidy answer would confirm for a crook who knows my last name that he had my correct SSN, but maybe we could prevent the brute-force querying this requires with CAPTCHA. Security friends, please advise.  Naturally, I'd make sure the pre-cached lookup file stays server-side, and isn't exposed as an array in a client-side JavaScript snippet!)
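To illustrate, here's a minimal sketch of what a pre-computed, server-side subsidy lookup might look like; everything in it (the salted-hash scheme, field formats, the sample figures) is an illustrative assumption, not a description of how healthcare.gov actually works:

```python
import hashlib
import hmac

# A nightly batch job could compute estimated subsidies from data the
# government already holds, keyed by a keyed hash of (SSN, last name) so the
# lookup file never stores raw SSNs.
SECRET_KEY = b"rotate-me-regularly"  # assumption: kept in an HSM, not in code

def identity_key(ssn: str, last_name: str) -> str:
    msg = f"{ssn}:{last_name.strip().lower()}".encode()
    return hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()

# Output of the batch job, loaded server-side only -- never shipped to the browser.
SUBSIDY_CACHE = {identity_key("123-45-6789", "Doe"): 2400.00}

def estimated_subsidy(ssn: str, last_name: str):
    """Return the pre-computed *estimated* subsidy, or None if no match.
    Rate limiting / CAPTCHA in front of this endpoint is assumed, per above."""
    return SUBSIDY_CACHE.get(identity_key(ssn, last_name))

print(estimated_subsidy("123-45-6789", "Doe"))  # 2400.0
```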

3. I see from viewing the page source that they have Google Tag Manager running, so perhaps they have Google Analytics running too, alongside whatever other things...  Since they've open-sourced the front-end code and their content on Github, maybe they could also share what they're learning via GA, so we could evaluate ideas for improving the site in the context of that data?

4. It appears they're using Optimizely to test / optimize their pages (javascript from page source here).  While the nice pictures of people smiling may be optimal, there's plenty of research suggesting that by pushing many of the links to site content below the fold, and forcing us to scroll to see them, they might be burying the very resources the "experience perspective" I've described suggests they need to highlight.  So maybe this layout is in fact what maximizes the results they're looking for -- pressing the "Apply Now" button -- but maybe that's the wrong question to be asking!

Postscript, November 1:

Food for thought (scroll to bottom).  How does this happen?  Software engineer friends, please weigh in!

 

September 11, 2013

Book Review: "Building A Digital Analytics Organization" by @Judah Phillips #analytics

I originally got to know Judah Phillips through Web Analytics Wednesdays events he organized, and in recent years he's kindly participated on panels I've moderated and has been helpful to my own writing and publishing efforts. I've even partnered with some of the excellent professionals who have worked for him. So while I'm biased as the beneficiary of his wisdom and support, I can also vouch first-hand for the depth and credibility of his advice. In short, in an increasingly hype-filled category, Judah is the real deal, and this makes "Building A Digital Analytics Organization" a book to take seriously.

For me the book was useful on three levels. One, it's a foundational text for framing how to come at business analysis and reporting. Specifically, he presents an Analytics Value Chain that reminds us to bookend our analytic efforts per se with a clear set of objectives and actions, an orientation that's sadly missing in many balkanized corporate environments. Two, it's a blueprint for your own organization-building efforts. He really covers the waterfront, from how to approach analysis, to different kinds of analysis you can pursue, to how to organize the function and manage its relationships with other groups that play important supporting roles. For me, Chapter 6, "Defining, Planning, Collecting, and Governing Data in Digital Analytics" is an especially useful section. In it, he presents a very clear, straightforward structure for how you should set up and run these crucial functions. Finally, three, Judah offers a strong point of view on certain decisions. For example, I read him to advocate for a strongly centralized digital analytics function, rooted in the "business" side of the house, to make sure that you have both critical mass for these crucial skills, as well as proximity to the decisions they need to support.

These three uses had me scribbling in the margins and dog-earing extensively. But if you still need one more reason to pull the trigger, it helps that the book is very up-to-date and has a final chapter that looks forward very thoughtfully into how Judah expects what he describes as the "Analytical Economy" to evolve. This section is both a helpful survey of the different capabilities that will shape this future as well as an exploration of the issues these capabilities and associated trends will raise, in particular as they relate to privacy. It's a valuable checklist, to make sure you're not just building for today, but for the next few years to come.

Here's the book and the review on Amazon.

September 01, 2013

#MITX Panel: Analytically Aligned Decision Making in the Multi-Agency Context

I moderated this panel at the Massachusetts Innovation and Technology Exchange's (mitx.org) "The Science of Marketing: Using Data & Analytics for Winning" summit on August 1, 2013.  Thanks to T. Rowe Price's Paul Musante, Visual IQ's Manu Mathew, iKnowtion's Don Ryan, and Google's Sonia Chung for participating!

 

July 16, 2013

Please sponsor my 2013 NLG #autism ride: 2007 Ride Recap

On July 27, I'll be riding once again in the annual Nashoba Learning Group bike-a-thon, and I'd really appreciate your support:

http://www.crowdrise.com/nlgbikecesar2013

(Note: please also Like / Retweet / forward to friends, etc. using links at bottom!)

This is a great cause, and an incredibly effective and well-run school.  Your contribution will make a big difference. (And thank you to everyone who's been so generous so far!)

For kicks, here's my recap of my 2007 ride:

"Friends,

Thank you all for being so generous on such short notice!   

Fresh off a flight from London that arrived in Boston at midnight on Friday, I wheeled myself onto the starting line Saturday morning a few minutes after eight.  Herewith, a few journal entries from the ride:

Mile 2:  The peloton drops me like a stone.  Dopeurs!  Never mind; this breakaway is but le petit setback.  Where are my domestiques to bring me back to the pack?

Mile 3:  Reality intrudes.  No domestiques.  Facing 47 miles' worth of solo quality time, I plot my comeback...

Mile 10: 1st major climb, L'Alpe de Bolton (MA), a steep, nasty little "beyond classification" grade.  I curse at the crowds pressing in.  'Allez!  Allez!' they call, like wolves.  A farmer in a Superman cape runs alongside.

Mile 10.25: Mirages disappear in the 95-degree heat.  (First time I've seen the Superman dude, though.  Moral of this story: lay off the British Airways dessert wines the night before a big ride.) 

Mile 10.5: Descending L'Alpe de Bolton, feeling airborne at 35 MPH

Mile 10.50125: Realizing after hitting bump that I am, in fact, airborne.   AAAAARRH!!!

Mile 14: I smell sweet victory in the morning air!

Mile 15:  Realize the smell is actually the Bolton dump

Mile 27: Col d'Harvard (MA).  Mis-shift on steep climb, drop chain off granny ring.  Barely click out of pedal to avoid keeling over, disappointing two buzzards circling overhead. 

Mile 33:  Whip out Blackberry, Googling 'Michael Rasmussen soigneur' to see if I can score some surplus EPO.

Mile 40:  I see dead people

Mile 50:  I am, ahem... outsprinted at the finish.  Ride organizers generously grant me 'same time' when they realize no one noticed exactly when I got back."