About

I'm a partner in the advanced analytics group at Bain & Company, the global management consulting firm. My primary focus is on marketing analytics (bio). I've been writing here (views my own) about marketing, technology, e-business, and analytics since 2003 (blog name explained).

Email or follow me:


9 posts categorized "e-government"

October 13, 2013

Unpacking Healthcare.gov

So healthcare.gov launched, with problems.  I'm trying to understand why, so I can apply some lessons in my professional life.  Here are some ideas.

First, I think it helps to define some levels of the problem.  I can think of four:

1. Strategic / policy level -- what challenges do the goals we set create?  In this case, the objective, basically, is two-fold: first, reduce the costs of late-stage, high-cost uncompensated care by enrolling the people who ultimately use it (middle-aged poor folks and other unfortunates) in health insurance that will get them care earlier and reduce stress / improve outcomes (for them and for society) later; second, reduce the cost of this insurance through exchanges that drive competition.  So, basically, take a bunch of folks from, in many cases, the wrong side of the Digital Divide, and expose them to a bunch of eligibility- and choice-driven complexity (proof: the need for "Navigators"). Hmm.  (Cue the folks who say that's why we need a simple single-payor model, but the obvious response would be that it simply wasn't politically feasible.  We need to play the cards we're dealt.)

2. Experience level -- In light of that need, let's examine what the government did do for each of the "Attract / Engage / Convert / Retain" phases of a Caveman User Experience.  It did promote the ACA -- arguably insufficiently, or not creatively enough to distinguish itself from the opposing signal levels it should have anticipated (one take here).  But more problematically, from what I can tell, the program skips "Engage" and emphasizes "Convert": Healthcare.gov immediately asks you to "Apply Now" (see screenshot below, where "Apply Now" is prominently featured over "Learn More", even on the "Learn" tab of the site). This is technically problematic (see #3 below), but also experientially a lot to ask when you don't yet know what's behind the curtain.

[Screenshot: Healthcare.gov, with "Apply Now" featured prominently over "Learn More"]
3. Technical level -- Excellent piece in Washington Post by Timothy B. Lee. Basically, the system tries to do an eligibility check (for participation and subsidies) before sending you on to enrollment.  Doing this requires checking a bunch of other government systems.  The flowchart explains very clearly why this could be problematic.  There are some front end problems as well, described in rawest form by some of the chatter on Reddit, but from what I've seen these are more superficial, a function of poor process / time management, and fixable.

4. Organizational level -- Great article here in Slate by David Auerbach. Basically, poor coordination structure and execution by HHS of the front and back ends.

Second, here are some things HHS might do differently:

1. Strategic level: Sounds like some segmentation of the potential user base would have suggested a much greater investment in explanation / education, in advance of registration.  Since any responsible design effort starts with users and use cases, I'm sure they did this.  But what came out the other end doesn't seem to reflect that.  What bureaucratic or political considerations got in the way, and what can be revisited, to improve the result? Or, instead of allowing political hacks to infiltrate and dominate the ranks of engineers trying to design a service that works, why not embed competent technologists, perhaps drawn from the ranks of Chief Digital Officers, into the senior political ranks, to advise them on how to get things right online?

2. Experience level: Perhaps the first couple of levels of experience on healthcare.gov should have been explanatory?  "Here's what to expect, here's how this works..." Maybe video (could have used YouTube!)? Maybe also ask a couple of quick anonymous questions to determine whether the eligibility / subsidy check would be relevant, to spare the load on that engine, before seeing what plans might be available, at what price?  You could always re-ask / confirm that data later once the user's past the shopping / evaluation stage, before formally enrolling them into a plan.  In ecommerce, we don't ask untargeted shoppers to enter discount codes until they're about to check out, right?

Or, why not pre-process and cache the answer to the eligibility question the system currently tries to calculate on the fly?  After all, the government already has all our social security numbers and green card numbers, and our tax returns.  So by the time any of us go to the site, it could have pre-determined the size of any subsidy we'd be eligible for, and it could have used this *estimated* subsidy to calculate a *projected* premium we might pay.  We'd need a little registration / security, maybe "enter your last name and social security number, and if they match we'll tell you your estimated subsidy". (I suppose returning a subsidy answer would confirm for a crook who knows my last name that he had my correct SSN, but maybe we could prevent the brute-force querying this requires with a CAPTCHA. Security friends, please advise.  Naturally, I'd make sure the pre-cached lookup file stays server-side, and isn't exposed as an array in a client-side JavaScript snippet!)
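To make this concrete, here's a minimal sketch of what that pre-computed, server-side lookup might look like. Everything in it -- the batch job, the field names, the HMAC keying -- is my own hypothetical illustration of the idea, not anything healthcare.gov actually runs:

```python
# Hypothetical sketch of the pre-computed subsidy lookup described above.
# Assumes a nightly batch job has already joined the tax / SSA data into
# a server-side table; none of these names come from healthcare.gov.
import hashlib
import hmac

SECRET_KEY = b"rotate-me-regularly"  # server-side secret, never shipped to the client
_subsidy_cache = {}  # HMAC digest -> estimated annual subsidy, in dollars

def _key(last_name: str, ssn: str) -> str:
    # Key by an HMAC of (last_name, ssn) so the raw file is useless if leaked.
    msg = f"{last_name.strip().lower()}|{ssn}".encode()
    return hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()

def precompute(records):
    """Batch step: records is an iterable of (last_name, ssn, subsidy)."""
    for last_name, ssn, subsidy in records:
        _subsidy_cache[_key(last_name, ssn)] = subsidy

def estimated_subsidy(last_name: str, ssn: str, captcha_ok: bool):
    """Online step: one cheap dictionary lookup instead of live calls out
    to IRS / SSA / DHS systems.  Gated by a CAPTCHA (and, not shown, rate
    limiting) to blunt the brute-force SSN-confirmation risk noted above."""
    if not captcha_ok:
        return None
    return _subsidy_cache.get(_key(last_name, ssn))
```

The point is just that the expensive cross-agency checks would happen in batch, ahead of time, leaving the online path a single cheap lookup.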

3. I see from viewing the page source that they have Google Tag Manager running, so perhaps they have Google Analytics running too, alongside whatever else...  Since they've open-sourced the front-end code and their content on GitHub, maybe they could also share what they're learning via GA, so we could evaluate ideas for improving the site in the context of that data?

4. It appears they are using Optimizely to test / optimize their pages (javascript from page source here).  While the nice pictures of smiling people may be optimal, there's plenty of research suggesting that by pushing many of the links to site content below the fold, and forcing us to scroll to see them, they might be burying the very resources the "experience perspective" I've described suggests they need to highlight.  So maybe this layout is in fact what maximizes the result they're looking for -- pressing the "Apply Now" button -- but maybe that's the wrong question to be asking!

Postscript, November 1:

Food for thought (scroll to bottom).  How does this happen?  Software engineer friends, please weigh in!

 

June 12, 2013

Privacy vs. Security Survey Interim Results #prism #analytics

This week, one of the big news items is the disclosure of the NSA's Prism program that collects all sorts of our electronic communications, to help identify terrorists and prevent attacks.

I was struck by three things.  One is the recency bias in the outrage expressed by many people.  Not sixty days ago we were all horrified at the news of the Boston Marathon bombings.  Another is the polarization of the debate.  Consider the contrast the Hullabaloo blog draws between "insurrectionists" and "institutionalists".  The third is the superficial treatment of the tradeoffs folks would be willing to make.  Yesterday the New York Times Caucus blog published the results of a survey that suggested most folks are fence-sitters on the tradeoff between privacy and security, but left it more or less at that.  (The Onion wasn't far behind with a perfect send-up of the ambivalence we feel.)

In sum, biased decision-making based on excessively simplified choices using limited data.  Not helpful.  Better would be a more nuanced examination of how much privacy you would be willing to give up for the potential lives saved.  I see this opportunity to improve decision making a lot, and I thought this would be an interesting example to illustrate how framing and informing an issue differently can help.  So I posted this survey: https://t.co/et0Bs0OrKF

Here are some early results from twelve folks who kindly took it (please feel free to add your answers, if I get enough more I'll update the results):

[Chart: privacy vs. security tradeoffs reported by respondents]

(Each axis is a seven point scale, 1 at lowest and 7 at highest.  Bubble size = # of respondents who provided that tradeoff as their answer.  No bubble / just label = 1 respondent, biggest bubble at lower right = 3 respondents.)

Interesting distribution, tending slightly toward folks valuing (their own) privacy over (other people's) security.

Now my friend and business school classmate Sam Kinney suggested this tradeoff was a false choice.  I disagreed with him. But the exchange did get me to think a bit further.  More data isn't necessarily linear in its benefits.  It could have diminishing returns of course (as I argued in Pragmalytics) but it could also have increasing value as the incremental data might fill in a puzzle or help to make a connection.  While that relationship between data and safety is hard for me to process, the government might help its case by being less deceptive and more transparent about what it's collecting, and its relative benefits.  It might do this, if not for principle, then for the practical value of controlling the terms of the debate when, as David Brooks wrote so brilliantly this week, an increasingly anomic society cultivates Edward Snowdens at an accelerating clip.

I'm skeptical about the value of this data for identifying terrorists and preventing their attacks.  Any competent terrorist network will use burner phones, run its own email servers, and communicate in code.  But maybe the data surveillance program has value because it raises the bar to this level of infrastructure and process, and thus makes it harder for such networks to operate.

I'm not concerned about the use of my data for security purposes, especially not if it can save innocent boys and girls from losing limbs at the hands of sick whackos.  I am really concerned it might get reused for other purposes in ways I don't approve of, or by folks whose motives I don't approve of, so I'm sure we could improve oversight, not only of what data gets used and how, but also of the vast, outsourced, increasingly unaccountable government we have in place. But right now, against the broader backdrop of gridlock on essentially any important public issue, I just think the debate needs to get more utilitarian, and less political and ideological.  And I think analytically-inclined folks can play a productive role in making this happen.

(Thanks to @zimbalist and @perryhewitt for steering me to some great links, and to Sam for pushing my thinking.)

April 06, 2013

Dazed and Confused #opensource @perryhewitt @oreillymedia @roughtype @thebafflermag @evgenymorozov

Earlier today, my friend Perry Hewitt pointed me to a very thoughtful essay by Evgeny Morozov in the latest issue of The Baffler, titled "The Meme Hustler: Tim O'Reilly's Crazy Talk".  

A while back I worked at a free software firm (ArsDigita, where early versions of the ArsDigita Community System were licensed under the GPL) and was deeply involved in developing an "open source" license that balanced our needs, interests, and objectives with our clients' (the ArsDigita Public License, or ADPL, which was closely based on the Mozilla Public License, or MPL).  I've been to O'Reilly's conferences (<shameless> I remember a ~20-person 2001 Birds-of-a-Feather session in San Diego with Mitch Kapor and pre-Google Eric Schmidt on commercializing open source </shameless>).  Also, I'm a user of O'Reilly's books (I currently have Charles Severance's Using Google App Engine in my bag).  So I figured I should read this carefully and have a point of view about the essay.  And despite having recently read Nicholas Carr's excellent and disturbing 2010 book The Shallows about how dumb the Internet has made me, I thought nonetheless that I should brave at least a superficial review of Morozov's sixteen-thousand-word piece.

To summarize: Morozov describes O'Reilly as a self-promoting manipulator who wraps and justifies his evangelizing of Internet-centered open innovation in software, and more recently government, in a Randian cloak sequined with Silicon Valley rhinestones.  My main reaction: "So, your point would be...?"  Looking more closely:

First, there's what Theodore Roosevelt had to say about critics. (Accordingly, I fully cop to the recursive hypocrisy of this post.) If, as Morozov says of O'Reilly, "For all his economistic outlook, he was not one to talk externalities..." then Morozov (as most of my fellow liberals do) ignores the utility of motivation.  I accept and embrace that with self-interest and the energy to pursue it, more (ahem, taxable) wealth is created.  So when O'Reilly says something, I don't reflexively reject it because it might be self-promoting; rather, I first try to make sure I understand how that benefits him, so I can better filter for what might benefit me. For example, Morozov writes:

In his 2007 bestseller Words That Work, the Republican operative Frank Luntz lists ten rules of effective communication: simplicity, brevity, credibility, consistency, novelty, sound, aspiration, visualization, questioning, and context. O’Reilly, while employing most of them, has a few unique rules of his own. Clever use of visualization, for example, helps him craft his message in a way that is both sharp and open-ended. Thus, O’Reilly’s meme-engineering efforts usually result in “meme maps,” where the meme to be defined—whether it’s “open source” or “Web 2.0”—is put at the center, while other blob-like terms are drawn as connected to it.
Where Morozov offers a warning, I see a manual! I just have to remember my obligation to apply it honestly and ethically.

Second, Morozov chooses not to observe that if O'Reilly and others hadn't broadened the free software movement into an "open source" one that ultimately offered more options for balancing the needs and rights of software developers with those of users (who themselves might also be developers), we might all still be in deeper thrall to proprietary vendors.  I know from first-hand experience that the world simply was not and is still not ready to accept GPL as the only option.

Nonetheless, good on Morozov for offering this critique of O'Reilly.  Essays like this help keep guys like O'Reilly honest, as far as that's necessary.  They also force us to think hard about what O'Reilly's peddling -- a responsibility that should be ours.  I used to get frustrated by folks who slapped the 2.0 label on everything, to the point of meaninglessness, until I appreciated that the meme and its overuse drove me to think and presented me with an opportunity to riff on it.  I think O'Reilly and others like him do us a great service when they try to boil down complexities into memes.  The trick for us is to make sure the memes are the start of our understanding, not the end of it.

July 31, 2009

Clunkalytics

This afternoon I listened to an NPR segment on the government's "Cash for Clunkers" program.  It sounds like quite the goat rodeo.

A senior researcher at JD Power & Associates in Detroit, interviewed for the segment, noted that although the program provided incentives sufficient to fund sales of 250,000 cars ($1Bn in incentives at ~$4k per car traded in / retired), his firm estimates that only 40,000 of those sales will be incremental -- over and above what would otherwise have been sold anyway.  Hmm.  If they're right, that's a billion dollars to lift sales by 40k cars, or $25k for each incremental new, fuel-efficient car sold!  (In fairness, that's actually a lot less porky than a lot of things we hear about.)

Would the government have done better to simply buy 40,000 cars, perhaps for a little less than $25k apiece?  Then it could have run a contest where Americans could enter their most egregious gas guzzlers (via online video, natch) in the hope of winning a replacement, which would have been more fun, and bought the government lots more -- and more positive -- coverage of the program (perhaps giving a whole new meaning to "cap and trade").  

Of course this ignores the benefit of getting the other ~200k gas guzzlers off the road.  But treated as an independent objective, surely there would have been a better mechanism for encouraging drivers who were going to buy anyway to buy north of 20 MPG?

Cynically, one might say that the real beneficiaries of this program aren't auto workers, but the dealers whose glutted lots get cleared (especially since the program can be used to buy "foreign" as well as "domestic" cars -- ironically the folks interviewed on NPR sounded a common refrain: "Trading in my old GMC / Ford for a new Toyota pickup!").  It's hard to believe that under current conditions the dealers will order replacement inventories from the plants sufficient to replace what they sold.  

It's easy to understand why this program was so popular in Congress, since there are dealers in every state.  But if you wanted to make the most of a billion dollar stimulus for the present and the future of the nation, would putting it into the pockets of auto salesmen nationwide have been the best way to go? (It's a serious question, since dealers are often at the center of their communities and do spread a lot of money around, so maybe the multiplier is significant.)

In any case, I'm not coming at this from an ideological position.  I get the need for a stimulus to ameliorate the recession.  I'm thinking about this in the context of other retail promotion programs we see, many of which have the same inefficient dynamics -- subsidizing sales that would have happened anyway, motivating sales of the wrong products, and making the channel happy for a little while rather than encouraging more lasting customer loyalty.  And since "you manage what you measure", I'm also thinking about how I might have set up an analytic framework to execute the program more effectively.

There is a web site for this program (FWIW, it's using Google Analytics).  To measure how well the program attracts possible customers, perhaps the government could have channeled prospective users through it, gathering information (e.g., pre-existing purchase intent, perhaps in combination with data from a behavioral targeting network) in exchange for the "coupon".  To measure engagement and conversion -- I can imagine a number of options -- consumers could surely have been tracked through to dealers.
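To make that framework concrete, here's a hedged sketch using an "Attract / Engage / Convert" funnel and the incrementality figures above. The funnel counts and all the names are hypothetical illustrations; only the subsidy and sales numbers come from the segment:

```python
# Hypothetical sketch of an analytic framework for the program.  Funnel
# counts are invented for illustration; the incrementality inputs are the
# ones JD Power's estimate implies ($1Bn, 250k sales, 40k incremental).
from dataclasses import dataclass

@dataclass
class FunnelStage:
    name: str
    count: int

def conversion_rates(funnel):
    """Step-to-step conversion rates down the Attract -> Engage -> Convert funnel."""
    return {f"{a.name} -> {b.name}": b.count / a.count
            for a, b in zip(funnel, funnel[1:])}

def cost_per_incremental_sale(total_subsidy, total_sales, baseline_sales):
    """Cost per sale that would NOT have happened anyway -- the metric that matters."""
    return total_subsidy / (total_sales - baseline_sales)

funnel = [FunnelStage("site visits", 2_000_000),       # hypothetical
          FunnelStage("coupon requests", 400_000),     # hypothetical
          FunnelStage("dealer redemptions", 250_000)]  # actual program scale
print(conversion_rates(funnel))
print(cost_per_incremental_sale(1_000_000_000, 250_000, 210_000))  # -> 25000.0
```

Tracked this way, you'd know week by week not just how many coupons moved, but what each *incremental* sale was costing.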

Then, providing open access to the data and crowd-sourcing suggestions for improving the program would have been cool, and good practice for aspiring web analytics professionals (free job training in a growth category!).  Sadly, but more likely, we'd be filing FOIA requests.  Oh well.

October 31, 2008

November 3: "Technology, Campaigns, and Civic Engagement" at Harvard's Kennedy School of Government

Jerry Mechling's invited me to present to his Leadership For A Networked World class (STM-480) at Harvard's Kennedy School of Government on Monday.  Here's the abstract for the session:

"As the election season draws to a close, some have noted that the campaigns, and in particular the Obama campaign, have assembled enormously powerful assets in the groups they have assembled and facilitated through their web operations. (It’s interesting to note that they haven’t used particularly exotic technology to do this, but rather have focused their efforts and resources on facilitating a ‘think global, act local’ approach to presidential politics that harkens back to the old ward system.)

As a leader in the public sector, one question this raises for you is how you too should move beyond “online, not in line” applications of information technology in government, toward applications that support higher “civic engagement” after the voting is over. Let’s define the objective: by “civic engagement”, we mean involvement in civic affairs by an informed polity, where “informed” means an awareness and understanding of individual issues and of the trade-offs among them, and where “involvement” means constructive evaluation and advocacy in ways that make the ultimate actions of government better and more fair. Assuming you aren’t a closet monarchist, fascist, anarchist, etc. and think this is a good idea, what should your agenda be?

One way to think about this is to have a simple framework for the mechanisms you need. What should you do to get good civic engagement? Elements might include:

The objective of this class will be to help you be aware of what's going on and begin to assemble your own agenda. Questions each of you may face will include:

  • What’s worth tracking, collaborating on, mobilizing for?
  • What small steps can "entice" people into deeper and more meaningful engagement?
  • What’s already out there supporting this?
  • How aligned are these resources with what I think needs to happen?
  • How do I act—cooperate, compete, and how?
  • How do I sustain these resources—what’s the “business model”?
  • How do I promote and manage them?
  • How do I “govern” them to make sure they aren’t mis-used?

Here's one of my favorites -- New Zealand's Police Act Wiki.  (Congratulations to Simon Heep, Michael Utton, William Ool, and Susan Hanks, for their leadership on reducing speed limits on South Island roads, and to other major contributors!)  Have any other examples you feel are instructive?  Please let me know.

September 14, 2008

Electoralmap.net: Pragmalytics and the Presidential Election

Lately we've been asked a lot about what metrics to pay attention to in digital marketing channels. A central piece of this is finding, at any given point in time, those few places in your business where stakes, uncertainty, and degrees of freedom for action are highest, and then focusing your reporting and analytics improvement efforts on those places, while tuning out all the other places that call for attention but don't have the same leverage.

A good public-sector example of excellent reporting on the right issue, relevant to all of us right now, is electoralmap.net. This service uses state-by-state contract prices from Intrade, the world's largest public prediction market, to predict the outcome of the Electoral College vote.

Reading Dailykos and Michelle Malkin has me convinced that regardless of how any of the candidates performs over the last month and a half of the campaign, 99.9% of the voters have already made up their minds. And right now, according to electoralmap.net, the election appears to be a dead heat, with only Colorado's 9 electoral votes hanging in the balance. Looks like the DNC was fairly prescient in choosing Denver for its convention!

Should we believe it? This analysis suggests we can. Further, in prediction markets, market volume is a proxy for sample size. A closer look at the trading in Colorado (in the left-hand nav, go to "politics", then "US Election by State", then expand "Alabama-Florida" and look at the Colorado contracts, where it currently looks like Obama trades at $5.30 for a $10 payoff, and McCain trades at $4.70) indicates a total of about 2,600 contracts in the market, for a total contract value traded of $26,000. That's not much, but with a 3-point bid-asked price spread on the Obama and McCain Colorado contracts, it's enough, I'd think, to begin to attract trading by folks with inside local knowledge away from the main contracts ("2008 US Election" in the left-hand nav), which collectively have $12 million in contract value traded, but where the bid-asked spreads are only 10-20% of what they are in the Colorado contracts.
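As a quick sanity check on those numbers, here's the standard prediction-market arithmetic; nothing below is specific to Intrade, and the prices are just the ones quoted above:

```python
# Worked example of reading the Colorado contracts quoted above.
def implied_probability(price: float, payoff: float = 10.0) -> float:
    """A $10-payoff contract trading at $5.30 implies a ~53% win probability."""
    return price / payoff

obama_price, mccain_price = 5.30, 4.70
print(implied_probability(obama_price))   # 0.53
print(implied_probability(mccain_price))  # 0.47 -- close to a coin flip

# Volume as a proxy for sample size: ~2,600 contracts at a $10 payoff
print(2600 * 10)  # $26,000 total contract value traded
```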

So, this is a fancy way of saying that, if you follow the money, "It's Colorado, stupid!" It will be interesting to see if the national election coverage in major online outlets begins to highlight places where things are really tight and selectively aggregate news and opinion from them.

And congratulations to the electoralmap.net guys for creating one of the most imaginative, useful, and usable mashups for following and filtering the election. But may I suggest a widget, or a FB app, to boost traffic? At only 6k uniques a day, you're really missing a big opportunity! Then maybe call Disney about a sponsorship to support its release of Swing Vote on DVD.

June 25, 2008

Sunlight Is The Best Disinfectant, 2008 Version

Two years ago, I proposed (surely not originally) the "Legi-Wiki".  Now, with a much better name, we have OpenCongress. Almost, but not quite: we can't yet draft legislation collaboratively online, but we can keep much better track of it.  In general, I think Louis Brandeis would approve.

One concern: services like this lose the bigger picture of "I'll give you this, you give me that" that is so important to reaching consensus and driving progress from an evolving middle.  To be fair, OpenCongress has tried to address this somewhat by showing how lawmakers are aligned in their voting.  However, you have to work hard to figure out possible horse-trading, since it's difficult to grade any individual piece of legislation on a single dimension.

OpenCongress has written a really good "about" page that describes its present state and future direction.  It would be interesting, perhaps, to crowd-source an assessment of how economically and socially liberal or conservative a bill is.  Or, to associate new bills with past legislation.  Certainly, to allow users to tag legislation (as they've suggested).  All together: "Senator X and Senator Y frequently vote together on mildly progressive health care legislation but part ways on radical defense policy".  This would allow us to see what's got a chance and what doesn't -- ultimately the crucial question if we want action, not just ideological posturing and gridlock.  Also, it will be interesting to see whether, in the future, services like these allow users to track -- or project -- legislators' stances on bills (based on how they do or don't announce support or opposition), and what effect that has on the process.

Postscript: Marshall Kirkpatrick's review on ReadWriteWeb

June 23, 2008

Qik+Twitter+Summize+(Spinvision): We Have Met Big Brother, And He Is Us

Imagine if you could sit above the world, at whatever altitude you wish, and see anything through anyone and everyone's eyes, in real time, filtering these streams to let through only those things you're actually interested in.

Today, we have real-time video streaming (now -- though not always practically -- in 3G) via folks with Nokia N95's and Qik.  Qik lets people know you are streaming via Twitter, and you can filter these "tweets" with Summize (which I wrote about yesterday).  You can also get your Qik streams onto YouTube automatically.  Spinvision, a brother to Twittervision and Flickrvision, lets you see videos as they are uploaded to YouTube -- superimposed on a map of the Earth.

Now let's roll ahead 12-18 months.  N95's won't be the only devices with high-quality camera / video capture and GPS capabilities -- so many more people will have this capability.  3G will be more widely available and adopted.  Twitter and Summize will be features of much larger players' services, so they too will move from the fringe to the mainstream as more people inevitably discover the utility of microblogging for different purposes, and the utility of filtering all that microblogging (and microvlogging).  Presumably, you'll be able to stream simultaneously on Qik and YouTube.  Google's just announced the availability of Google Earth running in a browser (though strangely, they didn't keep in sync with the release of Firefox 3.0), so we'll be able to make our mashups even more dynamic and accessible.  Throw in a little facial recognition to boot, while you're at it.

What does all this add up to? A crowd-sourced, global/hyper-local, digital video, roll-your-own-channel, keep-your-friends-close-and-your-enemies-closer news network. 

What does that make you?

Postscript:

Imagine if rather than turning over a videotape to the authorities, she had streamed this.  Or if Zimbabwe, Darfur, Afghanistan, Iraq, or New Orleans for that matter, were live and unedited, 24/7, from a thousand sources each.   How will that change us?

February 17, 2008

At Harvard KSG With Tim Berners-Lee

Jerry Mechling, who teaches at Harvard's Kennedy School, and runs the "Leadership for a Networked World" (formerly E-Government Executive Education) program there, invited me to a talk by Sir Tim Berners-Lee last Wednesday evening.  The audience included ~50 current and former senior public-sector information technology officials attending one of Jerry's sessions.

Sir Tim's comments included:

  • a discussion of how the WWW came to be
  • an examination of some of the risks that could have killed it early on, and how those were overcome
  • an exploration of some of the possibilities of the Semantic Web
  • an exhortation to members of the audience to "set their data free"

Continue reading "At Harvard KSG With Tim Berners-Lee" »