This week, one of the big news items is the disclosure of the NSA's Prism program, which collects all sorts of our electronic communications to help identify terrorists and prevent attacks.
I was struck by three things. One is the recency bias in the outrage many people have expressed. Not sixty days ago we were all horrified by the news of the Boston Marathon bombings. Another is the polarization of the debate. Consider the contrast the Hullabaloo blog draws between "insurrectionists" and "institutionalists". The third is the superficial treatment of the tradeoffs folks would be willing to make. Yesterday the New York Times Caucus blog published survey results suggesting most folks are fence-sitters on the tradeoff between privacy and security, but left it more or less at that. (The Onion wasn't far behind with a perfect send-up of the ambivalence we feel.)
In sum: biased decision-making based on excessively simplified choices and limited data. Not helpful. Better would be a more nuanced examination of the tradeoff between the privacy you'd be willing to give up and the potential lives saved. I see this opportunity to improve decision-making a lot, and I thought this would be an interesting example of how framing and informing an issue differently can help. So I posted this survey: https://t.co/et0Bs0OrKF
Here are some early results from twelve folks who kindly took it (please feel free to add your answers, if I get enough more I'll update the results):
(Each axis is a seven-point scale, 1 lowest and 7 highest. Bubble size = number of respondents who gave that tradeoff as their answer. No bubble / label only = 1 respondent; the biggest bubble, at lower right, = 3 respondents.)
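For the analytically inclined, the aggregation behind a chart like this is easy to sketch. Here's a minimal example with made-up responses (illustrative values only, not the actual survey data), tallying (privacy, security) rating pairs into bubble sizes:

```python
from collections import Counter

# Hypothetical responses: (privacy, security) ratings, each on a 1-7 scale.
# These values are invented for illustration, not the real survey data.
responses = [
    (6, 3), (6, 3), (7, 2), (4, 4), (5, 3),
    (3, 5), (6, 3), (2, 6), (4, 4), (7, 1),
    (5, 4), (6, 2),
]

# Bubble size = number of respondents who gave that exact pair.
bubble_sizes = Counter(responses)

for (privacy, security), count in sorted(bubble_sizes.items()):
    print(f"privacy={privacy}, security={security}: {count} respondent(s)")
```

Plotting is then just a scatter chart with marker size proportional to each count; the counting is the whole trick.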
Interesting distribution, tending slightly toward folks valuing (their own) privacy over (other people's) security.
Now my friend and business school classmate Sam Kinney suggested this tradeoff was a false choice. I disagreed, but the exchange did get me thinking further. More data isn't necessarily linear in its benefits. It could have diminishing returns, of course (as I argued in Pragmalytics), but it could also have increasing value, as each incremental piece of data might fill in a puzzle or help make a connection. While that relationship between data and safety is hard for me to reason about, the government might help its case by being less deceptive and more transparent about what it's collecting and its relative benefits. It might do this, if not on principle, then for the practical value of controlling the terms of the debate when, as David Brooks wrote so brilliantly this week, an increasingly anomic society cultivates Edward Snowdens at an accelerating clip.
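To make those two shapes concrete, here's a toy sketch (my own illustration, with arbitrary functional forms, not anything derived from the program itself) of the two possible value-of-data curves: a concave one where each new record adds less than the last, and a convex one where records become more valuable as they accumulate toward completing a connection:

```python
import math

def diminishing_value(n):
    # Concave model: logarithmic, so each additional record
    # contributes less than the one before it.
    return math.log1p(n)

def connecting_value(n, threshold=100):
    # Convex model (up to an arbitrary threshold): value grows faster
    # as records accumulate toward completing a "puzzle".
    return (min(n, threshold) / threshold) ** 2

# Marginal value of the next record under each model.
for n in (10, 50, 90):
    dm = diminishing_value(n + 1) - diminishing_value(n)
    cm = connecting_value(n + 1) - connecting_value(n)
    print(f"n={n}: diminishing marginal={dm:.4f}, connecting marginal={cm:.4f}")
```

Under the first model the marginal value of record n shrinks as n grows; under the second it grows. Which regime a surveillance program actually sits in is exactly the empirical question the government could be more transparent about.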
I'm skeptical about the value of this data for identifying terrorists and preventing their attacks. Any competent terrorist network will use burner phones, run its own email servers, and communicate in code. But maybe the data surveillance program has value because it raises the bar to this level of infrastructure and process, and thus makes it harder for such networks to operate.
I'm not concerned about the use of my data for security purposes, especially if it can save innocent boys and girls from losing limbs at the hands of sick whackos. I am really concerned it might get reused for other purposes in ways I don't approve of, or by folks whose motives I don't approve of. So I'm sure we could improve oversight, not only over what data gets used and how, but over the vast, outsourced, increasingly unaccountable government we have in place. But right now, against the broader backdrop of gridlock on essentially every important public issue, I just think the debate needs to get more utilitarian and less political and ideological. And I think analytically inclined folks can play a productive role in making that happen.
(Thanks to @zimbalist and @perryhewitt for steering me to some great links, and to Sam for pushing my thinking.)