And, Also, It Doesn’t Work
I alluded to this in my first piece of the week on the NSA’s data mining, but even leaving aside civil liberties concerns it’s also important not to take for granted that violations of privacy do, in fact, mean increased “national security.” Rachel Levinson-Waldman has a terrific piece on the subject:
First, intelligence and law enforcement agencies are increasingly drowning in data; the more that comes in, the harder it is to stay afloat. Most recently, the failure of the intelligence community to intercept the 2009 “underwear bomber” was blamed in large part on a surfeit of information: according to an official White House review, a significant amount of critical information was “embedded in a large volume of other data.” Similarly, the independent investigation of the alleged shootings by U.S. Army Major Nidal Hasan at Fort Hood concluded that the “crushing volume” of information was one of the factors that hampered the FBI’s analysis before the attack.
[…]
Credit card companies are held up as the data-mining paradigm. But the companies’ success in detecting fraud is due to factors that don’t exist in the counterterrorism context: the massive volume of transactions, the high rate of fraud, the existence of identifiable patterns (for instance, if a thief tests a stolen card at a gas station to check if it works, and then immediately purchases more expensive items), and the relatively low cost of a false positive: a call to the card’s owner and, at worst, premature closure of a legitimate account.
By contrast, there have been a relatively small number of attempted or successful terrorist attacks, which means that there are no reliable “signatures” to use for pattern modeling. Even in the highly improbable and undesirable circumstance that the number of attacks rises significantly, they are unlikely to share enough characteristics to create reliable patterns.
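The base-rate point here is worth making concrete. As an illustration (all of these numbers are hypothetical, not from the piece), even a remarkably accurate screening system, applied to a population where actual plotters are vanishingly rare, will flag overwhelmingly innocent people:

```python
# Illustrative base-rate arithmetic; every number below is hypothetical.

def posterior_true_positive(base_rate: float,
                            true_positive_rate: float,
                            false_positive_rate: float) -> float:
    """Bayes' rule: probability a flagged person is an actual threat."""
    flagged_threat = base_rate * true_positive_rate
    flagged_innocent = (1 - base_rate) * false_positive_rate
    return flagged_threat / (flagged_threat + flagged_innocent)

# Suppose 100 actual plotters in a population of 300 million, and a
# detector that catches 99% of them while wrongly flagging only 0.1%
# of everyone else -- far better than any real system.
p = posterior_true_positive(base_rate=100 / 300_000_000,
                            true_positive_rate=0.99,
                            false_positive_rate=0.001)
print(f"{p:.6f}")  # about 0.00033 -- roughly 3,000 false alarms per real hit
```

Credit card fraud, with its high base rate and cheap false positives, sits on the favorable side of this arithmetic; counterterrorism sits on the unfavorable one.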
Read the whole etc.