This post by Greg Djerejian on the now legendary O’Hanlon/Pollack op-ed is really indispensable; Greg takes them to task on their lack of hard evidence, reliance on impression, and apparent belief that the Iraq they saw on tour is representative of Iraq as a whole. He also mentions something that I touched on earlier:
Yes, O’Hanlon is using some other metric, ostensibly allowing him to feel comfortable stating that civilian fatality rates are “down roughly a third since the surge began” (the official date of the surge’s commencement, of course, a matter shrouded in rather a lot of obfuscation and spin).
Indeed, Surge advocates can be counted on, almost to a person, to give only the most convenient date for the start of the operation, such that any statistical evaluation paints the most positive possible picture. In some sense, it's true that the Surge began only a month ago; that's roughly when the full five brigades completed their deployment to Iraq and all became involved in operations. Unfortunately, it's also deeply (and intentionally) deceptive, since the buildup began in February and preparatory Surge operations began then as well. If the Surge only began on July 1, or June 1, or whatever date you prefer, then it has to be acknowledged that the operations necessary to make the Surge happen began well before that time, and consequently that the Surge should be evaluated in the context not only of its direct effects but also of the effects of its preparation. Obviously, this places the Surge in a far darker (statistical) light, given that the operation has been accompanied by the highest coalition casualties of the war, extremely high Iraqi casualties, and no noticeable improvement in services or in the political situation. See, for example, this BBC report on the availability of electricity in Baghdad.
At some point, of course, you have to wonder why Surge advocates rely on such obfuscatory tactics as either ignoring statistics in favor of anecdote or manipulating statistics in order to put the situation in the best light. On the former, Tami Biddle summarizes some of the psychological work quite nicely:
In general, we also prioritize incoming information according to its emotional vividness. Emotionally remote information, such as written memoranda, statistics, or second and third-hand reports, carries less impact than first-hand personal experience, especially when the latter is unusually painful, strikingly positive, or uniquely formative. The medium influences receptivity, independent of the analytical merit of the information per se.
This provides a handy, nutshell explanation for why, in spite of the statistical evidence available to (and, indeed, produced by) Michael O’Hanlon, he nevertheless felt capable of concluding that the war could be won after touring the battlefield.
Still, motivation has to be taken seriously, and I'm forced to wonder why Surge advocates employ statistics in only the most deceptive of ways. Let's take, for example, the claim that O'Hanlon and Pollack made about Iraqi casualty rates. To put it bluntly, any moron knows that the death toll calculated by the Iraqi government is worthless. The data collection methods are so poor that the Iraqi numbers aren't useful even as a proxy; that the given toll in one month is higher than the toll in another gives no hint whatsoever as to which was the deadlier month. Were I a Surge advocate given to statistical honesty, I would urge caution regarding numbers indicating that the death rate has remained high during the Surge, and exercise equal caution in arguing that the death rate has declined. Whatever those numbers may show, I would say, they are simply not a reasonable way to measure the effectiveness of the operation. I certainly wouldn't argue that "Iraqi casualty rates have dropped by a third since the Surge began", because while that may be true, I have no reliable evidence to back the claim up. And I certainly, certainly, certainly (that's very certain) wouldn't baldly manipulate data that I knew was bad in the first place in order to produce the outcome I wanted.
Much the same can be said of Coalition casualty rates. Coalition rates have a big advantage over Iraqi rates as a metric because we know they're more or less accurate: if Iraq Coalition Casualty Monitor says 108 soldiers died this month, that's probably pretty close to right. What the numbers don't reveal, however, is any particular meaning. Since preparations for the Surge began, Coalition casualty rates have skyrocketed. This does not necessarily mean that the Surge is a failure; it's not good, as it reveals that the insurgents continue to have the capacity to launch lots of deadly attacks, but it could simply be a consequence of increased Coalition operational tempo. If Coalition casualty rates remain high for a prolonged period (and, for my money, they have), then we can conclude that, at the very least, the Surge has failed to reduce the ability of the insurgency to launch attacks. We can't, however, say that because casualty rates went down in July, the Surge is working (O'Hanlon and Pollack don't do this, but other advocates have), especially when even the statistical assertion collapses under the most trifling scrutiny.
And here's the problem: Surge advocates, instead of urging caution in the use of statistics and (quite reasonably) pointing out that the statistics we have may not give a clear picture of what's going on, have repeatedly used the worst measures of effectiveness in the worst ways. When O'Hanlon or Pollack or a CENTCOM spokesman comes on your TV and tells you that reduced Iraqi casualties mean the Surge is working, you're safe in concluding that a) he's lying, and b) he knows he's lying. The next obvious step, then, is to wonder why these people use statistics that they know are bad in order to advance their case. And finally, that has to make us wonder why, if the Surge is so great and can bring us victory, people have to make stuff up in order to argue for its success.