I had no idea this study has had such a long life in the blogosphere. The Continental Op, after following up on my post yesterday, read the paper (Prof. Groseclose has posted it on his website, so we can all do that if we’re so inclined). He gets to the heart of the problem with counting and comparing NAACP and Heritage mentions as if they were comparable:
To come up with their measure of bias, the authors “count the times that a media outlet cites various think tanks and other policy groups.” They note that their “sample includes policy groups that are not usually called think tanks, such as the NAACP, NRA, and Sierra Club.” Their universe consists of the 200 groups identified as “Major Think Tanks and Policy Groups” at the web site www.wheretodoresearch.com. However, that web site does not provide any information about the criteria for inclusion in its database. The posted resume of Saguee Saraf, who runs www.wheretodoresearch, notes that he authored an “internal Cato Institute policy paper” entitled “Tearing Down Big Government Brick by Brick”, served as a research assistant to Renewing the Dream, a sequel to Newt Gingrich’s Contract with America, and was selected 1992 “Man of the Year” by the Cheshire (CT) Republican Town Committee–credentials that hardly mark him as an unbiased resource.
To my surprise, a similar criticism, amongst others, was noted by Geoff Nunberg at the Language Log over a year ago. Given that this paper has been through several permutations and seen many readers up to this point, it strikes me as odd that the current version contains no explanation or justification for the use of this list. Groseclose and Milyo offered a defensive response to Nunberg (they take him to task for his unprofessional tone, and then call him a liar a few lines down). The bulk of their response revolves around methodological issues I cheerfully admit are well over my head, and to the extent that it’s not over my head, their response seems correct. However, their defense of the list of 200 misses the point–and the problem–by a wide margin:
Nunberg finds fault with our list of think tanks and advocacy groups used to rate media outlets. But even if our sample of think tanks is skewed left or right, this will not bias our results.
When we began our study, Milyo, while searching the internet, found a list of think tanks that seemed to be a good place to start to look for data. This is the list created by Saraf. We have never met Saraf, nor do we know anything about him except what he lists on his web site. Further, when we first downloaded the list, we had not even read any other parts of his web site. In short, we knew nothing about Saraf or how his list was created. We chose the list simply because (i) it listed many think tanks, (ii) it seemed to include all the major ones, and (iii) it seemed to include a healthy balance of far-right, right-leaning moderate, moderate, left-leaning moderate, and far-left think tanks.
I’ll accept the claim that the methodology they use neutralizes any problems with right-wing shift in the list. I’m not entirely comfortable with this, but I don’t have the methodological chops to engage on this point. I’ll further stipulate that it doesn’t matter whether Saraf is a right-winger or not. The problem is that Saraf composed this list for entirely different purposes than Groseclose and Milyo use it for. The problem is one of comparing apples and oranges. Think tanks and issue advocacy groups are cited for very different reasons in the media. Think tanks, in mainstream media stories, are almost always cited as authorities, whereas advocacy groups are cited at times as authorities, at times as objects of criticism, and at other times as neither, but as news in themselves. Think tank cites plausibly tell us something about bias in a way that a majority of advocacy group cites wouldn’t. Furthermore, if you accept the basic ADA left-right quantitative ranking system (I’m far from certain I do, given the radically shifting nature of the political center and the increasingly radicalized and non-conservative nature of the modern Republican party, but I’ll assume it has some analytical value for the sake of argument), it makes a certain amount of sense to give think tanks a number, in a way that it doesn’t make sense for groups that advocate for a particular cause. There have been a number of Republican politicians who’ve been strong allies of the environmental movement over the years, but it doesn’t make the Sierra Club any more right-wing when it praises them, nor does it make NARAL right-wing to ally with a pro-choice Republican, or the NRA with a strong Second Amendment Democrat. This is not to say that these groups aren’t ideological, but if those ADA numbers are capturing anything, they’re capturing an ideological location based on a host of issues, rather than one particular issue.
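To make the underlying mechanic concrete: the simplest possible version of a citation-based slant measure would just average the ADA scores of the groups an outlet cites, weighted by how often it cites them. To be clear, this is a toy sketch of my own, not the actual Groseclose–Milyo estimator (theirs is a far more involved statistical model), and the group names and scores below are entirely hypothetical. But it shows why the apples-and-oranges problem matters: any weighted average like this takes every citation at face value, regardless of whether the group was cited as an authority, as an object of criticism, or as news in itself.

```python
# Toy illustration only -- NOT the Groseclose-Milyo method.
# Group names and ADA-style scores below are hypothetical,
# on a 0 (most conservative) to 100 (most liberal) scale.

def naive_slant(citations, ada_scores):
    """Citation-weighted average of the ADA scores of cited groups."""
    total = sum(citations.values())
    if total == 0:
        raise ValueError("no citations to score")
    return sum(count * ada_scores[group]
               for group, count in citations.items()) / total

ada_scores = {"ThinkTankA": 20.0, "ThinkTankB": 80.0}
citations = {"ThinkTankA": 3, "ThinkTankB": 1}   # hypothetical outlet

print(naive_slant(citations, ada_scores))  # 35.0 -- leans "conservative"
```

Note that the function cannot distinguish a story that cites ThinkTankB approvingly from one that cites it to attack it; every mention moves the average the same way, which is precisely the objection to treating advocacy-group mentions like think-tank citations.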
Of course, simply comparing think tank cites would hardly solve all problems. As everyone who’s paying attention knows, some think tanks have a pretty good reputation for honest, serious work, whereas others are notorious propaganda mills. There’s no good reason for moderately intelligent journalists not to know this. It might plausibly be argued that stories that cite serious think tanks (left, right, or center) are trying to get the story right, while those that cite propaganda-mill think tanks as authorities are at best playing the faux-objectivity game Jon Stewart has skewered so effectively, and at worst are outright hackery. Treating all think tanks the same isn’t going to pick that up.
I haven’t finished reading the paper (I will, soon), but I’ve read enough to have grave concerns about the way they formulate the issue of bias/slant. From pages 16-17:
Instead, for every sin of commission, such as those by Glass or Blair, we believe that there are hundreds, and maybe thousands, of sins of omission—cases where a journalist chose facts or stories that only one side of the political spectrum is likely to mention. For instance, in a story printed on March 1, 2002, the New York Times reported that (i) the IRS increased its audit rate on the “working poor” (a phrase that the article defines as any taxpayer who claimed an earned income tax credit); while (ii) the agency decreased its audit rate on taxpayers who earn more than $100,000; and (iii) more than half of all IRS audits involve the working poor. The article also notes that (iv) “The roughly 5 percent of taxpayers who make more than $100,000 … have the greatest opportunities to shortchange the government because they receive most of the nonwage income.”
Most would agree that the article contains only true and accurate statements; however, most would also agree that the statements are more likely to be made by a liberal than a conservative. Indeed, the centrist and right-leaning news outlets by our measure (the Washington Times, Fox News’ Special Report, the Newshour with Jim Lehrer, ABC’s Good Morning America, and CNN’s Newsnight with Aaron Brown) failed to mention any of these facts. Meanwhile, three of the outlets on the left side of our spectrum (CBS Evening News, USA Today, and the [news pages of the] Wall Street Journal) did mention at least one of the facts.
So the article is accurate and true (and, one might add, rather important), but because conservatives aren’t likely to mention its facts, we should consider it biased. The authors don’t even attempt to show that the reported facts about the IRS policy changes omit something important.
Obviously, there is a version of what Groseclose and Milyo are suggesting here that’s correct–the world is full of facts, and which ones get reported matters a great deal, independent of the accuracy of the reporting. The extensive coverage of the disappearance of Natalee Holloway says something rather unfortunate about CNN and Fox News, regardless of the accuracy of that coverage. But what we’ve got here is a shift in the behavior of a major federal agency. The conception of bias here has grown to such an extent that it’s hard to imagine what a news outlet could do to avoid it, other than wait and see if the other major news outlets cover a story as well. The definition of bias at this point is becoming so broad as to be not particularly helpful. But at any rate, since the authors have posted their paper online, you don’t have to trust me, the Continental Op, jedmunds, or Jeff Goldstein, which is a good thing, because I’m sure our ADA scores would indicate that none of us are trustworthy sources.