A study, “Poor replication validity of biomedical association studies reported by newspapers,” published Feb. 21, indicates that what I long suspected is true: medical reporting by the general press is a hot mess.
It’s not news — so to speak — that credulous reporters too often produce nuance-free articles about research that deserves not only caveats but outright skepticism, nor is it news how much coverage of science, and biomedicine in particular, suffers from “shiny object syndrome” — the uncontrollable impulse to chase after the latest thing to catch the eye, as long as it’s pretty and uncomplicated.
Now, however, researchers at the University of Bordeaux, France, have connected the dots with a study that shows the extent of the problem.
Their analysis of media coverage indicates that studies written about in newspapers are highly likely to be later overturned.
“This is partly due to the fact that newspapers preferentially cover ‘positive’ initial studies rather than subsequent observations, in particular those reporting null findings,” the researchers note in their study, which appears in the journal PLOS ONE.
The authors of the study offer some solutions for journalists and scientists:
Our study also suggests that most journalists from the general press do not know or prefer not to deal with the high degree of uncertainty inherent in early biomedical studies. Importantly, such biased newspaper coverage can have important social consequences [32, 33]. For example, a content analysis of newspaper articles covering Caspi’s study showed that they emphasized the genetic side of the gene-by-environment interactions and this discourse deflected public attention away from the considerable impact of social inequalities upon health. Therefore, with Susan Watts, we advocate that society “needs science journalism to weigh up the values and the vices of new science” (p. 151). In particular, when preparing a report on a scientific study, journalists should always ask scientists whether it is an initial finding and, if so, they should inform the public that this discovery is still tentative and must be validated by subsequent studies. Larsson and coworkers (2003) have identified the obstacles science journalists meet to accurately cope with uncertainty. In particular, most interviewed journalists feel that it is difficult to find scientists who are independent from authors and who are willing to assist them. Thus, scientists, either as independent experts or as authors, are also responsible for improving the informative value of biomedical reporting in the mass media. In particular, they are responsible for the accuracy of the press releases covering their work and published by scientific editors or universities.
I appreciate the advice to potential expert sources, but ultimately the reporter must be able to review and assess the study itself (not the press release or even the abstract). Or to back up a step, the reporter’s supervisor needs to care about the quality of the organization’s science reporting. And that’s why I remain cautiously pessimistic about any improvement in the way the general press reports on research.
It isn’t hard to learn how to look at a study and make some basic decisions about it. For example, were the subjects mice or men? If the subjects were all human, how many of them were there and what were their genders and ages? Then one can look at the purpose of the study, the type of study and finally the results.
These days the reporter doesn’t even have to be unchained from his desk and allowed to attend a seminar. Places like the Poynter Institute and journalism schools post helpful guides online.
This doesn’t appear to be happening, so I’ve been forced to conclude no one cares. The Health & Science section is a place for sciency-flavored fluff. Tell people they can eat dark chocolate and drink red wine, give them four different views on global climate change from six people who have letters that may or may not be relevant after their name, throw in something about death and repeat.