Science Reporting: Spotting & Correcting The Over-Hype
I just finished reading an interesting article over at NewScientist that goes some way toward explaining the behaviour of certain people in the media business as they try to sensationalize stories. A few moments ago some of you might have seen me in my debating stance over the moon landing conspiracy theory, so for anyone interested in knowing how overly hyped and often falsified stories and conspiracies are born, here are some snippets I enjoyed from the article:
I am not in favour of treating science as a special case, but I think it can be argued that some science stories are of such great public interest that the highest standards of journalism must apply.
When the press gets it wrong on science, the results can be devastating. The furore over MMR, which started in 1998 after a rogue doctor claimed a link between the vaccine and autism, is the best known example of how poor reporting can cause harm. Vaccination rates dropped to 80 per cent and cases of measles in England and Wales rose from 56 in 1998 to 1370 in 2008.
I would just like to add that while it is extremely easy for the public, myself included, to get sucked in by sensationalist headlines, the most important and responsible thing you can do as a citizen or citizen journalist is to work out which parts of what you read were sensationalized, by reading further into the story using more legitimate sources until you understand it. Then you correct yourself, or your work, to better fit these standards of reporting, so that we don't repeat the mistake, or at least so it becomes harder for us to fall for it again.
The media was not solely responsible for the MMR scare, but some of the news values that caused the problem are alive and well: the appetite for a great scare story; the desire to overstate a claim made by one expert in a single small study; the reluctance to put one alarming piece of research into its wider, more revealing context; journalistic “balance” – which creates the impression of a significant divide in scientific opinion where there is none; the love of the maverick; and so on.
It’s my view that if you put the best scientists, science communicators and science journalists in a room it wouldn’t take long for them to agree on the basics of good medical science reporting.
A tick list would look something like the following. Every story on new research should include the sample size and highlight where it may be too small to draw general conclusions. Any increase in risk should be reported in absolute terms as well as percentages: for example, a “50 per cent increase” in risk or a “doubling” of risk could merely mean an increase from 1 in 1000 to 1.5 or 2 in 1000. A story about medical research should provide a realistic time frame for the work’s translation into a treatment or cure. It should emphasise what stage findings are at: if it is a small study in mice it is just the beginning; if it’s a huge clinical trial involving thousands of people it is more significant. Stories about shocking findings should include the wider context: the first study to find something unusual is inevitably very preliminary; the 50th study to show the same thing may be justifiably alarming. Articles should mention where the story has come from: a conference lecture, an interview with a scientist or a study in a peer-reviewed journal, for example.
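The absolute-versus-relative-risk point in the checklist above is easy to verify with a few lines of arithmetic. Here is a minimal sketch (the function name is mine, and the numbers are just the article's illustrative 1-in-1000 example) showing why a "50 per cent increase" can sound far scarier than the underlying change in risk:

```python
def absolute_change(baseline_risk: float, relative_increase: float) -> float:
    """Return the absolute increase in risk, given a baseline risk and a
    relative (fractional) increase, e.g. 0.5 for '50 per cent'."""
    return baseline_risk * relative_increase

baseline = 1 / 1000  # a baseline risk of 1 in 1000, as in the article
increase = absolute_change(baseline, 0.5)  # a "50 per cent increase"

print(f"Baseline risk: {baseline:.4f} (1 in 1000)")
print(f"Absolute increase: +{increase:.4f}")
print(f"New risk: {baseline + increase:.4f} (i.e. 1.5 in 1000)")
```

Reporting only the "50 per cent" figure hides the fact that the absolute risk moved by half a tenth of a per cent, which is exactly the context the checklist asks journalists to supply.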
I felt inclined to highlight this bit because this sort of process has happened to a lot of studies; take global warming, for instance. Plenty of people still think it's a hoax, a lie, or a miscalculation despite the evidence to the contrary. This is a direct case in which bad reporting harms public understanding of a subject, and it highlights why it is important to legitimize the reporting, since people would like to be kept in the clear about such dire matters.
And in the end, when all is said and written, it's hardly ever the news source that reported the misconception or published the sensationalized article that gets the short end of the stick; it's the scientists who did the study. I am not completely ruling the scientists out as part of the problem, if they genuinely falsify data or withhold key information, but I am saying that much work is needed on the media's side of this as well (myself included).
However, as you can see, a lot of emphasis is placed in the article on a journalist's job: to legitimize the story to the best of their knowledge, and to expand that knowledge using reliable sources in order to gain credibility of their own. But remember, we all make mistakes, and the valuable thing is to learn from the mistake and work on it using new or different methods than you had originally applied. Head on over to NS for the full article; it's well worth the read if you're interested in the legitimacy of reporting, or just in the workings of sensationalism and how to control and prevent it to yield better stories.
Full Article: A few simple checks would transform science reporting