Monday, December 28, 2009

"To clarify, I am not a Santa researcher..."

So the 'Santa causes obesity and drink-driving' spoof article in the BMJ was (mis)reported all over the place, from the Today programme on the radio to the free papers on the train, and even the supposedly serious papers.

This, I thought, highlighted an important issue - the author himself suggested that the confusion may have arisen because journalists read only the press release, not the original article. I suspect this is very common, judging by the number of science stories that are almost word-for-word copies of the press releases they're based on.

Is it actually unreasonable to expect a science correspondent to read the paper or article when reporting on scientific research? There's the issue that not all papers are open access, but you would think that the BBC, for example, could afford subscriptions or pay-per-view access. Then there's the question of whether the journalist could make sense of the paper once they'd got it. That's a fair point: researchers write their papers with their peer group as the intended audience, and a general science or health journalist can't be expected to be an expert in every specific field or every scientific discipline.

But still, things could be better. Journalism is supposed to be about investigating and finding out the truth, so journalists could at least try to read the paper. They should be sufficiently qualified to make some attempt at understanding the gist of it, even if they're not familiar with all the details of the methodology - they can surely do some learning themselves, or put questions to an 'expert' in the field or indeed to one of the paper's authors. And they should have enough familiarity with statistics to be able to make some judgements of their own.

An interesting exercise I once had to do as an undergraduate was to look at several papers (selected by the lecturers), some of which were 'good' and some of which should never have made it to publication. There were some obviously terrible ones - for example, graphs missing error bars or meaningful labels and units, which fudged the results, or papers drawing grand conclusions that couldn't really be justified by the results. It shouldn't be too much to expect a science or health writer to be able to do something like this, at least.

Medical writers, for example, usually need a PhD, or at least a BSc in a relevant medical or life sciences discipline, or relevant experience - and there's a good reason for that. It just doesn't work to have people writing blindly about a subject they don't understand, using words like 'virus' and 'bacterium' interchangeably because they don't know or care about the difference, and lacking the education or intellectual ability to critically evaluate what they're writing about. The situations are not the same, of course - a confused or ignorant medical writer could cause disastrous consequences, whereas the equivalent journalist usually does nothing worse than annoy pedants. But bad science journalism can occasionally have serious consequences of its own - a good example is health scares such as the MMR/autism 'controversy'.

Also, why doesn't the BBC ever link to the original research, for those who want to read it for themselves? I feel it's time to write a letter...