How Can You Tell What “Good” Science Really Says?

This column responds to a good question from an organic agriculture acquaintance. She’s convinced that I am quick to endorse any scientific report supporting my tech-oriented perspective, while rejecting those supporting hers. “How do you decide what’s right and what’s not?” she asked. Fair question. This is my response.

Though decades have passed since I was an active university researcher (I was a professor of crop science at the University of Guelph), I still read many research papers and remember well the process for publishing results in a peer-reviewed journal. The process was imperfect: some reviewers were too picky; some not picky enough. Some journals are much “easier” than others.

But imperfections aside, the system worked quite well. A peer-reviewed article was generally considered credible. “Peer-reviewed” was an assurance of meeting certain standards of quality.

My faith in the system has since weakened. I now read too many scientific papers that I cannot believe made it through a proper peer-review process – papers offering only the sketchiest description of experimental methodology, or limited statistical analysis, or data that are clearly cherry-picked, or abstracts and conclusions that extrapolate far beyond what the data justify. And then there is a flood of new journals emphasizing speed over thoroughness: you pay the money and you get published quickly, especially if your findings are sufficiently sensational to make the national news. Journals, and some scientists and their institutions, even issue news releases to make sure this happens.

And now we have retractions: peer-reviewed articles being withdrawn after publication. I never heard of such a thing in my research and publishing days.

I do understand the academic process, which really has not changed much for generations. As a researcher you must publish – to get a permanent job, secure tenure and promotion, gain stature, win research grants, and more. And it’s not enough just to publish: you must also be cited. That means you emphasize positive, not negative, results. “Dramatic new findings” are best. And as the number of permanent public research positions plateaus or declines, the pressure increases.

In theory, this should mean higher quality: if the potential supply of papers balloons, then raise the standards. Sadly, I don’t think this has happened. There’s too much money in the journal publication business. Perhaps everyone is too busy writing to have time for reviewing the submissions of others. I see a lot of what I consider to be crap – to be blunt – in the form of peer-reviewed publications.

I’ve wondered: maybe it is just me, an old guy with a too-rosy memory of days past. But then I read an outstanding article in a recent issue of The Economist decrying the same trends (“Unreliable research: trouble at the lab,” Oct. 19, 2013, http://goo.gl/iE5fha).

And there is the experience of John Bohannon, a biologist at Harvard, who deliberately fabricated an entire experiment and research report, making sure the paper contained obvious flaws – and saw it accepted for publication by more than half of the 304 research journals to which it was submitted.

The bar has been lowered: “peer-reviewed,” while still better than not, is no longer the guarantor of quality and credibility it once was.

For me in crop agriculture, the most high-profile example of failure is the recently retracted rats-fed-glyphosate-and-GMO-corn paper of Dr. Séralini and colleagues at the Université de Caen. The journal, Food and Chemical Toxicology, is (or at least was) considered prestigious. The editors ultimately did the right thing – a forced retraction over the authors’ objections. But serious damage was done: to the stature of GMO technology and the well-being of people who could benefit from its use, to the credibility of the journal, and to science itself.

There are lots more examples within the realm of agricultural technology. A paper by Carman et al. in the obscure Journal of Organic Systems, claiming notable health problems for pigs fed GMO crops, is one. A paper from computer staff at MIT in another obscure journal, Entropy, asserting even larger health problems for humans exposed to the herbicide glyphosate, is another. Both papers triggered immediate responses from knowledgeable critics, who exposed the obvious flaws, and both have been dismissed by most informed scientists. But the fact remains: they were peer-reviewed, they have not been retracted, and they continue to be cited publicly as “proof” by those with anti-GMO perspectives.

The retractions aren’t all on one side. Dr. Pamela Ronald, a well-respected geneticist at the University of California (she works on GMO rice), recently retracted her own peer-reviewed paper because she later discovered that some of the data were incorrect. The anti-GMO crowd had a field day praising the retraction and using it to try to undermine Ronald’s credibility – just as they had reacted, in the opposite direction, to the Séralini retraction. The difference is that Ronald initiated the retraction herself.

One cannot totally eliminate personal perspective and bias when judging which scientific papers and reports are credible and which are not. I have those biases myself, as does everyone in science. I can counter them by being more receptive to papers that challenge my personal bias, but that’s not being objective either.

So how do you make a judgement? For what it’s worth, here are a few guidelines I use:

1. Are the findings consistent with basic principles, i.e., physical, chemical and (to the extent that we understand them) biological and economic principles?

As an example, I’ve had no problem accepting that an increase in atmospheric concentrations of carbon dioxide should mean an increase in average global temperature and rainfall; both are entirely consistent with basic physics. I’ve had more trouble with statements and conclusions that global climate is becoming more variable, because the physical basis seems far less obvious. (Working Group I of the Intergovernmental Panel on Climate Change seems to have the same problem.)

2. Is the statistical analysis solid? I don’t expect the data in every paper to have been subjected to every possible statistical test – most of which I don’t understand myself – but it is reasonable to expect proper replication and basic analysis of the results.

Another example: a recent paper from Purdue University is being cited everywhere as proof that a category of insecticides is deadly to bees. Its high profile in the journal PLoS One attracted lots of attention. I confess my bias: I don’t believe these insecticides are the dominant cause of bee colony mortality, as the anti-pesticide crowd proclaims. But the paper presented data with little meaningful statistical analysis and included unreplicated measurements. How did it pass peer review?

3. Do results seem consistent with common experience?

Again, the Séralini and Carman papers come to mind. If their results were correct and applicable to humans, we should expect to see massive health problems among the billions of people who have eaten GM-based foods. We haven’t.

4. Is there a conflict of interest?

This is commonly raised as a huge concern if the data/publication are linked to industry funding. That’s a legitimate concern. But it applies equally when the work comes from someone with a known agenda/bias of a contrary nature. Is research supported by Monsanto any less credible than that linked to Greenpeace? This does not mean that reported results are automatically wrong, but that they do need to be interpreted with special caution.

One judgement I use is whether the researcher is known for findings that are always one-sided – for example, always supportive or always negative on issues such as pesticide safety, GMO technology, increased climatic variability, or hundreds of other controversial topics. If always one-sided, I’m suspicious.

5. Has the work been repeated/verified by independent researchers?

This is a given requirement for the acceptance of almost all research findings. Even when results are reported as statistically significant at the 95% probability level, that still leaves roughly one chance in 20 that the result is a random fluke, and the potential for inadvertent errors induced by the research technique itself is larger. This has been called the “single study syndrome.” Research findings must be confirmed by other labs before being accepted as “likely true.” The rough arithmetic below shows how quickly chance alone can produce a “significant” result.
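As a back-of-the-envelope illustration (my own hypothetical numbers, not drawn from any particular study): suppose a single experiment measures 20 independent traits and tests each one at the 5% significance level. The chance that at least one of them comes up “statistically significant” purely by chance is

\[
  1 - (1 - 0.05)^{20} \;=\; 1 - 0.95^{20} \;\approx\; 0.64 ,
\]

that is, roughly a two-in-three chance of at least one false positive even when nothing real is going on. That is exactly why a single unreplicated study deserves caution.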

6. Is the researcher well recognized?

This one is problematic, as it would seem to discriminate against younger researchers, who are often the most brilliant. Perhaps a better criterion is the inverse: is the researcher recognized negatively?

This should have been a red flag with Dr. Séralini – the rat study is not the first time his work has been challenged for objectivity.

But it also works in reverse. I’ve been saddened by the story of Dr. Don Huber, formerly a respected Purdue University pathologist, who in later life has been making implausible and unsupported claims about a mysterious new organism which, he says, renders humans susceptible to a raft of illnesses triggered by glyphosate. Without his (formerly) good reputation, his recent assertions would have been dismissed as quackery well before now.

Of course, these guidelines all work best if you have some scientific experience, which most people lack. And even with scientific training, my guidelines are not infallible. They would surely have led me, had I been around at the time and working in physics, to reject Einstein’s initial paper on relativity – inconsistent with known physical principles and common experience, an unknown researcher, and so on. In my defence, most of Einstein’s contemporaries rejected his work initially as well. (I don’t know if they pushed for a retraction.)

The biggest scientific findings are those which contradict the “well known” – which is exactly where, and why, independent verification matters most. Bear in mind that science can never prove anything to be absolutely true. Einstein’s conclusions were accepted only after they were tested in some now-legendary experiments, and no one can say they are absolutely correct even now, more than a century later.

So what’s my advice for those without scientific training? Well, points 3, 4 and 5 above still apply. And a healthy degree of skepticism works too. Just because some new research finding gets high-profile attention on the national news does not mean it’s right; in fact, the odds are that it is wrong. The article in The Economist concludes that most research reports are, at best, only partially true. Wait at least a few days before drawing any conclusions – a few weeks or months is even better. Wait for the counter-perspectives to come out, which, incidentally, are unlikely to be reported on the national news; the Internet and social media are better sources.

Science is wonderful and the basis for a large portion of what we call quality of life. But it can be a big challenge to know just what good science really says. Unfortunately, that challenge is getting bigger. That’s a huge problem for those of us who proclaim, “Trust good science.”