Hilborn, R. 2006. Faith-based fisheries. Fisheries 31:554-555.
Ray Hilborn throws the evolution/intelligent design debate into the laps of all of us concerned with the state of the oceans with this provocative piece, which suggests that many current claims about overfishing are overstated; in fact, that they are taken on faith. The article strikes a nerve, because as a scientist who also works on policy issues I often wonder how much of the “science” I relay (e.g., findings about perchlorate published in EHP that I would pass along to my old boss, Congresswoman Hilda Solis) I have actually just taken on faith. The usual answer is that we have “faith” in the peer review system: that the science reported is not faith-based but grounded in accepted methodology and scientific norms. Hilborn goes right to the heart of this argument by suggesting that many of the more alarming recent papers, especially in Science and Nature, are either not properly peer reviewed or not properly followed up on when contradictory information appears. He suggests that both the idea of failed fisheries management and the papers purporting to show massive overfishing are taken on faith. Hilborn specifically attacks a handful of papers, and I leave it to those authors to defend their work, here (in the comments below) or elsewhere.
The larger question raised by Hilborn’s piece is whether there is a systematic flaw in peer review, especially at these flashy journals and especially in fisheries research. All of us who have published in science have seen flaws in the peer review system (I’m still pissed about a paper that got sunk by one very highly esteemed reviewer who happened to be completely wrong).
One solution is Hilborn’s idea of publishing the names of reviewers who supported publication. The benefit, according to Ray, is that we would then know whether there is some consistent source of bias (are the same old folks reviewing all these papers?). I can see a few problems with this. Knowing that someone supported a past paper of yours may subconsciously bias you toward supporting their paper when you review it (scientists are human, after all). Moreover, it could bias who gets selected to review papers: Hilborn, for example, might ask editors not to let anyone who has favorably reviewed these “doom and gloom” papers review his work, while the “doom and gloom” crowd might steer their manuscripts toward a limited subset of “yes” men and women. The best this kind of “outing” of reviewers could accomplish is a Bill O’Reilly, Fox News-style blacklist of people you can only argue, post hoc, are going too easy on a subset of authors. I’m not a fan of creating strawmen that fit a certain framework and then knocking them down; let’s argue the individual papers on their merits.
I hope that this forum can provide a better solution: a way to get into discussion of controversial new papers quickly, rather than through the laborious and uncertain process of publishing a counter-argument and waiting for a counter-counter-argument to appear in the same or another journal (a process Hilborn rightly points out is flawed).