Do you remember the Sokal affair? The mathematical physicist Alan Sokal submitted a hoax paper titled “Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity” to the postmodern cultural studies journal Social Text, cobbling together the silliest tropes he could find about the social construction of science and left-wing politics. Once it was published, he used it as a club to beat up on postmodern theory, and the humanities more generally. Many self-satisfied scientists claimed, at the time, that this confirmed all their worst suspicions about the emptiness of academic jargon-spinning in the humanities and social sciences. The implication — sometimes made explicit — was that the reverse could never happen: that scientists know exactly what their terms mean, and could never be fooled by such a prank.
It turns out, the situation is much worse than that. Springer and IEEE have been forced to withdraw 120 papers that they published in various conference proceedings, and that turn out to have been randomly generated by the software SCIgen. Concerned to assure the public of the reliability of their peer-review process, Springer has now announced a technical solution: SciDetect, software that checks papers to determine whether they have been generated by SCIgen. (If it had been announced today I would have assumed that it was an April Fools’ prank, but the press release is from 23 March.)
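The press release doesn’t say how SciDetect actually works, but one plausible (and surely oversimplified) approach to this kind of detection is to compare a submission’s word-frequency profile against a corpus of known SCIgen output. Here is a toy sketch in Python; the function names, the cosine-similarity method, and the 0.8 threshold are all my own hypothetical choices, not Springer’s:

```python
from collections import Counter
import math

def freq_vector(text):
    """Word-frequency profile of a text, normalised to unit length."""
    counts = Counter(text.lower().split())
    norm = math.sqrt(sum(c * c for c in counts.values()))
    return {w: c / norm for w, c in counts.items()}

def similarity(a, b):
    """Cosine similarity between two normalised frequency profiles."""
    return sum(av * b.get(w, 0.0) for w, av in a.items())

def looks_generated(candidate, known_generated_samples, threshold=0.8):
    """Flag a paper whose vocabulary is suspiciously close to known
    machine-generated text. The threshold here is arbitrary."""
    cand = freq_vector(candidate)
    return any(similarity(cand, freq_vector(s)) >= threshold
               for s in known_generated_samples)
```

Any fixed statistical fingerprint like this, of course, is exactly what the next version of the generator could be tuned to evade, which is why an arms race is the natural outcome.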
All of which confirms me in my opinion that the whole system of peer review has outlived its usefulness, and is now living on as a vestigial parasite on the scientific enterprise.
Anyway, Springer has now thrown down the gauntlet, and young computer scientists should rise to the challenge of improving SCIgen, to fool the new software. We may see an accelerating arms race in scientific publishing, a kind of reverse Turing Test, with computers trying to fool other computers into believing that they are computer scientists. In the end, maybe the software will get so good that it will be doing original research and writing real scientific papers.
I am of two minds about efforts to put pressure on particularly bad actors in the scientific publishing field (such as Elsevier) to reform, since the result of that reform would be a slightly less greedy ectoparasite sucking the blood of the research community, slightly more sustainably. I think (as I wrote here) that the whole model of peer review is antiquated and oppressive and (as the British like to say) no longer fit for purpose. Perhaps we should seek to sharpen the contradictions, in the hopes that the academic proletariat will shake off these leeches. We should strive to make all journals like Elsevier, and double the prices.
Mathematician Timothy Gowers started organising a boycott of Elsevier a couple of years ago. I’m not sure how it’s going, but here’s some information about it. And here’s some artwork:
Although, in fairness, I must point out that it wasn’t Elsevier who first tried to lock down the Tree of Knowledge. It was this guy:
“Here’s your takedown notice.”
Now that Elsevier has taken to making legal threats against academics who publish articles on their own academic web sites, Henry Farrell is proposing a novel strategy that combines the boycott with an embrace of Elsevier’s tactics:
I think that everyone should submit as much of their work to Elsevier as they possibly can. Any article that has even a modest chance of success. People should bear through the revise and resubmit process as many times as it takes. Once the piece has finally been accepted, then, and only then, should they withdraw the article from consideration, and then publish it on their university or personal website with an “accepted by Elsevier Journal x and then withdrawn in protest,” together with a copy of the acceptance email (containing the editor’s email address etc).
I’ve just been reading two books on the climate-change debate, both focusing on the so-called “hockey stick graph”: Michael Mann’s The Hockey Stick and the Climate Wars: Dispatches from the Front Lines, and A. W. Montford’s The Hockey Stick Illusion: Climategate and the Corruption of Science. I’ll comment on these in a later post, but right now I want to comment on the totemic role that the strange ritual of anonymous peer review plays for the gatekeepers of science.
One commonly hears that anonymous peer review (henceforth APR) is the “gold standard” for scientific papers. Now, this is a reasonable description, in that the gold standard was a system that long outlived its usefulness, constraining growth and innovation by attempting to measure something inherently fluid and abstract against an arbitrary concrete criterion, and persisting through the vested interests of a few and the deficient imagination of the many.
That’s not usually what people mean, though.
An article is submitted to a journal. An editor reads it and decides to include it. It appears in print. What does APR add to this? Only that the editor also solicited the opinion of at least one other person (the “referee(s)”). That’s it. The opinion may have been expressed in three lines or fewer, and the editor may have ignored it entirely.
Furthermore, to drain away any incentive for the referee(s) to be conscientious about their work,
- They are unpaid.
- They are anonymous. We know how well that works for raising the tone of blog comments.
- Anonymity implies that their contributions will never be acknowledged by name. If they contribute important insights to the paper, they may be recognised in the acknowledgement section: “We are grateful for the helpful suggestions of an anonymous referee.” Very occasionally an author will suggest, through the editor, that a referee who has made important contributions be invited to join the paper as a co-author. More commonly, a paper will be sent from journal to journal, collecting useful suggestions until it has actually become worth publishing.*
- No one will ever take issue with any positive remarks the referee makes, as no one but the authors (and the editor) will ever see them. Negative comments, on the other hand, may get pushback from the author, and thus need to be justified, requiring far more work.
- Normally, the author will be forced to demonstrate that she has taken the referee’s criticism to heart, no matter how petty or subjective. This encourages the referee to adopt an Olympian stance, passing judgement on what by rights ought to be the author’s prerogative.
Of course, I don’t mean to suggest that most referees fail to do a conscientious job most of the time. I take refereeing seriously, and make a good-faith effort to be fair, judicious, and helpful. But I’m sure I’m not the only one who feels that the incentives push in other directions, and that to the extent that I do a careful job, it is mainly out of some abstract sense of duty. I am particularly irritated when I find myself forced to put original insights into my report in order to explain why the paper is deficient. I would much rather the paper were published as is, so that I could make my criticism publicly and then, if I’m right, be recognised for my contribution.