I wonder how Alan Sokal feels about becoming the new Piltdown, the metonym for a certain kind of hoax?
So now there’s another attack on trendy subfields of social science, being called “Sokal squared” for some reason. I guess the name suits the ambiguity of the situation: if you thought the Sokal Hoax was already big, squaring it makes it bigger; on the other hand, if you thought it was petty, this new version is just pettier. And if, like me, you thought it was just one of those things, the squared version is more or less the same.
The new version is unlike the original Sokal Hoax in one important respect: Sokal was mocking social scientists for their credulity about the stupid stuff physicists say. The reboot mocks social scientists for their credulity about the stupid stuff other social scientists say. A group of three scholars has produced a whole slew of intentionally absurd papers, in fields that they tendentiously call “grievance studies”, and managed to get them past peer review at some reputable journals. The hoaxers wink at the reader with facially ridiculous theses, like the account of canine rape culture in dog parks.
But if we’re not going to shut down bold thought, we have to allow for research whose aims and conclusions seem strange. (Whether paradoxical theses are unduly promoted for their ability to grab attention is a related but separate matter. For example, one of the few academic economics talks I ever attended was by a behavioural economist explaining the “marriage market” in terms of women’s trading off the steady income they receive from a husband against the potential income from prostitution that they would forego. And my first exposure to mathematical finance was a lecture on how a person with insider information could structure a series of profitable trades that would be undetectable by regulators.) If the surprising claim is being brought by a fellow scholar acting in good faith, trying to advance the discussion in the field, then you try to engage with the argument generously. You need to strike a balance, particularly when technical correctness isn’t a well-defined standard in your field. Trolling with fake papers poisons this cooperative process of evaluation.
There are bad papers in technical fields that could be considered hoaxes. Imagine, if you will, that someone were to make a false claim about the limits of certain stochastic spatial models associated with evolution, because it seemed to them (and their colleagues) inherently plausible, and then tried to justify it with some mumbo jumbo about an expansion where higher-order terms were being neglected, hiding the essential fact that the neglected terms were actually larger than the terms being retained. Crazy, right?
Now imagine that this has actually occurred, in a very reputable journal of mathematical physics. Front page news in the NY Times, right? This fundamentally undermines the so-called reliability of mathematical physics? Why isn’t anyone publishing this? The weirdly admired journal Nature recently published an article by a team of biologists, claiming that the maximum human lifespan was about 117 years, five years shorter than the longest lifespan actually observed to date. Was this a hoax? If so, the authors seem to be keeping the joke going.
But, of course, in these technical fields the prejudices that lead people to fallacious conclusions are themselves fairly technical, hence not likely to raise a big laugh or headlines if they were revealed to be a weird hoax.
The lesson is not that new approaches to social science directed at understanding (and undermining) entrenched power relations are uniquely susceptible to fashions and deceptions. It is that there is no gold standard for truth in science! And if there were, it definitely wouldn’t be a procedure where the editor solicits the opinions of approximately two researchers with somewhat relevant expertise, completely at random unless they’ve been selected as particular friends of the editor, and have them comment on the work anonymously and in secret. (The failure of peer review in the lifespan paper I mentioned was the subject of an exposé in the Dutch newspaper NRC Handelsblad.)
That said, it is certainly true that a field’s complexity, and its demand that a technical vocabulary be mastered as a threshold for even being considered a participant in the research community, is a good barrier to charlatans and cranks. (By no means impenetrable, though.) These barriers are also well known to have costs, so it is not true that the optimal setting for technical jargon is “as challenging as possible”. The culture of my undergraduate days — the scientific part of the culture, the part that wasn’t dominated by the reek of vomiting Brett Kavanaugh types — was dominated by the running battle between mathematicians and physicists over the appropriate level of technical rigor and standards of proof: mathematicians ridiculing the physicists as muddle-headed non-rigorous softies, and the physicists charging that the mathematicians were ignoring all the interesting problems, retreating to their isolated lampposts where their ponderous rigor machines actually could work.
What the success of these hoaxes suggests is perhaps a more limited critique than the gleeful “hard science” crew would like to claim: Social scientists have been too ready to suppose that mastery of a superficial technical vocabulary was a sufficient guarantee of bona fides for entry into expert discussion of their fields. In fact, their speech turns out to be easy to imitate, and the imitations hard to detect. This doesn’t mean, as some would claim, that their technical language is fake, shaman feathers, smoke and mirrors. It just means that the technicalities don’t go as deep. You can produce a plausible simulacrum without any real investment in the ideas and discourses of these fields.
One more point: A reason why a hoax like this is much easier in the social sciences and humanities than in mathematical sciences, say, is that the former fields mainly have double-blind reviewing — papers are reviewed without the authors’ names. In mathematical sciences it’s easy to investigate the credentials of the authors, and apply stringent scrutiny to interlopers and fly-by-night hoaxers.
You can either go conservative and shut down your openness to strange-seeming ideas, or you can rely on some sort of credentialing to keep fakers out of the debate in your field. (Economics has taken a modified form of the second approach, relying not so much on names and titles as on the mastery of mostly irrelevant mathematical tools to keep out the riffraff.)
Or you could (radical proposal coming here) allow individual journals and individual readers to set the balance according to their preferences, and not pretend that once a paper has passed peer review (TM) it is proven correct and readers can shut off their critical faculties.