The unbearable heaviness of buildings: Another episode in the series “Useless units”

Apparently, Manhattan is sinking by 1–2 mm per year, due to the weight of its skyscrapers. The Guardian reports on research led by Tom Parsons of the US Geological Survey, saying that New York City’s buildings “weigh a total of 1.68tn lbs”.

What’s that, you say? You don’t have any intuition for how much 1.68 tn lbs is? The Guardian feels you. They’ve helpfully translated it into easy-to-grasp terms. This, they go on to say, “is roughly equivalent to the weight of 140 million elephants”.
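For what it’s worth, the arithmetic does check out. A quick back-calculation (the two figures are the Guardian’s; the elephant is implied):

    # Back out the implied mass of the Guardian's standard elephant.
    TOTAL_WEIGHT_LB = 1.68e12      # "1.68tn lbs", per the Guardian
    N_ELEPHANTS = 140e6            # "140 million elephants"
    LB_PER_KG = 2.20462

    lb_per_elephant = TOTAL_WEIGHT_LB / N_ELEPHANTS
    kg_per_elephant = lb_per_elephant / LB_PER_KG
    print(f"{lb_per_elephant:,.0f} lb = {kg_per_elephant / 1000:.1f} tonnes per elephant")
    # -> 12,000 lb = 5.4 tonnes: a large but not implausible African bush elephant

So the conversion is internally consistent. Which only sharpens the question: who has any intuition for 140 million elephants?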

A new challenge for computer science

Do you remember the Sokal affair? Mathematical physicist Alan Sokal submitted a fake paper titled “Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity” to the postmodern cultural studies journal Social Text. Well, it was a real paper, but it was written as a hoax, cobbling together the silliest tropes he could find about the social construction of science and left-wing politics. Once it was published, he used it as a club to beat up on postmodern theory, and the humanities more generally. Many self-satisfied scientists claimed, at the time, that this confirmed all their worst suspicions about the emptiness of academic jargon-spinning in the humanities and social sciences. The implication — sometimes made explicit — was that the reverse could never happen, that scientists know exactly what their terms mean, and could never be fooled by such a prank.

It turns out, the situation is much worse than that. Springer and IEEE have been forced to withdraw 120 papers that they published in various conference proceedings, and that turn out to have been randomly generated by the software SCIgen. Concerned to assure the public of the reliability of their peer-review process, Springer has now announced a technical solution: SciDetect, software that checks papers to determine whether they have been generated by SCIgen. (If it had been announced today I would have assumed that it was an April Fools prank, but the press release is from 23 March.)

All of which confirms me in my opinion that the whole system of peer review has outlived its usefulness, and is now living on as a vestigial parasite on the scientific enterprise.

Anyway, Springer has now thrown down the gauntlet, and young computer scientists should rise to the challenge of improving SCIgen, to fool the new software. We may see an accelerating arms race in scientific publishing, a kind of reverse Turing Test, with computers trying to fool other computers into believing that they are computer scientists. In the end, maybe the software will get so good that it will be doing original research and writing real scientific papers.
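For the uninitiated: SCIgen generates its papers by randomly expanding a hand-written context-free grammar, which is why the output is perfectly grammatical and perfectly meaningless. A toy version of the idea, with a miniature grammar of my own invention standing in for SCIgen’s much larger one:

    import random

    # A miniature context-free grammar in the spirit of SCIgen.
    # Uppercase tokens are nonterminals; everything else is literal text.
    GRAMMAR = {
        "SENTENCE": [["We", "VERB", "that", "NOUNPHRASE", "VERB2", "NOUNPHRASE", "."]],
        "NOUNPHRASE": [["ADJ", "NOUN"], ["the", "ADJ", "NOUN"]],
        "ADJ": [["stochastic"], ["amphibious"], ["certifiable"], ["homogeneous"]],
        "NOUN": [["epistemologies"], ["hash tables"], ["Lamport clocks"], ["web browsers"]],
        "VERB": [["demonstrate"], ["conjecture"], ["disconfirm"]],
        "VERB2": [["refines"], ["emulates"], ["is impossible without"]],
    }

    def expand(symbol):
        """Recursively expand a nonterminal into a list of words."""
        if symbol not in GRAMMAR:
            return [symbol]
        production = random.choice(GRAMMAR[symbol])
        return [word for token in production for word in expand(token)]

    print(" ".join(expand("SENTENCE")))
    # e.g. "We conjecture that stochastic epistemologies is impossible
    # without the certifiable web browsers ."

Presumably SciDetect looks for the statistical fingerprints of exactly this kind of template-driven text; the obvious countermove is a richer generative model, and the arms race is on.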

Prenatal sex ratio

A paper that I’ve been involved with for a dozen years already has finally been published. We bring together multiple data sets to show that the primary sex ratio — the ratio of boys to girls conceived — is 1, or very close to 1. Consequently, the fact that more boys than girls are born — the ratio is about 1.06 pretty universally, except where selective abortion is involved — implies that there must be a period in the first trimester when female embryos are more likely to miscarry than male.

This is one of those things that is unsurprising if you’re not an expert. The experts had developed something close to a consensus, based on very little evidence, that the sex ratio at conception was much higher, some saying it was as high as 2 (so that 2/3 of the conceptuses would be male), with excess female mortality throughout gestation. (We know that male mortality is higher in the second half of pregnancy, and after that… forever.)
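The inference itself is just arithmetic: if the sexes start out equal and boys outnumber girls 1.06 to 1 at birth, then male survival through gestation must exceed female survival by exactly that factor. A sketch (the 30% male loss rate is an arbitrary illustration; only the 1.06 comes from data):

    # If the primary sex ratio is 1, the sex ratio at birth is fixed by
    # the ratio of male to female survival through pregnancy.
    conceived_male = conceived_female = 1_000_000   # primary sex ratio = 1

    male_loss = 0.30                 # hypothetical overall male loss in utero
    birth_ratio = 1.06               # observed sex ratio at birth
    # The survival ratio must equal the birth ratio:
    female_survival = (1 - male_loss) / birth_ratio
    female_loss = 1 - female_survival

    boys = conceived_male * (1 - male_loss)
    girls = conceived_female * (1 - female_loss)
    print(f"female loss = {female_loss:.1%}, sex ratio at birth = {boys / girls:.2f}")
    # -> female loss = 34.0%, sex ratio at birth = 1.06

Whatever the absolute loss rates, the female excess has to be concentrated somewhere in gestation; our data place it in the first trimester.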

The paper has its problems, but I think it’s a useful contribution. It’s also the first time I’ve been involved in research that is of any interest to the general public. Several publications have expressed interest, and articles have already appeared in two German online magazines, including the general news magazine Der Spiegel.

Update: Guardian too. This makes it interesting, in retrospect, that we had such a hard time getting a journal even to be willing to review it. One said it was too specialised.

We need better scientific fraud

A friend sent me this article about Dutch social psychologist Diederik Stapel, who “perpetrated an audacious academic fraud by making up studies that told the world what it wanted to hear about human nature.” What caught my attention was this comment about how the fraud was noticed:

He began writing the paper, but then he wondered if the data had shown any difference between girls and boys. “What about gender differences?” he asked Stapel, requesting to see the data. Stapel told him the data hadn’t been entered into a computer yet.

Vingerhoets was stumped. Stapel had shown him means and standard deviations and even a statistical index attesting to the reliability of the questionnaire, which would have seemed to require a computer to produce. Vingerhoets wondered if Stapel, as dean, was somehow testing him. Suspecting fraud, he consulted a retired professor to figure out what to do. “Do you really believe that someone with [Stapel’s] status faked data?” the professor asked him.

And later:

When Zeelenberg challenged him with specifics — to explain why certain facts and figures he reported in different studies appeared to be identical — Stapel promised to be more careful in the future.

How hard is it to invent data? The same thing occurred to me with regard to Jan Hendrik Schön, a celebrated Dutch (not that I’m suggesting anything specific about the Dutch…) [update: German, as a commenter has pointed out. Sorry. Some of my best friends are Dutch.] materials scientist who was found in 2002 to have faked experimental results.

In April, outside researchers noticed that a figure in the Nature paper on the molecular-layer switch also appeared in a paper Science had just published on a different device. Schön promptly sent in a corrected figure for the Science paper. But the incident disturbed McEuen, who says he was already suspicious of results reported in the two papers. On 9 May, McEuen compared figures in some of Schön’s other papers and quickly found other apparent duplications.

I’m reminded of a classic article from the Journal of Irreproducible Results, “A Drastic Cost Saving Approach to Using Your Neighbor’s Electron Microscope”, advocating that researchers take advantage of the fact that all electron micrographs look the same. It printed four copies of exactly the same picture, with four different captions: one described it as showing the fine structure of an axe handle; another said it showed macrophages devouring a bacterium. When it comes to plots of data (rather than photographs, which might be hard to generate de novo), I really can’t see why anyone would need to re-use a plot, or would be unable to supply made-up data for a made-up experiment. Perhaps there is a psychological block against careful thinking, or against willfully generating a dataset, some residual “I’m-not-really-doing-this-I’m-just-shifting-figures-around” resistance to acknowledging the depths to which one has sunk.

Certainly a statistician would know how to generate a perfect fake data set — which means a not-too-perfect fit to relevant statistical and scientific models. Maybe there’s an opportunity there for a new statistical consulting business model. Impact!
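To make the point concrete, here is roughly what inventing data properly would look like, for a fictitious two-group questionnaire study (every number here, the means, SDs, and sample sizes alike, is invented for illustration):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng()

    # Fake raw data for a fictitious two-group study: simulate individual
    # responses, then let the summary statistics fall where they may.
    control   = rng.normal(loc=4.0, scale=1.2, size=48)   # hypothetical group
    treatment = rng.normal(loc=4.6, scale=1.2, size=52)   # means, SDs, and ns

    t, p = stats.ttest_ind(treatment, control)
    print(f"control:   M = {control.mean():.2f}, SD = {control.std(ddof=1):.2f}")
    print(f"treatment: M = {treatment.mean():.2f}, SD = {treatment.std(ddof=1):.2f}")
    print(f"t = {t:.2f}, p = {p:.3f}")

The summary statistics that come out carry honest sampling noise, and two “studies” run this way would never report identical figures, which is precisely the blunder that tripped up Stapel and Schön.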

Update: Of course, I should have said, there’s an obvious bias here: I only know about the frauds that have been detected. They were unbelievably amateurish — the perpetrators couldn’t even be bothered to invent data — and still took years to come to light. How many undetected frauds are out there? It’s frightening to think about. Mendel’s wonky data went unnoticed for half a century. Cyril Burt may have committed the biggest fraud of all time, or maybe he was just sloppy, and we may never know for sure.

I just looked at the Wikipedia article on Burt, and discovered a fascinating quote from one of his defenders, the psychologist Arthur Jensen, which makes an appropriate capstone for this post:

[n]o one with any statistical sophistication, and Burt had plenty, would report exactly the same correlation, 0.77, three times in succession if he were trying to fake the data.

In other words, his results were so obviously faked that they must be genuine. If he were trying to fake the data he would certainly have made them look more convincingly real.
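Jensen’s premise is at least easy to check. A quick simulation of three independent studies, each estimating a true correlation of 0.77 (the per-study sample size of 50 is my guess; Burt’s actual sample sizes are part of what’s in dispute):

    import numpy as np

    rng = np.random.default_rng(0)
    rho, n, trials = 0.77, 50, 40_000   # assumed true r and per-study sample size
    cov = [[1, rho], [rho, 1]]

    hits = 0
    for _ in range(trials):            # takes a few seconds
        # Three independent "studies", each reporting a sample correlation.
        rs = [np.corrcoef(rng.multivariate_normal([0, 0], cov, n).T)[0, 1]
              for _ in range(3)]
        hits += all(round(r, 2) == 0.77 for r in rs)

    print(f"P(three sample correlations all round to 0.77) ~ {hits / trials:.5f}")

On these assumptions, the chance is roughly three in ten thousand.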

More Hockey Statisticks

I wrote last week about my surprising response to two books about the public conflicts over palaeoclimatology. Whereas I expected to find myself sympathising with the respected scientist Michael Mann, I found both authors equally repellent — both are smug and self-absorbed, both write crudely — and had most sympathy with Steve McIntyre, the former mining engineer who stars in Andrew Montford’s book. Fundamentally, I found that Mann’s own account made him seem like just the sort of arrogant senior scientist I have occasionally had to deal with as a statistician, one who is outraged that anyone outside his close circle would want to challenge his methodology.

A pair of long comments on the post underlined my impression of the cultish behaviour of people who have gotten enmeshed in the struggle over climate change, on both sides. The commenter writes:

I would suggest that McIntyre’s work went out of its way to try to cast doubt on Mann’s research, and in that process created as many errors of its own. Montford’s book takes that dubious effort and magnifies it for the purposes of attacking climate change science in general by vilifying a single piece of research by a single researcher.

I have to say, Montford’s effort has been highly effective. In one lecture I saw, given by Dr Richard Alley, he recounted being in Washington speaking to a science committee where one high level member stated, “Well, we know all this climate change stuff is based on a fraudulent Hockey Stick graph.”

I’m sure [Andrew] Montford appreciates your piece here perpetuating that position.

I don’t know exactly what Montford’s “effort” is. Certainly, in his book he has little to say about the rest of climate science, but what he does have to say can hardly give any impression other than that the “hockey stick” is a small part of palaeoclimatology, and that palaeoclimatology is a small part of climate research. He never accuses Mann or anyone else of fraud in his book, although he is unyielding and close to hysterical in imputing incompetence to Mann and some of his closest collaborators.

As for McIntyre’s work going “out of its way to try to cast doubt”, this hardly seems different from the usual way scientists are motivated. It’s no different from the comments about “getting rid of the Mediaeval Warm Period” that Montford obsessively cites as evidence of scientific corruption. I was never bothered by that comment, or by any of the comments that came out of the disgraceful email hack of the Climatic Research Unit, because I understand that scientists rarely launch an investigation without preconceptions. It’s perfectly plausible — even likely — that climate researchers had a strong gut feeling that this warm period was much less substantial than it had seemed, and were casting about for a way to prove the point. The trick is to have a rigorous methodology that won’t bend to your preconceptions. In the same way, McIntyre had a gut feeling that the climate was much more variable in the past than the mainstream researchers wanted to believe, and he set about proving his point by trying to find the flaws in their methodology.

The fact that later studies ended up confirming the broad outlines of Mann’s picture, and disproving McIntyre’s intuition does not make his critique any the less serious or important. And it doesn’t make Mann’s efforts to portray all of his opponents as villains any less unsavoury. And his efforts to present scientific defensiveness as high principle do a disservice to science in general, and to climate science more specifically.

The commenter describes Mann’s self-righteous refusal to provide essential materials for McIntyre’s attempts to re-evaluate his work as a natural response to “the levels to which ‘skeptics’ are willing to go. It may seem absurd, but I think that is only because the levels they go to are so outrageous.” Except that it looks to me as though Mann’s stonewalling came first. Maybe that’s wrong, but again, if so, he doesn’t seem to think anyone has a right to expect evidence of the fact.

Mann comes across in his own book as a manipulator who would like to tar all of his opponents with the outrageous actions that some have committed. He accuses McIntyre of “intimidation” without considering it necessary to provide any shred of evidence. The portion of their correspondence quoted by Montford shows nothing beyond occasional exasperation at Mann’s stonewalling. Obviously there could be more to it, but Mann seems so persuaded of his own saintliness that he expects his bare assertion of his own pure motives — and of the correctness of his methodology — to persuade every reader, and so convinced of the objectivity of his friends and colleagues that he expects merely quoting their statements in his defence to suffice.

Science is science, but many climate scientists have (quite rightly) decided that the implications of what they have learned demand political action. They can’t then express horror when others blend their scientific inquiry with a political agenda.

Of hockey sticks and statistics

[Updated at bottom] I recently read two books on climate science — or rather, two books on the controversies around climate science. One was Michael Mann’s The Hockey Stick and the Climate Wars; the other, Andrew Montford’s The Hockey Stick Illusion.

Now, I am, by inclination and tribal allegiance, of the party of Michael Mann, one of the world’s leading climate scientists. He and his colleagues have been subject to beastly treatment by political opponents, some of which is detailed in his book. And I only picked up the Montford book out of a sense of obligation to see what the opposing side was saying. And yet…

Montford’s book makes a pretty persuasive case. Not that climate science is bunk, or a conspiracy, or that anthropogenic global warming is a fiction — there is far too much converging evidence from different fields to plausibly make that claim (and indeed, Montford never makes such a claim) — but that a combination of egotism and back-scratching has seriously slowed down the process of evaluation and correction of sometimes sloppy statistical procedures, and tarnished the reputation of the scientific community generally.

I admit to a certain bias here: The attacks on Mann’s work that Montford describes are statistical in nature, and Mann’s response reminds me of the tone that is all too common when statisticians raise questions about published scientific work. Montford includes a remarkable amount of technical detail — so much that I found myself wondering who his intended audience is — and the critiques he describes (mainly due to the Canadian mining engineer Steve McIntyre) seem eminently sensible. In the end, I don’t think they panned out, but they pointed to genuine shortcomings in the early work, and McIntyre seems to have done the right things in demonstrating the failure of a statistical method, at least in principle, and to have earned for his trouble only incomprehension and abuse from Mann and his colleagues.
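For those who want to know what the statistical method actually was: the best-known McIntyre–McKitrick claim was that the principal-components step in the original reconstruction centred each proxy series on the twentieth-century calibration period rather than on its full length, and that this short-centring preferentially loads hockey-stick shapes onto the first principal component, even from pure red noise. A stripped-down simulation of the effect (the series count, length, calibration window, and AR coefficient are all arbitrary choices of mine):

    import numpy as np

    rng = np.random.default_rng(1)
    n_series, length, calib, reps = 70, 600, 100, 50

    def red_noise(n, t, phi=0.9):
        """Independent AR(1) series: persistent noise with no signal in it."""
        x = np.empty((n, t))
        x[:, 0] = rng.normal(size=n)
        for i in range(1, t):
            x[:, i] = phi * x[:, i - 1] + rng.normal(size=n)
        return x

    def pc1(data, short_centre):
        # The disputed step: centre each series on the calibration window
        # only, rather than on its full length.
        window = data[:, -calib:] if short_centre else data
        centred = data - window.mean(axis=1, keepdims=True)
        _, _, vt = np.linalg.svd(centred, full_matrices=False)
        return vt[0]          # leading principal component, as a time series

    def hockey_index(pc):
        # Gap between the "shaft" and the calibration-era "blade",
        # in units of the component's overall variability.
        return abs(pc[:-calib].mean() - pc[-calib:].mean()) / pc.std()

    for short in (False, True):
        idx = np.mean([hockey_index(pc1(red_noise(n_series, length), short))
                       for _ in range(reps)])
        print(f"short-centred = {short}: mean hockey index = {idx:.2f}")

In my runs, the short-centred index comes out noticeably larger; with full centring the gap is just whatever the noise happens to give.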


Sex education and the multiverse

I recently read and enjoyed David Deutsch’s book The Beginning of Infinity, a tour d’horizon of quantum physics and philosophy of science, brewed up with a remarkably persuasive idiosyncratic worldview, even if it does descend into a slightly cranky and increasingly ignorant rant on politics and economics by the end. This was my first introduction to the “multiverse”, which seems to be the modern version of the many-worlds interpretation of quantum mechanics. I was impressed at how cogent this picture has become since I last interested myself in quantum mechanics and its philosophical interpretations, in my teens.

It might not be right, but it does lay down a marker against the Copenhagen interpretation — position and path don’t exist except when measured, wave-particle “duality”, etc. — which in comparison seems more like a counsel of despair than a physical theory in any meaningful sense.

In thinking about it, I realised that I’ve long had the feeling that the Copenhagen interpretation was more than anything the physics educator’s version of chastity education: not a real solution, but mainly a way to avoid dealing with parents yelling “Your teacher told you what?!”

Germany’s German-brain drain

Der Spiegel has just published an interview with the Nobel Prize winner Thomas Südhof, in which the editors express their dismay that the Göttingen-born and -educated Südhof has spent his entire professional career in the US, except for an apparently disastrous two years as director of a Max Planck Institute. He sounds apologetic, praising Göttingen, his supervisor, and the research environment in Germany. He left only because

I think every scientist should spend some time abroad; a country should make this possible — but naturally should also try to get them back.

Hmm. “Try to get them back”? He also makes clear that he doesn’t even know if he has retained his German citizenship. The interview continues:

Spiegel: Many researchers leave for the US or England because they don’t like the conditions for scientists in Germany. What do you think?

Südhof: The research landscape in Germany is terrific. Many of my collaborators, very good people, have returned to Germany — happily. Germany has a lot to offer.

Spiegel: Why don’t you return?

Südhof: Professionally I’m probably too old. I’d like to keep doing research as long as I am able. In the US that’s possible. Otherwise I’d really love to return to Germany, if only so that my young children would learn the language.

He’s still seven or eight years away from normal retirement, and lots of exceptions are made, so this sounds like a polite excuse.

But I’m interested in this presumption that German scientists should want to return to Germany, and that Germany should be trying to lure them back. Germany isn’t Canada. It’s not as though German science is overrun with foreigners. The statistics I read a few years back were that about 94% of professors in German universities are German, and two thirds of the rest are from neighbouring German-speaking countries. My own experience has been that German universities are much less open to applications from foreign academics than British or Belgian or Dutch or French or Canadian ones. I don’t think the number of Germans at British universities is so much higher than the number of Britons at German universities because of “better research conditions”, and I think language is only a marginal issue.

Why is there a constant outcry over the need to bring back a few more sufficiently teutonic academics from abroad? I suggest that they should instead be thinking about how German universities can make themselves more attractive to good researchers — not just the few star scientists who can run a Max Planck Institute — regardless of nationality. I don’t have the impression that the UK goes into mourning when a British-born scientist working abroad wins a prize. And maybe, if German universities were less insular — and less prone to academic nepotism — more of the cosmopolitan sort of German scientists would be eager to build their careers there.

Certibus paribus

I was just reading a theoretical biology paper that included the phrase “certibus paribus”. (I won’t say which paper, because I’m not aiming to embarrass the authors.)

Now, I like unusual words, particularly if they’re Latin. Ceteris paribus is sufficiently close to common usage in some branches of science, including mathematics, or was into living memory, that I could imagine using it in print. Maybe I even have. But it’s sufficiently rare that I can’t imagine using it without looking up both the spelling and the meaning. So, while I can see how mistaken word usage can slip into one’s everyday language, and then spill out into print, I can’t quite picture how an error like this happens.