We need better scientific fraud

A friend sent me this article about Dutch social psychologist Diederik Stapel, who “perpetrated an audacious academic fraud by making up studies that told the world what it wanted to hear about human nature.” What caught my attention was this comment about how the fraud was noticed:

He began writing the paper, but then he wondered if the data had shown any difference between girls and boys. “What about gender differences?” he asked Stapel, requesting to see the data. Stapel told him the data hadn’t been entered into a computer yet.

Vingerhoets was stumped. Stapel had shown him means and standard deviations and even a statistical index attesting to the reliability of the questionnaire, which would have seemed to require a computer to produce. Vingerhoets wondered if Stapel, as dean, was somehow testing him. Suspecting fraud, he consulted a retired professor to figure out what to do. “Do you really believe that someone with [Stapel’s] status faked data?” the professor asked him.

And later:

When Zeelenberg challenged him with specifics — to explain why certain facts and figures he reported in different studies appeared to be identical — Stapel promised to be more careful in the future.

How hard is it to invent data? The same thing occurred to me with regard to Jan Hendrik Schön, a celebrated Dutch (not that I’m suggesting anything specific about the Dutch…) [update: German, as a commenter has pointed out. Sorry. Some of my best friends are Dutch.] materials scientist who was found in 2002 to have faked experimental results.

In April, outside researchers noticed that a figure in the Nature paper on the molecular-layer switch also appeared in a paper Science had just published on a different device. Schön promptly sent in a corrected figure for the Science paper. But the incident disturbed McEuen, who says he was already suspicious of results reported in the two papers. On 9 May, McEuen compared figures in some of Schön’s other papers and quickly found other apparent duplications.

I’m reminded of a classic article from the Journal of Irreproducible Results, “A Drastic Cost Saving Approach to Using Your Neighbor’s Electron Microscope”, advocating that researchers take advantage of the fact that all electron micrographs look the same. It printed four copies of exactly the same picture with four different captions: one described it as showing the fine structure of an axe handle, another said it showed macrophages devouring a bacterium. When it comes to plots of data (rather than photographs, which might be hard to generate de novo) I really can’t see why anyone would need to re-use a plot, or would be unable to supply made-up data for a made-up experiment. Perhaps there is a psychological block against careful thinking, or against wilfully generating a dataset, some residual “I’m-not-really-doing-this-I’m-just-shifting-figures-around” resistance to acknowledging the depths to which one has sunk.

Certainly a statistician would know how to generate a perfect fake data set — which means a not-too-perfect fit to relevant statistical and scientific models. Maybe there’s an opportunity there for a new statistical consulting business model. Impact!
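
To make this concrete, here is a minimal sketch, in Python, of how little work a “not-too-perfect” fabrication would take. It generates item-level questionnaire responses from a simple latent-trait model, then computes the same kinds of statistics Stapel reportedly showed Vingerhoets: means, standard deviations, and a reliability index (Cronbach’s alpha). Every parameter value here is invented for illustration.

```python
# A sketch of fabricating "plausible" questionnaire data: item-level responses
# with a shared latent trait, so the summary statistics look ordinary.
import numpy as np

rng = np.random.default_rng(42)
n_subjects, n_items = 120, 10

# Each fictional subject gets a latent trait score; items are noisy readings of it.
trait = rng.normal(loc=3.0, scale=0.8, size=n_subjects)
noise = rng.normal(scale=0.7, size=(n_subjects, n_items))
responses = np.clip(np.round(trait[:, None] + noise), 1, 5)  # 1-5 Likert scale

# The sort of summary statistics a fraudster would report: means and SDs...
print("item means:", responses.mean(axis=0).round(2))
print("item SDs:  ", responses.std(axis=0, ddof=1).round(2))

# ...and Cronbach's alpha, the standard questionnaire-reliability index.
k = n_items
item_variances = responses.var(axis=0, ddof=1)
total_variance = responses.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
print("Cronbach's alpha:", round(float(alpha), 2))
```

Because the items share a latent trait, the fake data even produce a respectable reliability coefficient (around 0.9 with these made-up parameters); nothing in the summary statistics would give the game away.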

Update: Of course, I should have said, there’s an obvious bias here: I only know about the frauds that have been detected. They were unbelievably amateurish — couldn’t even be bothered to invent data — and still took years to be detected. How many undetected frauds are out there? It’s frightening to think about it. The wonkiness of Mendel’s data went undetected for half a century. Cyril Burt may have committed the biggest fraud of all time, or maybe he was just sloppy, and we may never know for sure.

I just looked at the Wikipedia article on Burt, and discovered a fascinating quote from one of his defenders, the psychologist Arthur Jensen, that makes an appropriate capstone for this post:

[n]o one with any statistical sophistication, and Burt had plenty, would report exactly the same correlation, 0.77, three times in succession if he were trying to fake the data.

In other words, his results were so obviously faked that they must be genuine. If he were trying to fake the data he would certainly have made them look more convincingly real.
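
Jensen’s argument is easy to check by simulation. Here is a rough sketch in Python; the sample sizes are invented (Burt’s reported samples are said to have grown between publications, which is exactly what made the unchanging 0.77 suspicious), and the point is just to see how rarely three honest, independent studies would all round to the same two decimals.

```python
# Rough simulation: if three independent studies estimate the same true
# correlation, how often would all three round to exactly 0.77?
# The sample sizes below are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
true_rho = 0.77
sample_sizes = (21, 30, 53)  # invented for illustration
cov = [[1.0, true_rho], [true_rho, 1.0]]
trials = 100_000

hits = 0
for _ in range(trials):
    rounded = []
    for n in sample_sizes:
        xy = rng.multivariate_normal([0.0, 0.0], cov, size=n)
        r = np.corrcoef(xy[:, 0], xy[:, 1])[0, 1]
        rounded.append(round(float(r), 2))
    hits += all(r == 0.77 for r in rounded)

print(f"P(all three round to 0.77) ≈ {hits / trials:.1e}")
```

With these assumed sample sizes the three-way coincidence comes out at something like one in several thousand, which is why the repeated 0.77 looked like either fraud or carelessness rather than luck.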

Up is down

I mentioned in an earlier post George Lakoff’s work on metaphorical language. One fascinating issue is the way the same metaphorical target can be mapped onto by multiple conceptual domains, and sometimes these can come into conflict — or a metaphor can come into conflict with the literal meaning of the target. When the figurative-literal target conflict is particularly succinct, this tends to be called “oxymoron”. One of my favourites is the 1970s novel and subsequent film about a burning skyscraper, called The Towering Inferno.

This particular one depends on the conflict between the “UP is GOOD, DOWN is BAD” metaphor (an indirect form of it, since it goes by way of DOWN is BAD is HELL is BURNING) and the literally towering skyscraper. Anyway, the UP-DOWN dichotomy gets used a lot, creating lots of potential confusion. For example, UP is DIFFICULT and DOWN is EASY, as in the famous allegory of Hesiod that inspired so many devotional images:

Vice is easy to have; you can take it by handfuls without effort. The road that way is smooth and starts here beside you, but between us and virtue, the immortals have put what will make us sweat. The road to virtue is long and steep uphill, hard climbing at first.

Hence the uncertainty of the phrase “Everything’s going downhill.” Is it getting worse, or getting easier?

There is a triple ambiguity when numbers get involved. LARGE NUMBERS are UP (“higher numbers”, “a low number”) when we are counting the floors of a building, but SMALL NUMBERS are UP when ranking (#1 is the winner and comes at the top of the list).

This brings us to the example that inspired this post. The BBC news web site this morning told us that “A&E waiting times in England have fallen to their worst level for a decade.” It’s hard to feel much sense of urgency about the fact that waiting times have “fallen”.

[Image: the BBC website’s A&E headline, morning version]

Presumably that’s why the text had been changed by the afternoon:

[Image: the BBC website’s A&E headline, afternoon version]

Aren’t all famous people friends?

By way of Andrew Sullivan, I found this book review by Diane Johnson, referring to

Freud’s friend Arthur Schnitzler’s Dream Novel, the inspiration for the Stanley Kubrick film Eyes Wide Shut…

Poor Schnitzler. He’s one of my favourite authors, and Traumnovelle is one of his masterpieces, but he needs to be put into context for English readers by his connection to two people who are much better known.

I find the Freud hook particularly poignant because Freud was famously not a friend of Schnitzler. They were contemporaries, yes, and neighbours in Vienna. They read each other’s work. But they were not friends. There is one famous letter from Freud to Schnitzler (out of about 10 in total), on the occasion of the latter’s 60th birthday, in which Freud expresses his admiration, and explains why he had never made an effort to meet him. He says it was “Doppelgängerscheu”, fear of meeting his double. Schnitzler used a similar expression some years later in an interview with an American journalist; he had long been fascinated by Freud’s theories, though he was also critical of them.

Freud did invite Schnitzler to his home after that letter, but there seem to have been only a few encounters. It would have been more accurate to call Schnitzler’s work “the inspiration for the Stanley Kubrick film Eyes Wide Shut, and the inspiration for many of Freud’s theories of dream analysis”.

Four ways of paying the piper

I was thinking about four interestingly different expressions of the platitude that people shape their consciences to their circumstances.

The most straightforward is the English classic

Who pays the piper calls the tune.

This is the most straightforwardly economic of the four: the boss makes the decisions, and the opinions of the underlings are irrelevant. It says nothing about what those underling opinions might be.

A step more cynical is the old German proverb

Wes Brot ich ess, des Lied ich sing. [Whose bread I eat, that’s whose song I sing.]

It has the same general musical theme, suggesting the court jester performing for his master. But the singing, rather than piping, is more intimate, and to my mind suggests a more complete subordination of one’s own beliefs to those of the master.

Perhaps the most pessimistic is the saying that Mark Twain claims to have learned as a boy, from a young slave:

You tell me whar a man gits his corn pone, en I’ll tell you what his ‘pinions is.

In other words, as Twain explains, no one can afford to have opinions that interfere with his livelihood. It’s not a matter of dissembling — which is what makes this more pessimistic (but maybe less cynical?) — but rather of naturally adopting the opinions that are a comfortable fit to his circumstances. (Twain’s more cynical version was “It is by the fortune of God that, in this country, we have three benefits: freedom of speech, freedom of thought, and the wisdom never to use either.”)

And then there is the perfection of the corn-pone line, the famous dictum of Upton Sinclair:

It is difficult to get a man to understand something, when his salary depends upon his not understanding it.

Here we see the complete identification of master and slave. The slave not only gives voice to his master’s views, he not only comes to accept the master’s views, he has deformed his intellect to the point where any other opinion has become completely incomprehensible to him.

Data-mining for Cthulhu

I don’t ordinarily repost what other people have written, but this post by The Atlantic‘s Alexis Madrigal is so beautiful that I feel the need to copy it. It really just consists of juxtaposing the buzzword Big Data with this quote from H. P. Lovecraft — one that I was already familiar with, but had never exactly put into this context. It is the famous opening of The Call of Cthulhu:

The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents. We live on a placid island of ignorance in the midst of black seas of infinity, and it was not meant that we should voyage far. The sciences, each straining in its own direction, have hitherto harmed us little; but some day the piecing together of dissociated knowledge will open up such terrifying vistas of reality, and of our frightful position therein, that we shall either go mad from the revelation or flee from the deadly light into the peace and safety of a new dark age.

Safety of a new dark age. Hmm. If only I could turn that into a grant proposal…

Is bleating shrill?

Having taken on the controversial question of the significance of ascribing shrillness (shrillity? shrillth?) to one’s opponents, I feel obliged to wade in on the pressing issue of “bleating”.

The occasion is an open letter by a group of British education experts, pointing out the well-established fact that the UK obsession with getting children to learn arithmetic and reading at ever earlier ages — formal schooling starts at age 3 1/2 — is counterproductive, and that children would be better off with age-appropriate education. The education ministry has responded with an extraordinarily unprofessional (shrill, or perhaps “spittle-flecked” would be the vernacular description) ejaculation of mostly generic insults, including the charge that

We need a system that aims to prepare pupils to solve hard problems in calculus or be a poet or engineer – a system freed from the grip of those who bleat bogus pop-psychology about ‘self image’, which is an excuse for not teaching poor children how to add up.

I can’t fault the alliteration of “those who bleat bogus pop-psychology”, but what does it mean? It sounds like an insult, but I’m not sure what is insulting about it. Presumably it’s supposed to make you think of a flock of sheep, dumbly repeating some meaningless sounds. And bleating is sort of a shrill sound, so maybe it is also meant to have effeminate overtones.

The term “pop-psychology” is interesting in this context. Given that the letter is signed by professors and senior lecturers in psychology and education, I have to assume that, right or wrong, what they’re talking about is real psychology, not “pop”. So it’s interesting that the bureaucrats felt that they couldn’t take on the reputation of academic psychology directly, but only by insinuating that it is all just self-help pablum. (And is “bogus” a modifier of pop-psychology — to say, this isn’t even the top-drawer pop — or a redundant intensifier, as when one refers to “disingenuous government propaganda”?)

Paradoxes of belief: Holocaust denial edition

(or, Vonnegut’s Mother Night reversed)

I’ve long thought it amazing how many odd, nearly unbelievable, individual stories are hidden in the corners of the grand ghastly narrative of the Holocaust; and no matter how many stories I read — Peter Wyden’s account of Stella Goldschlag, for instance, his Jewish schoolmate in 1930s Berlin who specialised in sniffing out undercover Jews for the Gestapo — there’s always another even stranger, such as the Jewish graphic designer Cioma Schönhaus, who survived the war, and saved many other lives, by learning to forge identity papers.

Holocaust denial seems to have its own bizarre corners. To wit, this new revelation:

[David Stein] a cerebral, fun-loving gadfly who hosted boozy gatherings for Hollywood’s political conservatives […] brought right-wing congressmen, celebrities, writers and entertainment industry figures together for shindigs, closed to outsiders, where they could scorn liberals and proclaim their true beliefs. That he made respected documentaries on the Holocaust added intellectual cachet and Jewish support to Stein’s cocktail of politics, irreverence and rock and roll.

[Under his original name David Cole he] was once a reviled Holocaust revisionist who questioned the existence of Nazi gas chambers. He changed identities in January 1998.

This reads like an April Fool’s prank, or a high-concept film plot from the fevered leftist imagination. The right-wing Jewish Holocaust documentary maker and fanatical Israel supporter is actually a secret neo-Nazi. Ha ha. Who would believe that? It’s not so easy to change your identity, particularly if you’ve just made yourself notorious on TV chat shows. And how would a man with no past be able to start a new career and become a political insider?

But what intrigues me most of all is when the Guardian article touches on the question of Stein/Cole’s true beliefs. One of the important lessons of modern cognitive psychology and philosophy of mind is that it is very difficult — perhaps impossible — to develop a coherent theory of beliefs, under which statements like “X believes Y” are statements of fact. (See, for example, the seminal book by Stephen Stich, that relegates beliefs — and many other concepts — to the realm of “folk psychology”.)

Conspiratorial resurrection

By way of Andrew Sullivan, there’s this report from Scientific American about the psychology of conspiracy theorists. Key lines:

while it has been known for some time that people who believe in one conspiracy theory are also likely to believe in other conspiracy theories, we would expect contradictory conspiracy theories to be negatively correlated. Yet, this is not what psychologists Michael Wood, Karen Douglas and Robbie Sutton found in a recent study. Instead, the research team, based at the University of Kent in England, found that many participants believed in contradictory conspiracy theories. For example, the conspiracy-belief that Osama Bin Laden is still alive was positively correlated with the conspiracy-belief that he was already dead before the military raid took place. This makes little sense, logically: Bin Laden cannot be both dead and alive at the same time.

Contradiction is in the mind of the beholder. They are ignoring the possibility that President Obama, dissatisfied with the poor progress of the minions he had been able to hire to destroy America (he’s a stickler for benchmarks), sent a Kenyan voodoo strike team into Pakistan to resurrect Osama bin Laden, who had already been garroted personally by George W. Bush (in one of the many top secret missions he carried out while his body double cleared brush on the ranch).

The next president will not only have a $20 trillion debt to cope with, he’ll also have to take out an undead al Qaeda leader.

The risk of terrorism

As an example of the irrational — or, at least, inconsistent — way our minds process low-probability risks, Andrew Sullivan has collected a few comments from around the web on the risk of being killed in a terror attack. When I am worrying about the risk of a bomb on a flight that I’m going on, I sometimes consider the following: there hasn’t been a bomb on a plane for a while, but suppose I knew that today someone had planted a bomb on a US commercial flight. Well, then I’d be thankful that I had been warned, and I’d have to be CRAZY to get on a plane that day.

But here’s the thing: there are nearly 30,000 commercial flights every day, so if I did take the flight, it would increase my risk of dying (assuming all onboard would be killed by the bomb, and ignoring other air-travel-associated risks, as well as the fact that being on the plane will protect me from other risks; I definitely won’t be hit by a bus while I’m flying) by about 0.0033%. Now, in a typical year a typical American has about a 0.039% chance of dying in an accident (ignoring other causes of death). So that insane decision to fly when you know there is a bomb on a plane somewhere in the US exposes you to about the same risk of dying a horrible sudden death as about a month of ordinary life. And if you knew only that there were a bomb on a flight somewhere in the world, the risk to you would be about the same as about 10 days of ordinary life.
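
For anyone who wants to check the arithmetic, here it is as a short Python sketch. The 30,000 daily US flights and the 0.039% annual accident-death risk are the figures quoted above; the worldwide flight count (about 100,000 a day) is my assumption, implied by the ten-day comparison.

```python
# Back-of-envelope check of the risk comparison in the text.
us_flights_per_day = 30_000        # figure quoted above
world_flights_per_day = 100_000    # assumed rough worldwide count
p_accident_year = 0.039 / 100      # typical American's annual accidental-death risk
p_accident_day = p_accident_year / 365

# If exactly one flight today carries a bomb, the chance it's yours:
p_us = 1 / us_flights_per_day
p_world = 1 / world_flights_per_day

print(f"bomb on some US flight:        {p_us:.4%}, "
      f"~{p_us / p_accident_day:.0f} days of ordinary accident risk")
print(f"bomb on some flight worldwide: {p_world:.4%}, "
      f"~{p_world / p_accident_day:.0f} days of ordinary accident risk")
# Prints roughly 0.0033% / ~31 days and 0.0010% / ~9 days, matching the
# "about a month" and "about 10 days" comparisons above.
```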

Does this change the way we feel about the risk? Should it? It sort of works for me, but then, my fear was never very great, and my faith in numbers is exceptionally high…

Daniel Kahneman, Social Psychology, and Finance

I’ve been a booster of the Tversky-Kahneman cognitive-bias revolution since I read their article in Scientific American as a high school student. (To be honest, I’d always lazily thought of it as Tversky’s work, but Daniel Kahneman has had the good sense not to die prematurely, and to collect a Nobel memorial prize.) And I’ve greatly enjoyed Kahneman’s new popular book on his collected lessons from many decades of research on cognitive biases.

Putting that together with my longstanding contempt for the finance profession (expressed at greatest length here, but more generally listed here), I was particularly delighted to start in on the chapter titled “The Illusion of Validity”, where Kahneman lays into the self-serving illusions of finance professionals. It turns out, though, that this chapter is an intellectual trainwreck, with oversimplifications piling up on crude distortions, while the whistle of self-satisfied self-promotion shrills incessantly in the background. It’s both insufferable and so poorly reasoned that it begins to call the reliability of the rest of the book into question. Kahneman doesn’t claim to be free of the cognitive biases he analyses in others, but you might expect more self-awareness.
