Health selection bias: A choose your own preposition contest

Back when I was in ninth grade, we were given a worksheet where we were supposed to fill in the appropriate conjunction in sentences where it had been left out. One sentence was “The baseball game was tied 0 to 0, ——– the game was exciting.” Not having any interest in spectator sports, I guessed “but”, assuming that no score probably meant that nothing interesting had happened. This was marked wrong, because those who know the game know that no score means that lots of exciting things had to happen to prevent scoring. Or something.

With that in mind, fill in the appropriate preposition in this sentence:

Death rates in children’s intensive care units are at an all-time low ————— increasing admissions, a report has shown.

If you chose “despite” you would agree with the BBC. But a good argument could be made for “because of”, or for “following a period of”. That is, if you think about it, it’s at least as plausible — I would say, more plausible — to expect increasing admissions to lead to lower death rates. The BBC is implicitly assuming that the ICU children are just as sick as ever, and that more of them are being pushed into an overburdened system, so that it seems a miracle that the outcomes have improved. Presumably someone has done something very right.

But in the absence of any reason to think that children are getting sicker, the change in numbers of admissions must mean a different selection criterion for admission to the ICU. The most likely change would be increasing willingness to admit less critically ill children to the ICU, which has the almost inevitable consequence of raising survival rates (even if the effect on the sickest children in the ICU is marginally negative).

When looking at anything other than whole-population death rates, you always have the problem of selection bias. This is a general complication that needs to be addressed when comparing medical statistics between different systems. For instance, it has been pointed out that an increase in end-of-life hospice care has the effect of making hospital death rates look better. (Even for whole-population death rates you can have problems caused by migration, if people tend to move elsewhere when they are terminally ill. This has traditionally been a problem with Hispanic mortality rates in the US, for instance.)
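The arithmetic of this selection effect is easy to demonstrate. Here is a purely illustrative simulation, with made-up death probabilities: care for the sickest patients is identical in both scenarios, and the only change is that the second scenario also admits a large group of milder cases. The headline death rate falls anyway.

```python
import random

random.seed(1)

def icu_death_rate(n_severe, n_mild, p_severe=0.12, p_mild=0.01):
    """Simulated ICU death rate for a mix of severe and mild admissions.

    p_severe and p_mild are hypothetical per-patient death probabilities;
    the treatment of severe cases is the same in every scenario.
    """
    deaths = sum(random.random() < p_severe for _ in range(n_severe))
    deaths += sum(random.random() < p_mild for _ in range(n_mild))
    return deaths / (n_severe + n_mild)

# Old criterion: only the sickest 1000 children are admitted.
strict = icu_death_rate(1000, 0)
# New criterion: the same 1000 children, plus 600 milder cases.
liberal = icu_death_rate(1000, 600)
print(f"strict admissions: {strict:.1%}; liberal admissions: {liberal:.1%}")
```

Nothing about the care has improved, yet the rate drops, simply because the denominator now includes children who were never likely to die.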

Is bleating shrill?

Having taken on the controversial question of the significance of ascribing shrillness (shrillity? shrillth?) to one’s opponents, I feel obliged to wade in on the pressing issue of “bleating”.

The occasion is an open letter by a group of British education experts, pointing out the well-established fact that the UK obsession with getting children learning arithmetic and reading at ever earlier ages — formal schooling starts at age 3 1/2 — is counterproductive, and that children would be better off with age-appropriate education. The education ministry has responded with an extraordinarily unprofessional (shrill, or perhaps “spittle-flecked” would be the vernacular description) ejaculation of mostly generic insults, including the charge that

We need a system that aims to prepare pupils to solve hard problems in calculus or be a poet or engineer – a system freed from the grip of those who bleat bogus pop-psychology about ‘self image’, which is an excuse for not teaching poor children how to add up.

I can’t fault the alliteration of “those who bleat bogus pop-psychology”, but what does it mean? It sounds like an insult, but I’m not sure what is insulting about it. Presumably it’s supposed to make you think of a flock of sheep, dumbly repeating some meaningless sounds. And bleating is a sort of shrill sound, so maybe it is also meant to have effeminate overtones.

The term “pop-psychology” is interesting in this context. Given that the letter is signed by professors and senior lecturers in psychology and education, I have to assume that, right or wrong, what they’re talking about is real psychology, not “pop”. So it’s interesting that the bureaucrats felt that they couldn’t take on the reputation of academic psychology directly, but only by insinuating that it is all just self-help pablum. (And is “bogus” a modifier of pop-psychology — to say, this isn’t even the top-drawer pop — or a redundant intensifier, as when one refers to “disingenuous government propaganda”?)

Macho science: Deflowering virgin nature

I was listening to BBC radio this morning, which I rarely do, because I find it generally dull. The scientist being interviewed was Mark Lythgoe, director of the Centre for Advanced Biomedical Imaging at University College London and an enthusiastic mountain climber. The interviewer asked — inevitably — about the connection between mountain-climbing and science. The answer was almost archaically macho, in what it said both about science and about mountain climbing. It wasn’t anything about planning or co-operation, or the beauty or peace of nature, not even about pushing yourself to your limits. No, it was about deflowering virgin territory.

There is a very special moment when you stand on a part of this earth that no one else has ever stood on, and look out on a view that no one else has seen… That’s the same as when someone calls you up from the lab with an image that no one else has seen before… The hairs stand up on the back of your neck.

I don’t know of any better argument against the narrow British approach to education — where a scientist will never read a book after age 16 — than that through exposure to the humanities a certain percentage of scientists would recognise deflowering virgin nature as an embarrassing 19th-century cliché. Some might even acquire enough familiarity with how to think about human matters — about society and gender and nature — to match, in sophistication, the overweening self-confidence that their scientific expertise tends to impart.

He also presented what was supposed to be an example of the brilliant eclecticism of his institute (and himself), a collaboration with a biologist to study homing pigeons. He said that there was a theory that pigeons have magnetite in their beaks or brains that orients them with respect to the Earth’s magnetic field. So they developed a brand new imaging technology that could make refined images of the distribution of magnetite in the beaks and brains, and found — there is no magnetite. Brilliant! Except, isn’t it a huge waste of time and effort to develop refined imaging technologies without first checking whether there is any magnetite there to image (by, I don’t know, grinding up the beaks and doing some chemical tests)? Maybe I’m missing something — maybe that wasn’t possible for some reason that he was too busy talking about his drive for success to explain — or maybe I just don’t sufficiently appreciate big science and IMPACT.

Certibus paribus

I was just reading a theoretical biology paper that included the phrase “certibus paribus”. (I won’t say which paper, because I’m not aiming to embarrass the authors.)

Now, I like unusual words, particularly if they’re Latin. Ceteris paribus is sufficiently close to common usage in some branches of science, including mathematics, or was into living memory, that I could imagine using it in print. Maybe I even have. But it’s sufficiently rare that I can’t imagine using it without looking up both the spelling and the meaning. So, while I can see how mistaken word usage can slip into one’s everyday language, and then spill out into print, I can’t quite picture how an error like this happens.

What is a disease?

Gilbert’s Syndrome is a genetic condition, marked by raised blood levels of unconjugated bilirubin, caused by less active forms of the gene for conjugating bilirubin.

There are disagreements about whether this should be called a disease. Most experts say it is not a disease, because it has no significant adverse consequences. The elevated bilirubin can lead to mild jaundice, and some people with GS may have difficulty breaking down acetaminophen and some other drugs, and so be at greater risk of drug toxicity. They also have elevated risk for gallstones. GS may be linked to fatigue, difficulty in concentration, and abdominal pain. On the other hand, a large longitudinal study found that the 11% of the population possessing one of these Gilbert’s variants had its risk of cardiovascular disease reduced by 2/3.

WHAT? 2/3 lower risk of the greatest cause of mortality in western societies? That’s the “syndrome”?

Maybe we should rewrite that: anti-Gilbert Syndrome is a genetic ailment, marked by lowered blood levels of unconjugated bilirubin, caused by overly active forms of the gene for conjugating bilirubin. This leads to a tripled risk of cardiovascular disease. On the other hand, the 89% of the population suffering from AGS has lower risk of gallstones, and tends to have lowered risk of acetaminophen poisoning. They may have lowered incidence of fatigue and abdominal pain.

The gambler’s cross

The 13th century University Church of St. Mary is an important Oxford landmark. It was the first building of the university, and stands as an imposing symbol of traditional Anglicanism on the High Street. And now, apparently, it is funded by the proceeds of gambling.

I’ve long been fascinated by the gradual moral detoxification of gambling, something that I discussed at some length in my review of The Quants. Christians have vacillated between viewing gambling as a heinous sin and as a good way to fund their churches. Not unlike their earlier views of loans at interest and capitalism more generally.

It’s particularly striking to see a church displaying the symbol of the cross in the sacrilegious form of the gambler’s crossed fingers. I wonder how Christians react to the symbol. It seems like a gestural swear word, as though a priest began his sermon with “God almighty, it sure is hot this week. What are we doing in church, for Christ’s sake?”

The peer-review fetish: Let’s abolish the gold standard!

I’ve just been reading two books on the climate-change debate, both focusing on the so-called “hockey stick graph”: Michael Mann’s The Hockey Stick and the Climate Wars: Dispatches from the Front Lines, and A. W. Montford’s The Hockey Stick Illusion: Climategate and the Corruption of Science. I’ll comment on these in a later post, but right now I want to comment on the totemic role that the strange ritual of anonymous peer review plays for the gatekeepers of science.

One commonly hears that anonymous peer review (henceforth APR) is the “gold standard” for scientific papers. Now, this is a reasonable description, in that the gold standard was a system that long outlived its usefulness, constraining growth and innovation by attempting to measure something that is inherently fluid and abstract by an arbitrary concrete criterion, and persisting through the vested interests of a few and deficient imagination of the many.

That’s not usually what people mean, though.

An article is submitted to a journal. An editor has read it and decided to include it. It appears in print. What does APR add to this? It means that the editor also solicited the opinion of at least one other person (the “referee(s)”). That’s it. The opinion may have been expressed in three lines or fewer. The editor may have ignored it.

Furthermore, to drain away any incentive for the referee(s) to be conscientious about their work,

  • They are unpaid.
  • They are anonymous. We know how well that works for raising the tone of blog comments.
  • Anonymity implies: Their contributions will never be acknowledged. If they contribute important insights to the paper, they may be recognised in the acknowledgement section: “We are grateful for the helpful suggestions of an anonymous referee.” Very occasionally an author will suggest, through the editor, that a referee who has made important contributions be invited to join the paper as a co-author. More commonly, a paper will be sent from journal to journal, collecting useful suggestions until it has actually become worth publishing.*
  • No one will ever take issue with any positive remarks the referee makes, as no one but the authors (and the editor) will ever see them. Negative comments, on the other hand, may get pushback from the author, and thus need to be justified, requiring far more work.
  • Normally, the author will be forced to demonstrate that she has taken the referee’s criticism to heart, no matter how petty or subjective. This encourages the referee to adopt an Olympian stance, passing judgement on what by rights ought to be the author’s prerogative.

Of course, I don’t mean to say that most referees most of the time don’t do a very conscientious job. I take refereeing seriously, and make a good-faith effort to be fair, judicious, and helpful. But I’m sure that I’m not the only one who feels that the incentives are pushing in other directions, and to the extent that I do a careful job, it is mainly out of some abstract sense of duty. I am particularly irritated when I find myself forced to put original insights into my report, to explain why the paper is deficient. I would much rather the paper be published as is, so that I could make my criticism publicly, and then, if I’m right, be recognised for my contribution.

Post-Newtonian politics, or, Psychopathology and national security

A short addendum to the comment about the seemingly counter-productive tactics of the US and European security apparatus in its attack on everyone involved in the Snowden NSA-document affair. Inspired by remarks of John Quiggin, I observed that we can’t understand what is happening when we view the state as a unitary goal-directed entity. Much of what is going on now can only be interpreted as eruptions of an internal power struggle, where the security services feel threatened, and are throwing their weight around.

Talking about throwing weight around puts one in mind of celestial mechanics. Under most circumstances we can consider planets as being simple objects, a mass located at a single point, the so-called centre of mass, whose motion is defined by a single momentum vector. It is only when we look at the fine structure, long-term behaviour, or extreme events that we need to consider the internal disposition of the mass. So it is with governments, which we may be inclined to see as unitary objects moved by the single will of the president or prime minister. Of course, political theorists and historians know that even the most extreme dictatorship has factions and power structures that shape the master’s will.

The analogy has been applied to the philosophy of mind. Two decades ago the philosopher Daniel Dennett introduced the definition of the self as a “center of narrative gravity”. We have intuitive models of human psychology that work, like Newton’s Laws, to predict people’s behaviour without reference to their complex inner life. Thus, if I arrange to meet you at a restaurant at 6, it suffices for me to have a few high-level beliefs about you — you want to see me, you know where the restaurant is, you have a watch — to predict that you will be there at about 6. I don’t need to concern myself with your inner life, and, in fact, for me to do so would be intrusive. It is only when behaviour becomes pathological that the unitary self loses traction.

Similarly, the pathological outbursts of the security apparatus (calling them “services” suggests that they are serving someone other than themselves, which is doubtful) force us to consider the complex power relations between government institutions.

We need to turn to some unemployed old kremlinologists to understand our own governments.

The mistimed death clock: How much time do I have left?

Someone has set up a macabre “death clock”, a web site where individuals can enter a few personal statistics — birthdate, sex, smoking status, and general level of optimism — and it will calculate a “personal date of death”, together with an ominous clock ticking down the seconds remaining in your life. (For Americans, ethnic group is a hugely significant predictor, but I’m not surprised that they leave this out. Ditto for family income.) It’s supposed to be a sharp dose of reality, I suppose, except that it’s nonsense.

Not because no one knows the day or the hour, though that is true, but because the author has built into the calculator a common but elementary misconception about life expectancy, namely, that we lose a year of expected remaining life for every year that we live. Thus, when I enter my data the clock tells me that I am expected to die on August 6 2042. If I move my birthdate back* by 10 years — making myself 10 years older — my date of death moves back by the same amount, to August 6 2032. If I tell it I was born in 1936 it tells me that my time has already run out, which is obviously absurd.

In fact, every year that you live, you lose 1 year, but gain back a fraction of the remainder equal to the probability that you might have died. Thus, a 46-year-old US man has an expected remaining lifespan of 33.21 years. He has probability 0.00365 of dying in the next year; if he makes it through that year and reaches his 47th birthday, his expected remaining lifespan is (33.21-1)+.00365 x 32.21 = 32.33 years.** So he’s only lost 0.88 years off his remaining lifespan. In this way, it’s actually possible to have a longer expected remaining lifespan at an older age than at a younger one, if the mortality rate is high enough. Thus, if we go back to 1933 mortality rates, the expected lifespan at birth was 59.2 years. But a 1-year-old, having made it through the 6.5% infant mortality, had 62.3 years remaining on average.
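A minimal numerical check of this recurrence, using the 33.21-year figure and the death probability 0.00365 quoted above (the exact form of the update is the one given in the second footnote, the rule of thumb is the one in the text):

```python
def survivor_remaining(e_x, q_x):
    """Exact expected remaining lifespan at age x+1, given e_x at age x
    and one-year death probability q_x: e_{x+1} = (e_x - 1)/(1 - q_x)."""
    return (e_x - 1) / (1 - q_x)

def survivor_remaining_approx(e_x, q_x):
    """Back-of-envelope version: lose a year, gain back roughly
    q_x times the remainder."""
    return (e_x - 1) * (1 + q_x)

e_46, q_46 = 33.21, 0.00365   # US male, age 46 (figures from the text)
e_47 = survivor_remaining(e_46, q_46)
print(f"e_47 = {e_47:.2f} years; only {e_46 - e_47:.2f} years lost, not 1.00")
```

This reproduces the 32.33 years and the 0.88-year loss above, and the two forms of the update agree to well within rounding.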

This is another way of expressing the well-known but still often not-sufficiently-appreciated impact of infant mortality on life expectancy. The life-expectancy at birth for US males is 76.4 years. But that obviously doesn’t mean that everyone keels over 5 months into their 77th year. 60% of the newborn males are expected to live past this age, and a 77-year-old man has 10 remaining years on average.

Of course, these are all what demographers call “period” life expectancies, based on the mortality rates experienced in the current year, and pretending that these mortality rates will continue into the future. Based on the experience of the past two centuries we expect the mortality rates to continue to fall, in which case the true average lifespans for people currently alive — the “cohort life expectancies” — will exceed these period calculations, but there is no way to know. If an asteroid hits the earth tomorrow and wipes out all life on earth, this period calculation will be rendered nugatory (but there will be no one left to point that out. Hah!) The true average lifespan of the infants born this year will not be known until well into the 22nd century. Or, if Aubrey de Grey is right, not until the 32nd century.

* Or is it moving my birthdate forward by 10 years when I make it 10 years earlier? Reasonable people disagree on this point! And there’s interesting research on the habits of mind that lead one to choose the metaphor of the stationary self with time streaming past me, or the self moving like a river through a background of time.

** Actually, it’s (33.21-1)/(1-.00365)

Dawkins’ faulty taxonomy

Science enthusiast Richard Dawkins is always good for a laugh, even if the laughter sometimes curdles at his anti-Catholic and anti-Muslim bigotry, and his inclination to minimise the significance of child rape when it serves the interests of the former. He has recently published on Twitter the comment

All the world’s Muslims have fewer Nobel Prizes than Trinity College, Cambridge. They did great things in the Middle Ages, though.

There are all kinds of comments one could make about this, and many have, but what I find most striking is the utter failure of logic in the area that is closest to his area of purported expertise, which is not religion or sociology, but taxonomy. To a statistician, this comparison seems risible. Not only are Muslim and Member of Trinity College not comparable categories (I hope Professor Dawkins won’t get the vapours when I mention that they are not even mutually exclusive), but even if they were, Dawkins seems to be suggesting that the difference in NPF (Nobel Prize Frequency) between the devotees of Muhammed and of the Cambridge Trinity is due to negative selection by Islam, whereas another observer might suspect that there is some form of positive selection by Trinity College.

To put it baldly, you don’t need a Nobel Prize to get a post at Trinity College, but it doesn’t hurt. For example the most recent Trinity College Nobel Prize went to Venkatraman Ramakrishnan, who had a nearly 30-year scientific career before joining Trinity College.

A more valid comparison would ask why Trinity College, Cambridge boasts so many more Nobel laureates (32) than the comparably sized Trinity College, Oxford (2, by my count from this list). Is it the vitiating effect of Oxford’s high-church Anglicanism? Or is it that Dawkins cherry-picked one of the wealthiest, most exclusive academic institutions, one most concentrated on exactly the sorts of subjects that attract Nobel prizes? Why have Scandinavian authors received so many Nobel Prizes in Literature? Religion? Climate? Reindeer?

I leave the resolution of these questions to the skeptical reader. Those who are interested in a more amusing version of Dawkinsian taxonomy can have a look at Borges’s essay “John Wilkins’ Analytical Language“. Borges describes an imaginary ancient Chinese encyclopedia, Celestial Emporium of Benevolent Knowledge that divides up all animals into the following categories:
