Occasional reflections on Life, the World, and Mathematics

Archive for the ‘Academic’ Category

Guilt of admission

When I first arrived at Oxford I expressed admiration for the rigorously academic nature of the student admissions procedure. I have since soured somewhat on the whole segregate-the-elite approach, as well as on the implicit fiction that we are selecting students to be future academics, but I still appreciate the clarity of the criteria, which help to avoid the worst corruption of the American model. I have long been astonished at how little resentment there seemed to be in the US at the blatant bias in favour of economic and social elites, with criticism largely focused on discrimination for or against certain racial categories. Despite the enormous interest in the advantages, or perceived advantages, of elite university degrees, very little attention has been focused on the intentionally byzantine admissions procedures: on the bias in favour of children of the wealthy and famous (particularly donors or — wink-wink — future donors), on the privileging of students with well-curated CVs and expensive, time-consuming extracurricular activities, and on the literal grandfather clauses in admissions.

Now some of the wealthy have taken it too far, by defrauding the universities themselves, paying consultants to fake exam results and athletic records. The most unintentionally humorous element of the whole scandal is this comment by Andrew Lelling, U.S. attorney for the District of Massachusetts:

We’re not talking about donating a building so that a school is more likely to take your son or daughter. We’re talking about deception and fraud.

Fraud is defined here as going beyond the ordinary bounds of abusing wealth and privilege. You pay your bribes directly to the university, not to shady middlemen. The applicant needs to actually play a sport only available in elite prep schools, not produce fake testimonials and photoshop their head onto an athlete’s body.

Of course, this is all fraud, because no one is paying millions of dollars because they think their child will receive a better education. The whole point is to lay a cuckoo’s egg in the elite-university nest, where they will be mistaken for the genuinely talented. For a careful (tongue-in-cheek) analysis of the costs and benefits of this approach, see my recent article on optimised faking.

The Brexit formula

A collaborative project with Dr Julia Brettschneider (University of Warwick) has yielded a mathematical formulation of the current range of Brexit proposals coming from the UK, which we hope will help to facilitate a solution:

Numerical calculations seem to confirm the conjecture that the value of the solution tends to zero as t → 29/3.

Achieving transparency, or, Momus among the rotifers

[Image: v. Heemskerck, 1561, “Momus criticises the works of the gods” (Momus tadelt die Werke der Goetter)]

At the recent Evolutionary Demography meeting in Miami there were two talks on the life history of rotifers. One of the speakers praised rotifers as a model organism for their generously choosing to be transparent, facilitating study of their growth and physiological changes. I had never really thought much about transparency as a positive feature of organisms (from the perspective of those wanting to study them) — though I guess the same quality has also favoured the development of the C. elegans system.

I was reminded of the famous quip (reported by his son Leonard Huxley) of T H Huxley, when he asked a student at the end of a lecture whether he had understood everything:

“All but one part,” replied the student, “during which you stood between me and the blackboard.”

To which Huxley rejoined, “I did my best to make myself clear, but could not render myself transparent.”

Momus would have been pleased.

The inflationary university

The universe, the standard model tells us, began with rapid inflation. The university as well, or at least the modern exam-centred university.

With UK universities being upbraided by the Office for Students (OfS), the official regulator of the UK higher education sector, for handing out too many first-class degrees, I am reminded of this wonderful passage unearthed by Harry Lewis, former dean of Harvard College, from the report of Harvard’s 1894 Committee on Raising the Standard:

The Committee believes… that by defining anew the Grades A, B, C, D, and E, and by sending the definitions to every instructor, the Faculty may do something to keep up the standard of the higher grades. It believes that in the present practice Grades A and B are sometimes given too readily — Grade A for work of no very high merit, and Grade B for work not far above mediocrity.

Noting that letter grades were first introduced at Harvard in 1886, Lewis summarises the situation thus:

It took only eight years for the first official report that an A was no longer worth what it had been and that the grading system needed reform.

UV radiation and skin cancer — a history

If I had been asked when it first came to be understood that skin cancer is caused by exposure to the sun, I would have said probably the 1970s, maybe the 1960s among cognoscenti, before it was well enough established to become part of public health campaigns. But I was just reading this 1953 article by C. O. Nordling on mutations and cancer — proposing, interestingly enough, that cancers are caused by the accumulation of about seven mutations in a cell — which mentions, wholly incidentally, in a discussion of latency periods between the inception of a tumour cell and disease diagnosis:

40 years for seaman’s cancer (caused by solar radiation).

So, apparently skin cancer was known to be frequent among sailors, and the link to sun exposure was sufficiently well accepted to be mentioned here parenthetically.

Mission accomplished!

Reading Dava Sobel’s book on the women astronomers of the Harvard Observatory in the early 20th century, The Glass Universe, I was surprised to discover that the first Association to Aid Scientific Research by Women was founded in the 19th century. It awarded grants and an Ellen Richards Research prize, named for the first woman admitted to MIT, who went on to become associate professor of chemistry at MIT, while remaining unpaid. The prize was last awarded in 1932. Why?

[After selecting the winners of the 1932 prize] the twelve members declared themselves satisfied with the progress they had seen, and they drafted a resolution to dissolve the organization. “Whereas,” it said, “the objects for which this association has worked for thirty-five years have been achieved, since women are given opportunities in Scientific Research on an equality with men, and to gain recognition for their achievements, be it Resolved, that this association cease to exist after the adjournment of this meeting.”

Medical hype and under-hype

New heart treatment is biggest breakthrough since statins, scientists say

I just came across this breathless headline published in the Guardian last year. On the one hand, this is just one study, the effect was barely statistically significant, and experience suggests a fairly high likelihood that this will ultimately have no effect on general medical practice or on human health and mortality rates. I understand the exigencies of the daily newspaper publishing model, but it’s problematic that the “new research study” has been defined as the event on which to hang a headline. The only people who need that level of up-to-the-minute detail are those professionally involved in winnowing out the new ideas and turning them into clinical practice. We would all be better served if newspapers instead reported on which new treatments have actually had an effect over the last five years. That would be just as novel to the general readership, and far less erratic.

On the other hand, I want to comment on one point of what I see as exaggerated skepticism: The paragraph that summarises the study results says

For patients who received the canakinumab injections the team reported a 15% reduction in the risk of a cardiovascular event, including fatal and non-fatal heart attacks and strokes. Also, the need for expensive interventional procedures, such as bypass surgery and inserting stents, was cut by more than 30%. There was no overall difference in death rates between patients on canakinumab and those given placebo injections, and the drug did not change cholesterol levels.

There is then a quote:

Prof Martin Bennett, a cardiologist from Cambridge who was not involved in the study, said the trial results were an important advance in understanding why heart attacks happen. But, he said, he had concerns about the side effects, the high cost of the drug and the fact that death rates were not better in those given the drug.

In principle, I think this is a good thing. There are far too many studies that show a treatment scraping out a barely significant reduction in mortality due to one cause, which is highlighted, but a countervailing mortality increase due to other causes, netting out to essentially no improvement. Then you have to say, we really should be aiming to reduce mortality, not to reduce a cause of mortality. (I remember many years ago, a few years after the US started raising the age for purchasing alcohol to 21, reading of a study that was heralded as showing the success of this approach, having found that the number of traffic fatalities attributed to alcohol had decreased substantially. Unfortunately, the number of fatalities not attributed to alcohol had increased by a similar amount, suggesting that some amount of recategorisation was going on.) Sometimes researchers will try to distract attention from a null result for mortality by pointing to a secondary endpoint — improved results on a blood test linked to mortality, for instance — which needs to be viewed with some suspicion.

In this case, though, I think the skepticism is unwarranted. There is no doubt that before the study the researchers would have predicted reduction in mortality from cardiovascular causes, no reduction due to any other cause, and likely an increase due to infection. The worry would be that the increase due to infection — or to some unanticipated side effect — would outweigh the benefits.

The results confirmed the best-case predictions. Cardiovascular mortality was reduced — possibly a lot, possibly only slightly. Deaths due to infections increased significantly in percentage terms, but the numbers were small relative to the cardiovascular improvements. The one big surprise was a very substantial reduction in cancer mortality. The researchers are open about not having predicted this, and not having a clear explanation. In such a case, it would be wrong to put much weight on the statistical “significance”, because it is impossible to quantify the class of hypotheses that are implicitly being ignored. The proper thing is to highlight this observation for further research, as they have properly done.

When you deduct these three groups of causes — cardiovascular, infections, cancer — you are left with approximately equal mortality rates in the placebo and treatment groups, as expected. So there is no reason to be “concerned” that overall mortality was not improved in those receiving the drug. First of all, overall mortality was better in the treatment group. It’s just that the improvement in CV mortality — as predicted — while large enough to be clearly not random when compared with the overall number of CV deaths, was not large compared with the much larger total number of deaths. This is no more “concerning” than it would be, when reviewing a programme for improving airline safety, to discover that it did not appreciably change the total number of transportation-related fatalities.
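The arithmetic here is easy to illustrate. Below is a back-of-the-envelope sketch with purely hypothetical death counts (not the actual trial data), treating each count as Poisson so the variance of a difference is the sum of the counts:

```python
import math

def z_score(deaths_a, deaths_b):
    """Approximate z-score for the difference between two death counts,
    treating each count as Poisson-distributed."""
    return (deaths_a - deaths_b) / math.sqrt(deaths_a + deaths_b)

# Hypothetical placebo and treatment arms of equal size.
cv_placebo, cv_treated = 400, 340        # 15% fewer CV deaths
other_placebo, other_treated = 600, 600  # other causes unchanged

# Relative to CV deaths alone, the 60-death drop is clearly non-random:
print(round(z_score(cv_placebo, cv_treated), 2))   # ≈ 2.2

# Relative to total deaths, the same 60-death improvement is
# statistically indistinguishable from noise:
total_placebo = cv_placebo + other_placebo
total_treated = cv_treated + other_treated
print(round(z_score(total_placebo, total_treated), 2))   # ≈ 1.4
```

The same absolute improvement that stands out against the cause-specific counts simply disappears into the larger variance of the totals.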

Social choice Brexit

The discussion over a possible second Brexit referendum has foundered on the shoals of complexity: If the public were only offered May’s deal or no deal, that wouldn’t be any kind of meaningful choice (and it’s absurd to imagine that a Parliament that wouldn’t approve May’s deal on its own would be willing to vote for the fate of Brexit to be decided by the public on those terms). So you’re left with needing an unconventional referendum with at least three options: no deal, May’s deal, no Brexit (plus possible additional alternatives, like requesting more time to negotiate the Better Deal™).

A three-choice (or more) referendum strikes many people as crazy. There are reasonable concerns. Some members of the public will inevitably find it confusing, however it is formulated and adjudicated. And the impossibility of aggregating opinions in a way consistent with basic principles of fairness, let alone in a canonical way, is a foundational theorem of social-choice theory (due to Kenneth Arrow).

Suppose we followed the popular transferable vote procedure: People rank the options, and we look only at the first choices. Whichever option gets the smallest number of first-choice votes is dropped, and we proceed with the remaining options, until one option has a first-choice majority. The classic paradoxical situation is all too likely in this setting. Suppose the population consists of

  1. 25% hardened brexiteers. They prefer a no-deal Brexit, but the last thing they want is to be blamed for May’s deal, which leaves the UK taking orders from Brussels with no say in them. If they can’t have their clean break from Brussels, they’d rather go back to the status quo ante and moan about how their beautiful Brexit was betrayed.
  2. 35% principled democrats. They’re nervous about the consequences of Brexit, so they’d prefer May’s soft deal, whatever its problems. But if they can’t have that, they think the original referendum needs to be respected, so their second choice is a no-deal Brexit.
  3. 40% squishy europhiles. They want no Brexit; barring that, they’d prefer the May deal. A no-deal Brexit is for them the worst outcome.

The result will be that no deal drops out, and we’re left with 65% favouring no Brexit. But if the PDs anticipated this, they could have ranked no deal first, producing a result that they would have preferred.
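The runoff dynamics are easy to simulate. Here is a minimal sketch of the transferable-vote count for the scenario above (the option names and the ad-hoc `irv` helper are mine, not from any election library; one ballot stands in for each percentage point of the electorate):

```python
from collections import Counter

def irv(ballots):
    """Instant-runoff: repeatedly drop the option with the fewest
    first-choice votes until some option has a first-choice majority."""
    ballots = [list(b) for b in ballots]
    while True:
        tally = Counter(b[0] for b in ballots)
        total = sum(tally.values())
        winner, votes = tally.most_common(1)[0]
        if votes * 2 > total:
            return winner
        loser = min(tally, key=tally.get)
        ballots = [[c for c in b if c != loser] for b in ballots]

sincere = (
    25 * [["NoDeal", "NoBrexit", "MayDeal"]]    # hardened brexiteers
    + 35 * [["MayDeal", "NoDeal", "NoBrexit"]]  # principled democrats
    + 40 * [["NoBrexit", "MayDeal", "NoDeal"]]  # squishy europhiles
)
print(irv(sincere))   # NoBrexit — NoDeal drops first, then 65:35

# If the principled democrats anticipate this and rank NoDeal first:
tactical = (
    25 * [["NoDeal", "NoBrexit", "MayDeal"]]
    + 35 * [["NoDeal", "MayDeal", "NoBrexit"]]
    + 40 * [["NoBrexit", "MayDeal", "NoDeal"]]
)
print(irv(tactical))  # NoDeal wins outright, 60:40
```

The tactical ballots hand the principled democrats their second choice instead of their third — the strategic-voting incentive that makes the sincere outcome unstable.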

So, that seems like a problem with a three-choice referendum. But here’s a proposal that would be even worse: We combine choices 1 and 2 into a single option, which we simply call “Leave”. Then those who want to abandon the European project entirely will be voting for the same option as those who are concerned about the EU being dominated by moneyed interests, and they’ll jointly win the referendum and then have to fight among themselves after the fact, leaving them with the outcome — a no-deal Brexit — that only the smallest minority preferred.

Unfortunately, that’s the referendum we actually had.

Schrödinger’s menu

I was just rereading Erwin Schrödinger’s pathbreaking 1944 lectures What Is Life?, which are often praised for their prescience — and influence — on the foundational principles of genetics in the second half of the twentieth century. At one point, in developing the crucial importance of his concept of negative entropy as the driver of life, he remarked on the misunderstanding that “energy” is what organisms draw from their food. In an ironic aside he says

In some very advanced country (I don’t remember whether it was Germany or the U.S.A. or both) you could find menu cards in restaurants indicating, in addition to the price, the energy content of every dish.

Also prescient!

How odd that the only biological organisms that Schrödinger is today commonly associated with are cats…

[Image: FDA sample menu with energy content]

The Silver Standard 4: Reconsideration

After writing in praise of the honesty and accuracy of fivethirtyeight’s results, I felt uncomfortable about the asymmetry in the way I’d treated Democrats and Republicans in the evaluation. In the plots I made, low-probability Democratic predictions that went wrong pop out on the left-hand side, whereas low-probability Republican predictions that went wrong would get buried in the smooth glide down to zero on the right-hand side. So I decided that what I’m really interested in are all low-probability predictions, and I should treat them symmetrically.

For each district there is a predicted loser (PL), with probability smaller than 1/2. In about one third of the districts the PL was assigned a probability of 0. The expected number of PLs (EPL) who would win is simply the sum of all the predicted win probabilities that are smaller than 1/2. (Where multiple candidates from the same party are in the race, I’ve combined them.) The 538 EPL was 21.85. The actual number of winning PLs was 13.

What I am testing is whether 538 made enough wrong predictions. This is the opposite of the usual evaluation, which gives points for getting predictions right. Measured against their own stated probabilities, the number of districts that went the opposite of the way they said was a lot lower than their model implied it would be. That is prima facie evidence that the PL win probabilities were being padded somewhat. To be more precise, under the 538 model the number of winning PLs should be approximately Poisson distributed with parameter 21.85, meaning that the probability of 13 or fewer PLs winning is 0.030. Which is kind of low, but still pretty impressive, given all the complications of the prediction game.
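That tail probability is straightforward to check. A minimal sketch, using only the 21.85 expected-PL figure quoted above and summing the Poisson mass function directly rather than relying on a stats library:

```python
import math

def poisson_cdf(k, lam):
    """P(X <= k) for X ~ Poisson(lam), accumulating the mass
    function term by term."""
    term = math.exp(-lam)   # P(X = 0)
    total = term
    for i in range(1, k + 1):
        term *= lam / i     # P(X = i) from P(X = i - 1)
        total += term
    return total

# Probability of seeing 13 or fewer winning predicted losers when
# 21.85 were expected:
print(round(poisson_cdf(13, 21.85), 3))   # ≈ 0.03
```

The Poisson approximation is justified because the count of winning PLs is a sum of many independent low-probability indicators (a Poisson-binomial, which is close to Poisson when all the probabilities are small).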

Below I show plots of the errors for various scenarios, measuring the cumulative error for these symmetric low predictions. (I’ve added an “Extra Tarnished” scenario, with the transformation based on the even more extreme beta(.25,.25).) I show it first without adjusting for the total number of predicted winning PLs:

[Plot: cumulative errors for the various scenarios, unadjusted for the total number of predicted winning PLs]

We see that the tarnished predictions predict a lot more PL victories than we actually see. The actual predictions overshoot only slightly, but suspiciously one-sidedly — that is, all in the direction of over-predicting PL victories — consistent with padding the margins slightly, erring in the direction of claiming uncertainty.

And here is an image more like the ones I had before, where all the predictions are normalised to correspond to the same number of predicted wins:

[Plot: cumulative errors with all predictions normalised to the same number of predicted wins]
