Boris Johnson has aroused the ire of many classical historians for his dubious claim that the Roman Empire was destroyed by “uncontrolled immigration”. What is most striking is the unquestioned implication that when Romans moved outward, conquering and enslaving their neighbours, that was GLORY, and much to be lamented when it was (possibly) destroyed by their ultimate failure to prevent people from “the east” from migrating in the opposite direction. It seems to me, if there’s anyone who had a problem with uncontrolled migration from the east it was Carthage.
The article seems to have good intentions, but this headline in today’s Guardian is the most sexist I’ve seen in some time. It sounds like the men were hard at work “creating language”, and some women helped out with some testing, and maybe brought snacks. Also some Neanderthals came by and lent a hand. And apes.
There’s something fascinating about 19th and 20th century English antisemitism. In continental Europe hatred of Jews was seen as fundamentally political, hence controversial, and was viewed with some distaste by many bien-pensant intellectuals.
Not so in England, where anti-Semitism was never so passionate or violent, but also never particularly controversial until the Nazis went and gave it a bad name. It’s all over the literature, hardly seeming to demand any comment, as I noted with some surprise a while back about the gratuitous anti-Semitism in The Picture of Dorian Gray.
Anyway, I just got around to reading for the first time Olaf Stapledon’s Last and First Men. It’s a remarkable piece of work, barely a novel, giving a retrospective overview of about a billion years of human history from the perspective of the dying remnant of humanity eking out its last days on Neptune. And the early parts, at least, are blatantly antisemitic. Chapter 4 tells of a time, still only thousands rather than millions of years in our future, when all racial and national distinctions have vanished through intermixing of populations and the creation of a world state. There is just one exception: the Jews. They are still there, defining themselves as a separate “tribe” that uses its native “cunning” — specifically, financial cunning — to dominate their weaker-minded and less ruthless fellow humans:
The Jews had made themselves invaluable in the financial organization of the world state, having far outstripped the other races because they alone had preserved a furtive respect for pure intelligence. And so, long after intelligence had come to be regarded as disreputable in ordinary men and women, it was expected of the Jews. In them it was called satanic cunning, and they were held to be embodiments of the powers of evil… Thus in time the Jews had made something like “a corner” in intelligence. This precious commodity they used largely for their own purposes; for two thousand years of persecution had long ago rendered them permanently tribalistic, subconsciously if not consciously. Thus when they had gained control of the few remaining operations which demanded originality rather than routine, they used this advantage chiefly to strengthen their own position in the world… In them intelligence had become utterly subservient to tribalism. There was thus some excuse for the universal hate and even physical repulsion with which they were regarded; for they alone had failed to make the one great advance, from tribalism to a cosmopolitanism which in other races was no longer merely theoretical. There was good reason also for the respect which they received, since they retained and used somewhat ruthlessly a certain degree of the most distinctively human attribute, intelligence.
I was having a conversation recently about Biblical ancestry and the antediluvian generations, and it got me to thinking about how scientists sometimes like to use biblical references as attention-grabbing devices, without actually bothering to understand what they’re referring to — in this case, the so-called “mitochondrial Eve”. The expression was not used in the 1987 Nature paper that first purported to calculate the genealogical time back to the most recent common ancestor (MRCA) of all present-day humans in the female line, but it was central to the publicity around the paper at the time, including in academic journals such as Science.
The term has come to be fully adopted by the genetics community, even while they lament the misunderstandings that it engenders among laypeople — in particular, the assumption that “Eve” must in some sense have been the first woman, or must have been fundamentally different from all the other humans alive at the time. The implication is that the smart scientists were making a valiant effort to talk to simple people in terms they understand, taking the closest approximation (Eve) to the hard concept (MRCA), and the simple bible-y people need to make an effort on their part to understand what they’re really talking about.
In fact, calling this figure Eve is a blunder, and it reveals a fundamental misunderstanding of the biblical narrative. Eve is genuinely a common ancestor of all humans, according to Genesis, but she is not the most recent in any sense, and suggesting that she is only sows confusion. The MRCA in the Bible is someone else, namely the wife of Noah. Appropriately, she is not named, but if we want a name for her, the midrashic Genesis Rabbah calls her Na’ama. She has other appropriate characteristics as well, that would lead people toward a more correct understanding. To begin with, she lived many generations after the first humans. She lived amid a large human population, but a catastrophic event led to a genetic bottleneck that only she and her family survived. (That’s not quite the most likely scenario, but it points in the right direction.) And perhaps most important — though this reflects the core sexism of the biblical story — there was nothing special about her. She just happened to be in the right place at the right time, namely, partnered with the fanatic boat enthusiast when the great flood happened.
I’ve always heard of the Metropolis algorithm as having been invented for H-bomb calculations by Nicholas Metropolis and Edward Teller. But I was just looking at the original paper, and discovered that there are five authors: Metropolis, Rosenbluth, Rosenbluth, Teller, and Teller. It is particularly striking to see two repeated surnames, and a bit of research reveals that these were two married couples: Arianna Rosenbluth and Marshall Rosenbluth, and Augusta Teller and Edward Teller. In particular, Arianna Rosenbluth (née Wright) appears to have been a formidable character, according to her Wikipedia page: she completed her physics PhD at Harvard at the age of 22.
In keeping with the 1950s conception of computer programming as women’s work, the two women were responsible, in particular, for all the programming — a heroic undertaking in those pre-programming language days, on the MANIAC I — and Rosenbluth in particular did all the programming for the final paper.
And also in keeping with the expectations of the time, and more depressingly, according to the Wikipedia article “After the birth of her first child, Arianna left research to focus on raising her family.”
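For readers who haven’t met it, the algorithm those five published is remarkably compact. Here is a minimal sketch in modern Python, sampling a standard normal as the target; the function name, tuning constants, and test target are my own illustrative choices, not anything from the 1953 paper:

```python
import math
import random

def metropolis(logp, x0, steps, scale=1.0, seed=0):
    """Random-walk Metropolis with a symmetric Gaussian proposal."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(steps):
        proposal = x + rng.gauss(0.0, scale)
        # Accept with probability min(1, p(proposal)/p(x)),
        # computed on the log scale for numerical safety.
        if rng.random() < math.exp(min(0.0, logp(proposal) - logp(x))):
            x = proposal
        samples.append(x)
    return samples

# Target: standard normal, with log-density known only up to a
# constant -- which is the whole point of the method.
draws = metropolis(lambda x: -0.5 * x * x, x0=0.0, steps=50_000)
mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / len(draws)
```

The part that was heroic in 1953 is invisible here: Arianna Rosenbluth had to express this loop, and the physics around it, directly in MANIAC I machine instructions.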
From an article on the vaccine being developed by Robin Shattock’s group at Imperial College:
The success rate of vaccines at this stage of development is 10%, Shattock says, and there are already probably 10 vaccines in clinical trials, “so that means we will definitely have one”.
It could be an exercise for a probability course:
- Suppose there are exactly 10 vaccines in this stage of development, each succeeding independently with probability 10%. What is the probability that at least one will succeed?
- Interpret “probably 10 vaccines” to mean that the number of vaccines in clinical trials is Poisson distributed with parameter 10. What is the probability that at least one will succeed?
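Both answers can be checked in a few lines, under the assumption (implicit in the exercise) that candidates succeed independently:

```python
import math

p = 0.10   # Shattock's quoted per-vaccine success rate
lam = 10   # "probably 10 vaccines" in clinical trials

# (a) Exactly 10 independent candidates:
p_fixed = 1 - (1 - p) ** 10          # 1 - 0.9**10 ≈ 0.651

# (b) N ~ Poisson(lam): average case (a) over N using the probability
# generating function E[(1-p)**N] = exp(lam*((1-p) - 1)) = exp(-lam*p).
p_poisson = 1 - math.exp(-lam * p)   # 1 - e**(-1) ≈ 0.632
```

On either reading, the chance that at least one succeeds is around 63–65%: good odds, but a long way from “definitely”.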
[Cross-posted with Statistics and Biodemography Research Group blog.]
The age-specific estimates of fatality rates for Covid-19 produced by Riou et al. in Bern have gotten a lot of attention:
These numbers looked somewhat familiar to me, having just lectured a course on life tables and survival analysis. Recent one-year mortality rates in the UK are in the table below:
Depending on how you look at it, the Covid-19 mortality curve is shifted by a decade, or is about double the usual one-year mortality probability for an average UK resident (corresponding to the fact that mortality rates double about every 9 years). If you accept the estimates that around half of the population in most of the world will eventually be infected, and if these mortality rates remain unchanged, this means that effectively everyone will get a double dose of mortality risk this year. It is somewhat lower (as may be seen in the plots below) for the younger folk, whereas the over-50s get more like a triple dose.
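The decade shift and the doubling are really the same observation, as a quick back-of-the-envelope check shows. This is a sketch assuming the Gompertz-type rule of thumb that adult mortality doubles about every 9 years; the function name is mine, for illustration:

```python
# Under a Gompertz-type law with a 9-year doubling time, shifting the
# mortality curve by k years multiplies the annual hazard by 2**(k/9).
def mortality_multiplier(age_shift_years, doubling_time=9.0):
    return 2 ** (age_shift_years / doubling_time)

# A one-decade shift is therefore about a 2.2-fold increase in the
# one-year mortality probability, i.e. roughly a "double dose" of risk.
print(mortality_multiplier(10))
```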
A little-publicised development in statistics over the past two decades has been the admission of causality into respectable statistical discourse, spearheaded by the computer scientist Judea Pearl. Pearl’s definition (joint with Joseph Halpern) of causation (“X having setting x caused effect E”) has been formulated approximately as follows:
- X=x and E occurs.
- But for the fact that X=x, E would not have occurred.
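The two conditions amount to a “but-for” test, which can be sketched on a toy structural model. This is only an illustration of the but-for idea for binary variables (the model and names are mine); the full Halpern–Pearl definition is considerably subtler, precisely to handle cases such as overdetermination where the naive test fails:

```python
# Toy structural model: the grass is wet if the sprinkler runs or it rains.
def wet_grass(sprinkler, rain):
    return sprinkler or rain

def but_for_cause(model, var, value, background):
    """Naive but-for test for a binary variable: the effect occurs with
    var=value, and would not have occurred with var set the other way."""
    factual = model(**{var: value}, **background)
    counterfactual = model(**{var: not value}, **background)
    return bool(factual) and not counterfactual

# With no rain, running the sprinkler is a but-for cause of wet grass:
print(but_for_cause(wet_grass, "sprinkler", True, {"rain": False}))  # True
# With rain as well, it is not: the grass would have been wet anyway.
print(but_for_cause(wet_grass, "sprinkler", True, {"rain": True}))   # False
```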
Of course, Pearl is not the first person to think carefully about causality. He would certainly recognise the similarity to Koch’s postulates on demonstrating disease causation by a candidate microbe:
- No disease without presence of the organism;
- The organism must be isolated from a host containing the disease;
- The disease must arise when the organism is introduced into a healthy animal;
- The organism isolated from that animal must be identified as the same original organism.
I was reminded of this recently in reading the Buddhist Assutava Sutta, the discourse on “dependent co-arising”, where this formula (that also appears in very similar wording in a wide range of other Buddhist texts) is stated:
When this is, that is;
This arising, that arises;
When this is not, that is not;
This ceasing, that ceases.
Pretty much since I became a professional academic two decades ago there has been constant agitation against lecturing as a technology for teaching. Either new research has proven it, or new technology has rendered it, obsolete. Thus I was amused to read this comment in Boswell’s Life of Johnson:
We talked of the difference between the mode of education at Oxford, and that in those Colleges where instruction is chiefly conveyed by lectures. JOHNSON: ‘Lectures were once useful; but now, when all can read, and books are so numerous, lectures are unnecessary. If your attention fails, and you miss a part of a lecture, it is lost; you cannot go back as you do upon a book.’ Dr. Scott agreed with him. ‘But yet (said I), Dr. Scott, you yourself gave lectures at Oxford.’ He smiled. ‘You laughed (then said I) at those who came to you.’
When I first arrived at Oxford I expressed admiration for the rigorously academic nature of the student admissions procedure. I have since soured somewhat on the whole segregate-the-elite approach, as well as on the implicit fiction that we are selecting students to be future academics, but I still appreciate the clarity of the criteria, which help to avoid the worst corruption of the American model. I have long been astonished at how little resentment there seemed to be in the US at the blatant bias in favour of economic and social elites, with criticism largely focused on discrimination for or against certain racial categories. Despite the enormous interest in the advantages, or perceived advantages, of elite university degrees, very little attention has been focused on the intentionally byzantine admissions procedures, on the bias in favour of children of the wealthy and famous (particularly donors or — wink-wink — future donors), the privileging of students with well-curated CVs and expensive and time-consuming extracurricular activities, the literal grandfather clauses in admissions.
Now some of the wealthy have taken it too far, by defrauding the universities themselves, paying consultants to fake exam results and athletic records. The most unintentionally humorous element of the whole scandal is this comment by Andrew Lelling, U.S. attorney for the District of Massachusetts:
We’re not talking about donating a building so that a school is more likely to take your son or daughter. We’re talking about deception and fraud.
Fraud is defined here as going beyond the ordinary bounds of abusing wealth and privilege. You pay your bribes directly to the university, not to shady middlemen. The applicant needs to actually play a sport only available in elite prep schools, not produce fake testimonials and photoshop their head onto an athlete’s body.
Of course, this is all fraud, because no one is paying millions of dollars because they think their child will receive a better education. The whole point is to lay a cuckoo’s egg in the elite-university nest, where they will be mistaken for the genuinely talented. For a careful (tongue-in-cheek) analysis of the costs and benefits of this approach, see my recent article on optimised faking.