Occasional reflections on Life, the World, and Mathematics

Archive for the ‘Politics’ Category

A wall with two sides

Right after the EU referendum I commented

Those now entering retirement have locked in promises of high pensions to themselves that no one before or after them will be able to receive…

I can’t help but wonder whether, on some level, the over-60s see the situation they’ve manoeuvred the younger generations into — crumbling infrastructure, insufficient and overpriced housing, excessive pensions that will come at the expense of social spending for decades — and conclude that the only solution, since a pension isn’t worth much if there aren’t enough working people to actually provide the services you depend on, is to block off their children’s potential escape routes.

Maybe it’s not about keeping THEM out. It’s about keeping the younger generation IN.

This is, of course, intentionally provocative, and while I believe there’s some truth to it I’m not sure exactly how profoundly I really believe it. But I was reminded of this perspective while reading James C. Scott’s eye-opening “deep history of the earliest states” Against the Grain:

Owen Lattimore… has made the case most forcefully that the purpose of the Great Wall(s) was as much to keep the Chinese taxpayers inside as to block barbarian incursions.

In the latter part of the twentieth century people in the wealthy world got used to the notion that everyone is free to travel, but there is a natural right of nations to decide who to let in — a right granted primarily to wealthy individuals and citizens of wealthy nations. This belief was communicated to me so strongly that when I first encountered the historical fact that passports were originally documents confirming the permission to leave granted by a state to its subject — rather like the travel passes issued to slaves by their owners — I found this intensely shocking, and put it on my growing the-past-is-a-foreign-country list, right next to the Roman practice of “exposing” surplus newborns, leaving them to die or be picked up by other families in need of a slave. Indeed, I just came across this comment in Jill Lepore’s magisterial new one-volume history of the US, on the first US federal laws regulating passports:

In 1856, Congress passed a law declaring that only the secretary of state “may grant and issue passports,” and that only citizens could obtain them. In August of 1861, Lincoln’s secretary of state, William Seward, issued this order: “Until further notice, no person will be allowed to go abroad from a port of the United States without a passport either from this Department or countersigned by the Secretary of State.” From then until the end of the war, this restriction was enforced; its aim was to prevent men from leaving the country in order to avoid military service.

The fact that the soi-disant German Democratic Republic had built a wall, with armed guards, to prevent its own citizens from fleeing was widely seen as fatally undermining that state’s legitimacy. Indeed, the GDR’s rulers themselves seemed to concede this point, as they denied the obvious truth of the wall’s function, designating it in official proclamations the antifaschistischer Schutzwall [antifascist defensive rampart]. (Just by the way, I’ve long been fascinated by the way this word “Wall”, a partly false cognate of the English word for what Germans generally call by the more standard term Mauer, came to be used as a competing term in GDR propaganda.)

Possibly one effect of rising economic inequality is that freedom of movement, one of the privileges that states have routinely provided to their citizens, will increasingly be reserved to a privileged elite. And like the GDR, states like the UK will assure their citizens that they are not being kept in, but rather that they are being protected from the barbarian hordes outside.

(Title of this post with apologies to André Cayatte.)

The inflationary university

The universe, the standard model tells us, began with rapid inflation. The university as well, or at least, the modern exam-centered university.

With UK universities being upbraided by the Office for Students (OfS), the official regulator of the UK higher education sector, for handing out too many first-class degrees, I am reminded of this wonderful passage unearthed by Harry Lewis, former dean of Harvard College, from the report of Harvard’s 1894 Committee on Raising the Standard:

The Committee believes… that by defining anew the Grades A, B, C, D, and E, and by sending the definitions to every instructor, the Faculty may do something to keep up the standard of the higher grades. It believes that in the present practice Grades A and B are sometimes given too readily — Grade A for work of no very high merit, and Grade B for work not far above mediocrity.

Noting that letter grades were first introduced at Harvard in 1886, Lewis summarises the situation thus:

It took only eight years for the first official report that an A was no longer worth what it had been and that the grading system needed reform.

Social choice Brexit, 2: The survey

Last week I suggested that a second Brexit referendum could very well stumble into the sort of preference cycle that underlies Arrow’s paradox, with Remain preferred to soft Brexit, soft Brexit preferred to No Deal, and No Deal preferred to Remain. I wasn’t expecting rapid confirmation, but here it is, from a poll of about 1,000 British adults.

Pareto’s revenge

Several months ago I suggested that no one could ultimately support soft Brexit, because the soft Brexit strategy — something like EFTA, formally outside of the EU, but still in a customs union and/or the single market, still recognising most rights of EU citizens in the UK — is, to use a bit of economics game-theory jargon, Pareto dominated by staying in the EU. Even if the damage wrought by a no-deal Brexit would be vastly reduced by a negotiated surrender, if you look at it category by category — rights of UK citizens, disruption to markets, business flight, reputation, civil peace, diplomatic influence, sovereignty over market regulations — staying in the EU would be even better. There’s nothing you can point to and say, this is what we got for our trouble (which is why Theresa May was at such pains to bash immigrants when announcing her deal). We’ll always have Parisians.

Apparently, some other leading Brexiters are noticing their Pareto trap. Earlier in the week Dominic Raab resigned in protest that the deal (that he was responsible for negotiating himself!) was worse than staying in the EU. Today it’s Shanker Singham, leading trade adviser to the Leave campaign, who says “From a trade policy perspective this is a worse situation than being in the EU.” (Much of the Guardian article explains how this leading intellectual light of the Leave campaign has a mostly fake CV.)

The effect will be zero. The Brexit dead-enders only need to keep Parliament in turmoil and run out the clock, until their glorious Thelma & Louise consummation. I’m sure Jacob Rees-Mogg has shorted the pound, and the FTSE, so he’ll be fine. Would that count as insider trading?

Fantasy queues

Reviving the spirit of the Blitz! Food queues in wartime London.

On moving to the UK almost a dozen years ago I quickly noticed that the one thing that unites the political establishment, left and right, is that they don’t like foreigners. Or rather, maybe better phrased, they may personally like and even admire some foreigners, but they recognise that such exotic tastes are not for everyone, and that disliking foreigners is a valuable national pastime, deserving of their official support.

And so, after claiming through the Brexit campaign that it was all about national sovereignty and repatriating billions of pounds for the NHS, and having spent the better part of two years diplomatically digging a grave for the national future, the Tories strike on bedrock: We have betrayed national sovereignty and destroyed the national economy, but it’s all worth it because we still get to kick out the foreigners. Or, in the prime minister’s words:

“Getting back full control of our borders is an issue of great importance to the British people,” she will say, adding that EU citizens will no longer be able to “jump the queue ahead of engineers from Sydney or software developers from Delhi”.

I’m willing to go out on a limb here and suggest that the subset of British — or, given their current fragile mental state, I should perhaps call them Brittle — voters who voted Leave with the thought uppermost in their minds of improving the prospects for Indians to migrate to the UK was… less than a majority. But even more striking, EU citizens who moved to the UK over the past 40 years, following the same agreements that allowed Brittle people to seek work and better lives anywhere on the Continent, are retrospectively branded as “queue jumpers”, the most rebarbative class in the English moral order. Theresa May is summoning her countrymen and -women to defend a fantasy queue.

In the 1990s the German media teemed with entreaties to tear down the Mauer im Kopf, the “mental wall” (a phrase of the author Peter Schneider that actually long preceded the fall of the physical wall in Berlin), as a necessary prelude to a stable national identity, and a democratic and prosperous future. Maybe the Brittles need to dissolve their fantasy queues, to break up the Schlange im Kopf, before they can start building a new nation on the rubble they are making of their past.

Social choice Brexit

The discussion over a possible second Brexit referendum has foundered on the shoals of complexity: if the public were only offered May’s deal or no deal, that wouldn’t be any kind of meaningful choice (and it’s absurd to imagine that a Parliament that wouldn’t approve May’s deal on its own would be willing to vote for the fate of Brexit to be decided by the public on those terms). So you’re left needing an unconventional referendum with at least three options: no deal, May’s deal, no Brexit (plus possible additional alternatives, like requesting more time to negotiate the Better Deal™).

A three-choice (or more) referendum strikes many people as crazy. There are reasonable concerns. Some members of the public will inevitably find it confusing, however it is formulated and adjudicated. And the impossibility of aggregating preferences in a way consistent with basic principles of fairness, let alone in a canonical way, is a foundational theorem of social-choice theory (due to Kenneth Arrow).

Suppose we followed the popular transferable vote procedure: People rank the options, and we look only at the first choices. Whichever option gets the smallest number of first-choice votes is dropped, and we proceed with the remaining options, until one option has a first-choice majority. The classic paradoxical situation is all too likely in this setting. Suppose the population consists of

  1. 25% hardened brexiteers. They prefer a no-deal Brexit, but the last thing they want is to be blamed for May’s deal, which leaves the UK taking orders from Brussels with no say in them. If they can’t have their clean break from Brussels, they’d rather go back to the status quo ante and moan about how their beautiful Brexit was betrayed.
  2. 35% principled democrats. They’re nervous about the consequences of Brexit, so they’d prefer May’s soft deal, whatever its problems. But if they can’t have that, they think the original referendum needs to be respected, so their second choice is a no-deal Brexit.
  3. 40% squishy europhiles. They want no Brexit; barring that, they’d prefer the May deal. A no-deal Brexit is for them the worst outcome.

The result will be that no deal drops out, and we’re left with 65% favouring no Brexit. But if the PDs anticipated this, they could have ranked no deal first, producing a result that they would have preferred, as the sketch below illustrates.
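For concreteness, here is a minimal sketch of this scenario in Python. The bloc sizes and preference orders are exactly the ones assumed above; the counting procedure is just a generic implementation of the transferable-vote rule described earlier, not any official counting method.

    from collections import Counter

    def instant_runoff(ballots):
        # ballots: list of (weight, ranking) pairs, ranking from most to least preferred.
        # Repeatedly drop the option with the fewest first-choice votes until one
        # option holds a first-choice majority.
        remaining = {option for _, ranking in ballots for option in ranking}
        while True:
            tallies = Counter({option: 0 for option in remaining})
            for weight, ranking in ballots:
                top = next(option for option in ranking if option in remaining)
                tallies[top] += weight
            leader, votes = tallies.most_common(1)[0]
            if 2 * votes > sum(tallies.values()):
                return leader, dict(tallies)
            remaining.discard(min(tallies, key=tallies.get))

    # Sincere preferences for the three blocs described above.
    sincere = [
        (25, ["no deal", "remain", "deal"]),   # hardened brexiteers
        (35, ["deal", "no deal", "remain"]),   # principled democrats
        (40, ["remain", "deal", "no deal"]),   # squishy europhiles
    ]
    print(instant_runoff(sincere))    # no deal is eliminated; remain wins 65-35

    # If the principled democrats instead insincerely rank no deal first...
    strategic = [
        (25, ["no deal", "remain", "deal"]),
        (35, ["no deal", "deal", "remain"]),   # tactical switch
        (40, ["remain", "deal", "no deal"]),
    ]
    print(instant_runoff(strategic))  # no deal wins outright with 60% of first choices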

So that seems like a problem with a three-choice referendum. But here’s a proposal that would be even worse: we combine the two Leave options (no deal and May’s deal) into a single choice, which we simply call “Leave”. Then those who want to abandon the European project entirely will be voting for the same option as those who are concerned about the EU being dominated by moneyed interests, and they’ll jointly win the referendum and then have to fight among themselves after the fact, leaving them with the outcome — no-deal Brexit — that the smallest minority preferred.

Unfortunately, that’s the referendum we actually had.

Brexit deal still halfway concluded!

I cited this joke in the context of Brexit before:

A rabbi announces in synagogue, at the end of Yom Kippur, that he despairs at the burning need for wealth to be shared more equally. He will depart for the next year to travel through the world, speaking to all manner of people, ultimately to persuade the rich to share with the poor. On the following Yom Kippur he returns, takes his place at the head of the congregation without a word, and leads the service. At the end of the day congregants gather around him. “Rabbi, have you accomplished your goal? Will the rich now share with the poor?” And he says, “Halfway. The poor are willing to accept.”

And it has returned, because once again we have headlines announcing that the cabinet has backed the Brexit deal.

Except the backing of the “cabinet” means the backing of those who didn’t resign because they wouldn’t back the agreement. Including the Northern Ireland Secretary (!) and the Brexit Secretary (!!) who was tasked with negotiating the thing. So, except for all the members most relevant to the deal (oh, yes, the Foreign Secretary already resigned over Brexit a few months ago), Theresa May has managed to get her handpicked cabinet to accept the negotiated agreement. All that remains is the other half of the task, which is to get it through Parliament, which only requires that she corral votes from her coalition partner, who see the backstop on Northern Ireland as a fundamental betrayal, and sufficient members of the opposition parties, because why wouldn’t they sign up to take responsibility for an agreement that everyone will blame for everything that goes wrong, even if nothing goes wrong, rather than forcing new elections?

The Silver Standard 4: Reconsideration

After writing in praise of the honesty and accuracy of fivethirtyeight’s results, I felt uncomfortable about the asymmetry in the way I’d treated Democrats and Republicans in the evaluation. In the plots I made, low-probability Democratic predictions that went wrong pop out on the left-hand side, whereas low-probability Republican predictions that went wrong would get buried in the smooth glide down to zero on the right-hand side. So I decided that what I’m really interested in are all low-probability predictions, and that I should treat them symmetrically.

For each district there is a predicted loser (PL), with probability smaller than 1/2. In about one third of the districts the PL was assigned a probability of 0. The expected number of PLs (EPL) who would win is simply the sum of all the predicted win probabilities that are smaller than 1/2. (Where multiple candidates from the same party are in the race, I’ve combined them.) The 538 EPL was 21.85. The actual number of winning PLs was 13.

What I am testing is whether 538 made enough wrong predictions. This is the opposite of the usual evaluation, which gives points for getting predictions right. But measured by their own stated probabilities, the number of districts that went against the predicted winner was a lot lower than their model said it should be. That is prima facie evidence that the PL win probabilities were being padded somewhat. To be more precise, under the 538 model the number of winning PLs should be approximately Poisson distributed with parameter 21.85, meaning that the probability of no more than 13 PLs winning is about 0.030. Which is kind of low, but still pretty impressive, given all the complications of the prediction game.
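For anyone who wants to check that number, here is a small Python sketch using the figures quoted above (21.85 expected and 13 actual winning PLs) and the Poisson approximation described in the text:

    from math import exp, factorial

    lam = 21.85      # expected number of winning predicted losers (EPL)
    observed = 13    # actual number of winning PLs

    # P(X <= 13) for X ~ Poisson(21.85): the chance of seeing this few upsets
    # if the quoted win probabilities were well calibrated.
    p_at_most = sum(exp(-lam) * lam**k / factorial(k) for k in range(observed + 1))
    print(round(p_at_most, 3))   # approximately 0.03, as quoted above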

Below I show plots of the errors for various scenarios, measuring the cumulative error for these symmetric low predictions. (I’ve added an “Extra Tarnished” scenario, with the transformation based on the even more extreme beta(.25,.25).) I show it first without adjusting for the total number of predicted winning PLs:

[Figure: cumulative error in predicted PL wins for each scenario, without adjusting for the total number of predicted winning PLs]

We see that the tarnished predictions forecast a lot more PL victories than we actually observe. The actual predictions are just slightly higher than you should expect, but suspiciously one-sided — that is, the error is all in the direction of overpredicting PL victories, consistent with padding the margins slightly, erring in the direction of claiming uncertainty.

And here is an image more like the ones I had before, where all the predictions are normalised to correspond to the same number of predicted wins:

[Figure: the same cumulative errors, with all predictions normalised to the same number of predicted PL wins]


The Silver Standard, Part 3: The Reckoning

One of the accusations most commonly levelled against Nate Silver and his enterprise is that probabilistic predictions are unfalsifiable. “He never said the Democrats would win the House. He only said there was an 85% chance. So if they don’t win, he has an out.” This is true only if we focus on the top-level prediction, and ignore all the smaller predictions that went into it. (Except in the trivial sense that you can’t say it’s impossible that a fair coin just happened to come up heads 20 times in a row.)

So, since Silver can be tested, I thought I should see how 538’s predictions stood up in the 2018 US House election. I took their predictions of the probability of victory for a Democratic candidate in all 435 congressional districts (I used their “Deluxe” prediction) from the morning of 6 November. (I should perhaps note here that one third of the districts had estimates of 0 (31 districts) or 1 (113 districts), so a victory for the wrong candidate in any one of these districts would have been a black mark for the model.) I ordered the districts by the predicted probability, to compute the cumulative predicted number of seats, starting from the smallest, and plotted this against the cumulative actual number of seats won, taking the current leader as the winner in the 11 districts where there is no definite decision yet.

[Figure: cumulative predicted versus actual Democratic seats, districts ordered from most to least Republican]

The predicted number of seats won by Democrats was 231.4, impressively close to the actual 231 won. But that’s not the standard we are judging them by, and in this plot (and the ones to follow) I have normalised the predicted and observed totals to be the same. I’m looking at the cumulative fractions of a seat contributed by each district. If the predicted probabilities are accurate, we would expect the plot (in green) to lie very close to the line with slope 1 (dashed red). It certainly does look close, but the scale doesn’t make it easy to see the differences. So here is the plot of the prediction error, the difference between the red dashed line and the green curve, against the cumulative prediction:

[Figure: cumulative prediction error plotted against the cumulative prediction]

There certainly seems to have been some overestimation of Democratic chances at the low end, leading to a maximum cumulative overprediction of about 6 (which comes at district 155, that is, the 155th most Republican district). It’s not obvious whether these differences are worse than you would expect. So in the next plot we make two comparisons. The red curve replaces the true outcomes with simulated outcomes, where we assume the 538 probabilities are exactly right. This is the best-case scenario. (We only plot it out to 100 cumulative seats, because the action is all at the low end; the last 150 districts have essentially no randomness.) The red curve and the green curve look very similar, except for the sign of the error, which is random. The most extreme error in the simulated election result is a bit more than 5.
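Here is a rough Python sketch of the computation behind these curves. The probability array below is only a stand-in (the real analysis uses 538’s published “Deluxe” district probabilities, which are not reproduced here), but it shows the ordering, the normalisation of totals, the cumulative error, and the “model exactly right” simulation used for the red curve:

    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-in for the 435 district win probabilities; replace with 538's numbers.
    probs = np.sort(rng.beta(0.3, 0.3, size=435))

    def cumulative_error(probs, outcomes):
        # Difference between cumulative predicted and actual Democratic seats,
        # districts ordered from most to least Republican, with the predicted
        # total rescaled to match the observed total (as in the plots above).
        order = np.argsort(probs)
        p = probs[order].astype(float)
        o = outcomes[order].astype(float)
        p *= o.sum() / p.sum()
        return np.cumsum(p) - np.cumsum(o)

    # Best case: outcomes simulated from the forecast's own probabilities,
    # i.e. assuming the model is exactly right.
    simulated = rng.random(435) < probs
    print(np.abs(cumulative_error(probs, simulated)).max())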

What would the curve look like if Silver had cheated, by trying to make his predictions all look less certain, to give himself an out when they go wrong? We imagine an alternative psephologist, call him Nate Tarnished, who has access to the exact true probabilities for Democrats to win each district, but who hedges his bets by reporting a probability closer to 1/2. (As an example, we take the cumulative beta(1/2, 1/2) distribution function. This leaves 0, 1/2, and 1 unchanged, but pushes .001 up to about .02 and .05 up to about .14, while .2 becomes .3; similarly, .999 becomes .98 and .8 drops to .7. Not huge changes, but enough to create more wiggle room after the fact.)
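A minimal sketch of that hedging map, using the beta(1/2, 1/2) (arcsine) distribution function named above:

    from math import asin, sqrt, pi

    def tarnish(p):
        # Push a win probability towards 1/2 via the beta(1/2, 1/2) CDF.
        return 2 / pi * asin(sqrt(p))

    for p in (0.001, 0.05, 0.2, 0.5, 0.8, 0.999):
        print(p, round(tarnish(p), 3))
    # 0.001 -> ~0.02, 0.05 -> ~0.14, 0.2 -> ~0.3, 0.8 -> ~0.7, 0.999 -> ~0.98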

In this case, we would expect to accumulate much more excess cumulative predicted probability on the left side. And this is exactly what we illustrate with the blue curve, where the error repeatedly rises nearly to 10, before slowly declining to 0.

[Figure: cumulative errors for the actual predictions (green), the simulated outcomes (red), and the hedged “Nate Tarnished” predictions (blue)]

I’d say the performance of the 538 models in this election was impressive. A better test would be to look at the predicted vote shares in all 435 districts. This would require that I manually enter all of the results, since they don’t seem to be available to download. Perhaps I’ll do that some day.

The Silver Standard: Stochastics pedagogy

I have written a number of times in support of Nate Silver and his 538 project: Here in general, and here in advance of the 2016 presidential elections. Here I want to make a comment about his salutary contribution to the public understanding of probability.

His first important contribution was to force determinism-minded journalists (and, one hopes, some of their readers) to grapple with the very notion of what a probabilistic prediction means. In the vernacular, “random” seems to mean only a fair coin flip. His background in sports analysis was helpful in this, because a lot of people spend a lot of time thinking about sports, and they are comfortable thinking about the outcomes of sporting contests as random, where the race is not always to the swift nor the battle to the strong, but that’s the way to bet. People understand intuitively that the “best team” will not win every match, and winning 3/4 of a large number of contests is evidence of overwhelming superiority. Analogies from sports and gaming have helped to support intuition, and have definitely improved the quality of discussion over the past decade, at least in the corners of the internet where I hang out.*

Frequently Silver is cited directly for obvious insights like that an 85% chance of winning (like his website’s current predicted probability of the Democrats’ winning the House of Representatives) is like the chance of rolling 1 through 5 on a six-sided die, which is to say, not something you should take for granted. But he has also made a great effort to convey more subtle insights into the nature of probabilistic prediction. I particularly appreciated this article by Silver, from a few weeks ago.

As you see reports about Republicans or Democrats giving up on campaigning in certain races for the House, you should ask yourself whether they’re about to replicate Clinton’s mistake. The chance the decisive race in the House will come somewhere you’re not expecting is higher than you might think…

It greatly helps Democrats that they also have a long tail of 19 “lean R” seats and 48 “likely R” seats where they also have opportunities to make gains. (Conversely, there aren’t that many “lean D” or “likely D” seats that Democrats need to defend.) These races are long shots individually for Democrats — a “likely R” designation means that the Democratic candidate has only between a 5 percent and 25 percent chance of winning in that district, for instance. But they’re not so unlikely collectively: In fact, it’s all but inevitable that a few of those lottery tickets will come through. On an average election night, according to our simulations, Democrats will win about six of the 19 “lean R” seats, about seven of the 48 “likely R” seats — and, for good measure, about one of the 135 “solid R” seats. (That is, it’s likely that there will be at least one total and complete surprise on election night — a race that was on nobody’s radar, including ours.)

This is a more subtle version of the problem that all probabilities get rounded to 0, 1, or 1/2. Conventional political prognosticators evaluate districts as “safe” or “likely” or “toss-up”. The likely or safe districts get written off as certain — which is reasonable from the point of view of individual decision-making — but cumulatively, a large number of districts each with a 10% chance of being won by the Democrat behaves very differently from the same number of districts with a 0% chance. It’s a good bet that the Republican will win any one of them, but if you have 50 of them it’s a near certainty that the Democrats will win at least one, and quite likely that they will win four or more.
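A quick check of those figures, using the illustrative numbers in the paragraph above (50 “safe” districts, each with an independent 10% chance of an upset):

    from math import comb

    n, p = 50, 0.10   # fifty "safe" districts, each with a 10% chance of an upset

    def prob_at_least(k):
        # P(at least k upsets) for independent Binomial(n, p) outcomes.
        return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

    print(round(prob_at_least(1), 3))   # ~0.995: at least one upset is near-certain
    print(round(prob_at_least(4), 2))   # ~0.75: four or more upsets is quite likely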

The analogy to lottery tickets isn’t perfect, though. The probabilities here don’t represent randomness so much as uncertainty. After 5 of these “safe” districts go the wrong way, you’re almost certainly going to be able to go back and investigate, and discover that there was a reason why each was misclassified. If you’d known the truth, you wouldn’t have called it safe at all. This enhances the illusion that no one loses a safe seat — only that toss-ups can be misidentified as safe.

* On the other hand, Dinesh D’Souza has proved himself the very model of a modern right-wing intellectual with this tweet:

