Occasional reflections on Life, the World, and Mathematics

Posts tagged ‘polls’

Good words

There has been a lot of reporting on this recent poll, where people were asked what word first came to mind when they thought of President Trump. Here are the top 20 responses (from 1,079 American adults surveyed):

idiot         39
incompetent   31
liar          30
leader        25
unqualified   25
president     22
strong        21
businessman   18
ignorant      16
egotistical   15
asshole       13
stupid        13
arrogant      12
trying        12
bully         11
business      11
narcissist    11
successful    11
disgusting    10
great         10

The fact that idiot, incompetent, and liar head the list isn’t great for him. But Kevin Drum helpfully coded the words into “good” and “bad”:

What strikes me is that even the “good” words aren’t really very good. If you’re asked what word first comes to mind when you think of President Trump and you answer president, that sounds to me more passive-aggressive than positive. Similarly, you need a particular ideological bent to consider businessman and business to be inherently positive qualities. Leader — I don’t know, I guess der Führer is a positive figure for those who admire that sort of thing. Myself, I prefer to know where we’re being led. If we include that one, there are 4 positive words, 4 neutral words, and 12 negative. (I’m including trying as neutral because I don’t know if people mean “working hard to do his job well”, which sounds like at least a back-handed compliment, or “trying my patience”.)
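Spelled out, the tally I have in mind looks like this (the assignment of the borderline words is, as I say, a judgment call):

    # Tally of the top-20 words by category, following the categorization
    # discussed above; the borderline words ("leader", "president",
    # "businessman", "business", "trying") are assigned as argued in the text.
    categories = {
        "positive": ["leader", "strong", "successful", "great"],
        "neutral":  ["president", "businessman", "business", "trying"],
        "negative": ["idiot", "incompetent", "liar", "unqualified", "ignorant",
                     "egotistical", "asshole", "stupid", "arrogant", "bully",
                     "narcissist", "disgusting"],
    }
    for label, words in categories.items():
        print(f"{label}: {len(words)} words")   # positive: 4, neutral: 4, negative: 12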

Why were the polls so wrong?

While Tuesday’s election result is a global disaster, it is most immediately distressing for three groups: American Latinos, American Muslims, and American pollsters.

First of all, let us dispel the idea (which I have heard some propound) that they weren’t wrong. Huge numbers of polls, done independently in multiple states, gave results that were consistently at variance with the actual election results, and in the same direction. I can see three kinds of explanations:

  1. The pollsters shared a mistaken idea or methodology for correcting their tiny unrepresentative samples for differential turnout.
  2. Subjects lied about their voting intentions.
  3. Subjects changed their minds between the last poll and the election.

3 seems unlikely to account for much, since it is implausible that so many people changed their minds so rapidly. 2 is plausible, but hard to check and difficult, if not impossible, to correct. 1 is a nice technical-sounding explanation, and it certainly seems as though there must be some truth to it. Except, probably, not much. As evidence, I offer the failure of VoteCastr.

Slate magazine teamed up with the big-data firm VoteCastr to trial a system for estimating votes in real time. Ahead of time they did extensive polling to fit a model predicting an individual’s vote (probabilistically) as a function of several publicly available demographic variables. Then, on election day, they tracked records of who had actually voted, and updated their running totals of votes for each candidate accordingly.
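In outline, the scheme looks something like this (a minimal sketch with invented demographic cells and probabilities, not VoteCastr’s actual model):

    # Hypothetical sketch of real-time vote projection from turnout records.
    # The demographic cells and probabilities are invented for illustration;
    # they are not VoteCastr's actual model.

    # Pre-fitted model: P(voter supports Clinton | demographic cell)
    p_clinton = {
        ("female", "18-29"): 0.63,
        ("female", "65+"):   0.51,
        ("male",   "18-29"): 0.52,
        ("male",   "65+"):   0.40,
    }

    # Stream of voters observed to have voted (sex, age bracket), e.g. from
    # precinct check-in records matched to the voter file
    turnout_records = [
        ("female", "18-29"),
        ("male",   "65+"),
        ("female", "65+"),
        ("male",   "18-29"),
    ]

    clinton_total = 0.0
    trump_total = 0.0
    for voter in turnout_records:
        p = p_clinton[voter]      # model's probability this voter backs Clinton
        clinton_total += p        # accumulate expected (fractional) votes
        trump_total += 1.0 - p    # ignoring third-party candidates

    print(f"Projected so far: Clinton {clinton_total:.2f}, Trump {trump_total:.2f}")

The point is that the guesswork about who will turn out is replaced by observation of who actually did; the only modelling left is the vote-choice probabilities.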

Sounds like a perfectly plausible scheme. And it bombed. For instance, their final projection for Florida was 4.9 million (actually, 4,225,249) for Clinton and 4.6 million for Trump, a lead of about 3% for Clinton. The real numbers were 4.5 million and 4.6 million, a lead of 1.3% for Trump. (The difference in the total seems to be mainly due to votes for other candidates, though the total number of Florida votes in VoteCastr is about 100,000 more than in the official tally, which I find suspicious.) They projected a big victory for Clinton in Wisconsin.

The thing is, this removes the uncertainty related to explanation 1: they know exactly who came to vote, and the voters are matched by age, sex, and party registration. Conclusion: estimating turnout is not the main problem that undermined this year’s presidential election polls.

Waiting for Armageddon

The US presidential election is now just 4 days away. It seems to me that people are not taking the danger seriously. In particular, last week, in the hour after the FBI started its intervention to bring on the apocalypse (more work for them, I suppose), it was reported that the S&P 500 stock index fell by about 1%. If we suppose that the FBI’s announcement made a Trump victory 2 percentage points more likely, that suggests an expectation that Trump would wipe 50% off the value of the stock market. Yet that doesn’t seem to be incorporated into the current value of the markets. It’s almost as though investors are in denial: when forced to focus on the Trump danger they rate it apocalyptic, but as soon as the news quiets down — within minutes — they go back to treating the probability as 0.
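The implicit arithmetic, treating the index as an expected value over the two outcomes (a back-of-the-envelope assumption about market pricing, nothing more), with V the market value, p the probability of a Trump victory, and V_Trump, V_Clinton the values under the two outcomes:

$$V = p\,V_{\text{Trump}} + (1-p)\,V_{\text{Clinton}}
\;\Longrightarrow\;
\frac{V_{\text{Trump}} - V_{\text{Clinton}}}{V}
\approx \frac{\Delta V / V}{\Delta p}
\approx \frac{-0.01}{0.02} = -50\%.$$

A 1% fall in prices, attributed to a 2-point rise in the probability, implies an expected loss of about half the market’s value conditional on a Trump win.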

I’d like to believe Sam Wang’s projections, according to which the election result is all but certain for Clinton. But nothing is all but certain, least of all the future. Nate Silver’s reasoning, leading to about a 2/3 chance for Clinton, seems to me very sound: Trump will win only if he wins about half a dozen states in each of which he has about a 50% chance. That sounds like about a 2% chance, except that the states are unlikely to be independent. The reality is likely to be somewhere between 2% and 50%. Where in that range it lies is almost impossible to judge. I’m slightly more hopeful than that because I believe in the power of Clinton’s organisation. But how much more?
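To see how much the correlation matters, here is a minimal simulation sketch: six states, each marginally 50/50, tied together by a single shared “national swing” factor. The correlation values are invented for illustration; this is not Silver’s actual model.

    # Hypothetical sketch: how the chance of sweeping six 50/50 states
    # depends on how correlated the state outcomes are. One-factor
    # (Gaussian copula) model with invented correlation values.
    import math
    import random

    def prob_sweep(rho, n_states=6, n_sims=100_000, seed=1):
        """Estimate P(win all n_states), each marginally 50/50,
        with latent correlation rho between states."""
        rng = random.Random(seed)
        wins = 0
        for _ in range(n_sims):
            national = rng.gauss(0, 1)    # shared national swing
            swept = all(
                math.sqrt(rho) * national
                + math.sqrt(1 - rho) * rng.gauss(0, 1) > 0   # state-level noise
                for _ in range(n_states)
            )
            wins += swept
        return wins / n_sims

    for rho in (0.0, 0.5, 0.9, 1.0):
        print(f"state correlation {rho:.1f}: P(sweep all six) ~ {prob_sweep(rho):.3f}")

With fully independent states the chance is about 1.6% (one half to the sixth power); with perfectly correlated states it is 50%; intermediate correlations fill in the range in between.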

But even 2%, for the risk of a crybaby fascist as president, is far too much. It’s not clear to me how the US can come back from this disaster, even if Trump loses.

Pollster infallibility

I was reading this article by John Cassidy of the New Yorker about the current state of the US presidential election campaign, according to the polls, and was surprised by this sentence:

Of course, polls aren’t infallible—we relearned that lesson in the recent Brexit referendum.

It hardly requires any major evidence to argue against the straw man that the polls are infallible, but I didn’t recall any notable poll failure related to Brexit; on the contrary, I was following this pretty closely, and it seemed that political commentators were desperately trying to discount the polls in the weeks leading up to the referendum, arguing that the public would ultimately break for the status quo, no matter what they were telling the pollsters. I looked it up on Wikipedia:

[Chart: UK referendum polls (from Wikipedia)]

So it looks like the consensus of the polls was that Leave and Remain were about equal, with a short-term trend toward Leave and about 10% still undecided. Hardly a compelling demonstration that the polls are fallible, when Leave won by a few percent…
