The unbearable heaviness of buildings: Another episode in the series “Useless units”

Apparently, Manhattan is sinking by 1-2mm per year, due to the weight of its skyscrapers. The Guardian reports on the research led by Tom Parsons, of the US Geological Survey, saying that New York City’s buildings “weigh a total of 1.68tn lbs”.

What’s that, you say? You don’t have any intuition for how much 1.68 tn lbs is? The Guardian feels you. They’ve helpfully translated it into easy-to-grasp terms. This, they go on to say, “is roughly equivalent to the weight of 140 million elephants”.
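
A quick sanity check on the Guardian’s chosen unit (my own arithmetic, not theirs):

```python
# What does "140 million elephants" imply about the weight of one elephant?
total_lbs = 1.68e12     # reported total weight of New York City's buildings
elephants = 140e6       # the Guardian's unit of choice
lbs_per_kg = 2.20462

per_elephant_lbs = total_lbs / elephants
per_elephant_tonnes = per_elephant_lbs / lbs_per_kg / 1000
print(f"{per_elephant_lbs:,.0f} lbs ≈ {per_elephant_tonnes:.1f} tonnes per elephant")
# 12,000 lbs ≈ 5.4 tonnes -- plausibly a large adult elephant
```

So the implied elephant weighs about 5.4 tonnes, which is at least in the right range for a large adult, for whatever that’s worth.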

More metric-imperial conversion hijinks

A while back I noted how an article on Ebola in the NY Times had apparently translated “one millilitre of blood” in a medical context into “one-fifth of a teaspoon of blood”. Hilarity ensued. Now I see that the fun doesn’t go in only one direction. I just got a letter from the NHS about an upcoming appointment, including these instructions:

Do not come to your appointment if you or anyone living with you has the symptoms of a new continuous cough (in the last week) or a temperature above 37.8 degrees or loss or change to your sense of smell or taste.

37.8 degrees? Why exactly this number? It sounds both arbitrary and absurdly precise. A bit of reflection revealed that 37.8 degrees Celsius is precisely 100 degrees Fahrenheit. They obviously copied some American guidelines, and instead of rounding appropriately — or reconsidering the chosen level — they just calculated the corresponding Celsius temperature. The funny thing is, Americans are used to having the very non-round guideline of 98.6 degrees as the supposed “normal” body temperature, because someone* in the 19th Century decided 37 degrees Celsius was roughly the right number, and that magic number got translated precisely into Fahrenheit.

* Carl Reinhold August Wunderlich, actually.
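
The round trip is easy to check (a trivial sketch; there is nothing official about these particular thresholds):

```python
def f_to_c(f):
    """Convert degrees Fahrenheit to Celsius."""
    return (f - 32) * 5 / 9

def c_to_f(c):
    """Convert degrees Celsius to Fahrenheit."""
    return c * 9 / 5 + 32

print(round(f_to_c(100), 1))   # 37.8 -- the NHS's oddly precise fever threshold
print(round(c_to_f(37), 1))    # 98.6 -- Wunderlich's 37 C, translated "precisely"
```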

The end of the Turing test

The Turing test has always had a peculiar place in the philosophy of mind. Turing’s brilliant insight was that we should be able to replace the apparently impossible task of developing a consensus definition of the words ‘machine’ and ‘think’, with a possibly simpler procedural definition: Can a machine succeed at the “Imitation game”, whose goal is to convince a neutral examiner that it (and not its human opponent) is the real human? Or, to frame it more directly — and this is how it tends to be interpreted — can a computer carry on a natural language conversation without being unmasked by an interrogator who is primed to recognise that it might not be human?

Turing’s argument was that, while it certainly is possible to be intelligent without passing the test — even humans may be intelligent while being largely or entirely nonverbal — we should be able to agree on some observable activities, short of being literally human in all ways, that would certainly suffice to persuade us that the attribution of human-like intelligence is warranted. The range of skills required to carry on a wide-ranging conversation makes that ability a plausible stand-in for what is now referred to as general intelligence. (The alert interrogator is a crucial part of this claim, as humans are famously gullible about seeing human characteristics reflected in simple chat bots, forces of nature, or even the moon.)

If we won’t accept any observable criteria for intelligence, Turing points out, then it is hard to see how we can justify attributing intelligence even to other humans. He takes on, in his essay, the argument (which he attributes to a Professor Jefferson) that no performance of tasks can suffice to show that a machine is intelligent. Machine intelligence, Jefferson argued, is impossible because

No mechanism could feel (and not merely artificially signal, an easy contrivance) pleasure at its successes, grief when its valves fuse, be warmed by flattery, be made miserable by its mistakes, be charmed by sex, be angry or depressed when it cannot get what it wants.

Turing retorts that this leads to the solipsistic view that

the only way by which one could be sure that a machine thinks is to be the machine and to feel oneself thinking. One could then describe these feelings to the world, but of course no one would be justified in taking any notice. Likewise according to this view the only way to know that a man thinks is to be that particular man.

In principle everyone could doubt the content of everyone else’s consciousness, but “instead of arguing continually over this point it is usual to have the polite convention that everyone thinks.” Turing then goes on to present an imagined dialogue that has since become a classic, in which the computer riffs on Shakespeare sonnets, Dickens, the seasons, and Christmas. The visceral impact of the computer’s free-flowing expression of sentiment and understanding, Turing then suggests, is such that “I think that most of those who support the argument from consciousness could be persuaded to abandon it rather than be forced into the solipsist position.” He compares it, charmingly, to a university oral exam, by which it is established that a student has genuinely understood the material, rather than being able simply to reproduce rote phrases mechanically.

I used to accept this argument, but reflecting on Chat-GPT has forced me to reconsider. This is a predictive text generation tool recently made available that can produce competent texts based on arbitrary prompts. It’s not quite ready to pass the Turing test*, but it’s easy to see how a successor program — maybe GPT-4, the version that is expected to be made available to the public next year — might. And it’s also clear that nothing like this software could be considered intelligent.

Thinking about why not helps to reveal flaws in Turing’s reasoning that were covered by his clever rhetoric. Turing specifically argues against judging the machine by its “disabilities”, or its lack of limbs, or its electronic rather than biological nervous system. This sounds very open-minded, but the inclination to assign mental states to fellow humans rather than to computers is not irrational. We know that other humans have similar mental architecture to our own, and so are not likely to be solving problems of intellectual performance in fundamentally different ways. Modern psychology and neurobiology have, in fact, shown this intuition to be occasionally untrue: apparently intelligent behaviours can be purely mechanical, and this is particularly true of calculation and language.

In this respect, GPT-3 may be seen as performing a kind of high-level glossolalia, or like receptive aphasia, where someone produces long strings of grammatical words, but devoid of meaning. Human brain architecture links the production of grammatical speech to representations of meaning, but these are still surprisingly independent mechanisms. Simple word associations can produce long sentences with little or no content. GPT-3 has much more complex associational mechanisms, but only the meanings that are implicit in verbal correlations. It turns out to be true that you can get very far — probably all the way to a convincing intellectual conversation — without any representation of the truth or significance of the propositions being formed.

It’s a bit like the obvious cheat that Turing referred to, “the inclusion in the machine of a record of someone reading a sonnet, with appropriate switching to turn it on from time to time”, but on a level and complexity that he could not imagine.

Chat-GPT does pass one test of human-like behaviour, though. It’s been programmed to refuse to answer certain kinds of questions. I heard a discussion where it was mentioned that it refused to give specific advice about travel destinations, responding with something like “I’m not a search engine. Try Google.” But when the query was changed to “Write a script in which the two characters are a travel agent and a customer, who comes with the following query…” it returned exactly the response that was being sought, with very precise information.

It reminds me of the Kasparov vs Deep Blue match in 1997, when a computer first defeated a world chess champion. The headlines were full of “human intelligence dethroned”, and so on. I commented at the time that it just showed that human understanding of chess had advanced to a point that we could mechanise it, and that I would consider a computer intelligent only when we have a program that is supposed to be doing accounting spreadsheets but instead insists on playing chess.


Suicides at universities, and elsewhere

The Guardian is reporting on the inquest results concerning the death by suicide of a physics student at Exeter University in 2021. Some details sound deeply disturbing, particularly the account of his family contacting the university “wellbeing team” to tell them about his problematic mental state, after poor exam results a few months earlier (about which he had also written to his personal tutor), but

that a welfare consultant pressed the “wrong” button on the computer system and accidentally closed the case. “I’d never phoned up before,” said Alice Armstrong Evans. “I thought they would take more notice. It never crossed my mind someone would lose the information.” She rang back about a week later but again the case was apparently accidentally closed.

Clearly this university has structural problems with the way it cares for student mental health. I’m inclined, though, to focus on the statistics, and the way they are used in the reporting to point at a broader story. At Exeter, we are told, there have been (according to the deceased student’s mother) 11 suicides in the past 6 years. The university responds that “not all of the 11 deaths have been confirmed as suicides by a coroner,” and the head of physics and astronomy said “staff had tried to help Armstrong Evans and that he did not believe more suicides happened at Exeter than at other universities.”

This all sounds very defensive. But the article just leaves these statements there as duelling opinions, whereas some of the university’s claims are assertions of fact, which the journalists could have checked objectively. In particular, what about the claim that no more suicides happen at Exeter than at other universities?

While suicide rates for specific universities are not easily accessible, we do have national suicide rates broken down by age and gender (separately). Nationally, we see from ONS statistics that suicide rates have been roughly constant over the past 20 years, and that there were 11 suicides per 100,000 population in Britain in 2021. That is, 16/100,000 among men and 5.5/100,000 among women. In the relevant 20-24 age group the rate was also 11. Averaged over the previous 6 years the suicide rate in this age group was 9.9/100,000; if the gender ratio was the same, then we get 14.4/100,000 men and 5.0/100,000 women.

According to the Higher Education Statistics Agency, the total number of person-years of students between the 2015/2016 and 2020/2021 academic years was 81,795 female, 69,080 male, and 210 other. This yields a prediction of around 14.5 deaths by suicide in a comparable age group over a comparable time period. Thus, if the figure of 11 in six years is correct, that is still fewer deaths by suicide at the University of Exeter than in a comparable random sample of the rest of the population.
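
Here is a rough reconstruction of that arithmetic (using the rounded rates quoted above, which is why it lands near, rather than exactly on, 14.5):

```python
# Crude expected-suicides calculation for a population matched to Exeter's
# student body, using the rounded national rates quoted above.
person_years = {"female": 81_795, "male": 69_080, "other": 210}

# Age 20-24 rates per 100,000 per year, averaged over the previous six years,
# split by gender as in the text; "other" is assigned the overall rate.
rate_per_100k = {"female": 5.0, "male": 14.4, "other": 9.9}

expected = sum(person_years[g] * rate_per_100k[g] / 100_000 for g in person_years)
print(f"Expected deaths by suicide: {expected:.1f}")   # about 14, versus the 11 reported at Exeter
```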

It’s not that this young man’s family should be content that this is just one of those things that happens. There was a system in place that should have protected him, and it failed. Students are under a lot of stress, and need support. But non-students are also under a lot of stress, and also need support. It’s not that the students are being pampered. They definitely should have access, within the institution, to well-trained and sympathetic personnel they can turn to in a crisis. But where are the “personal tutors” for the 20-year-olds who aren’t studying, but who are struggling with their jobs, or their families, or just the daily grind of living? And what about the people in their 40s and 50s, whose suicide rates are 50% higher than those of younger people?

Again, it would be a standard conservative response to say, We don’t get that support, so no one should get it. Suck it up! A more compassionate response is to say, students obviously benefit from this support, so let’s make sure it’s delivered as effectively as possible. And then let’s think about how to ensure that everyone who needs it gets helped through their crises.

Is Covid hacking people’s brains?

The single-celled parasite Toxoplasma gondii is known to structurally change the brains of infected mice to cause them to lose their fear of cats. This transformation aids the fitness of the pathogen, and is essential for it to complete its life cycle, because it can reproduce sexually only in cat guts. The fungus Ophiocordyceps unilateralis infects carpenter ants, and then

it grows through the insect’s body, draining it of nutrients and hijacking its mind. Over the course of a week, it compels the ant to leave the safety of its nest and ascend a nearby plant stem. It stops the ant at a height of 25 centimeters—a zone with precisely the right temperature and humidity for the fungus to grow. It forces the ant to permanently lock its mandibles around a leaf. Eventually, it sends a long stalk through the ant’s head, growing into a bulbous capsule full of spores. And because the ant typically climbs a leaf that overhangs its colony’s foraging trails, the fungal spores rain down onto its sisters below, zombifying them in turn.

The rabies virus is well known to induce aggression in its hosts, leading them to bite others and so transmit the virus in its saliva.

Is any of this relevant to humans? Toxoplasma infection is found in around 30% of UK residents — acquired from contact with pet cats — and there is evidence that it may contribute to schizophrenia. There is strong evidence that prenatal maternal infection raises the risk of the child going on to develop schizophrenia. But this is presumably just a byproduct of the essential neuropathogenicity that promoted the pathogen’s fitness in mice.

I was thinking of this, though, when I saw this new study:

Executive dysfunction following SARS-CoV-2 infection: A cross-sectional examination in a population-representative sample

People who had previously suffered a Covid infection “reported a significantly higher number of symptoms of executive dysfunction than their non-infected counterparts”. Executive dysfunction, according to Wikipedia, is “a disruption to the efficacy of the executive functions, which is a group of cognitive processes that regulate, control, and manage other cognitive processes… Executive processes are integral to higher brain function, particularly in the areas of goal formation, planning, goal-directed action, self-monitoring, attention, response inhibition, and coordination of complex cognition.”

Perhaps coincidentally, we have seen, since the start of the pandemic, an upsurge of seemingly inexplicable emotionally overwrought rejection of measures that might prevent the individual from spreading the virus, or from catching it again oneself, especially masking and vaccination. Could it be that this is itself a neurological sequela of a Covid infection, that manipulates the sufferer’s brain, like the carpenter ant’s, to maximise the spread to conspecifics? Or that, like a hacker “backdooring” a compromised system, the virus has evolved to make its host pliable to future infection, once the immune response has waned?

I’m just asking questions.

Thinking about exponential growth and “Freedom Day”

The UK government is holding fast to its plan to drop all pandemic restrictions as of 19 July, even in the face of rapidly increasing infection rates, hospitalisation rates, and Covid deaths — all up by 25-40% over last week. And numerous medical experts are being quoted in the press supporting the decision. What’s going on?

To begin with, Johnson has boxed himself in. He promised “Freedom Day” to coincide with the summer solstice, and then was forced to climb down, just as he was from his initial “infect everyone, God will recognise his own” plan last March, on realising that his policies would yield an unsustainable level of disruption. The prime minister has, by now, no reputation for consistency or decisiveness left to protect, but even so he probably feels at the very least that a further delay would undermine his self-image as the nation’s Fun Dad. At the same time, the new opening has been hollowed out, transformed from the original “Go back to living your lives like in pre-pandemic days” message to “Resume taxable leisure activities, with the onus on individuals and private businesses to enforce infection-minimisation procedures.” Thus we have, just today, the Transport Secretary announcing that he expected rail and bus companies to insist on masking, even while the government was removing the legal requirement.

But what are they hoping to accomplish, other than a slight reduction in the budget deficit? The only formal justification offered is that of Health Secretary Sajid Javid, who said on Monday

that infection rates were likely to get worse before they got better – potentially hitting 100,000 a day – but said the vaccination programme had severely weakened the link between infections, hospitalisations and deaths.
Javid acknowledged the risks of reopening further, but said his message to those calling for delay was: “if not now, then when?”.

“Weakened the link” is an odd way of putting a situation where cases, hospitalisations, and Covid deaths are all growing exponentially at the same rate. What has changed is the gearing; the chain and all of its links are as strong as ever. In light of that exponential growth, what should we make of Javid’s awkward channeling of Hillel the Elder?

I’ll talk about “masking” as synecdoche for all measures to reduce the likelihood of a person being infected or transmitting Covid. We need to consider separately the questions of when masking makes sense from an individual perspective, and from a public perspective. The individual perspective is straightforward:

On the societal level it’s more complicated, but I do find the argument of England’s Chief Medical Officer Chris Whitty… baffling:

“The slower we take it, the fewer people will have Covid, the smaller the peak will be, and the smaller the number of people who go into hospital and die,” he said.
By moving slowly, he said modelling suggested the pressure on the NHS would not be “unsustainable”.
Prof Whitty said there was less agreement on the “ideal date” to lift restrictions as there is “no such thing as an ideal date”.
However, he said a further delay would mean opening up when schools return in autumn, or in winter, when the virus has an advantage and hospitals are under more pressure.

We may argue about how much effect government regulations have on the rate of the virus spreading, but I have never before heard anyone argue that the rate of change of government regulation is relevant. Of course, too rapid gyrations in public policy may confuse or anger the public. But how the rapidity of changing the rules relates to the size of the peak seems exceptionally obscure. To the extent that the regulations have any effect at all, that effect should be seen directly in R0, and so in the weekly growth or contraction of Covid cases. If masking can push down the growth rate, its effect on the final infection rate is essentially the same whenever it is applied, but masking early gives fewer total cases.

To see this, consider a very simple model: without masking, cases grow by 25% per week; with masking, they shrink by 20% per week (so a masked week exactly undoes an unmasked one, since 1.25 × 0.8 = 1). If we have 1000 cases/day now, then after some weeks of masking and the same number of weeks without masking, we’ll be back to 1000 cases/day at the end. But the total number of cases will be very different. Suppose there are 10 weeks of each policy, and consider four possibilities: masking first, unmasking first, alternating (MUMU…), and alternating (UMUM…). The total number of cases will be roughly as follows (a short sketch of the calculation follows the table):

Strategy              Total cases
masking first              57 000
unmasking first           513 000
alternating (MU…)         127 000
alternating (UM…)         154 000
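
Here is a minimal sketch of that toy model, compounding daily, which is why the totals land close to, but not exactly on, the rounded figures in the table:

```python
# Toy model: 1000 cases/day at the start. An unmasked week grows cases by 25%,
# a masked week shrinks them by 20%, so a masked week exactly undoes an
# unmasked one (1.25 * 0.8 = 1).
GROW, SHRINK = 1.25, 0.80

def total_cases(policy, start=1000.0):
    """Sum daily cases over a sequence of weeks; 'M' = masked, 'U' = unmasked."""
    daily, total = start, 0.0
    for week in policy:
        factor = (SHRINK if week == "M" else GROW) ** (1 / 7)  # daily compounding
        for _ in range(7):
            daily *= factor
            total += daily
    return total

policies = {
    "masking first":     "M" * 10 + "U" * 10,
    "unmasking first":   "U" * 10 + "M" * 10,
    "alternating (MU…)": "MU" * 10,
    "alternating (UM…)": "UM" * 10,
}
for name, policy in policies.items():
    print(f"{name:20s} {total_cases(policy):10,.0f}")
```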

Of course, the growth rate will not remain constant. The longer we delay, the more people are immune. In the last week close to 2 million vaccine doses have been administered in the UK, so a four-week delay means about 4 million extra people who are effectively immune. If we mask first, the period of higher growth comes later, when more people are immune, so the growth will in fact be slower and more of the cases will be mild.

The only thing I can suppose is that someone did an economic cost-benefit analysis, and decided that the value of increased economic activity was greater than the cost of lives lost and destroyed. Better to let the younger people — who have patiently waited their turn to be vaccinated — be infected, and obtain their immunity that way, than to incur the costs of another slight delay while waiting for them to have their shot at the vaccine.

The young were always at the lowest risk in this pandemic. They were asked to make a huge sacrifice to protect the elderly. Now that the older people have been protected, there is no willingness to sacrifice even another month to protect the lives and health of the young.

Update on Delta in Germany

The Robert Koch Institute produces estimates of variants of concern on Wednesdays. My projection from two weeks ago turns out to have been somewhat too optimistic. At that point I remarked that there seemed to be about 120 Delta cases per day, and that that number hadn’t been increasing: the dominance of Delta was coming from the reduction of other cases.

This no longer seems to be true. According to the latest RKI report, the past week has seen only a slight reduction in total cases, compared to two weeks ago, to about 604/day. And the proportion of Delta continues to double weekly, so that we’re now at 59%, meaning almost 360 Delta cases/day. The number of Delta cases has thus tripled in two weeks, while the number of other cases has shrunk by a similar factor. The result is a current estimated R0 close to 1, but a very worrying prognosis. We can expect that in another two weeks, if nothing changes, we’ll have 90% Delta, around 1100 cases in total, and R0 around 1.6.

Of course, vaccination is already changing the situation. How much? By the same crude estimate I used last time (counting single vaccination as 50% immune, and looking back 3 weeks, to account for the 4 days back to the middle of the reporting period, 10 days from vaccination to immunity, and another 7 days on average for infections to turn into cases), the above numbers apply to a 40% immune population. Based on vaccinations to date, the population in 2 weeks will be 46% immune, reducing the R0 for Delta to around 1.5. In order to push it below 1.0 we would need to immunise 1/3 of the remaining population, so we need at least 64% fully immunised. At the current (slowed) rate of vaccination, if it doesn’t decelerate further, that will take until around the middle of September, by which point we’ll be back up to around 10,000 cases/day.
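
A sketch of that back-of-the-envelope arithmetic, under the same crude assumptions (the current R of about 1.6 for Delta applies to a 40% immune population, and immunity scales transmission linearly):

```python
# Implied R for Delta at various immunity levels.
R_now, immune_now = 1.6, 0.40        # estimated R in a 40%-immune population
R0_delta = R_now / (1 - immune_now)  # implied R in a fully susceptible population, ~2.7

for immune in (0.46, 0.64):
    print(f"{immune:.0%} immune -> R = {R0_delta * (1 - immune):.2f}")
# 46% immune -> R = 1.44   (the ~1.5 expected in two weeks)
# 64% immune -> R = 0.96   (just below 1, hence the ~64% target)
```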

Delta may not mean change

Germany is in a confusing place with its pandemic developments. Covid cases have been falling as rapidly here as they have been rising in the UK: more than a 50% reduction in the past week, dropping new cases to 6.6 per 100,000 averaged over the week, and under 1000 cases per day for the first time since the middle of last summer. At the same time, the Delta variant is rapidly taking over. Last week the Robert Koch Institute reported 8%, this week it’s 15%. Virologist Christian Drosten, speaking on the NDR Coronavirus podcast this week (before the new Delta numbers were available), referred to the 80% level in England that, he said, marked the watershed between falling and rising case numbers.

I think this is the wrong back-of-the-envelope calculation, because it depends on the overall expansion rate of the virus, and on the difference between Delta and Alpha, which is likely particularly large in the UK because of the large number of people who have received just one AstraZeneca dose, which seems to be particularly ineffective against Delta. There’s another simple calculation that we can do specifically for the German situation: in the past week there have been about 810 cases per day, of which 15.1% were Delta, so about 122 Delta cases per day. The previous week there were about 1557 cases per day, of which 7.9% were Delta, so also about 123 Delta cases per day. That suggests that under current conditions (including weather, population immunity, and social distancing) Delta is not expanding. This may mean that current conditions are adequate to keep Delta in check, while Alpha and other variants are falling by more than 50% per week.
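
The arithmetic is trivial, but for the record (numbers as quoted from the RKI reports):

```python
# Delta cases per day in Germany, from total cases and the reported Delta share.
weeks = {"previous week": (1557, 0.079), "past week": (810, 0.151)}
for name, (cases_per_day, delta_share) in weeks.items():
    print(f"{name}: about {cases_per_day * delta_share:.0f} Delta cases/day")
# previous week: about 123 Delta cases/day
# past week: about 122 Delta cases/day, i.e. roughly constant
```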

This suggests a very optimistic picture: that total case numbers will continue to fall. Within a few weeks Delta will be completely dominant, but the number of cases may not be much more than around 100 per day. And that ignores the increasing immunity: the infections reported this week occurred in the previous week, and reflect immunity from vaccinations two weeks before that. With about 1% of the population being vaccinated every day, we should — relative to the approximately 70% non-immune population* of 20 days ago — already see about 15% reduced transmission by the first week in July. At current vaccination rates, that reduction will reach about 30% by the end of July, providing some headroom for further relaxation of restrictions without an explosion of Delta cases.
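
A sketch of that estimate, under the stated crude assumptions (each dose counts as 50% immunity, about 1% of the population receives a dose each day, and roughly 70% were non-immune 20 days ago); the 41-day horizon for the end of July is my own reading of the timeline:

```python
# Reduction in transmission from ongoing vaccination, relative to the
# ~70% non-immune population of 20 days ago.
dose_rate = 0.01              # share of population receiving a dose each day
immunity_per_dose = 0.5       # crude: each dose counts as half an immunisation
susceptible_baseline = 0.70   # non-immune share roughly 20 days ago

for label, days in (("first week of July (~20 days)", 20),
                    ("end of July (~41 days)", 41)):
    newly_immune = days * dose_rate * immunity_per_dose
    print(f"{label}: transmission reduced by about {newly_immune / susceptible_baseline:.0%}")
# about 14% and about 29%, i.e. the "about 15%" and "about 30%" above
```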

That does raise the question, though, of why the general Covid transmission rate in Germany seems to be lower than in the UK. I don’t see any obvious difference in the level of social-distancing restrictions. Is it just the difference between single-dose AZ versus Biontech? If so, we should see a rapid turnaround in the UK soon.

* I’m very roughly counting each dose as 50% immunity.

Absence of caution: The European vaccine suspension fiasco

Multiple European countries have now suspended use of the Oxford/AstraZeneca vaccine, because of scattered reports of rare clotting disorders following vaccination. In all the talk of “precautionary” approaches, the urgency of the situation seems suddenly to be ignored. Every vaccine triggers serious side effects in some small number of individuals, occasionally fatal ones, and we recognise that in special systems for compensating the victims. It seems worth considering, when looking at the possibility of several-in-a-million complications, how many lives may be lost because of delayed vaccinations.

I start with the age-specific case fatality rates (CFR) from this meta-analysis, and multiply them by the current overall weekly case rate, which is 1.78 cases per thousand population in the EU (according to data from the ECDC). This ignores differences between countries, and differences between age groups in infection rate, and it certainly underestimates the infection rate, for obvious reasons of selective testing.

Age group                                            0-34   35-44   45-54   55-64   65-74   75-84    85+
CFR (per thousand)                                   0.04    0.68     2.3     7.5      25      85    283
Expected fatalities per week per million population  0.07     1.2     4.1      13      45     151    504
Number of days delay to match VFR                    1200      70      20     6.4     1.8     0.6    0.2

Let’s assume now that all of the blood clotting problems that have occurred in the EEA — 30 in total, according to this report — among the 5 million receiving the AZ vaccine were actually caused by the vaccine, and suppose (incorrectly) that all of those people had died.* That would produce a vaccine fatality rate (VFR) of 6 per million. We can double that to account for possible additional unreported cases, or other kinds of complications that have not yet been recognised. We can then calculate how many days of delay would cause as many extra deaths as the vaccine itself might cause.
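
A sketch reproducing the table’s arithmetic from those inputs (the weekly case rate of 1.78 per thousand and the doubled VFR of 12 per million); the outputs land close to the rounded figures above:

```python
# How many days of delayed vaccination cause as many expected Covid deaths
# as a (pessimistically assumed) vaccine fatality rate of 12 per million?
weekly_cases_per_million = 1.78e-3 * 1e6   # 1.78 cases per thousand per week
vfr_per_million = 12                       # 6 per million observed, doubled

cfr_per_thousand = {   # age-specific CFR, per thousand cases
    "0-34": 0.04, "35-44": 0.68, "45-54": 2.3, "55-64": 7.5,
    "65-74": 25, "75-84": 85, "85+": 283,
}

for age, cfr in cfr_per_thousand.items():
    deaths_per_week = weekly_cases_per_million * cfr / 1000   # per million population
    days_to_match = vfr_per_million / (deaths_per_week / 7)
    print(f"{age:6s} {deaths_per_week:7.2f} deaths/week/million   "
          f"{days_to_match:7.1f} days of delay to match VFR")
```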

The result is fairly clear: even the most extreme concerns raised about the AZ vaccine could not justify even a one-week delay in vaccination, at least among the population 55 years old and over. (I am also ignoring here the compounding effect of onward transmission prevented by vaccination, which makes the delay even more costly.) As is so often the case, “abundance of caution” turns out to be the opposite of cautious.

* I’m using only European data here, to account for the contention that there may be a specific problem with European production of the vaccine. The UK has used much more of the AZ vaccine, with even fewer problems.

The return of quota sampling

Everyone knows about the famous Dewey Defeats Truman headline fiasco, and that the Chicago Daily Tribune was inspired to its premature announcement by erroneous pre-election polls. But why were the polls so wrong?

The Social Science Research Council set up a committee to investigate the polling failure. Their report, published in 1949, listed a number of faults, including disparaging the very notion of trying to predict the outcome of a close election. But one important methodological criticism — and the one that significantly influenced the later development of political polling, and became the primary lesson in statistics textbooks — was the critique of quota sampling. (An accessible summary of lessons from the 1948 polling fiasco by the renowned psychologist Rensis Likert was published just a month after the election in Scientific American.)

Serious polling at the time was divided between two general methodologies: random sampling and quota sampling. Random sampling, as the name implies, works by attempting to select from the population of potential voters entirely at random, with each voter equally likely to be selected. This was still considered too theoretically novel to be widely used, whereas quota sampling had been established by Gallup since the mid-1930s. In quota sampling the voting population is modelled by demographic characteristics, based on census data, and each interviewer is assigned a quota to fill of respondents in each category: 51 women and 49 men, say, a certain number in the age range 21-34, or specific numbers in each “economic class” — of which Roper, for example, had five, one of which in the 1940s was “Negro”. The interviewers were allowed great latitude in filling their quotas, finding people at home or on the street.

In a sense, we have returned to quota sampling, in the more sophisticated version of “weighted probability sampling”. Since hardly anyone responds to a survey — response rates are typically no more than about 5% — there’s no way the people who do respond can be representative of the whole population. So pollsters model the population — or the supposed voting population — and reweight the responses they do get proportionately, according to demographic characteristics. If Black women over age 50 are thought to be as common in the voting population as white men under age 30, but we have twice as many of the former as the latter among our respondents, we count each response from the latter twice as much as one from the former in the final estimates. It’s just a way of making a quota sample after the fact, without the stress of specifically looking for representatives of particular demographic groups.
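
A minimal sketch of that reweighting, with made-up numbers purely to illustrate the mechanics:

```python
# Post-hoc quota-making: reweight respondents so each demographic cell matches
# its assumed share of the electorate. All numbers here are invented.
assumed_share = {"Black women over 50": 0.10, "white men under 30": 0.10}
respondents   = {"Black women over 50": 200,  "white men under 30": 100}

total = sum(respondents.values())
weights = {g: assumed_share[g] / (respondents[g] / total) for g in respondents}

for group, w in weights.items():
    print(f"{group}: weight = {w:.2f}")
# Black women over 50: weight = 0.15
# white men under 30: weight = 0.30  -- the under-represented group counts twice as much
```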

Consequently, it has most of the deficiencies of a quota sample. The difficulty of modelling the electorate is one that has gotten quite a bit of attention in the modern context: We know fairly precisely how demographic groups are distributed in the population, but we can only theorise about how they will be distributed among voters at the next election. At the same time, it is straightforward to construct these theories, to describe them, and to test them after the fact. The more serious problem — and the one that was emphasised in the commission report in 1948, but has been less emphasised recently — is in the nature of how the quotas are filled. The reason for probability sampling is that taking whichever respondents are easiest to get — a “sample of convenience” — is sure to give you a biased sample. If you sample people from telephone directories in 1936 then it’s easy to see how they end up biased against the favoured candidate of the poor. If you take a sample of convenience within a small demographic group, such as middle-income people, then it won’t be easy to recognise how the sample is biased, but it may still be biased.

For whatever reason, in the 1930s and 1940s, within each demographic group the Republicans were easier for the interviewers to contact than the Democrats. Maybe they were just culturally more like the interviewers, so easier for them to walk up to on the street. And it may very well be that within each demographic group today Democrats are more likely to respond to a poll than Republicans. And if there is such an effect, it’s hard to correct for it, except by simply discounting Democrats by a certain factor based on past experience. (In fact, these effects can be measured in polling fluctuations, where events in the news lead one side or the other to feel discouraged, and to be less likely to respond to the polls. Studies have suggested that this effect explains much of the short-term fluctuation in election polls during a campaign.)

Interestingly, one of the problems that the commission found with the 1948 polling, and one with obvious relevance for the Trump era, was the failure to consider education as a significant demographic variable.

All of the major polling organizations interviewed more people with college education than the actual proportion in the adult population over 21 and too few people with grade school education only.