Thinking about exponential growth and “Freedom Day”

The UK government is holding fast to its plan to drop all pandemic restrictions as of 19 July, even in the face of rapidly increasing infection rates, hospitalisation rates, and Covid deaths — all up by 25-40% over last week. And numerous medical experts are being quoted in the press supporting the decision. What’s going on?

To begin with, Johnson has boxed himself in. He promised “Freedom Day” to coincide with the summer solstice, and then was forced to climb down, just as he was from his initial “infect everyone, God will recognise his own” plan last March, on realising that his policies would yield an unsustainable level of disruption. The prime minister has, by now, no reputation for consistency or decisiveness left to protect, but even so he probably feels at the very least that a further delay would undermine his self-image as the nation’s Fun Dad. At the same time, the new opening has been hollowed out, transformed from the original “Go back to living your lives like in pre-pandemic days” message to “Resume taxable leisure activities, with the onus on individuals and private businesses to enforce infection-minimisation procedures.” Thus we have, just today, the Transport Secretary announcing that he expected rail and bus companies to insist on masking, even while the government was removing the legal requirement.

But what are they hoping to accomplish, other than a slight reduction in the budget deficit? The only formal justification offered is that of Health Secretary Sajid Javid, who said on Monday

that infection rates were likely to get worse before they got better – potentially hitting 100,000 a day – but said the vaccination programme had severely weakened the link between infections, hospitalisations and deaths.
Javid acknowledged the risks of reopening further, but said his message to those calling for delay was: “if not now, then when?”.

“Weakened the link” is an odd way of describing a situation where cases, hospitalisations, and Covid deaths are all growing exponentially at the same rate. What has changed is the gearing; the chain and all of its links are as strong as ever. In light of that exponential growth, what should we make of Javid’s awkward channeling of Hillel the Elder?

I’ll talk about “masking” as synecdoche for all measures to reduce the likelihood of a person being infected or transmitting Covid. We need to consider separately the questions of when masking makes sense from an individual perspective, and from a public perspective. The individual perspective is straightforward: the cost of masking is small and roughly constant, while the benefit scales with how much virus is circulating at the moment, so it makes sense to keep masking as long as infection rates are high, whatever the legal requirement may be.

On the societal level it’s more complicated, but I do find the argument of England’s Chief Medical Officer Chris Whitty… baffling:

“The slower we take it, the fewer people will have Covid, the smaller the peak will be, and the smaller the number of people who go into hospital and die,” he said.
By moving slowly, he said modelling suggested the pressure on the NHS would not be “unsustainable”.
Prof Whitty said there was less agreement on the “ideal date” to lift restrictions as there is “no such thing as an ideal date”.
However, he said a further delay would mean opening up when schools return in autumn, or in winter, when the virus has an advantage and hospitals are under more pressure.

We may argue about how much effect government regulations have on the rate of the virus spreading, but I have never before heard anyone argue that the rate of change of government regulation is relevant. Of course, too-rapid gyrations in public policy may confuse or anger the public. But how the rapidity of changing the rules relates to the size of the peak seems exceptionally obscure. To the extent that the regulations have any effect at all, that effect should be seen directly in R0, and so in the weekly growth or contraction of Covid cases. If masking can push down the growth rate, its effect on the final infection rate is essentially the same whenever it is applied; but masking early gives fewer total cases.

To see this, consider a very simple model: with masking, cases shrink 20%/week; without masking, they grow 25%/week. So if we have 1000 cases/day now, then after some weeks of masking and the same number of weeks without masking we’ll be back to 1000 cases/day at the end (since 1.25 × 0.8 = 1). But the total number of cases will be very different. Suppose there are 10 weeks of each policy, and we have four possibilities: masking first, unmasking first, alternating (MUMU…), alternating (UMUM…). The total number of cases will be:

Strategy               Total cases
masking first              57 000
unmasking first           513 000
alternating (MU…)         127 000
alternating (UM…)         154 000
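
A short Python sketch of this toy model (my own illustrative code, based on the assumptions above and not taken from the original analysis) reproduces the pattern in the table; the exact totals depend slightly on how cases within each week are counted.

```python
# Toy model: 1000 cases/day at the start, cases shrink 20%/week with masking
# and grow 25%/week without, and we compare the order of 10 masked and 10
# unmasked weeks.
def total_cases(policy, start=1000.0):
    """policy is a string of 'M' (masking) and 'U' (unmasking), one character per week."""
    daily = start
    total = 0.0
    for week in policy:
        daily *= 0.8 if week == "M" else 1.25   # weekly shrinkage or growth
        total += 7 * daily                      # cases accumulated during that week
    return total

for name, policy in [("masking first", "M" * 10 + "U" * 10),
                     ("unmasking first", "U" * 10 + "M" * 10),
                     ("alternating (MU...)", "MU" * 10),
                     ("alternating (UM...)", "UM" * 10)]:
    print(f"{name:20s} ~{total_cases(policy):,.0f} total cases")
```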

Of course, the growth rate will not remain constant. The longer we delay, the more people are immune. In the last week close to 2 million vaccine doses have been administered in the UK; at that rate, a 4-week delay means about 8 million additional doses, or roughly 4 million extra people who are effectively immune. If we mask first, the period of faster growth comes later, when more people are immune, so the growth will in fact be slower and more of the cases will be mild.

The only thing I can suppose is that someone did an economic cost-benefit analysis, and decided that the value of increased economic activity was greater than the cost of lives lost and destroyed. Better to let the younger people — who have patiently waited their turn to be vaccinated — be infected, and obtain their immunity that way, than to incur the costs of another slight delay while waiting for them to have their shot at the vaccine.

The young were always at the lowest risk in this pandemic. They were asked to make a huge sacrifice to protect the elderly. Now that the older people have been protected, there is no willingness to sacrifice even another month to protect the lives and health of the young.

Update on Delta in Germany

The Robert Koch Institute produces estimates of variants of concern on Wednesdays. My projection from two weeks ago turns out to have been somewhat too optimistic. At that point I remarked that there seemed to be about 120 Delta cases per day, and that that number hadn’t been increasing: the dominance of Delta was coming from the reduction of other cases.

This no longer seems to be true. According to the latest RKI report, the past week has seen only a slight reduction in total cases compared with two weeks ago, to about 604/day. And the proportion of Delta continues to double weekly, so that we’re now at 59%, meaning almost 360 Delta cases/day. The number of Delta cases has thus tripled in two weeks, while the number of other cases has shrunk by a similar factor. The result is a current estimated R0 close to 1, but a very worrying prognosis. We can expect that in another two weeks, if nothing changes, we’ll have 90% Delta, around 1100 cases in total, and R0 around 1.6.

Of course, vaccination is already changing the situation. How much? By the same crude estimate I used last time (counting single vaccination as 50% immune, and looking back 3 weeks to account for the 4 days back to the middle of the reporting period, 10 days from vaccination to immunity, and another 7 days average for infections to turn into cases), the above numbers apply to a 40% immune population. Based on vaccinations to date, the population in 2 weeks will be 46% immune, reducing the R0 for Delta to around 1.5. In order to push it below 1.0 we would need to immunise 1/3 of the remaining population, so we need at least 64% fully immunised. At the current (slowed) rate of vaccination, if it doesn’t decelerate further, that will take until around the middle of September, by which point we’ll be back up around 10,000 cases/day.
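
As a sanity check, here is that herd-immunity arithmetic as a small Python sketch; the R value of 1.5 and the 46% immunity figure are the rough estimates above, not measured quantities.

```python
# Back-of-the-envelope herd-immunity arithmetic using the rough numbers above.
R_delta = 1.5                                    # estimated R for Delta at 46% immunity
immune_now = 0.46
susceptible_now = 1 - immune_now                 # 0.54
# To push R below 1, the susceptible fraction must shrink by a factor of R_delta,
# i.e. about 1/3 of the remaining susceptibles need to become immune.
susceptible_target = susceptible_now / R_delta   # 0.36
immune_target = 1 - susceptible_target           # about 0.64
print(f"Immunity needed to push R below 1: about {immune_target:.0%}")
```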

Delta may not mean change

Germany is in a confusing place with its pandemic developments. Covid cases have been falling as rapidly here as they have been rising in the UK: more than a 50% reduction in the past week, dropping the new cases to 6.6 per 100,000 averaged over the week, under 1000 cases per day for the first time since the middle of last summer. At the same time, the Delta variant is rapidly taking over. Last week the Robert Koch Institute reported 8%, this week it’s 15%. Virologist Christian Drosten, speaking on the NDR Coronavirus podcast this week (before the new Delta numbers were available), spoke of the 80% level in England that, he said, marked the watershed between falling and rising case numbers.

I think this is the wrong back-of-the-envelope calculation, because it depends on the overall expansion rate of the virus, and the difference between Delta and Alpha, which is likely particularly large in the UK because of the large number of people who have received just one AstraZeneca dose, which seems to be particularly ineffective against Delta. There’s another simple calculation that we can do specifically for the German situation: In the past week there have been about 810 cases per day, of which 15.1% Delta, so, about 122 Delta cases per day. The previous week there were about 1557 cases per day, of which 7.9% Delta, so also about 123 Delta cases. That suggests that under current conditions (including weather, population immunity, and social distancing) Delta is not expanding. This may mean that current conditions are adequate to keep Delta in check, while Alpha and other variants are falling by more than 50% per week.
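
The calculation is simple enough to spell out; here it is as a couple of lines of Python, using the case counts and Delta shares quoted above.

```python
# Estimated Delta cases per day = total cases per day x reported Delta share.
weeks = {"previous week": (1557, 0.079), "past week": (810, 0.151)}
for label, (total_per_day, delta_share) in weeks.items():
    print(f"{label}: ~{total_per_day * delta_share:.0f} Delta cases/day")
# Both come out around 122-123/day, i.e. Delta roughly flat under current conditions.
```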

This suggests a very optimistic picture: that total case numbers will continue to fall. Within a few weeks Delta will be completely dominant, but the number of cases may not be much more than around 100 per day. And that ignores the increasing immunity: the infections reported this week occurred in the previous week, and the immunity is based on the vaccinations two weeks before that. With about 1% of the population being vaccinated every day, we should — relative to the approximately 70% non-immune population* 20 days ago — already have about 15% reduced transmission by the first week in July. And at current vaccination rates we can expect that by the end of July that will be 30% reduced, providing some headroom for further relaxation of restrictions without an explosion of Delta cases.
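
The vaccination arithmetic here is crude but easy to reproduce. The sketch below uses the conventions above (each dose counted as 50% immunity, about 1% of the population receiving a dose per day, and roughly 70% non-immune as the baseline), all of which are approximations rather than measured quantities.

```python
# Rough estimate of reduced transmission from ongoing vaccination.
doses_per_day = 0.01         # ~1% of the population receives a dose each day
immunity_per_dose = 0.5      # each dose counted as 50% immunity (see footnote)
susceptible_baseline = 0.70  # roughly 70% non-immune about 20 days ago

def transmission_reduction(days):
    extra_immune = doses_per_day * immunity_per_dose * days
    return extra_immune / susceptible_baseline

print(f"after 20 days: {transmission_reduction(20):.0%}")  # ~15%, first week of July
print(f"after 45 days: {transmission_reduction(45):.0%}")  # ~30%, end of July
```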

That does raise the question, though, of why the general Covid transmission rate in Germany seems to be lower than in the UK. I don’t see any obvious difference in the level of social-distancing restrictions. Is it just the difference between single-dose AZ versus Biontech? If so, we should see a rapid turnaround in the UK soon.

* I’m very roughly counting each dose as 50% immunity.

Absence of caution: The European vaccine suspension fiasco

Multiple European countries have now suspended use of the Oxford/AstraZeneca vaccine, because of scattered reports of rare clotting disorders following vaccination. In all the talk of “precautionary” approaches, the urgency of the situation seems suddenly to have been ignored. Every vaccine triggers serious side effects in some small number of individuals, occasionally fatal, and we recognise this in the special systems we maintain for compensating the victims. It seems worth considering, when looking at the possibility of several-in-a-million complications, how many lives may be lost because of delayed vaccinations.

I start with the age-specific case fatality rates (CFR) from this meta-analysis, and multiply them by the current overall weekly case rate, which is 1.78 cases/thousand population in the EU (according to data from the ECDC). This ignores the differences between countries, and differences between age groups in infection rate, and certainly underestimates the infection rate, for obvious reasons of selective testing.

Age group                                         0-34   35-44   45-54   55-64   65-74   75-84    85+
CFR (per thousand)                                0.04    0.68     2.3     7.5      25      85    283
Expected fatalities per week per million          0.07     1.2     4.1      13      45     151    504
Number of days delay to match VFR                 1200      70      20     6.4     1.8     0.6    0.2

Let’s assume now that all of the blood clotting problems that have occurred in the EEA — 30 in total, according to this report — among the 5 million receiving the AZ vaccine were actually caused by the vaccine, and suppose (incorrectly) that all of those people had died.* That would produce a vaccine fatality rate (VFR) of 6 per million. We can double that to account for possible additional unreported cases, or other kinds of complications that have not yet been recognised. We can then calculate how many days of delay would cause as many extra deaths as the vaccine itself might cause.
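
Here is the whole calculation as a short Python sketch, using the figures above: the age-specific CFRs from the meta-analysis, the EU weekly case rate of 1.78 per thousand, and the assumed vaccine fatality rate of 12 per million (the 6 per million estimated above, doubled). It reproduces the table.

```python
# Days of vaccination delay that would cost as many lives as the assumed VFR.
cfr_per_thousand = {"0-34": 0.04, "35-44": 0.68, "45-54": 2.3, "55-64": 7.5,
                    "65-74": 25, "75-84": 85, "85+": 283}
weekly_cases_per_thousand = 1.78   # current EU-wide weekly case rate
vfr_per_million = 12.0             # assumed vaccine fatality rate (6 per million, doubled)

for age, cfr in cfr_per_thousand.items():
    # expected Covid deaths per week per million population in this age group
    deaths_per_week = (cfr / 1000) * (weekly_cases_per_thousand / 1000) * 1_000_000
    # days of delayed vaccination causing as many deaths as the assumed VFR
    days_to_match = vfr_per_million / (deaths_per_week / 7)
    print(f"{age:6s}  {deaths_per_week:6.2f} deaths/week/million   "
          f"{days_to_match:7.1f} days of delay to match VFR")
```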

The result is fairly clear: even the most extreme concerns raised about the AZ vaccine could not justify even a one-week delay in vaccination, at least among the population 55 years old and over. (I am also ignoring here the compounding effect of onward transmission prevented by vaccination, which makes the delay even more costly.) As is so often the case, “abundance of caution” turns out to be the opposite of cautious.

* I’m using only European data here, to account for the contention that there may be a specific problem with European production of the vaccine. The UK has used much more of the AZ vaccine, with even fewer problems.

The return of quota sampling

Everyone knows about the famous Dewey Defeats Truman headline fiasco, and that the Chicago Daily Tribune was inspired to its premature announcement by erroneous pre-election polls. But why were the polls so wrong?

The Social Science Research Council set up a committee to investigate the polling failure. Their report, published in 1949, listed a number of faults, including disparaging the very notion of trying to predict the outcome of a close election. But one important methodological criticism — and the one that significantly influenced the later development of political polling, and became the primary lesson in statistics textbooks — was the critique of quota sampling. (An accessible summary of lessons from the 1948 polling fiasco by the renowned psychologist Rensis Likert was published just a month after the election in Scientific American.)

Serious polling at the time was divided between two general methodologies: random sampling and quota sampling. Random sampling, as the name implies, works by attempting to select from the population of potential voters entirely at random, with each voter equally likely to be selected. This was still considered too theoretically novel to be widely used, whereas quota sampling had been established by Gallup since the mid-1930s. In quota sampling the voting population is modelled by demographic characteristics, based on census data, and each interviewer is assigned a quota to fill of respondents in each category: 51 women and 49 men, say, a certain number in the age range 21-34, or specific numbers in each “economic class” — of which Roper, for example, had five, one of which in the 1940s was “Negro”. The interviewers were allowed great latitude in filling their quotas, finding people at home or on the street.

In a sense, we have returned to quota sampling, in the more sophisticated version of “weighted probability sampling”. Since hardly anyone responds to a survey — response rates are typically no more than about 5% — there’s no way the people who do respond can be representative of the whole population. So pollsters model the population — or the supposed voting population — and reweight the responses they do get proportionately, according to demographic characteristics. If Black women over age 50 are thought to be as common in the voting population as white men under age 30, but we have twice as many of the former as of the latter among our respondents, we count each response from the latter twice as much as each from the former in the final estimates. It’s just a way of making a quota sample after the fact, without the stress of specifically looking for representatives of particular demographic groups.
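
As a toy illustration of how such reweighting works (a hypothetical two-group example of my own, not any pollster’s actual procedure): suppose the electorate model says two groups are equally common, but one group supplied twice as many respondents.

```python
# Post-hoc weighting of survey responses to match an assumed electorate model.
population_share = {"group_A": 0.5, "group_B": 0.5}   # assumed shares of the electorate
respondents = {"group_A": 200, "group_B": 100}        # who actually answered the survey
support = {"group_A": 0.40, "group_B": 0.60}          # candidate support within each group

total = sum(respondents.values())
weights = {g: population_share[g] / (respondents[g] / total) for g in respondents}
weighted_support = (sum(respondents[g] * weights[g] * support[g] for g in respondents)
                    / sum(respondents[g] * weights[g] for g in respondents))

print(weights)             # group_B responses count twice as much as group_A's
print(weighted_support)    # 0.5, as if the sample had matched the model
```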

Consequently, it has most of the deficiencies of a quota sample. The difficulty of modelling the electorate is one that has gotten quite a bit of attention in the modern context: We know fairly precisely how demographic groups are distributed in the population, but we can only theorise about how they will be distributed among voters at the next election. At the same time, it is straightforward to construct these theories, to describe them, and to test them after the fact. The more serious problem — and the one that was emphasised in the commission report on the 1948 polls, but has been less emphasised recently — is in the nature of how the quotas are filled. The reason for probability sampling is that taking whichever respondents are easiest to get — a “sample of convenience” — is sure to give you a biased sample. If you sample people from telephone directories in 1936, then it’s easy to see how they end up biased against the favoured candidate of the poor. If you take a sample of convenience within a small demographic group, such as middle-income people, then it won’t be easy to recognise how the sample is biased, but it may still be biased.

For whatever reason, in the 1930s and 1940s, within each demographic group the Republicans were easier for the interviewers to contact than the Democrats. Maybe they were just culturally more like the interviewers, so easier for them to walk up to on the street. And it may very well be that within each demographic group today Democrats are more likely to respond to a poll than Republicans. And if there is such an effect, it’s hard to correct for it, except by simply discounting Democrats by a certain factor based on past experience. (In fact, these effects can be measured in polling fluctuations, where events in the news lead one side or the other to feel discouraged, and to be less likely to respond to the polls. Studies have suggested that this effect explains much of the short-term fluctuation in election polls during a campaign.)

Interestingly, one of the problems that the commission found with the 1948 polling, and one with obvious relevance for the Trump era, was the failure to consider education as a significant demographic variable.

All of the major polling organizations interviewed more people with college education than the actual proportion in the adult population over 21 and too few people with grade school education only.

What is the UK government trying to do with COVID-19?

It would be a drastic understatement to say that people are confused by the official advice on social-distancing measures to prevent the spread of SARS-CoV-2. Some are angry. Some are appalled. And that includes some very smart people who understand the relevant science better than I do, and probably at least as well as the experts who are advising the government. Why are they not closing schools and restaurants, or banning sporting events — until the Football Association decided to ban themselves — while at the same time signalling that they will be taking such measures in the future? I’m inclined to start from the presumption that there’s a coherent and sensible — though possibly ultimately misguided (or well guided but to-be-proved-retrospectively wrong) — strategy, but I find it hard to piece together what they’re talking about with “herd immunity” and “nudge theory”.

Why, in particular, are they talking about holding the extreme social-distancing measures in reserve until later? Intuitively you would think that slowing the progress of the epidemic can be done at any stage, and the sooner you start the more effective it will be.

Here’s my best guess about what’s behind it, which has the advantage of also providing an explanation why the government’s communication has been so ineffective: Unlike most other countries, which are taking the general approach that the goal is to slow the spread of the virus as much as possible (though they may disagree about what is possible), the UK government wants to slow the virus, but not too much.

The simplest model for the evolution of the number of infected individuals (x) is a differential equation

dx/dt = k(t) · x · (A − x).

Here A is the fraction immune at which R0 (the number that each infected person infects) reaches 1, so growth enters a slower phase. The solution is

x(t) = A / (1 + ((A − x₀)/x₀) · e^(−A·K(t))),   where K(t) = ∫₀ᵗ k(s) ds and x₀ is the initial value.

Basically, if you control the level of social interaction, you change k, slowing the growth of the cumulative rate parameter K(t). There’s a path that you can run through, at varying rates, until you reach the target level A. So, assuming the government can steer k as they like, they can stretch out the process as they like, but they can’t change the ultimate destination. The corresponding rate of new infections — the key thing that they need to hold down, to prevent collapse of the NHS — is k·x·(A − x). (It’s more complicated because of the time delay between infection, symptoms, and recovery, raising the question of whether such a strategy based on determining the timing of epidemic spread is feasible in practice. A more careful analysis would use the three-variable SIR model.)
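
To make the point concrete, here is a small simulation sketch of that model (illustrative parameters of my own choosing, not a calibrated epidemic model): reducing k for a stretch in the middle lowers the peak rate of new infections and stretches the epidemic out, but the final size still ends up at A.

```python
# Logistic epidemic with a time-varying contact rate k(t): dx/dt = k(t) x (A - x).
A = 0.6          # fraction infected at which R0 reaches 1
x0 = 1e-4        # initial infected fraction
dt = 0.1         # time step (days)
T = 600.0        # horizon (days)

def run(k_of_t):
    x, peak_rate = x0, 0.0
    for step in range(int(T / dt)):
        new_infections = k_of_t(step * dt) * x * (A - x)
        peak_rate = max(peak_rate, new_infections)
        x += new_infections * dt
    return x, peak_rate

final_a, peak_a = run(lambda t: 0.1)                              # no intervention
final_b, peak_b = run(lambda t: 0.05 if 100 < t < 250 else 0.1)   # k halved mid-epidemic
print(f"constant k : final size {final_a:.2f}, peak rate {peak_a:.4f} per day")
print(f"reduced k  : final size {final_b:.2f}, peak rate {peak_b:.4f} per day")
```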

Suppose now you think that you can reduce k by a certain amount for a certain amount of time. You want to concentrate your effort in the time period where x is around A/2. But you don’t want to push k too far down, because that slows the whole process down, and uses up the influence. The basic idea is, there’s nothing we can do to change the endpoint (x=A); all you can do is steer the rate so that

  1. The maximum rate of new infections (or rather, of total cases in need of hospitalisation) is as low as possible;
  2. The peak is not happening next winter, when the NHS is in its annual flu-season near-collapse;
  3. The fraction A of the population that is ultimately infected — generally taken to be about 60% in most renditions — includes as few as possible of the most at-risk members of the public. That also requires that k not be too small, because keeping the old and the infirm segregated from the young and the healthy can only be done for a limited time. (This isn’t Florida!)

Hence the messaging problem: It’s hard to say, we want to reduce the rate of spread of the infection, but not too much, without it sounding like “We want some people to die.”

There’s no politic way to say, we’re intentionally letting some people get sick, because only their immunity will stop the infection. Imagine the strategy were: Rather than close the schools, we will send the children off to a fun camp where they will be encouraged to practice bad hygiene for a few weeks until they’ve all had CoViD-19. A crude version of school-based vaccination. If it were presented that way, parents would pull their children out in horror.

It’s hard enough getting across the message that people need to take efforts to remain healthy to protect others. You can appeal to their sense of solidarity. Telling people they need to get sick so that other people can remain healthy is another order of bewildering, and people are going to rebel against being instrumentalised.

Of course, if this virus doesn’t produce long-term immunity — and there’s no reason necessarily to expect that it will — then this strategy will fail. As will every other strategy.

Putting Covid-19 mortality into context

[Cross-posted with Statistics and Biodemography Research Group blog.]

The age-specific estimates of fatality rates for Covid-19 produced by Riou et al. in Bern have gotten a lot of attention:

Age group          0-9   10-19   20-29   30-39   40-49   50-59   60-69   70-79    80+   Total
Fatality rate    0.094    0.22    0.91     1.8     4.0      13      46      98    180      16
Estimated fatality in deaths per thousand cases (symptomatic and asymptomatic)

These numbers looked somewhat familiar to me, having just lectured a course on life tables and survival analysis. Recent one-year mortality rates in the UK are in the table below:

Age group              0-9   10-19   20-29   30-39   40-49   50-59   60-69   70-79   80-89
One-year mortality   0.012    0.17    0.43    0.80     1.8     4.2      10      28      85
One-year mortality probabilities in the UK, in deaths per thousand population. Neonatal mortality has been excluded from the 0-9 class, and the over-80 class has been cut off at 89.

Depending on how you look at it, the Covid-19 mortality is shifted by a decade, or about double the usual one-year mortality probability for an average UK resident (corresponding to the fact that mortality rates double about every 9 years). If you accept the estimates that around half of the population in most of the world will eventually be infected, and if these mortality rates remain unchanged, this means that effectively everyone will get a double dose of mortality risk this year. It is somewhat lower for the younger folk (as a comparison of the two tables suggests), whereas the over-50s get more like a triple dose.
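
A quick check of that comparison, using the two tables as given (restricted to the age bands where both tables have entries):

```python
# Ratio of estimated Covid-19 fatality per case to ordinary one-year mortality,
# both in deaths per thousand, taken from the two tables above.
covid_cfr = {"10-19": 0.22, "20-29": 0.91, "30-39": 1.8, "40-49": 4.0,
             "50-59": 13, "60-69": 46, "70-79": 98, "80+": 180}
one_year  = {"10-19": 0.17, "20-29": 0.43, "30-39": 0.80, "40-49": 1.8,
             "50-59": 4.2, "60-69": 10, "70-79": 28, "80+": 85}

for age in covid_cfr:
    print(f"{age:6s} ratio ~ {covid_cfr[age] / one_year[age]:.1f}")
# roughly 1-2x below age 50, more like 3-5x for the 50-79 bands
```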

Do billionaire mayors make you live longer?

In trying to compose an argument for why Democrats’ best hope for defeating the incompetent septuagenarian autocratic billionaire Republican in the White House is to nominate a highly competent septuagenarian autocratic billionaire (former) Republican of their own, Emily Stewart at Vox — jumping in to extend Vox’s series on the leading candidates in the Democratic presidential primary with the case for late entrant Mike Bloomberg — has some reasonable points, mixed in with one very odd accolade:

Under Bloomberg, New Yorkers’ life expectancy increased by about three years.

Not that this is false, but we must recall that Bloomberg was mayor of New York for 12 years. As pointed out by Oeppen and Vaupel in a Science article that appeared in 2002 (the first year of Bloomberg’s mayoralty), life expectancy at birth in the most economically advanced countries of the world has been increasing at an astonishingly steady 2.5 years per decade since around 1840. If we had then predicted how much increase we should expect over 12 years, we should have said… three years. Indeed, looking at a few comparably wealthy countries chosen more or less at random over the same period we see life expectancy at birth as follows:

Country          2002    2014   Increase
Australia       80.07   82.59       2.52
UK              78.24   81.16       2.92
Japan           81.83   83.73       1.90
Canada          79.57   81.94       2.37
Netherlands     78.41   81.65       3.24

Mike got it done!

To be fair, there are two exceptions to this trend. Japan, which had the highest life expectancy in the world in 2002, still had the highest in 2014, but it had gained only about two years.

The USA, which had the lowest life expectancy at the start (among large wealthy countries), at 77.03, fell further behind, to 79.06, and has since actually decreased. So I guess you might say that Bloomberg has shown his ability to thwart the destructive trends in the US, and make it, as he made New York, as successful as an average West European country. Which doesn’t sound like the worst campaign platform.

Trump supporters are ignoring the base (rate) — Or, Ich möcht’ so gerne wissen, ob Trumps erpressen

One of the key insights from research on decision-making — from Tversky and Kahneman, Gigerenzer, and others — is the “base rate fallacy”: in judging new evidence people tend to ignore the underlying (prior) likelihood of various outcomes. A famous example, beloved of probability texts and lectures, is the reasonably accurate — 99% chance of a correct result — test for a rare disease (1 in 10,000 in the population). A randomly selected person with a positive test has a 99% chance of not having the disease, since correct positive tests on the 1 in 10,000 infected individuals are far less common than false positive tests on the other 9,999.
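
For readers who want the arithmetic behind that claim, here is the standard Bayes calculation as a few lines of Python (the numbers are just the ones in the example above):

```python
# P(disease | positive test) with a rare disease and a 99%-accurate test.
prevalence = 1 / 10_000
sensitivity = 0.99        # P(positive | infected)
specificity = 0.99        # P(negative | not infected)

p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
p_disease_given_positive = sensitivity * prevalence / p_positive
print(f"{p_disease_given_positive:.1%}")   # about 1%: a ~99% chance of NOT having the disease
```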

This seems to fit into a more general pattern of prioritising new and/or private information over public information that may be more informative, or at least more accessible. Journalists are conspicuously prone to this bias. For instance, as Brexit blogger Richard North has lamented repeatedly, UK journalists would breathlessly hype the latest leaks of government planning documents revealing the extent of adjustments that would be needed for phytosanitary checks at the border, or for aviation, where the same information had been available for a year in official planning documents on the European Commission website. This psychological bias was famously exploited by WWII British intelligence operatives in Operation Mincemeat, where they dropped into the sea a corpse stuffed with fake plans suggesting an invasion of Greece and Sardinia rather than Sicily, knowing it would wind up on the shore in Spain. They knew that the Germans would take the information much more seriously if they thought they had found it covertly. In my own experience of undergraduate admissions at Oxford, I have found it striking the extent to which people consider what they have seen in a half-hour interview to be the deep truth about a candidate, outweighing the evidence of examinations and teacher evaluations.

Which brings us to Donald Trump, who has been accused of colluding with foreign governments to defame his political opponents. He has done his collusion both in private and in public. He famously announced in a speech during the 2016 election campaign, “Russia, if you’re listening, I hope you’re able to find the 30,000 emails that are missing. I think you will probably be rewarded mightily by our press.” And just the other day he said “I would think that if [the Ukrainian government] were honest about it, they’d start a major investigation into the Bidens. It’s a very simple answer. They should investigate the Bidens because how does a company that’s newly formed—and all these companies—and by the way, likewise, China should start an investigation into the Bidens because what happened in China is just about as bad as what happened with Ukraine.”

It seems pretty obvious. But no, that’s public information. Trump has dismissed his appeal to Russia as “a joke”, and just yesterday Senator Marco Rubio contended that the fact that the appeal to China was so blatant and public shows that it probably wasn’t “real”, that Trump was “just needling the press knowing that you guys are going to get outraged by it.” The private information is, of course, being kept private, and there seems to be a process by which formerly shocking secrets are moved into the public sphere gradually, so that they slide imperceptibly from being “shocking if true” to “well-known, hence uninteresting”.

I am reminded of the epistemological conundrum posed by the Weimar-era German cabaret song, “Ich möcht’ so gern wissen, ob sich die Fische küssen”:

Ich möcht’ so gerne wissen
Ob sich die Fische küssen –
Unterm Wasser sieht man’s nicht
Na, und überm Wasser tun sie’s nicht!

I would so like to know
if fish sometimes kiss.
Underwater we can’t see it.
And out of the water they never do it.

The power of baselines

From today’s Guardian:


It took decades to establish that smoking causes lung cancer. Heavy smoking increases the risk of lung cancer by a factor of about 11, the largest risk ratio for any common risk factor for any disease. But that doesn’t make it peculiar that there should be any non-smokers with lung cancer.

As with my discussion of the horrified accounts of obesity someday overtaking smoking as a cause of cancer, the main cause of the apparent rise is a change in the baseline level of smoking. As fewer people smoke, and as non-smokers stubbornly continue to age and die, the proportion of lung-cancer deaths occurring in non-smokers will inevitably increase.

It is perfectly reasonable to say we should consider diverting public-health resources from tobacco toward other causes of disease, as the fraction of disease caused by smoking declines. And it’s particularly of concern for physicians, who tend toward essentialism in their view of risk factors — “lung cancer is a smoker’s disease” — to the neglect of base rates. But the Guardian article frames the lung cancer deaths in non-smokers as a worrying “rise”:

They blame the rise on car fumes, secondhand smoke and indoor air pollution, and have urged people to stop using wood-burning stoves because the soot they generate increases risk… About 6,000 non-smoking Britons a year now die of the disease, more than lose their lives to ovarian or cervical cancer or leukaemia, according to research published on Friday in the Journal of the Royal Society of Medicine.

While the scientific article they are reporting on never explicitly says that lung cancer incidence in non-smokers [LCINS] is increasing, certainly some fault for the confusion may be found there:

the absolute numbers and rates of lung cancers in never-smokers are increasing, and this does not appear to be confounded by passive smoking or misreported smoking status.

This sounds like a serious matter. Except, the source they cite a) doesn’t provide much evidence of this and b) is itself 7 years old, and only refers to evidence that dates back well over a decade. It cites one study that found an increase in LCINS in Swedish males in the 1970s and 1980s, a much larger study that found no change over time in LCINS in the US between 1959 and 2004, and a French study that found rates increasing in women and decreasing in men, concluding finally

An increase in LCINS incidence could be real, or the result of the decrease in the proportion of ever smokers in some strata of the general population, and/or ageing within these categories.

What proportion of lung cancers should we expect to be found in non-smokers? Taking the 11:1 risk ratio, and the 15% smoking rate in the UK population today, we should actually expect about 85/(85 + 15×11) ≈ 34% of lung cancers to occur in non-smokers. Why is it only about 1/6, then? The effect of smoking is long delayed: lung cancer is estimated to develop after about 30 years of smoking. If we look back at the roughly 35% smoking prevalence of the mid-1980s, we get an estimate of about 65/(65 + 35×11) ≈ 14%, much closer to the observed fraction.
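
The baseline calculation is easy to put in a reusable form; the sketch below (my own illustration) takes a relative risk of 11 for smokers and a given smoking prevalence, and returns the expected share of lung cancers occurring in never-smokers.

```python
# Expected share of lung cancers in never-smokers, given smoking prevalence
# and an assumed relative risk of 11 for smokers.
def never_smoker_share(smoking_prevalence, relative_risk=11):
    never = 1 - smoking_prevalence
    return never / (never + smoking_prevalence * relative_risk)

print(f"{never_smoker_share(0.15):.0%}")   # ~15% prevalence today   -> about 34%
print(f"{never_smoker_share(0.35):.0%}")   # ~35% prevalence in 1985 -> about 14%, close to the observed 1/6
```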