Medical hype and under-hype


New heart treatment is biggest breakthrough since statins, scientists say

I just came across this breathless headline, published in the Guardian last year. On the one hand, this is just one study, the effect was barely statistically significant, and experience suggests a fairly high likelihood that this will ultimately have no effect on general medical practice or on human health and mortality rates. I understand the exigencies of the daily newspaper publishing model, but it’s problematic that the “new research study” has been defined as the event on which to hang a headline. The only people who need that level of up-to-the-minute detail are those professionally involved in winnowing out the new ideas and turning them into clinical practice. We would all be better served if newspapers instead reported on what new treatments have actually had an effect over the last five years. That would be just as novel to the general readership, and far less erratic.

On the other hand, I want to comment on one point of what I see as exaggerated skepticism: The paragraph that summarises the study results says

For patients who received the canakinumab injections the team reported a 15% reduction in the risk of a cardiovascular event, including fatal and non-fatal heart attacks and strokes. Also, the need for expensive interventional procedures, such as bypass surgery and inserting stents, was cut by more than 30%. There was no overall difference in death rates between patients on canakinumab and those given placebo injections, and the drug did not change cholesterol levels.

There is then a quote:

Prof Martin Bennett, a cardiologist from Cambridge who was not involved in the study, said the trial results were an important advance in understanding why heart attacks happen. But, he said, he had concerns about the side effects, the high cost of the drug and the fact that death rates were not better in those given the drug.

In principle, I think this is a good thing. There are far too many studies that show a treatment scraping out a barely significant reduction in mortality due to one cause, which is highlighted, but a countervailing mortality increase due to other causes, netting out to essentially no improvement. Then you have to say, we really should be aiming to reduce mortality, not to reduce a cause of mortality. (I remember many years ago, a few years after the US started raising the age for purchasing alcohol to 21, reading of a study that was heralded as showing the success of this approach, having found that the number of traffic fatalities attributed to alcohol had decreased substantially. Unfortunately, the number of fatalities not attributed to alcohol had increased by a similar amount, suggesting that some amount of recategorisation was going on.) Sometimes researchers will try to distract attention from a null result for mortality by pointing to a secondary endpoint — improved results on a blood test linked to mortality, for instance — which needs to be viewed with some suspicion.

In this case, though, I think the skepticism is unwarranted. There is no doubt that before the study the researchers would have predicted a reduction in mortality from cardiovascular causes, no change in mortality from other causes, and likely an increase in deaths due to infection. The worry would be that the increase due to infection — or to some unanticipated side effect — would outweigh the benefits.

The results confirmed the best-case predictions. Cardiovascular mortality was reduced — possibly a lot, possibly only slightly. Deaths due to infections increased significantly in percentage terms, but the numbers were small relative to the cardiovascular improvements. The one big surprise was a very substantial reduction in cancer mortality. The researchers are open about not having predicted this, and not having a clear explanation. In such a case, it would be wrong to put much weight on the statistical “significance”, because it is impossible to quantify the class of hypotheses that are implicitly being ignored. The proper thing is to highlight this observation for further research, as they have properly done.
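To see why an unpredicted finding deserves less weight than its nominal p-value suggests, here is a minimal back-of-envelope illustration (my own, not from the study): if a surprise like the cancer result could have been noticed in any of k candidate outcomes that were not prespecified, the chance of at least one of them crossing p < 0.05 by luck alone grows quickly with k. The values of k here are purely assumed for illustration.

```python
# Probability that at least one of k independent null outcomes
# produces a nominally "significant" result at alpha = 0.05.
# The values of k are hypothetical -- the point is the trend.
alpha = 0.05
for k in (1, 5, 10, 20):
    p_any = 1 - (1 - alpha) ** k
    print(f"{k:2d} candidate outcomes -> P(at least one p < 0.05) = {p_any:.2f}")
```

With even ten unprespecified outcomes implicitly in play, a “significant” surprise has roughly a 40% chance of arising from noise alone — which is exactly why the class of ignored hypotheses matters and cannot usually be quantified after the fact.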

When you set aside these three groups of causes — cardiovascular, infections, cancer — you are left with approximately equal mortality rates in the placebo and treatment groups, as expected. So there is no reason to be “concerned” that overall mortality was not improved in those receiving the drug. First of all, overall mortality was better in the treatment group. It’s just that the improvement in CV mortality — as predicted — while large enough to be clearly not random when compared with the overall number of CV deaths, was not large compared with the much larger total number of deaths. This is no more “concerning” than it would be, when reviewing a programme for improving airline safety, to discover that it did not appreciably change the total number of transportation-related fatalities.
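The dilution effect is easy to verify with a toy calculation. The numbers below are made up round figures, not the actual trial counts: I assume 10,000 patients per arm, a 15% reduction in CV deaths, and other-cause deaths left unchanged, then run a standard pooled two-proportion z-test on each comparison.

```python
from math import sqrt, erf

def two_prop_z(d1, d2, n):
    """Pooled two-proportion z statistic for death counts d1, d2 in arms of size n."""
    p1, p2 = d1 / n, d2 / n
    p = (d1 + d2) / (2 * n)          # pooled proportion
    return (p1 - p2) / sqrt(p * (1 - p) * 2 / n)

def p_value(z):
    """Two-sided p-value from the normal approximation."""
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

n = 10_000                 # hypothetical arm size (illustration only)
cv_placebo, cv_treat = 400, 340   # 15% fewer CV deaths in the treatment arm
other = 600                       # non-CV deaths, assumed identical in both arms

z_cv = two_prop_z(cv_placebo, cv_treat, n)
z_all = two_prop_z(cv_placebo + other, cv_treat + other, n)
print(f"CV deaths only: z = {z_cv:.2f}, p = {p_value(z_cv):.3f}")
print(f"All-cause:      z = {z_all:.2f}, p = {p_value(z_all):.3f}")
```

The same absolute difference of 60 deaths is clearly non-random against the background of CV deaths (z ≈ 2.2, p < 0.05) but washes out against the larger total (z ≈ 1.4, p ≈ 0.15) — which is the whole point: a real cause-specific benefit need not be detectable in all-cause mortality.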
