One of the most politically important economics results of recent years has been the paper by Reinhart and Rogoff on the link between high sovereign debt and low GDP growth. This work is something I’d been following for a while, as R&R’s book was one that I’d admired greatly. Their work claimed to show a strong negative correlation between a country’s sovereign debt/GDP ratio and its ensuing GDP growth, and was reported as saying that a 90% debt/GDP ratio marks a cliff that an economy falls off, killing future growth. This was seized upon by proponents of austerity as proof that budget cuts can’t wait.
As reported here and here by Paul Krugman, and here and here by Matt Yglesias, it now turns out that the result isn’t just theoretically misguided, it’s bogus. Economists who struggled to reproduce the results finally isolated a whole raft of errors and dubious hidden assumptions that completely undermine the conclusion. The most blatantly ridiculous fault was an error in their Excel spreadsheet formula that caused them to exclude important sections of the data from their computation. You’d think that this couldn’t get any worse, but instead of apologising abjectly, R&R have tried to argue that none of this was really essential to their real point, whatever that was.
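To see how consequential a range mistake of this kind can be, here is a toy illustration with invented numbers (not R&R’s actual data): averaging growth rates for twenty hypothetical countries, where a spreadsheet-style off-by-a-few-rows selection silently drops the last five.

```python
# Hypothetical growth rates (%) for 20 countries -- invented for illustration,
# not R&R's actual data.
growth = [3.1, 2.4, -0.2, 1.8, 2.9, 0.7, 1.5, 2.2, -1.1, 3.4,
          2.0, 1.2, 2.6, 0.4, 1.9, 2.8, 3.0, 1.6, 2.3, 2.7]

# Correct computation: average over all 20 rows.
full_mean = sum(growth) / len(growth)

# Spreadsheet-style blunder: the selected range stops 5 rows short,
# and nothing in the output flags the omission.
truncated_mean = sum(growth[:15]) / 15

print(round(full_mean, 2), round(truncated_mean, 2))
```

The point is not the particular numbers but that the truncated version produces an answer that looks just as authoritative as the correct one; nothing in a spreadsheet cell advertises which rows went into it.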
My main thoughts:
- Do economists really do their analysis with Excel? I find this kind of shocking, like finding out that some surgeons like to make their incisions with flint knives, or that airline pilots calculate their flightpaths with slide rules. Once you accept that premise, it’s not surprising that they made a blunder like this. I’m not a snob about technology. Spreadsheets are great for doing payrolls, for getting a look at tables of numbers, and for some quick calculations. But they’re so opaque that they’re inappropriate for academic work, and so inflexible that it’s inconceivable to me that someone who analyses data on a more or less regular basis would choose to use them.
- Maybe I should have listened to my colleagues who say (sometimes in nicer words) that economists are basically full of shit. The typical mathematician’s opinion of economists seems to be that they dress up self-serving opinions in flashy mathematical clothes to beguile the natives. (The typical mathematician’s view of his/her own universal brilliance was unsurpassably described by Swift.)
- The combination of high influence in policy debates that actually affect rich people’s pocketbooks with low standards of objectivity is a magnet for charlatans, and may tempt even people who came to the field seriously into charlatanism.
- Of course, there are many exceptions. Paul Krugman argues here that this is the fault of the “VSP” (very serious people), the editorial page poobahs and finance ministers and central bankers, who elevated this paper to a dogma of economics, rather than presenting it as a disputed claim by one group of economists. But why is it that the field of economics has so much difficulty distinguishing responsible thinkers from cranks, so that interested parties can so easily rely on their team within the economics profession?
- Is economics undermining the reputation of science more generally? I’ve been genuinely mystified by the professed belief of Congressional Republicans that global warming is a hoax perpetrated by climate scientists. Anyone who knows the culture of physical scientists knows that they are as likely to systematically falsify data to advance a political agenda as Mormons are to run a chain of taverns to raise money for lobbying for same-sex marriage. But US politicians are unlikely to have studied physical sciences, and the closest most of them came to natural sciences in their student days was probably an economics course. There were equations, so it probably looked like hard science. If this speculation is correct, the effort by economists to raise the status of their discipline by making it look more like physics may instead have yielded blowback against the reputation of the physical sciences.
- Scientists often will go to great lengths to avoid admitting that they’ve made a serious blunder, even when the blunder is not in any real sense debatable or a matter of opinion. Constructive engagement with the critique is less common than you might hope, even among mathematicians.* This is not specific to economics. R&R’s response is typical of the kind of distraction and delay response you get when you confront researchers with statistical errors in their work: a) The effect isn’t really so big; b) Whatever portions of the work are affected weren’t really the main point.
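The first point above, about spreadsheets versus code, can be made concrete with a minimal sketch (invented data, hypothetical debt buckets): the same kind of growth-by-debt-bucket averaging that R&R’s study involved, done as a script. Every step is explicit, reviewable, and easy to put under version control, which is exactly what a grid of spreadsheet cells is not.

```python
# Invented (debt/GDP %, subsequent growth %) pairs -- a hypothetical stand-in
# for the kind of country-year data R&R worked with.
records = [
    (35, 3.2), (55, 2.8), (72, 2.1), (88, 1.9), (95, 1.4),
    (110, 2.2), (40, 3.0), (65, 2.5), (92, 0.9), (130, -0.5),
]

# Assign each observation to a debt bucket; the bucket boundaries here
# are illustrative, not R&R's.
buckets = {"<60": [], "60-90": [], ">90": []}
for debt, growth in records:
    if debt < 60:
        buckets["<60"].append(growth)
    elif debt <= 90:
        buckets["60-90"].append(growth)
    else:
        buckets[">90"].append(growth)

# Mean growth per bucket; every observation is accounted for by construction,
# so nothing can be silently dropped by a mis-dragged cell range.
means = {k: round(sum(v) / len(v), 2) for k, v in buckets.items()}
print(means)
```

A script like this can also be rerun against corrected data in one step, which is how the replication effort that caught R&R’s error ultimately worked.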
I first got interested in economics a few years back, as a consequence of the financial crisis, and started reading some textbooks, classic texts, and popular works, particularly on behavioural economics. Until then, my impressions of economics as a discipline, while not quite the naive contempt common to mathematicians, were largely shaped, on the one hand, by the clearly simpleminded efforts of financial mathematics (about which I have recounted my opinions at great length elsewhere), and, on the other, by cock-crowing popular works like Freakonomics, which gave me the impression of economists as contemptuous of all other social sciences, believing they can answer all the world’s questions better than the people who actually understand the facts, because they know how to do multiple regression. Like mathematicians (as parodied by Jonathan Swift), but without even knowing much mathematics. (Lawrence Summers gave a bravura performance as the insufferable economist on his stage as president of Harvard.)
But my reading — in particular, Paul Krugman’s books and popular essays, but also Shiller and Akerlof on behavioural economics, Reinhart and Rogoff’s book This Time Is Different, about the history of financial crises, and listening to Brad DeLong’s lectures on economic history — convinced me that these people actually have a huge toolkit of useful concepts that can make sense of the otherwise inexplicable. The fact that they’re carving understanding out of an inchoate and ever-changing human reality is something to praise them for, not to criticise.
Now I think I may need to revise my opinion again.
The inclination to fix a theoretical or empirical argument around a prior political or moral commitment is foreign to mathematicians. It is known to statisticians, who are experts in bias above all, but the ethic of the field works strongly against it. I have found myself arguing against colleagues who consider economists to be just politicians with spreadsheets (and without votes). And yet, I have also found myself speaking with an esteemed economist (privately) after a talk he gave that seemed to have been based on a serious technical flaw — at least, I sought some explanation of why it was not subject to this technical flaw — and being blown off — he refused to let me even say what it was — because his work was supposed to argue that certain people should continue to receive disability payments, and “they’re scared, and what are you going to offer them?” That is, he explicitly refused to consider a possible error in his analysis, until I could convince him that the conclusions that he wanted to reach would not be affected. So clearly there’s some rot in the field. But I still argued that the basic architecture of economics seemed to be pretty sound.
Now, along come R&R. Their work was immediately recognised as dubious by those not in thrall to the austerians, since there is obvious causality in the other direction: low growth leads to rising debt ratios. Reading all of this with no disciplinary stake in the fight, it did seem slightly as though R&R were overselling their conclusions. But I couldn’t have imagined that they had simply failed to carry out a basic calculation correctly.
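The reverse-causality point is just arithmetic. Here is a toy calculation with invented figures: hold nominal debt fixed and let GDP shrink, and the debt/GDP ratio climbs with no new borrowing at all — it can even cross the supposed 90% line purely because the denominator fell.

```python
# Invented numbers for illustration: debt held constant while GDP contracts.
debt = 900.0  # billions, no new borrowing

# The ratio rises mechanically as GDP (the denominator) falls.
ratios = [round(100 * debt / gdp, 1) for gdp in (1200.0, 1100.0, 1000.0)]
print(ratios)  # [75.0, 81.8, 90.0]
```

So a correlation between high debt ratios and low growth is exactly what you would expect even if high debt never caused low growth at all.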
I remarked earlier on Natalia Cecire’s attack on statistics as a puerile endeavour, puerile because it is committed to “rules of the game” above all. Literary theorists in the 20th century did amazing work in elucidating how formalism covers all manner of political sins, but it is wrong to conclude from this that there is no alternative to approaching every academic question from the perspective of a political advocate, as though declining to use your research to attack the patriarchy/imperialist hegemony/Schweinesystem means by default supporting it.
The value of technical standards is not that they automatically lead to the truth, or that they are without biases, but that their biases are different from those that humans are naturally prone to. They push in different directions, and so private agendas can run up against hard technical limits. Obviously, this doesn’t work if the technical standards and methods are so flexible that you never have to admit that your initial impulse, the idea that you wanted to demonstrate, was wrong. Then you end up with the situation rightly ridiculed, where so-called experts are just high priests using their hieratic terminology to intimidate the rubes and impose their political designs — or those of their masters.
*Mathematics really does have standards that are hard to bluff. There, you get distraction and delay mainly when someone has published an erroneous proof of a true result.