I’ve just been reading Gerd Gigerenzer’s book Reckoning with Risk, about risk communication, mainly a plea for the use of “natural frequencies” in place of probabilities: formulations of the form “In how many cases out of 100 similar cases of X would you expect Y to happen?”

He cites one study in which forensic psychiatry experts were presented with a case study and asked to estimate the likelihood of the individual being violent in the next six months. Half the subjects were asked “What is the probability that this person will commit a violent act in the next six months?” The other half were asked “How many out of 100 women like this patient would commit a violent act in the next six months?” Looking at these questions, I found it obvious that the latter would elicit lower estimates. And that is indeed what happened: the average response to the first question was about 0.3; the average response to the second was about 20.
What surprised me was that Gigerenzer seemed perplexed that the difference consistently went in one direction (though not, obviously, by the fact that the experts were confused by the probability question). He suggested that those answering the first question were imagining the same patient being released multiple times, which didn’t make much sense to me.
What I think is that the experts were treating the individual probability as a hidden fact about the patient, not as a statistical statement. Asked to estimate this unknown quantity, it seems natural that they would be cautious: believing it to lie somewhere between 10 and 30 percent, they would not want to underestimate this particular individual’s risk, underestimation presumably being the costlier mistake, and so would conservatively report the upper end of their range. This is perfectly consistent with believing that, averaged over 100 such cases, about 20 would commit a violent act.
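A toy simulation makes this concrete. To be clear, the mechanism and numbers here are my own illustrative assumptions, not anything from the study: each simulated expert privately believes the risk lies in some personal range, answers the single-case probability question with the top of that range, and answers the frequency question with the expected count over 100 cases.

```python
# Sketch of the "cautious upper bound" explanation (my assumption, not
# Gigerenzer's): the same underlying belief produces ~0.3 under the
# probability framing and ~20/100 under the frequency framing.
import random

random.seed(0)
n_experts = 1000

prob_answers = []
freq_answers = []
for _ in range(n_experts):
    low = random.uniform(0.05, 0.15)   # lower end of this expert's belief range
    high = random.uniform(0.25, 0.35)  # upper end of this expert's belief range
    prob_answers.append(high)                    # cautious single-case estimate
    freq_answers.append(100 * (low + high) / 2)  # expected count out of 100

print(f"mean probability answer: {sum(prob_answers) / n_experts:.2f}")  # ~0.30
print(f"mean frequency answer:   {sum(freq_answers) / n_experts:.0f}")  # ~20
```

Under these assumptions the two answers diverge exactly as in the study, without the experts holding inconsistent beliefs: the gap comes entirely from reporting a cautious bound in one framing and an average in the other.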