Superforecasting:
There is a huge market for forecasting in our country, from political talking heads on TV, to investment advisors, to the approximately 20,000 intelligence analysts working in our intelligence establishment. But while the stakes for getting it wrong can be huge (see Iraq’s WMD), there is no formal, reliable way of measuring or analyzing the track records of those doing the predicting. Pick your favorite commentator or columnist: what’s their hit rate on their predictions? That’s impossible to answer, first because no one has compared what they said would happen to what actually happened, and second because, even if someone did, so many of their predictions are so vaguely worded that most pundits could easily claim they meant something else and wiggle off the hook.
Philip Tetlock is trying to change that. Beginning in the 1980s, he has been studying how good experts are at prediction (answer: just slightly better than a drunk monkey throwing darts). One of his findings was that the pundits who were the most confident tended to be wrong more often, but they also got on TV more often. They are hired more for their ability to tell a compelling story with confidence than for their track record in forecasting.
This latest book details his findings from a four-year project, funded by IARPA (the Intelligence Advanced Research Projects Activity), to test the forecasting performance of several different teams of experts. It was a large test, posing over 500 questions to more than 20,000 participants between 2011 and 2015. It was also rigorous, with questions designed to eliminate the wiggle-room problem, for example: “Will any country withdraw from the Eurozone in the next three months?” and “How many additional countries will report cases of the Ebola virus in the next eight months?”
The study found that about 2% of participants, whom he calls superforecasters, are consistently more accurate in their predictions. By identifying the superforecasters, and then testing different combinations and variables, he was able to tease out what makes them successful, and the bulk of the book explains the traits, techniques, and habits of thought that make for superior judgment.[1]
The basic theme is that it’s not superior intellect that distinguishes the SFs, but how they think. Here are just a few of his recommendations:
One example: replace vague verbal hedges with a defined scale of likelihood, such as this seven-point scale:

Remote
Very unlikely
Unlikely
Even
Probably, likely
Very likely
Almost certain
Even better, use percentages. It won’t guarantee[2] you’re right, but it will force you to examine your own thinking and help you adopt a more nuanced viewpoint.
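If you do put a number on each forecast, you can also keep score. Here is a minimal sketch, my own illustration rather than anything taken from the book, of the Brier score that Tetlock’s tournament used to measure accuracy: the average squared difference between the probability you assigned and the 0-or-1 outcome, where lower is better. The example predictions are invented.

```python
# Minimal sketch: scoring a track record of quantified forecasts with the
# Brier score (the accuracy measure used in Tetlock's forecasting tournament).
# The forecasts below are invented for illustration.

def brier_score(forecasts):
    """forecasts: list of (probability_assigned, outcome) pairs,
    where outcome is 1 if the event happened and 0 if it did not.
    Returns the mean squared error; 0.0 is perfect, 1.0 is worst."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# A hypothetical track record: what was predicted vs. what happened.
my_forecasts = [
    (0.80, 1),   # said "80% likely" and it happened
    (0.60, 0),   # said "60% likely" and it did not happen
    (0.10, 0),   # said "10% likely" and it did not happen
]

print(f"Brier score: {brier_score(my_forecasts):.3f}")  # lower is better
```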
There’s far more good advice than I can summarize, but frankly I’m struggling a little to decide whether to recommend that you read Superforecasting. On the plus side, I predict that it is very likely that if you read and apply its lessons, you will become a better thinker. On the other hand, it’s an even chance that you will become a worse persuasive communicator. That’s because an effective salesperson radiates confidence about the future they recommend, while effective forecasters are far more cautious and humble about their predictions.
My personal choice would be to begin with better thinking. First, for the obvious point that you owe it to yourself and to others to give them your best thinking. Second, because sustainable influence depends on credibility, which in the long run will correlate strongly with the accuracy of your predictions. It’s true that the TV pundits who are most confident in their predictions tend to be wrong more often, and they don’t suffer for it. But when people are putting their own reputations or money at stake based on what you predict, they tend to have longer memories.
[1] I use judgment in the sense that they are better predictors, not that they necessarily make better decisions.
[2] In fact, you may have noticed that the seven-point scale does not include certainty on either side.
Finding the right base rate is often very complicated.