How to predict the future better than anyone
Philip Tetlock, a professor at the University of Pennsylvania, argues that almost anyone can learn to peer into the future
History often isn’t kind to those who go on the record making predictions. Albert Einstein once said that nuclear energy would never be a thing, while Margaret Thatcher predicted that a woman would never be prime minister in her lifetime. And remember the record executive who said the Beatles had no future in show business?
People are often spectacularly bad at forecasting the future. But they don’t have to be, says Philip Tetlock, a professor at the University of Pennsylvania who has spent decades studying how people make predictions. In a new book, “Superforecasting: The Art and Science of Prediction,” which he co-wrote with journalist Dan Gardner, Tetlock argues that almost anyone can learn to peer into the future.
Earlier in his career, Tetlock conducted a famous 20-year study in which he had a group of experts make a total of around 28,000 predictions about politics, war, economics and other topics, over time horizons of one to 10 years. After scoring all of their predictions against what actually happened, Tetlock’s takeaway was that experts were only about as effective at predicting the future as dart-throwing chimpanzees.
That quip got picked up everywhere, Tetlock says. But there was another important finding from the work that many did not hear. While experts on average didn’t make predictions that were much better than chance, there was a small subset of experts that were actually pretty good at making predictions, a group that Tetlock has come to call “superforecasters.”
Tetlock and his research partner Barbara Mellers spent the last few years recruiting more than 20,000 volunteer forecasters to participate in a massive forecasting activity they dubbed "The Good Judgment Project." To figure out the best methods of forecasting the future, Tetlock, Mellers and other researchers watched these forecasters as they made a multitude of predictions about things like the price of gold and protests in Russia.
The Good Judgment Project competed as one of five teams in a forecasting tournament sponsored by American intelligence agencies, who funded the competition in an effort to learn how to make better predictions after the intelligence failures of 2003, when the United States invaded Iraq believing that Saddam Hussein had weapons of mass destruction. Each of the five teams was required to submit forecasts on a broad range of issues every morning from September 2011 to June 2015, creating a vast trove of data about when predictions succeed and why.
After all this research, Tetlock concluded that the superforecasters aren’t necessarily geniuses, math whizzes or news junkies, though all are intelligent and aware. What separates them from everyone else are certain ways of thinking and reasoning that anyone of decent intelligence can learn, Tetlock says — if they’re willing to put in the work.
The potential gain to society from making better predictions is huge and wide-ranging — from companies thinking about managing risk, to average people investing their retirement funds, to Hollywood producers planning their next blockbuster. The U.S. intelligence community in particular spends billions of dollars a year predicting what governments and other actors around the world are likely to do.
In Tetlock’s eyes, better forecasting means not just making better predictions but also gathering data afterward about which predictions were right and wrong. He says this scientific approach to forecasting and analyzing events could do a lot to resolve some of America’s most controversial and polarized debates in economics and politics.
The scientific method
One of the most important things superforecasters do, Tetlock says, is use the power of probability. You can see that in some of the Good Judgment Project's forecasts for 2016, which were given to me by Warren Hatch, a superforecaster who works in finance when he’s not carefully calculating probabilities.
Hatch walked me through an example. As of Jan. 3, the median estimate of their group was that there was a 68 percent chance that a Democrat would win the White House in 2016, a 31 percent chance that a Republican would win, and a 1 percent chance of another scenario — though Hatch cautions that these estimates will probably change as the presidential race evolves.
Here are some of their other forecasts for the coming year (these are the median estimates of their group of forecasters as of Dec. 29):
A 2 percent probability that the Fed will raise interest rates at its January meeting and a 59 percent probability of a hike in March.
A 6 percent chance that a country will leave the euro zone by the end of 2016. If no country leaves the euro zone, they foresee a 35 percent chance that the euro touches parity with the dollar by that time; if a country does leave, they see an 87 percent chance that the euro touches parity. (These two conditional estimates can be combined into a single overall figure, as in the sketch after this list.)
An 8 percent chance that Congress will ratify the Trans-Pacific Partnership trade agreement before the 2016 general election, but a 60 percent chance it will be ratified between the election and the end of 2017.
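Conditional forecasts like the euro one above can be collapsed into a single unconditional number with the law of total probability: weight each conditional estimate by the probability of its condition and add the pieces up. Here is a minimal sketch in Python using the medians quoted above; the variable names are mine, not the Good Judgment Project's.

```python
# Law of total probability: combine conditional forecasts into one overall number.
# Figures are the Good Judgment Project medians quoted above (as of Dec. 29).

p_exit = 0.06                  # chance a country leaves the euro zone by end of 2016
p_parity_given_exit = 0.87     # chance of euro-dollar parity if a country leaves
p_parity_given_no_exit = 0.35  # chance of euro-dollar parity if no country leaves

# P(parity) = P(exit) * P(parity | exit) + P(no exit) * P(parity | no exit)
p_parity = p_exit * p_parity_given_exit + (1 - p_exit) * p_parity_given_no_exit

print(f"Overall chance of euro-dollar parity: {p_parity:.0%}")  # about 38%
```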
These odds might seem strangely specific, but for the superforecasters they are typical. Gardner and Tetlock write that most people use a “two- or three-setting mental dial,” thinking of probabilities in terms of only “yes,” “no” or “maybe.” But the superforecasters drill down on much more specific probabilities, taking care to consider whether something really has a 62 or 63 percent chance of occurring.
How did the forecasters reach the presidential election figure? Most of us would probably start by thinking about the political climate in the United States and the popularity of Barack Obama, Hillary Clinton, Donald Trump and the other Republican candidates. But Hatch cautions against this approach.
Instead, he and other forecasters try to establish a ballpark figure using history and statistics before they get into the details.
Superforecasters at a conference in London in October 2015. Hatch is in the bottom row, second from the right. (Scott Eastman)
For example, they might look at how often a president of a different party is elected when the U.S. economy is growing at only slightly above 2 percent, as it is now. Or they might look at how often a president of the same party is elected after a two-term presidency, like that of Obama. (Hatch says it’s about a third of the time.)
Only after creating this estimate will the forecasters use their knowledge of the specific situation to adjust that estimate up or down, Hatch says. They might think about America's changing demographics and Marco Rubio's broad appeal, and bump up the estimate for the Republican Party. Or they might read news stories that attest to Clinton’s momentum and raise the estimate for the Democrats, he says.
This approach echoes the thinking of Daniel Kahneman, a psychologist who studies decision making and won the Nobel prize for his work in behavioral economics. Kahneman calls the broad, statistical estimate that the superforecasters make first "the outside view," and he's cautioned people to be more aware of the outside view when making predictions and decisions.
By doing that, he and others believe people can partially counter the tendency of the human brain to focus on our own personal experiences — what Kahneman calls "the inside view" — and ignore the substantial power of history and statistics to at least provide context.
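To make the outside-then-inside process concrete, here is a rough Python sketch of how a forecaster might anchor on a base rate and then adjust it. The base rate echoes Hatch's roughly one-in-three figure for the same party winning after a two-term presidency; the adjustments and their sizes are illustrative placeholders, not numbers from Hatch or the Good Judgment Project.

```python
# A rough sketch of outside-view anchoring followed by inside-view adjustment.
# The adjustment values below are illustrative placeholders only.

# Outside view: start from a historical base rate, e.g. how often the incumbent
# party has kept the White House after a two-term presidency (~1/3, per Hatch).
base_rate_same_party_wins = 1 / 3

# Inside view: nudge the anchor up or down for case-specific evidence,
# each shift small and explicitly justified.
adjustments = {
    "incumbent-party candidate polling well": +0.05,
    "economy growing only slightly above 2 percent": -0.02,
}

estimate = base_rate_same_party_wins + sum(adjustments.values())
estimate = min(max(estimate, 0.0), 1.0)  # keep it a valid probability

print(f"Adjusted estimate that the same party wins: {estimate:.0%}")
```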
Start making sense
Another key to being a good forecaster is to try to keep your beliefs from clouding your perceptions. This is a fault that Tetlock and Gardner say is far too common among the talking heads on television and in the newspaper who make the most influential predictions in America, about things like whether the United States can defeat the Islamic State, which candidate will win the Iowa caucuses, or whether the Fed will raise interest rates.
People want answers. As a result, the media gravitate toward those with clear, provocative and easy-to-understand ideas, and audiences tend to believe those who speak with confidence and certainty.
The problem is that those who speak with confidence and certainty and spin a clear narrative are actually less likely to make accurate predictions, Tetlock and Gardner say. And the more famous the expert, the worse he or she seems to be at forecasting the future. Tetlock’s original study — the one in which he concluded that experts were roughly as effective as chimpanzees — actually showed an inverse relationship between an expert’s fame and the accuracy of his or her predictions.
People have a tendency to selectively absorb facts that confirm their worldview, and ignore those that don’t — what psychologists call “confirmation bias.” As Kahneman writes, “Declarations of high confidence mainly tell you that an individual has constructed a coherent story in his mind, not necessarily that the story is true.”
This may be particularly true for people who tend to organize their thinking around a “Big Idea” that they are passionate about — whether socialism, free-market fundamentalism or impending environmental doom. Tetlock dubs such ideologically minded people "hedgehogs," after a phrase from Greek poetry (“The fox knows many things but the hedgehog knows one big thing.”)
How do you spot a hedgehog? Tetlock says to listen to their arguments and watch out for "the experts who say 'moreover' more often than they say 'however'" — i.e., people who double down on their arguments without qualifying them with contrasting information — as well as those who declare things “impossible” or “certain."
You can probably guess that Tetlock's superforecasters are not "hedgehogs." Instead, they are the kind of people Tetlock dubs "foxes."
According to Tetlock, foxes are more pragmatic and open-minded, aggregating information from a wide variety of sources. They talk in terms of probability and possibility, rather than certainty, and they tend to use words like “however,” “but,” “although” and “on the other hand” when speaking.
When it comes to prediction, the foxes outfox the hedgehogs. While the hedgehogs managed to do slightly worse than random chance, the foxes had real foresight. The foxes weren’t just more cautious in their predictions; they were a lot more accurate.
Unlike hedgehogs, foxes also had a desire to keep reviewing their assumptions, updating their estimates, and improving their understanding — a kind of attitude that Tetlock calls "perpetual beta." Tetlock says that this approach is the single most important ingredient for learning to make accurate predictions. “For superforecasters, beliefs are hypotheses to be tested, not treasures to be guarded,” Tetlock and Gardner write.
Testing the results and figuring out which predictions were right and which were wrong is an essential part of Tetlock's formula. Without evaluating the result of a prediction, it's like "practicing free throws in the dark," he says — impossible to get better.
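Doing that scoring requires forecasts stated as probabilities with clear deadlines, plus a consistent yardstick. The standard one in forecasting tournaments, including Tetlock's, is the Brier score: the average squared gap between the probability you assigned and what actually happened, where 0 is perfect and lower is better. A minimal sketch with made-up forecasts:

```python
# Brier score: mean squared error between forecast probabilities and outcomes.
# Lower is better; 0.0 is perfect, and always saying 50% earns 0.25.

def brier_score(forecasts, outcomes):
    """forecasts: probabilities in [0, 1]; outcomes: 1 if the event happened, else 0."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical track record: three resolved questions.
forecasts = [0.68, 0.02, 0.60]   # probabilities assigned before the fact
outcomes  = [1,    0,    0]      # what actually happened

print(brier_score(forecasts, outcomes))  # ~0.154
```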
Unfortunately, most of the predictions you see in the media lack the specificity necessary to test them, like a specific time frame or probability, Tetlock says.
"[Experts will] say emphatic sounding things, like the Eurozone is on the verge of collapse, but they’ll say 'The Eurozone may well be on the verge of collapse' — there’s no certainty, and there’s no date. I don’t know what it means to say something 'may well be' — it could be 20% or 80%," he says.
Instead, Tetlock advocates for something he calls "adversarial collaboration" — getting people with opposing opinions in an argument to make very specific predictions about the future in a public setting, so onlookers can measure which side was more correct.
For example, hawks and doves on the Iran nuclear deal, or advocates of austerity and advocates of stimulus for Europe's downtrodden economies, could go head to head in making predictions and let the public evaluate which side is more accurate. Tetlock says this process could help make real progress on some of the most controversial issues of our time and help depolarize unnecessarily polarized debates.
Tetlock acknowledges that the most powerful and famous experts around will probably have little motivation to join this kind of debate, since they would have little to gain and everything to lose. "The more status you have in the system, the less interested you should be in testing whether you are better than ordinary mortals." Of his forecasting tournaments, he says, "I’m not waiting for Tom Friedman to sign up."
For most mere mortals, though, the gains to be had from making more scientific and accurate predictions are immense. “To be sure, in the big scheme of things, human foresight is puny, but it is nothing to sniff at when you live on that puny human scale,” Tetlock and Gardner write.
Washington Post