Range(55)
Author: David Epstein

   In 2005, he published the results of his long study of expert judgment, and they caught the attention of the Intelligence Advanced Research Projects Activity (IARPA), a government organization that supports research on the U.S. intelligence community’s most difficult challenges. In 2011, IARPA launched a four-year prediction tournament in which five researcher-led teams competed. Each team could recruit, train, and experiment however it saw fit. Every day for four years, predictions were due at 9 a.m. Eastern time. The questions were hard. What is the chance that a member will withdraw from the European Union by a target date? Will the Nikkei close above 9,500? What is the likelihood of a naval clash claiming more than ten lives in the East China Sea? Forecasters could update predictions as often as they wanted, but the scoring system rewarded accuracy over time, so a great prediction at the last minute before a question’s end date was of limited value.
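The scoring rule behind "rewarded accuracy over time" can be made concrete. The Good Judgment Project scored forecasts with Brier scores averaged over every day a question was open; the sketch below assumes that scheme, and the three-outcome question and all the numbers in it are illustrative, not tournament data:

```python
def brier_score(probs, outcome_index):
    """Multi-outcome Brier score: sum of squared gaps between the
    forecast probabilities and the realized outcome (0 = perfect)."""
    return sum((p - (1.0 if i == outcome_index else 0.0)) ** 2
               for i, p in enumerate(probs))

def time_averaged_score(daily_forecasts, outcome_index):
    """Average the daily Brier scores over the life of a question,
    so a last-minute correction barely moves the overall score."""
    scores = [brier_score(day, outcome_index) for day in daily_forecasts]
    return sum(scores) / len(scores)

# A forecaster confident in the right answer from day one beats one
# who hedges at near-uniform odds and only commits on the final day.
early = [[0.1, 0.8, 0.1]] * 10                       # confident throughout
late = [[0.34, 0.33, 0.33]] * 9 + [[0.1, 0.8, 0.1]]  # commits at the end
print(time_averaged_score(early, 1))  # low (good) score
print(time_averaged_score(late, 1))   # much higher (worse) score
```

This is why a great prediction at the last minute was of limited value: nine days of hedging are already baked into the average.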

   The team run by Tetlock and Mellers was called the Good Judgment Project. Rather than recruit decorated experts, in the first year of the tournament they made an open call for volunteers. After a simple screening, they invited thirty-two hundred to start forecasting. From those, they identified a small group of the foxiest forecasters—just bright people with wide-ranging interests and reading habits but no particular relevant background—and weighted team forecasts toward them. They destroyed the competition.

   In year two, the Good Judgment Project randomly arranged the top “superforecasters” into online teams of twelve, so that they could share information and ideas. They beat the other university-run teams so badly that IARPA dropped those lesser competitors from the tournament. The volunteers drawn from the general public beat experienced intelligence analysts with access to classified data “by margins that remain classified,” according to Tetlock. (He has, though, referenced a Washington Post report indicating that the Good Judgment Project performed about 30 percent better than a collection of intelligence community analysts.)

   Not only were the best forecasters foxy as individuals, they had qualities that made them particularly effective collaborators—partners in sharing information and discussing predictions. Every team member still had to make individual predictions, but the team was scored by collective performance. On average, forecasters on the small superteams became 50 percent more accurate in their individual predictions. Superteams beat the wisdom of much larger crowds—in which the predictions of a large group of people are averaged—and they also beat prediction markets, where forecasters “trade” the outcomes of future events like stocks, and the market price represents the crowd prediction.

   It might seem like the complexity of predicting geopolitical and economic events would necessitate a group of narrow specialists, each bringing to the team extreme depth in one area. But it was actually the opposite. As with comic book creators and inventors patenting new technologies, in the face of uncertainty, individual breadth was critical. The foxiest forecasters were impressive alone, but together they exemplified the most lofty ideal of teams: they became more than the sum of their parts. A lot more.

   * * *

   A few of the qualities that make the best Good Judgment Project forecasters valuable teammates are obvious from talking to them. They are bright, but so were the hedgehog experts Tetlock started with. They toss around numbers easily, estimating this country’s poverty rate or that state’s proportion of farmland. And they have range.

   Scott Eastman told me that he “never completely fit in one world.” He grew up in Oregon and competed in math and science contests, but in college he studied English literature and fine arts. He has been a bicycle mechanic, a housepainter, founder of a housepainting company, manager of a multimillion-dollar trust, a photographer, a photography teacher, a lecturer at a Romanian university—in subjects ranging from cultural anthropology to civil rights—and, most unusually, chief adviser to the mayor of Avrig, a small town in the middle of Romania. In that role, he did everything from helping integrate new technologies into the local economy to dealing with the press and participating in negotiations with Chinese business leaders.

   Eastman narrates his life like a book of fables; each experience comes with a lesson. “I think that housepainting was probably one of the greatest helps,” he told me. It afforded him the chance to interact with a diverse palette of colleagues and clients, from refugees seeking asylum to Silicon Valley billionaires whom he would chat with if he had a long project working on their homes. He described it as fertile ground for collecting perspectives. But housepainting is probably not a singular education for geopolitical prediction. Eastman, like his teammates, is constantly collecting perspectives anywhere he can, always adding to his intellectual range, so any ground is fertile for him.

   Eastman was uncannily accurate at predicting developments in Syria, and surprised to learn that Russia was his weak spot. He studied Russian and has a friend who is a former ambassador to Russia. “I should have every leg up there, but I saw over a large series of questions, it was one of my weakest areas,” he told me. He learned that specializing in a topic frequently did not bear fruit in the forecasts. “So if I know somebody [on the team] is a subject area expert, I am very, very happy to have access to them, in terms of asking questions and seeing what they dig up. But I’m not going to just say, ‘Okay, the biochemist said a certain drug is likely to come to market, so he must be right.’ Often if you’re too much of an insider, it’s hard to get good perspective.” Eastman described the core trait of the best forecasters to me as: “genuinely curious about, well, really everything.”

   Ellen Cousins researches fraud for trial lawyers. Her research naturally roams from medicine to business. She has wide-ranging interests on the side, from collecting historical artifacts to embroidery, laser etching, and lock picking. She conducts pro bono research on military veterans who should (and sometimes do) get upgraded to the Medal of Honor. She felt exactly the same as Eastman. Narrow experts are an invaluable resource, she told me, “but you have to understand that they may have blinders on. So what I try to do is take facts from them, not opinions.” Like polymath inventors, Eastman and Cousins take ravenously from specialists and integrate.

   Superforecasters’ online interactions are exercises in extremely polite antagonism, disagreeing without being disagreeable. Even on a rare occasion when someone does say, “‘You’re full of beans, that doesn’t make sense to me, explain this,’” Cousins told me, “they don’t mind that.” Agreement is not what they are after; they are after aggregating perspectives, lots of them. In an impressively unsightly image, Tetlock described the very best forecasters as foxes with dragonfly eyes. Dragonfly eyes are composed of tens of thousands of lenses, each with a different perspective, which are then synthesized in the dragonfly’s brain.

   One forecast discussion I saw was a team trying to predict the highest single-day close for the exchange rate between the U.S. dollar and Ukrainian hryvnia during an extremely volatile stretch in 2014. Would it be less than 10, between 10 and 13, or more than 13? The discussion started with a team member offering percentage predictions for each of the three possibilities, and sharing an Economist article. Another team member chimed in with a Bloomberg link and online historical data, and offered three different probability predictions, with “between 10 and 13” favored. A third teammate was convinced by the second’s argument. A fourth shared information about the dire state of Ukrainian finances. A fifth addressed the broader issue of how exchange rates change, or don’t, in relation to world events. The teammate who started the conversation then posted again; he was persuaded by the previous arguments and altered his predictions, but still thought they were overrating the possibility of “more than 13.” They continued to share information, challenge one another, and update their forecasts. Two days later, a team member with specific expertise in finance saw that the hryvnia was strengthening amid events he thought would surely weaken it. He chimed in to inform his teammates that this was exactly the opposite of what he expected, and that they should take it as a sign of something wrong in his understanding. In contrast to politicians, the most adept predictors flip-flop like crazy. The team finally homed in on “between 10 and 13” as the heavy favorite, and they were correct.
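The hryvnia discussion ends with the team converging on a shared probability. A simple way to model that convergence is to average each member's probabilities and then "extremize" the average, a sharpening step the Good Judgment Project's researchers reported improved pooled forecasts; the team numbers below are invented for illustration, not from the actual discussion:

```python
def pool(forecasts):
    """Average each outcome's probability across team members."""
    n = len(forecasts)
    return [sum(f[i] for f in forecasts) / n
            for i in range(len(forecasts[0]))]

def extremize(probs, a=2.0):
    """Push a pooled forecast away from uniform by raising each
    probability to a power a > 1 and renormalizing."""
    powered = [p ** a for p in probs]
    total = sum(powered)
    return [p / total for p in powered]

# Each row: one member's probabilities for (<10, 10-13, >13).
team = [
    [0.20, 0.60, 0.20],
    [0.10, 0.70, 0.20],
    [0.25, 0.55, 0.20],
]
pooled = pool(team)        # simple average across members
final = extremize(pooled)  # sharpened consensus; "10-13" gains weight
```

Extremizing reflects the observation that an average of independent forecasters is usually underconfident: if several people with different information all lean the same way, the evidence is stronger than any one forecast shows.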
