on coming rain

Jordan Lei
10 min read · Jul 11, 2020

This is the third chapter in a series of pieces about our modern relationship with time and the future. Titled Hourglass, it's an exploration into how our abstract view of time has changed in modernity, how it has met (or has yet to meet) the needs of the present, and what we can do to better prepare ourselves for what's to come.

First Chapter / Previous Chapter / Next Chapter

Predicting the Future

According to ancient Greek legend, Cassandra was blessed with the gift of prophecy: she could see the future, but she was cursed so that no one would believe her. Time and time again she would warn of impending doom, and without fail, the people she warned ignored her words.

First, she foresaw the start of the Trojan War when Paris took Helen to be his lover. But no one believed her. Next, she warned the Trojans about the so-called horse-gift that the Greeks were about to present to them, because, you know, come on guys. Again, they didn't listen. In her final prophecy, she warned King Agamemnon of his impending death, only to be ignored. Her story serves as a cautionary tale about silencing informed voices and ignoring warning signs in the face of impending disaster, from the Trojan War to the Challenger launch¹.

But this isn’t a story about her.

In 1831, an expedition to coastal Patagonia would change the world forever, seeding a scientific theory that continues to incite heated debate in modern society. The expedition took with it a curious naturalist who hoped to catalog as-yet-undiscovered species of animals. This naturalist would go on to prove his acumen after observing the divergence of traits in very similar species of mockingbirds and finches. After his voyage aboard the HMS Beagle, Charles Darwin would publish his greatest work, On the Origin of Species, detailing how descent with modification gave rise to the vast diversity of flora and fauna on Earth.

But this isn’t a story about him, either.

There was another member aboard the HMS Beagle: its captain, Robert Fitzroy, who would later rise to the rank of admiral. After the two voyages of the HMS Beagle, Fitzroy gained prominence for his competence as a seafarer and for the scientific discoveries made under his watch. He was quickly elevated to various government positions once Queen Victoria ascended to the throne³.

In 1854, just five years before Darwin would publish On the Origin of Species, Fitzroy established the Met Office, an organization meant to reduce sailing times and improve the reliability and safety of sea travel. In the mid-19th century, seafaring conditions were deeply uncertain, and it was still impossible to predict with any reasonable certainty what the weather might look like from one day to the next. Being able to anticipate storms could mean the difference between life and death for sailors, a risk Fitzroy knew all too well as a seafarer himself⁴.

Fitzroy set out to establish the first systematic, data-driven approach to constructing so-called weather charts, which he then used to predict coming conditions. His efforts were met with ridicule:

When one MP suggested in the Commons in 1854 that recent advances in scientific theory might soon allow them to know the weather in London "twenty-four hours beforehand", the House roared with laughter⁴

Slowly but surely, Fitzroy's crack-prophecies and predictions gained traction for being consistent, precise, and often accurate. Today we know them, of course, by a different name: forecasts. His forecasts shifted from mere curiosity to standardized institution, used not only by sailors and fishermen, but also by elites at the horse races deciding what to wear for the day⁴.

Despite the effort he put in, Admiral Fitzroy couldn't shake off the criticism lobbed against his work. The scientific community viewed his predictions with suspicion, the government complained about the resource-intensive process of collecting and transmitting data, and the public complained when the forecast was incorrect, often blaming Fitzroy's faulty predictions for a bad harvest. On April 29, 1865, Admiral Fitzroy gave his last forecast: thunderstorms over London. The next day, he killed himself⁴.

Needless to say, Admiral Fitzroy's story is a somber reminder of the risks of peering into the future: fate doesn't treat the prescient kindly. Whether it's Fitzroy or Cassandra, knowing the future doesn't ensure happiness. Prophecies often weigh more than crowns. But like Cassandra's, Fitzroy's story is also a warning about ignoring informed voices. The way we treat the Cassandras and Fitzroys of our world often says a lot more about us than it does about them. If we want to improve as a species, we need more people like Fitzroy, pioneering good, down-to-earth science about forecasting in a systematic and well-researched fashion.

Even though Admiral Fitzroy's last forecast was more than a century and a half ago, we can learn a lot from the paradigms he built into forecasting and apply them to our own future. With some help, we can become better forecasters of tomorrow, today.

Modern Forecasters

The preeminent scholar on forecasting in the modern age is Dr. Philip Tetlock, a professor at the Wharton School. His book Superforecasting reveals how individuals and groups can produce stunningly accurate forecasts of the future. It’s also a subtle warning against the dangers of listening to bad forecasts, those made by self-proclaimed experts who provide predictions that can’t be evaluated or falsified over time⁵.

Good forecasts, it turns out, have a remarkably simple set of criteria: they need to be specific, time-bounded, and measurable. That is to say, "the economy will take a downturn" is not a good forecast. Which parts of the economy will take a downturn? When does this forecast expire? What is the metric by which the economy is measured? And how confident are you? A much better forecast would be "I am 70% confident that the S&P 500 will fall by 2% by the end of 2021". This is something that can be evaluated! At the end of 2021, you can come up to me with the value of the S&P 500 Index and tell me whether or not I was right. Over time, you could tell if I was a good forecaster or not: in an ideal world, the things that I am 70% confident about would happen 70% of the time, and so on. This is captured by what's called a Brier score, which measures the average gap between your stated probabilities and what actually happened (smaller is better)⁵.
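
To make that concrete, here is a minimal sketch in Python; the forecasts and outcomes below are invented for illustration, not drawn from the book. A Brier score is just the mean squared difference between the probabilities you stated and what actually happened, so a perfect forecaster scores 0.0 and someone who always answers 50% scores 0.25.

```python
# Minimal Brier score sketch with made-up forecasts (illustrative only).
# Each forecast is a probability assigned to "the event will happen";
# each outcome is 1 if the event happened and 0 if it didn't.

def brier_score(forecasts, outcomes):
    """Mean squared gap between stated probabilities and actual outcomes.
    0.0 is perfect; always answering 50% scores 0.25."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# e.g. "I am 70% confident the S&P 500 will fall by 2% by the end of 2021"
forecasts = [0.70, 0.20, 0.90, 0.50]   # stated probabilities
outcomes  = [1,    0,    1,    0]      # what actually happened

print(brier_score(forecasts, outcomes))  # 0.0975, comfortably better than coin-flipping
```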

Tetlock constructed a tournament to select for the best forecasters, so-called "superforecasters", and he found that they consistently outperformed some of the best forecasting methods for real-world predictions. They vastly outperformed average forecasts, and even significantly beat prediction markets (where people had monetary stakes riding on outcomes). These superforecasters didn't separate themselves from average forecasters by virtue of intelligence or technological know-how. Rather, they separated themselves by how they approached forecasting.

Superforecasters were more likely to avoid common pitfalls like painting situations with a broad brush. Rather than relying on heuristics or hunches, they approached each new task with an open mind, eager to search out whatever details might be relevant to the issue at hand. They were more likely to approach a problem from the bottom up, starting with base rates of probability and building up to finer details. If, for example, they were asked to predict whether a given candidate would win re-election, rather than starting with whether the candidate was currently popular or had the energy to sway voters in their favor, they would start with the likelihood that any incumbent wins re-election, and then narrow their focus from there. They were also more likely to change their minds, and to be precise when they did; superforecasters updated their beliefs in the face of new evidence and made targeted, measured adjustments. Armed with these tools, Tetlock created one of the top forecasting groups in the country⁵.
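
To make the base-rate-first habit concrete, here is a toy sketch in Python. Every number in it is invented for illustration (the 65% base rate, the poll figures); none of it comes from Tetlock's data. The idea is simply to start from how often incumbents win in general, then fold in one piece of case-specific evidence using Bayes' rule.

```python
# Toy sketch of the superforecaster workflow: start from a base rate,
# then update it as evidence arrives. All numbers here are invented.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior probability of the event after observing one piece of evidence."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

# Step 1: the base rate. Suppose incumbents have historically won ~65% of the time.
belief = 0.65

# Step 2: narrow the focus. The candidate trails in early polls, something we might
# see in 30% of winning campaigns but 70% of losing ones (assumed numbers).
belief = bayes_update(belief, p_evidence_if_true=0.30, p_evidence_if_false=0.70)

print(round(belief, 2))  # ~0.44: the evidence pulls the estimate down, but not to zero
```

There's just one problem.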

We don’t listen to them.

The Cassandra Curse

Despite the success and accuracy of Tetlock's group of superforecasters, the people who discuss politics on TV and dominate headlines about the economy aren't the people Tetlock has identified as among the most reliable forecasters on the planet. It's worse than that. Experts in politics and economics consistently violate all of the principles of good forecasting that Tetlock has identified in his work. First, they use heuristics and basic rules of thumb to extrapolate into the future, which would have been fine in the Middle Ages, but in today's world it's like bringing a crayon to a paintball fight. Second, they start with minutiae rather than with base rates. Political pundits pay extreme attention to every gaffe, slip, and cough a candidate produces in every speech without considering the broader likelihood of victory given the wealth of data that already exists for candidates in similar contexts. Third, experts don't make predictions that are clearly measurable or falsifiable, and even when they do, they don't update their predictions or admit that they're wrong. It should come as no surprise, then, that Tetlock's research into expert political opinion showed that the average expert performed no better than random chance at forecasting⁵.

When we talk about the future, we often think of it as impenetrable, full of things that are completely out of our control. Yet Fitzroy and Tetlock both show us that there are a great many things we can know in advance, if not with full certainty, then at least with some insight. This knowledge marked a radical shift in the possibilities available to humankind: the weather forecast was one of the first times humans could peer into the future and take a seat among the oracles and prophets. And there's no better time to use that power than right now. Our future is changing faster than ever before. Technology promises to give us virtual assistants, take us to Mars, and put autonomous cars on the road. But our ability to adequately prepare for what's ahead rests on our ability to reasonably forecast what's to come. Unfortunately, judging by the kinds of forecasters dominating the news, we're not much better off today than we were when people thought weather forecasts were crack science.

There's one more wrinkle to this story. You might be wondering why we aren't taking forecasting more seriously, why experts haven't fundamentally changed the way they make predictions about the future. Why is it that when experts on TV are questioned about politics or economic prospects, they're more likely to answer confidently with some half-baked guess than to use sound statistics and empirics, or even just flat-out admit that they don't know?

One explanation might be that our current views on expertise haven't yet adapted to better evidence about forecasting. Maybe current experts take cues from the experts before them, who took cues from the experts before them, and so on all the way back to pre-forecasting times, when confidence was your best bet. Given enough time, they'll come around. This is unlikely, for a number of reasons. First, statisticians and scientists have been using forecasting models for decades, yet they get little coverage relative to the pundits we see on the news. Second, appeals to rationality are often less effective than appeals to emotion; it's why a statistic about world hunger is less likely to motivate you to donate than a picture of a starving child.

Another explanation for our skewed taste in public forecasters might be that it's easier to convince people you have credibility if you sound like you know what you're talking about, even if you don't have a clue; for some so-called experts, it's better to say something wrong than to say nothing at all. It may be that experts are rewarded for confidence over accuracy, conviction over precision. The last hypothesis is perhaps the most chilling of all: maybe, deep down, we know that there are better ways to forecast the future, but we just don't want to know.

This is the most dangerous of the three, because choosing not to know is an option that we take at our own risk. The future is coming fast, whether we like it or not, and the way in which we predict, understand, and interface with the future will determine how well we can adequately respond to it.

Make no mistake, a change is coming. The winds are coming in from all directions, and we can feel them now. Technology can sink our ship just as fast as it can fill our sails, and it's time we took our approach to reading the weather seriously. Now is the time to embrace rigorous, evidence-based approaches to forecasting, so we can be better prepared for what's to come. But if we choose instead to follow the loudest voices, silence our forecasters, and turn away from our Cassandras, then we'd better be prepared to face the oncoming storm.

