Tuesday, February 12, 2013

Nate Silver, And Predicting The Future

I am in the middle of reading Nate Silver's excellent new book "The Signal and the Noise: Why So Many Predictions Fail - but Some Don't."

You might have heard of Nate Silver during last year's Presidential campaign.  Silver is a statistician, pollster, and the author of the New York Times political blog FiveThirtyEight.com.

Throughout the summer, and going into the election, Silver consistently said that President Obama would handily defeat Governor Romney in their race for the White House.  Even in October, when most polls indicated that Romney was even with or slightly ahead of Obama, Silver remained steadfast, saying he was "91% confident" that the President would be reelected, based on his polling analysis.

Despite being the target of often vicious attacks by other pollsters, Silver was of course proved correct.  There was more: Silver even correctly predicted the election results for each of the 50 states.

Silver's book is well-written, and full of interesting insights.

One of the most important points that Silver discusses is that predicting future outcomes - be it elections, sporting events, or the economy - is really more a matter of setting probabilities than making point forecasts.

In weather forecasting, for example, meteorologists will usually make their predictions in terms of probabilities, e.g. "a 70% chance of rain today" or "a 50% chance of snow later in the week".

What they are really saying is that the current set of atmospheric conditions, combined with an extensive amount of computer modeling, points to a probable weather outcome.

In our complex ecosystem, there is always the chance that some variable will change and push the weather in a much different direction than the odds would suggest.  Rather than simply saying "it will rain tomorrow", it is better to say that there is a "good chance of rain in some areas tomorrow".
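To make this concrete, here is a minimal toy sketch (my own illustration, not anything from Silver's book) of how an ensemble of slightly perturbed model runs can be boiled down to a single "chance of rain" figure.  Every number in it is invented:

```python
import random

def chance_of_rain(base_moisture, n_runs=1000, rain_threshold=0.5):
    """Toy ensemble forecast: rerun a 'model' many times with slightly
    perturbed starting conditions and report the fraction of runs that
    end in rain.  A real ensemble perturbs a full atmospheric model,
    not a single made-up scalar like this one."""
    rainy = 0
    for _ in range(n_runs):
        # Nudge the initial condition to reflect measurement uncertainty.
        perturbed = base_moisture + random.gauss(0, 0.15)
        if perturbed > rain_threshold:
            rainy += 1
    return rainy / n_runs

print(f"Chance of rain: {chance_of_rain(0.58):.0%}")  # e.g. "Chance of rain: 70%"
```

The forecaster's "70%" is not hedging for its own sake; it is an honest summary of how often the modeled futures actually turned out wet.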

Silver likes this approach, and notes how much the science of meteorology has improved over the last 20 to 30 years.

At the same time, Silver notes that in other fields with similarly complex sets of interacting variables, predictions are often presented with certainty, with only one final outcome offered.

For example, economic forecasts are nearly always given as a single number, e.g. "we think the economy will grow at +1.7% over the next year" or "we think the S&P will gain +8% by year-end."

To Silver, this seems crazy: any modern economy has so many interacting variables, any of which can change unexpectedly at any time, that the economy can veer off in a totally different direction than previously expected.
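By way of contrast, here is a rough sketch (again my own toy example, with a made-up error history) of how the same point forecast could be reported as a range instead, sized by how far off past forecasts have actually been:

```python
import statistics

def forecast_range(point_forecast, past_errors, z=1.645):
    """Widen a point forecast into an approximate 90% interval using
    the spread of past forecast errors (actual minus forecast).
    The error history passed in below is invented for illustration."""
    sigma = statistics.stdev(past_errors)
    return point_forecast - z * sigma, point_forecast + z * sigma

# Hypothetical history of GDP forecast misses, in percentage points.
past_errors = [-0.8, 1.2, -2.5, 0.4, -1.1, 0.9, -3.0, 0.6]
low, high = forecast_range(2.4, past_errors)
print(f"GDP growth: 2.4%, but plausibly anywhere from {low:.1f}% to {high:.1f}%")
```

A range built this way would have at least admitted that outcomes near zero, or below it, were live possibilities; a bare "+2.4%" admits nothing of the kind.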

In his book, Silver cites the fact that at the end of 2007 very few economists, if any, had foreseen the economic crisis that lay ahead in 2008.  Here's what he writes:

Now consider what happened in November 2007.  It was just one month before the Great Recession officially began. There were already clear signs of trouble in the housing market:  foreclosures had doubled, and the mortgage lender Countrywide was on the verge of bankruptcy.  There were equally ominous signs in the credit markets.

Economists in the Survey of Professional Forecasters, a quarterly poll put out by the Federal Reserve Bank of Philadelphia, nevertheless foresaw a recession as relatively unlikely.  Instead, they expected the economy to grow at a just slightly below average rate of 2.4 percent in 2008.  And they thought there was almost no chance of a recession as severe as the one that actually unfolded.

Silver goes on for several pages in this fashion.  He is confounded by the fact that economists (and market analysts) continue to publish forecasts that not only consider no more than one scenario, but are also consistently wrong.

He cites a long talk with Jan Hatzius, chief economist of Goldman Sachs, about why economists have such a difficult time getting their forecasts right:

As Hatzius sees it, economic forecasters face three fundamental challenges.  First, it is very hard to determine cause and effect from economic statistics alone. Second, the economy is always changing, so explanations of economic behavior that hold in one business cycle may not apply to future ones.  And, third, as bad as their forecasts have been, the data that economists have to work with isn't much good either.

I was reminded of all of this when I read an article written by Harvard professor Ken Rogoff in the London Guardian yesterday.

Rogoff was defending the Fed's actions in the same 2007-08 period discussed in Silver's book.

Critics of the Fed have said that the central bank should have seen the credit crisis coming and acted accordingly.  Yet Rogoff points out that it would have been very difficult for the Fed to be proactive at a time when most forecasts called for positive economic growth in 2008:

The Fed was hardly alone. In August 2007, few market participants, even those with access to mountains of information and a broad range of expert opinions, had a real clue as to what was going on. Certainly the US Congress was clueless; its members were still busy lobbying for the government-backed housing-mortgage agencies Fannie Mae and Freddie Mac, thereby digging the hole deeper.

Nor did the International Monetary Fund have a shining moment. In April 2007, the IMF released its famous "Valentine's Day" World Economic Outlook, in which it declared that all of the problems in the United States and other advanced economies that it had been worrying about were overblown.

http://www.guardian.co.uk/business/2013/feb/11/federal-reserve-blame-financial-crisis

Famed Fidelity mutual fund manager Peter Lynch used to say that if an investor spent 10 minutes a year on economic forecasts it was 10 minutes too much, because the projections were so often in error.

Perhaps Nate Silver has a better idea, though:  maybe thinking about the outliers - the economic possibilities that seem remote - could yield significant gains.
