Monday, April 05, 2010

Analysts, forecasts and anatomy

This weekend, Piper Jaffray analyst Gene Munster confidently announced that Apple had sold between 600,000 and 700,000 iPads on Saturday, the first day they were available in stores. The press picked up the number and ran with it, right up until Apple announced the real figure this morning: 300,000. The headline on Henry Blodget's post on Silicon Alley Insider read "CONFIRMED: Apple Analyst Pulled 700,000-iPads-Sold Number Out Of His Ass". It linked to another article in which Munster explained why he was off by 50% or more, and confirmed that he made the same mistake when he forecast first-day sales of the original iPhone.

The beating that Munster's taking is self-inflicted. Analysts fight to get on CNBC, Bloomberg Television and Page 1 of the Wall Street Journal, and they know that middle-of-the-road forecasts won't get attention. There's strong pressure to inflate forecasts in order to get press. Munster got his coverage, but he's going to spend a lot more time rebuilding his credibility.

My first job out of college was as a Product Manager for HP. I was asked to develop forecasts for a line of products that had never existed before, so we had no historical track record from which to extrapolate. I knew what the process was for creating a "bottom-up" forecast, of course, but I didn't have the budget or time to do any meaningful research. So, I turned to one of the "old hands" in the division for his advice, and he explained that the division's forecasting process for new products was based on two concepts: WAGs and SWAGs. WAGs were "Wild-Ass Guesses", and SWAGs were "Silly Wild-Ass Guesses". (I hope that you're seeing the anatomical theme here.) In HP's model, WAGs were forecasts where you at least had some historical or secondary research to work with, while SWAGs were forecasts where you had to make a bunch of assumptions and hope for the best.

Forecasts, estimates and projections are all guesses. One hopes that they're reasonably well-educated guesses based on solid research, but they're guesses nonetheless. No one should be surprised when a highly-touted forecast turns out to be wrong. In fact, what surprises me is the credibility that investors and the press give these forecasts. Editors and reporters chase forecasts the way a runner in Death Valley chases a drink of water, but very few of them ever go back and check how accurate those forecasts turned out to be.

Some research clients actually do track the accuracy of forecasts, and assign correction values to the forecasts from various analysts and research firms. For example, a given research firm's forecasts for a particular market might be 40% higher than the actual result over time, so the client would divide that firm's numbers by 1.4 to get a more accurate estimate. (In some circles, 40% is actually pretty good; in the early days of the PC business, forecasts from some analysts were off by several hundred percent.)
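If you want to see that correction step spelled out, here's a minimal sketch in Python; the firm names and bias factors are hypothetical, used only to illustrate the arithmetic:

```python
# Hypothetical historical bias factors: the ratio of a firm's past
# forecasts to actual results (1.4 means the firm runs ~40% high).
bias_factors = {
    "Firm A": 1.4,  # historically ~40% high
    "Firm B": 0.9,  # historically ~10% low
}

def corrected_estimate(forecast: float, firm: str) -> float:
    """Scale a raw forecast by the firm's historical bias factor."""
    return forecast / bias_factors[firm]

# A 700,000-unit first-day forecast from a firm that runs 40% high
print(round(corrected_estimate(700_000, "Firm A")))  # 500000
```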

Don't assume that an analyst or research firm has better information sources or more sophisticated forecasting techniques than you do. A lot of research firms refuse to release their methodologies, not because they're sophisticated, but because they're either embarrassingly simple or indefensible. So, the next time an analyst makes a grand pronouncement, take it with a grain of salt.
