If the future could sue, it would be in for some big court awards. It is routinely abused by economic forecasters, fund managers and investment analysts, yet the only way it can answer back is by making them look stupid. Since that will happen long after anyone has stopped caring about what they predicted, the abusers have no incentive to reform.
JM Keynes identified the problem more than 70 years ago and talked about it in terms of risk and uncertainty. To say something is risky is to assert that it can be assessed using probabilistic measures, as an insurance company assesses the risk of a house burning down. All you need is enough of the right kind of data.
Finance theory and modern portfolio theory assume that historic data for returns and volatility are enough of the right kind of data to assess probabilities for future investment returns. Keynes disagreed. His view was that most of the events affecting the stockmarket were genuine unknowns, so instead of risk you have uncertainty, which is not measurable.
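The assumption Keynes rejected can be made concrete. A minimal sketch (with purely illustrative numbers) of how portfolio theory converts historic return data into a "risk" estimate: an annualised mean return and volatility, which the theory then treats as probabilities for the future.

```python
# Hypothetical monthly returns; the numbers are illustrative only.
import statistics

monthly_returns = [0.012, -0.034, 0.021, 0.008, -0.015, 0.027,
                   0.005, -0.009, 0.018, 0.002, -0.022, 0.031]

mean_monthly = statistics.mean(monthly_returns)
vol_monthly = statistics.stdev(monthly_returns)

annual_return = mean_monthly * 12        # simple annualisation
annual_vol = vol_monthly * 12 ** 0.5     # scale volatility by sqrt of periods

print(round(annual_return, 4), round(annual_vol, 4))
```

Everything downstream, from efficient frontiers to value-at-risk, rests on treating these two backward-looking numbers as forward-looking probabilities.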
The issue is not just about unforeseeable crises. The problem for all forecasters is that the future contains far more uncertainties than probabilities, which is amply demonstrated in economic history and especially the history of innovation.
Keynes argued that while you can make relative probability statements about unknowns, such as “it is more probable that the recovery will continue than that there will be a double-dip recession”, the data gives you no grounds for saying it is 60 per cent or 80 per cent more probable. It is a judgment about uncertainty, not an assessment of risk.
Pretending you can use probabilistic assessment on scenarios containing “unknown unknowns” is virtually certain to generate unpleasant surprises. It is a deeply flawed way of making decisions.
For that reason, almost nobody does it. Instead of taking historic data as inputs to portfolio optimisation, people adjust the inputs, often using mean-reversion to lower or raise prospective returns, correlations or volatility. Or they “tactically” adjust their strategic asset allocations or tweak their stochastic models so the ranges for variables conform with some independently forecast range.
All these replace probabilistic methods with imperfect human judgment and in almost all cases, practitioners regard this as an improvement.
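A hypothetical sketch of that "adjust the inputs" practice: shrinking a historic return estimate toward an assumed long-run mean before it enters an optimiser. The function name and all numbers are illustrative, not any practitioner's actual model.

```python
def mean_revert(historic: float, long_run: float, weight: float) -> float:
    """Blend a historic estimate with a long-run anchor.

    weight = 0 keeps the raw history; weight = 1 uses only the anchor.
    """
    return (1 - weight) * historic + weight * long_run

# A decade of 12% equity returns, shrunk halfway toward a 7% long-run view.
adjusted = mean_revert(0.12, 0.07, 0.5)
print(round(adjusted, 3))  # 0.095
```

The choice of `weight` is exactly the "imperfect human judgment" the column describes: nothing in the historic data tells you what it should be.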
This raises the question of why we still regard probabilistic measures of risk and return with such reverence that they form the core of professional finance qualifications. Here are several reasons, all or none of which you may consider valid:
- Quantification: Science is all about measurement and quantification. We want investment to be more scientific, so we have to use these methods.
- Starting point: You have to start the assessment process from somewhere and the past data is a better starting point than a blank sheet of paper.
- Orders: These methods are endorsed by our regulators so if we use them we must be doing the right thing.
At present, advisers can obtain a diploma-level qualification by knowing a lot about the statistics of investment and virtually nothing about economic or investment history.
They would probably give better investment advice if they knew more about the history and less about the statistics. My confidence level in this statement is 100 per cent.
Chris Gilchrist is director of Churchill Investments and editor of The IRS Report