Chris Gilchrist: Robo-advice will not give much comfort


Astronomer Royal Sir Martin Rees and Stephen Hawking have both warned that artificial intelligence could terminate the human race. By comparison, the worry among some dwellers of the advice ghetto that similar creatures could terminate financial advisers may seem trivial. My view is that we should focus, as the scientists do, on the possible harms to those served by the robots.

A vast increase in processing power has enabled engineers to “algorise” financial and investment advice – that is, to create algorithms that form a guided path to a solution. Robo-advice is undoubtedly going to capture a large swathe of the market in the next decade; just look at what is happening in the US. Its promoters argue it will enable users to manage many aspects of financial planning without needing human advice.

What the robots will do well is assess attitude to risk and persuade users that this assessment is a useful part of the investment journey. It is, in fact, the least useful part: an attitude-to-risk assessment will not tell anyone how they are likely to respond when their investments plunge by 20 per cent overnight. And when people do worry, they may not get much comfort from robotic empathy.

Personal responses to crises and panic will be triggered by circumstances and hugely affected by “herding” and other predictable deviations from the rational. Behavioural factors will be critical in determining whether the response is “panic and sell” or “stay calm, switch channels and watch the football”.

A robo-adviser that wants to do the best for its clients should use a traditional asset allocation engine to generate recommendations for investors in the accumulation phase, and assume risk is a function of allocation, particularly to equities. Additional volatility is of negligible importance to accumulators, so simple “percentage in equities” strategies driven mainly by timescale are fine.
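As an illustration, here is a minimal sketch of such a timescale-driven rule in Python. The function name, floor, cap and taper rate are all illustrative assumptions, not any provider’s actual glide path:

```python
def equity_percentage(years_to_goal: float,
                      floor: float = 0.20,
                      cap: float = 1.00,
                      taper: float = 0.04) -> float:
    """Timescale-driven 'percentage in equities' rule: hold the cap at
    long horizons and taper linearly towards the floor as the goal nears."""
    return max(floor, min(cap, floor + taper * years_to_goal))

# An accumulator 30 years out holds the maximum; the weight falls as the
# goal approaches.
for years in (30, 15, 5, 1):
    print(f"{years:>2} years out: {equity_percentage(years):.0%} in equities")
```

The point is not the particular numbers but that nothing in the rule depends on the client’s feelings about risk – only on timescale.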

For decumulators, though, a robo-adviser faces the problem that if it gives enough weight to capacity for loss, it is likely to put too little into equities to generate the inflation-beating returns retirees need. If it gives too little weight to capacity, however, it will allocate too much to equities and compensation claims will follow.
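The squeeze shows up in even a toy calculation. Every figure below – inflation, expected returns, the depth of an equity crash – is an assumed round number, chosen only to show the shape of the trade-off:

```python
# Toy decumulation trade-off: all figures are assumptions, not forecasts.
inflation = 0.025                              # assumed long-run inflation
exp_ret = {"equities": 0.06, "bonds": 0.02}    # assumed nominal returns
crash = -0.40                                  # assumed equity drawdown

for w in (0.2, 0.4, 0.6, 0.8):                 # equity weight
    nominal = w * exp_ret["equities"] + (1 - w) * exp_ret["bonds"]
    real = nominal - inflation                 # approximate real return
    hit = w * crash                            # portfolio fall in the crash
    print(f"equities {w:.0%}: real return {real:+.1%}, crash loss {hit:+.1%}")
```

On these assumptions, the equity weights that deliver a worthwhile real return are the same ones that produce crash losses of 20 to 30 per cent – precisely where a weak capacity-for-loss assessment turns into a compensation claim.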

The robo-adviser is likely to try to dodge this problem by using a portfolio construction model based on volatility and correlation rather than asset allocation. But then it will have to recommend solutions that exactly match its methodology and assumptions: any gap will open up the potential for a claim. So robo-advisers’ decumulation solutions will, it seems to me, have to link to specific funds or portfolios. That makes them a riskier proposition (riskier under UK rules than US ones) for anyone who can be held responsible for whatever the robo-adviser does. And since a robo-adviser will be bad at assessing capacity for loss, hefty compensation bills could follow the inevitable – it is only a question of time – “malgorithm”.
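For concreteness, the volatility-and-correlation arithmetic underneath such a portfolio construction model looks something like the sketch below. The two-asset figures are illustrative assumptions; any gap between the modelled number and how the recommended funds actually behave is exactly where a claim could open up:

```python
import numpy as np

# Illustrative two-asset inputs (equities, bonds) – figures are assumptions.
vols = np.array([0.18, 0.06])            # annualised volatilities
corr = np.array([[1.0, 0.2],
                 [0.2, 1.0]])            # correlation matrix
weights = np.array([0.6, 0.4])           # portfolio weights

# Covariance matrix: C_ij = rho_ij * sigma_i * sigma_j
cov = corr * np.outer(vols, vols)

# Portfolio volatility: sqrt(w' C w) – roughly 11.5% on these inputs
port_vol = float(np.sqrt(weights @ cov @ weights))
print(f"Modelled portfolio volatility: {port_vol:.1%}")
```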

Human advisers navigating the barbed-wire jungle of decumulation face the same problems, but have the advantage of being able to make a more robust and accurate estimate of a client’s capacity for loss.

I expect the creation of algorithms for the core of investment advice – the forward-looking judgement – to keep software engineers busy for a few more decades. In the meantime, robo-advice engineers will, if they are wise, follow Isaac Asimov’s prescription and adhere strictly to the First Law of Robotics: a robot may not injure a human being or, through inaction, allow a human being to come to harm.

Chris Gilchrist is director of Fiveways Financial Planning, a contributing author to Taxbriefs Advantage and edits The IRS Report