Advisers are right to be vigilant about how robos’ decisions work, but do those decisions highlight a lack of clear thinking on advisers’ own part?
Few people would dispute that self-driving cars will become commonplace at some point; the only question is when. But while the focus is on the technical capability of a machine entrusted to drive us safely from A to B, New York University professor of psychology Gary Marcus has posed a more interesting question, and one with relevance to financial advice in the UK too.
Imagine you are on a narrow bridge with no room to pass either side. A school bus full of children hurtles out of control towards you. Should your self-driving car drive off the bridge and kill you to spare the children if that is the only option?
For an algorithm to work, it needs a clear answer for every scenario. Unlike us, it cannot rely on instinct or make a decision in the moment. Your driverless car will inevitably have scenarios like this programmed into it. Would you like to know what it is programmed to do? Do you want to decide now whether it should be you or the schoolchildren?
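To make the point concrete, here is a deliberately crude sketch, in Python, of what such a predetermined rule might look like. Everything in it is hypothetical and invented for illustration; no manufacturer’s actual logic is being quoted.

```python
# A crude illustration of the point above: before the car can ship,
# the moral choice has to be written down as an explicit, testable rule.
# All names and values here are hypothetical.

def choose_manoeuvre(occupants_at_risk: int, bystanders_at_risk: int) -> str:
    """Decide what an autonomous car does when a collision is unavoidable."""
    if bystanders_at_risk > occupants_at_risk:
        # The designers have decided in advance: sacrifice the occupants.
        return "leave the bridge"
    # Otherwise protect the occupants and brake as hard as possible.
    return "emergency brake"

print(choose_manoeuvre(occupants_at_risk=1, bystanders_at_risk=30))
# -> "leave the bridge"
```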
This example demonstrates how certain tech-driven solutions are not simply technical operations but moral ones too. Moral choices are built into them by design from the outset.
An IFA has always had the advantage of a great deal of freedom of choice, so the need to predetermine outcomes for situations that have not yet happened is not that pressing. The decision can wait until the situation arises.
Impact on revenue
A robo doesn’t have that luxury and, given that all its choices have to be documented and coded into a system, the fingerprints of those moral choices are always there under the surface.
One robo takes the view that paying off debt is more important in certain circumstances than investing disposable income. Its typical customer is young, has little accumulated wealth and higher amounts of debt. As a result, it turns away more than half the people who visit its service, along with the revenue they would bring.
Another asks attitude-to-risk and basic capacity-for-loss questions and qualifies almost every visitor for one of its portfolios. The latter approach is more profitable, offers a quicker and simpler buying journey and probably rates more highly on consumer satisfaction. What incentive is there to enhance the algorithm in a way that is commercially less attractive?
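As a hypothetical reconstruction of that contrast, the sketch below codes both screening policies side by side. Neither firm’s real logic is public; the class, field names and thresholds are all invented.

```python
from dataclasses import dataclass

# Hypothetical reconstruction of the two screening policies described above.
# Neither firm's real code is public; names and thresholds are invented.

@dataclass
class Prospect:
    disposable_income: float
    unsecured_debt: float
    risk_score: int  # e.g. 1 (very cautious) to 10 (very adventurous)

def robo_a_accepts(p: Prospect) -> bool:
    # Moral choice coded in: clearing debt beats investing, so indebted
    # visitors are turned away, along with the revenue they would bring.
    return p.unsecured_debt == 0

def robo_b_accepts(p: Prospect) -> bool:
    # Commercial choice coded in: almost every risk score maps to a
    # portfolio, so almost nobody is turned away.
    return 1 <= p.risk_score <= 10
```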
Vertically integrated firms have a similar dilemma. If they are required to offer their in-house investment proposition first, they still can’t avoid the need for it to be suitable. This means they have to think about the potential clients for whom the service is not going to be suitable, and what the next option is when that happens.
It requires far more thought and effort than many IFAs ever bother with, and it is typically better documented because it is likely to be scrutinised: there is an assumption it will be reviewed at some point for evidence of bias.
Right to be vigilant
With both vertical integration and robo, the ability to secretly fix the results combines with extreme commercial pressure, and the history of financial services tells us that bad things happen when the two mix. We are right to be vigilant.
At the same time, though, the robo model has prompted more thought about these areas than anyone had previously bothered to give. Bias was and is no stranger to the IFA model, but it has always been harder to identify, given the more subjective, case-by-case nature of independence.
A fault in an algorithm is a visible, repeatable, systematic failure which a regulator should have no problem locating if it is looking for it.
As a result, some of the questions raised by robos are perfectly valid ones which an old-fashioned adviser might do well to consider, and some have touched on them in recent years.
Should I be turning away clients who are at the absolute extremes of risk tolerance? What should I do if a prospect is obsessed with unsustainable levels of performance? What do I do about insistent clients? Should I allow execution-only business in high-risk investments?
These are all issues many advisers have been bitten by in recent years for lack of clear thinking well ahead of the event.
A decision in the moment is not always the best one, as the present clouds the long-term view.
In my limited experience working with firms on advice algorithms, the work has raised more ethical concerns than technical or regulatory ones, and those ethical questions have prompted the most challenging and interesting debates.
A lot of firms will still be avoiding these questions, or at least hiding their answers, but once accidents start to happen, as they will with self-driving cars, people will want to know what moral choices are buried within the algorithms.
Phil Young is managing director of Zero Support