Malcolm Kerr: Why artificial intelligence isn’t fool-proof


I have a deep-rooted and serious aversion to algorithms. I blame it on my first school. It was perfectly pleasant until we moved up to senior maths lessons and classes in ancient Greek.

To this day I have no idea why someone thought it would be helpful for a nine-year-old to study ancient Greek. In any event, no one explained it to me. Nor did anyone explain in my maths lessons what algorithms were for, let alone logarithms. I remember we were given a log book, which contained rows of long numbers that were supposed to help us do difficult sums.

I was also equipped with something called a slide rule. It seemed pretty high-tech at the time but I did not have a clue how to use it. It looked like a ruler with all sorts of tiny numbers and had a plastic thing that slid up and down.

Nowadays, algorithms seem to be driving everything from online portfolio construction to driverless cars (which, of course, still need qualified drivers behind the steering wheel). So I have confronted my aversion and done a little research – only to discover that algorithms go back many centuries and that my slide rule, a 17th-century invention, was some form of artisan computer.

Anyway, the wonderful thing is that algorithms are quite straightforward and can be relied upon completely to calculate what they have been asked to. In that sense they are fool-proof. In another sense, however, they are the exact opposite. A simple example lies at the heart of the driverless car debate. Should the controlling algorithm be programmed to drive the car into the child in the middle of the road, or should it drive the car off the cliff and kill the driver?

It is issues such as these that make me wonder about the concept that computers can pick up situations and words and make decisions that are better than those made by humans. More efficient? Probably, yes. More effective? The jury is out. Algorithm-driven recruitment and selection provides an interesting case study.

An increasing number of employers in the US are scanning application forms and using algorithms to de-select candidates. In fact, I gather more than 50 per cent of applicant CVs are rejected without being seen by human eyes. Of course, this saves a great deal of time and money. However, this approach may be far from perfect.

First, there are questions around the rigour behind the processes. Second, and perhaps most interesting, is that employers are unable to observe what happens to the de-selected candidates. Do they end up as high fliers within another organisation or do they prove the efficacy of the algorithms and fail to add value?

With this in mind, how do the algorithms become refined? And why is this question important to professional financial advisers?

Well, algorithm-driven risk assessments are moving from relatively simple propositions into support solutions based on computer analysis of facial reactions to questions around investment risks and priorities. Most are based on Paul Ekman’s Facial Action Coding System (FACS), first published in 1978 and supported by significant academic research and the experience of practising psychologists.

It is now suggested that a short, interactive FACS video experience can track responses to investment risk scenarios by analysing around 100 facial responses, ranging from “inner brow raiser” and “lip pucker” through to “nostril dilator” and “eyebrow gatherer”, plus a range of other indicators.

So we can now taxonomise facial movements and create algorithms that can gauge the risk appetite of a potential investor at relatively low cost. Will these be used to provide guidance to potential investors without advisers? Or will they be used as support tools by advisers? I suspect the former rather than the latter.

It seems to me the core competency of a professional adviser is to read people; to somehow recognise what is going on in the client’s mind. I have heard this described as “psychic radar” and it is probably the product of both education and experience. It enables a conversation to become honest and meaningful to both parties. Above all, it enables the adviser to create and demonstrate the empathy that is so important when suggesting to a client that they need to take action against their instincts.

My guess – and it is a guess – is that computer-generated risk assessments reflect the client’s instincts. And that might not create the best outcome. Actually, the likelihood is that it will not. For example, committing to an investment in equities and accepting the risk that entails is challenging for many clients. The challenge is even greater if markets have recently fallen sharply.

In situations like this, the professional adviser will look the client in the face, explain why the recommendations are appropriate to their objectives, underline that they fully appreciate the client’s concerns and, of course, remind them of the inherent risks. This is the moment of truth that creates great relationships.

I think it will take a long time for algorithm-driven artificial intelligence to replace the emotional intelligence of professional advisers. In fact, this technology might be a solution looking for a problem.

Having said all of this, the jury will be out for some time. Perhaps advisers will use FACS-based technology to support some clients’ attitude-to-risk assessments. Perhaps, if the resulting analysis is aligned with that of the adviser, the technology solution will be considered sound. But if it takes a different view, perhaps not.

Malcolm Kerr is senior adviser at EY