Malcolm Kerr: Why artificial intelligence isn’t fool-proof


I have a deep-rooted and serious aversion to algorithms. I blame it on my first school. It was perfectly pleasant until we moved up to senior maths lessons and classes in ancient Greek.

To this day I have no idea why someone thought it would be helpful for a nine-year-old to study ancient Greek. In any event, no one explained it to me. Nor did anyone explain in my maths lessons what algorithms were for, let alone logarithms. I remember we were given a log book, which contained rows of long numbers supposed to help us do difficult sums.

I was also equipped with something called a slide rule. It seemed pretty high-tech at the time but I did not have a clue how to use it. It looked like a ruler with all sorts of tiny numbers and had a plastic thing that slid up and down.

Nowadays, algorithms seem to be driving everything from online portfolio construction to driverless cars (which, of course, still need qualified drivers behind the steering wheel). So I have confronted my aversion and done a little research – only to discover that the word comes from the ninth-century mathematician al-Khwarizmi, and that my slide rule, built on 17th-century logarithms, was a form of analogue computer.

Anyway, the wonderful thing is that algorithms are quite straightforward and can be relied upon completely to calculate what they have been asked to. In that sense, they are fool-proof. In another sense, however, they are the exact opposite. A simple example is at the heart of the driverless car debate: should the controlling algorithm be programmed to drive the car at the child in the middle of the road, or should it drive the car off the cliff and kill the driver?
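The point can be made concrete with a toy sketch. Everything here – the function name, the inputs, the rules – is invented for illustration and bears no relation to any real vehicle system; it simply shows that an algorithm always returns exactly what its rules dictate, so the ethical weighting is a human choice made in advance.

```python
# Hypothetical sketch only: a toy "driverless car" decision rule.
# No real system works this way; the rules are invented to illustrate
# that the algorithm faithfully executes whatever was programmed.

def avoidance_decision(child_ahead: bool, cliff_on_left: bool) -> str:
    """Return the action the rules dictate for this situation.

    The algorithm is fool-proof in one sense: given the same inputs,
    it always produces the same answer. The dilemma itself was settled
    by whoever wrote these lines, not by the machine.
    """
    if child_ahead and cliff_on_left:
        # The programmer, not the algorithm, decided to protect
        # the pedestrian at the driver's expense.
        return "swerve"
    if child_ahead:
        return "brake"
    return "continue"

print(avoidance_decision(True, True))
```

Change one line of the rules and the "decision" changes with it – which is exactly why the debate is about the programming, not the computation.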

It is issues such as these that make me question the idea that computers can interpret situations and words and make better decisions than humans. More efficient? Probably, yes. More effective? The jury is out. Algorithm-driven recruitment and selection provides an interesting case study.

An increasing number of employers in the US are scanning application forms and using algorithms to de-select candidates. In fact, I gather more than 50 per cent of applicant CVs are rejected without being seen by human eyes. Of course, this saves a great deal of time and money. However, this approach may be far from perfect.
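Commercial applicant-tracking systems are proprietary, but a common published approach is keyword scoring against required and desirable terms. The sketch below is purely illustrative – the keyword lists, scoring rule and threshold are all invented – yet it shows how a CV can be rejected without a human ever seeing it.

```python
# Illustrative sketch of keyword-based CV screening. Real applicant
# tracking systems are proprietary; the terms and threshold here are
# invented assumptions, not any vendor's actual rules.

REQUIRED = {"python", "sql"}                 # must-have terms
DESIRABLE = {"finance", "communication"}     # nice-to-have terms

def screen_cv(cv_text: str, threshold: int = 3) -> bool:
    """Return True only if the CV should reach a human reviewer."""
    words = set(cv_text.lower().split())
    if not REQUIRED <= words:
        return False                         # auto-reject: missing must-haves
    score = len(REQUIRED) + len(DESIRABLE & words)
    return score >= threshold                # only high scorers pass through

applicants = [
    "python sql finance analyst",
    "java enthusiast",                       # rejected unseen by human eyes
]
shortlist = [cv for cv in applicants if screen_cv(cv)]
print(shortlist)
```

Note what the sketch cannot do: it records nothing about the rejected candidate's later career, which is precisely the feedback loop the next paragraph says is missing.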

First, there are questions around the rigour behind the processes. Second, and perhaps more interesting, employers are unable to observe what happens to the de-selected candidates. Do they end up as high fliers within another organisation, or do they prove the efficacy of the algorithms and fail to add value?

With this in mind, how do the algorithms become refined? And why is this question important to professional financial advisers?

Well, algorithm-driven risk assessments are moving from relatively simple propositions into support solutions based on computer analysis of facial reactions to questions around investment risks and priorities. Most are based on Paul Ekman’s Facial Action Coding System (FACS), first published in 1978 and supported by a significant body of academic research and psychological practice.

It is now suggested that a short interactive FACS video experience can track responses to investment risk scenarios by analysing around 100 facial responses, ranging from “inner brow raiser” and “lip pucker” through to “nostril dilator” and “eyebrow gatherer”, plus a range of other indicators.
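In outline, such a system scores the intensity of observed facial action units and combines them into a single number. The action-unit names below are genuine FACS descriptors (AU1 “inner brow raiser”, AU18 “lip pucker”), but the weights and the idea of summing them into a “risk comfort” score are assumptions invented for this sketch – no vendor’s actual model is being described.

```python
# Illustrative only: the AU labels follow Ekman's FACS, but the weights
# and the weighted-sum "risk comfort" score are hypothetical.

AU_WEIGHTS = {
    "AU1_inner_brow_raiser": -0.4,    # often read as worry or concern
    "AU12_lip_corner_puller": +0.5,   # smiling: comfort with the scenario
    "AU18_lip_pucker": -0.2,
    "AU38_nostril_dilator": -0.1,
}

def risk_comfort(observed_aus: dict) -> float:
    """Weighted sum of observed action-unit intensities (each 0..1).

    Positive means apparent comfort with the risk scenario shown;
    negative means apparent discomfort. Unknown AUs are ignored.
    """
    return sum(AU_WEIGHTS.get(au, 0.0) * intensity
               for au, intensity in observed_aus.items())

# One video frame: a strong brow raise with a half-formed smile.
frame = {"AU1_inner_brow_raiser": 1.0, "AU12_lip_corner_puller": 0.5}
print(round(risk_comfort(frame), 2))  # -0.4 + 0.25 = -0.15
```

The simplicity is the point: once facial movements are taxonomised, turning them into a risk-appetite number is computationally cheap – whether that number captures anything an adviser would recognise is the open question.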

So we can now taxonomise facial movements and create algorithms that can estimate the risk appetite of a potential investor at relatively low cost. Will these be used to provide guidance to potential investors without advisers? Or will they be used as support tools by advisers? I suspect the former, not the latter.

It seems to me the core competency of a professional adviser is to read people; to somehow recognise what is going on in the client’s mind. I have heard this described as “psychic radar” and it is probably the product of both education and experience. It enables a conversation to become honest and meaningful to both parties. Above all, it enables the adviser to create and demonstrate the empathy that is so important when suggesting to a client that they need to take action against their instincts.

My guess – and it is a guess – is that computer-generated risk assessments reflect the client’s instincts. And that might not create the best outcome. Actually, the likelihood is that it will not. For example, committing to an investment in equities and accepting the risk that entails is challenging for many clients. The challenge is even greater if markets have recently fallen sharply.

In situations like this, the professional adviser will look the client in the face, explain why the recommendations are appropriate to their objectives, underline that they fully appreciate their concerns and, of course, remind them of the inherent risks. This is the moment of truth that creates great relationships.

I think it will take a long time for algorithm-driven artificial intelligence to replace the emotional intelligence of professional advisers. In fact, this technology might be a solution looking for a problem.

Having said all of this, the jury will be out for some time. Perhaps advisers will use FACS-based technology to support assessments of clients’ attitudes to risk. Perhaps, if the resulting analysis is aligned with the adviser’s own, the technology solution will be considered sound. But if it takes a different view, perhaps not.

Malcolm Kerr is senior adviser at EY






