Former FCA technical specialist turned consultant Rory Percival’s recent survey of risk profiling tools has stirred up yet more controversy regarding the suitability of asset allocation models within a risk assessment framework.
I have had a number of approaches from model portfolio managers asking for help with asset allocation modelling, in particular the process of testing their own asset allocations against a “benchmark” risk scale.
The catalyst seems to be the negative reaction to the somewhat diverse allocations being suggested by respective asset allocation tools for the same risk level.
There is an awful lot of tripe written about what is good allocation and what is not, generally involving haughty phrases like “academic studies”, seasoned with the odd “Nobel prize winning” (even though Alfred Nobel did not actually create a prize for Economics – but I would be splitting gossamer-fine hairs). It is hardly surprising that advisers may not be entirely at ease with the asset allocation process.
At its most basic, asset allocation is about mixing portfolio ingredients so that the resulting risk is less than the weighted average of the constituents’ inherent risks, with returns typically commensurate with that risk. At its simplest, it is adding bonds to dilute the volatility of equities.
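As a rough sketch of that dilution effect, the standard two-asset volatility formula shows why a blend is less volatile than the weighted average of its parts. The figures below (16 per cent equity volatility, 6 per cent bond volatility, a 0.2 correlation and a 60/40 split) are purely illustrative assumptions, not forecasts:

```python
import math

# Purely illustrative inputs, not forecasts:
w_eq, w_bd = 0.60, 0.40        # 60/40 equity/bond split
vol_eq, vol_bd = 0.16, 0.06    # assumed annual volatilities
corr = 0.2                     # assumed equity/bond correlation

# Naive expectation: the weighted average of the two volatilities.
weighted_avg = w_eq * vol_eq + w_bd * vol_bd

# Actual portfolio volatility, allowing for imperfect correlation.
portfolio_vol = math.sqrt(
    (w_eq * vol_eq) ** 2
    + (w_bd * vol_bd) ** 2
    + 2 * w_eq * w_bd * vol_eq * vol_bd * corr
)

print(f"weighted average: {weighted_avg:.1%}")   # 12.0%
print(f"portfolio:        {portfolio_vol:.1%}")  # ~10.4%
```

Any correlation below one pulls the portfolio figure under the weighted average; the lower the correlation, the greater the dilution.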
The complicated bit is using a wider range of assets to fine-tune those allocations and ultimately produce a range of “efficient” portfolios, each offering the maximum expected return for a given level of volatility.
Pretty much every asset allocation tool available to advisers uses a process called mean variance optimisation (MVO). Essentially, it is a relatively simple algorithm you can build in a spreadsheet, combining three inputs (expected return, expected volatility and a matrix of correlations between the constituent assets) to produce so-called “optimal” portfolios.
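To make that concrete, here is a minimal sketch of such an optimiser in Python rather than a spreadsheet. The expected returns, volatilities and correlations are invented purely for illustration, and a coarse grid search stands in for a proper solver:

```python
import numpy as np

# Invented inputs for three asset classes: equities, bonds, cash.
mu = np.array([0.07, 0.04, 0.02])      # expected returns (assumed)
vol = np.array([0.16, 0.06, 0.01])     # expected volatilities (assumed)
corr = np.array([[1.0, 0.2, 0.0],      # assumed correlation matrix
                 [0.2, 1.0, 0.1],
                 [0.0, 0.1, 1.0]])
cov = np.outer(vol, vol) * corr        # covariance matrix

risk_aversion = 4.0
best_w, best_u = None, -np.inf

# Coarse grid search over long-only weights in 5% steps.
for i in range(21):
    for j in range(21 - i):
        w = np.array([i, j, 20 - i - j]) / 20
        utility = w @ mu - 0.5 * risk_aversion * (w @ cov @ w)
        if utility > best_u:
            best_w, best_u = w, utility

print("'optimal' weights (equities, bonds, cash):", best_w)
```

Commercial tools use analytical solvers and apply constraints, but the underlying mechanics are no more mysterious than this.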
Of course, all three data inputs are estimates. The truly optimal portfolio can only be identified at some future date, and will vary depending on that date, so one could argue that a desperate search for efficiency is somewhat pointless, and consequently so is the comparison of rival allocations’ weightings of various assets.
A general criticism of these techniques is their reliance on historic data. Since the long-term past may look nothing like the near-term future, the potential for error is significant. Most models will claim not to use actual past data to model price behaviour, but the return, volatility and correlation numbers are anchored in the past whether the modeller admits it or not. Numbers are not plucked out of the air.
The sense check is whether the proposed estimates look sensible versus what we know. Since what we know is based on our experience, history is influencing the price whether we like it or not.
We also know that the optimal proportions are extremely sensitive to the estimates of expected return values. Small changes in estimates for one or more of the three inputs, or increases in the number of portfolio ingredients, can produce tipping points where an entire asset class could be removed or become dominant.
We know that the statistical estimates of expected returns are very noisy. As a result, the model often allocates the largest proportion to the asset class with the largest estimation error. The so-called “butterfly effect”, where small changes at one end of a process lead to disproportionate effects at the other, can then produce perverse outcomes.
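A toy example of that sensitivity, using the textbook unconstrained mean-variance solution (weights proportional to the inverse covariance matrix times expected returns) with two invented, highly correlated equity markets. Swapping two return estimates that differ by half a percentage point flips the allocation entirely:

```python
import numpy as np

# Two similar, highly correlated equity markets (invented figures).
vol = np.array([0.16, 0.15])
corr = 0.9
cov = np.outer(vol, vol) * np.array([[1.0, corr],
                                     [corr, 1.0]])
risk_aversion = 4.0

def mvo_weights(mu):
    # Unconstrained mean-variance solution w = (1/lambda) * inv(cov) @ mu,
    # rescaled to sum to 1 so the two cases are comparable.
    raw = np.linalg.solve(cov, mu) / risk_aversion
    return raw / raw.sum()

print(mvo_weights(np.array([0.070, 0.065])))  # roughly [ 0.53,  0.47]
print(mvo_weights(np.array([0.065, 0.070])))  # roughly [-0.15,  1.15]
```

A half-point swap in the return estimates takes the first asset from over half the portfolio to a short position. Long-only constraints in commercial models hide this effect, but the underlying instability remains.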
But then what are we estimating? One model’s UK fixed interest may be benchmarked completely differently from another. How much corporate, sovereign, index-linked or high yield is accounted for? How much should equity allocations reflect small-cap, value, growth and so on? If models are forward looking, why is Asia considered a satellite region when it is on course to overtake the US in GDP terms in two years?
Advisers need to reflect on their models’ propensity to account for this level of granularity, as many do not, citing the statistical noise point above. As a consequence, advisers are left to sort out their allocations at that level, chiefly through fund selection, leading to further potential estimation error.
All the proprietary models have the capacity to apply constraints; for example, to limit exposure to illiquid assets like property. Many advisers will not be aware of this, or at least what the constraints are across various models.
Some models will apply different expected returns for fixed interest in an Isa or pension environment because of the higher gross return. Some will not. Absolute return does not feature in models because there is no suitable benchmark. Taking all of these vagaries into account, it should not surprise us that the models might differ from one another significantly.
Of course, there is not necessarily a problem with diverse allocations if that diversity can be explained coherently to the client and the regulator. Advisers should be pointing out these potential anomalies at the start of the advice process.
To cap it all, there are many alternative asset allocation models to MVO that are no less worthy. There are ways of incorporating return estimates based on adviser and investor views (Black-Litterman); subjective probabilities, based on what the adviser and client think worst-case returns might be, for example (De Finetti); and the now widely used naïve or neutral 1/n portfolio, where all assets have the same weight and hence no selection decisions have been made.
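The 1/n portfolio, at least, can be stated in a couple of lines; the asset list below is purely illustrative:

```python
# Naive 1/n allocation: equal weight to every asset, no estimates needed.
assets = ["UK equity", "global equity", "gilts",
          "corporate bonds", "property"]
weights = {name: 1 / len(assets) for name in assets}
print(weights)  # each asset gets 20%
```

Its appeal is precisely that it requires no return, volatility or correlation estimates, so none of the estimation-error problems above apply.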
Simply adopting the asset allocation models offered by your favourite platform as the default solution, without considering a wider opportunity set, is no less risky in a business sense than not performing platform due diligence.
Graham Bentley is managing director of gbi2