Capital Modelling in Solvency II – Where Did It All Go Wrong?


At a recent scientific seminar I was asked to talk about Insurance and Reinsurance in the Modelled World. I argued that the case for capital modelling was clear-cut, and that in five years’ time every insurer of any size would likely either have an internal capital model or be building one. A news story reported in my blog last week gave me pause for thought.

Hiscox is an insurer known for its technical acumen. They have invested heavily in modelling and scientific understanding, for example recently creating the post of Head of Catastrophe Research. Their CEO, Bronek Masojada, is a keen advocate of scientific and industry collaboration. In a previous life I had the pleasure of serving on the board of the Insurance Intellectual Capital Initiative under his chairmanship. Already that body is producing interesting work.

So it was surprising to hear that Hiscox is abandoning internal capital modelling for Solvency II, except where they are obliged to retain it (ie their Lloyd’s operation). Bronek outlined his objections at a recent insurance seminar. His comments could be seen as an attack on capital modelling per se, but they are not. They are a precisely aimed missile at one particular use of internal models: setting regulatory capital. He said: “Imagine if we all drove cars with individually set speed limits based on design of suspensions etc. It would cause absolute chaos on the roads. Models can never be that good.”

A Desire to Believe

As somebody who has spent his entire professional life building models and proselytising their use, I am very aware of their potential but also their inherent limitations. When we produced the first peril models in the early 1990s, it was touching how they were received as the answer to the industry’s problems. There is a basic human instinct at work here: a desire to believe.

This, in combination with another human trait, a desire for certainty, led many otherwise intelligent people in business to let models dictate decisions: if the model says this, then we must do that. As modellers, we knew that the models themselves were based on large numbers of heroic assumptions, but hey, we were moving from the backroom to centre stage, with all that entails (profile, prestige, money) – let’s accentuate the positive.

Similarly, what we then called DFA (Dynamic Financial Analysis) models were increasingly being used to inform decision-making. It is a fundamental truth of reinsurance modelling that data is always limited and flawed. Even with good data, a variety of conclusions can be equally well justified. I remember having an argument many years ago with a young actuary who felt under pressure to come up with a result that supported a business argument.

A Win/Win Situation?

However, in that case it was perfectly possible to use the available data to support the client’s case without bending the truth or making misleading statements. Broking a treaty to a reinsurer is a grown-up business, as long as all assumptions are clear and can be legitimately justified. It’s up to the other party to argue against them. It would be negligent to present a case that worked against our client when another equally justifiable case could be presented that worked for them.

But the belief persists that with better data a single right answer can be reached. Yes, the better the data you have, the more confident you can be that your assumption is valid, but it is still an assumption, based upon a limited data sample and a subjective set of modelling decisions. Models take combinations of assumptions and torture them to come to conclusions. Inevitably, model results are compared to expectation, and assumptions are revisited if the results don’t fit with what people want to hear (a process often euphemistically called calibration).

Now this is not an attempt to knock modelling. In the pre-modelling era decisions were made based upon underlying assumptions that were normally unclear and often unstated. Modelling forces a light to be shone on assumptions underpinning the result. We can see which assumptions are key, how safe they are and how sensitive our result is to changes in those assumptions.

By this process we can focus our further analysis on reducing the uncertainty around the assumptions that matter, and so improve both the model and our understanding of the business. We are still learning how this process should work, but over the course of my career it has transformed our industry, immeasurably for the better.
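To make the sensitivity point concrete, here is a minimal sketch in Python. Everything in it is invented for illustration (a toy Poisson/lognormal loss model with made-up parameters), not taken from any real capital model:

```python
import numpy as np

rng = np.random.default_rng(42)

def one_in_200_loss(severity_sigma: float, n_sims: int = 100_000) -> float:
    """Simulate annual aggregate losses and return the 99.5th percentile.

    Toy model: Poisson claim counts, lognormal severities; all
    parameter values are invented for this example.
    """
    counts = rng.poisson(lam=5.0, size=n_sims)
    totals = np.array([
        rng.lognormal(mean=10.0, sigma=severity_sigma, size=n).sum()
        for n in counts
    ])
    return float(np.percentile(totals, 99.5))

# Vary a single assumption (severity volatility) and watch the "answer" move.
for sigma in (0.8, 1.0, 1.2):
    print(f"sigma={sigma}: 1-in-200 aggregate loss ~ {one_in_200_loss(sigma):,.0f}")
```

Even in a toy like this, nudging one assumption swings the tail result materially; a real capital model rests on hundreds of such assumptions.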

Informing the Answer, Not Providing It

BUT, and it’s a hell of a big but, we will never get to a point where we can say with absolute certainty that this is “the” correct result. Therefore models only advise; models must never decide. Intelligent humans, who understand the models and the business context they operate in, should use them to help make a business call, be it “should I write this risk?”, “should I buy this reinsurance cover?” or “should I invest in this stock?” The model is part of the decision process; it is not the decision process.

And that, I think, is the key to Bronek’s point. Solvency II tells insurers to calculate a single number that represents their 1-in-200 worst case result. Putting to one side for a moment the impossibility of predicting a result at such an extreme return period, the problem is that this single number carries such importance, and can have such a profound impact, that regulators feel obliged to pore over every aspect of the model’s parameterisation, build, validation and use. Firms wishing to use an internal model for Solvency II purposes are subject to an extremely onerous model approval process.
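To see why a precise 1-in-200 figure is so elusive, consider a deliberately simplified sketch (the lognormal assumption and all parameters are mine, purely for illustration): even if annual losses genuinely followed a known distribution, fitting it to a realistic amount of loss history would give wildly scattered estimates of the 99.5th percentile:

```python
import numpy as np

rng = np.random.default_rng(0)

Z_995 = 2.5758          # standard normal quantile at 99.5%
MU, SIGMA = 0.0, 1.0    # invented "true" lognormal parameters
true_1_in_200 = np.exp(MU + SIGMA * Z_995)

estimates = []
for _ in range(1_000):
    # 50 years of loss history is generous for most firms, yet far
    # too little to pin down a 1-in-200 outcome.
    sample = rng.lognormal(MU, SIGMA, size=50)
    log_sample = np.log(sample)
    estimates.append(np.exp(log_sample.mean() + log_sample.std() * Z_995))

estimates = np.array(estimates)
print(f"true 1-in-200 loss: {true_1_in_200:.1f}")
print(f"fitted estimates, 5th to 95th percentile: "
      f"{np.percentile(estimates, 5):.1f} to {np.percentile(estimates, 95):.1f}")
```

And in reality we do not even know the distributional family, let alone its parameters.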

Bronek noted in Hiscox’s 2010 annual report and accounts that: “The challenges for all those involved in the process can be summed up by the fact that our application for approval under Solvency II is expected to reach 5,000 pages, and it is thought that the FSA will receive over 100 similar applications. One has to feel sorry for the teams who have to read and approve all the applications.” The Willis Economic Capital Forum at Georgia State University will shortly be publishing a white paper on model validation, but it does not take an academic study to see that it is madness to expect anybody to vet a 5,000-page document, let alone 100 of them.

Perhaps the European Commission and the European regulator EIOPA are losing sight of why they are encouraging firms to build internal models. Surely the aim is to encourage insurers to develop a culture in which the risks the firm faces are robustly and transparently identified and managed. Model development and risk quantification are a means to an end: bringing the assumptions used in the business into the light, where they can be challenged.

Now, I am sure that Hiscox will continue to develop their capital model and use it to inform internal decision-making; it’s just that they won’t use it to set regulatory capital. But my concern is for smaller firms that are still at an early stage of this process. Rather than being encouraged by Solvency II to start developing an internal capital model, smaller insurers will undoubtedly be put off by the sheer weight and cost of compliance. Thus all the collateral benefits of better risk management that come with an internal modelling programme are lost.

A Different Way

There is a case for a much simpler risk-adjusted capital formula, stripped of the pretence that the amount calculated represents a 1-in-200 number. Bronek argues for a simplified (but regularly updated) standard model for minimum capital setting, where firms know the regulator will come calling if their capital gets within 150% of that minimum. Individual firms would have their own models to assist their decision-making, but not ones that have been approved by the regulator.
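As I read the proposal, the supervisory trigger would be almost trivially simple, which is rather the point. A sketch, with the threshold mechanics and the names entirely my own rendering rather than any published rule:

```python
def regulator_should_call(own_funds: float, standard_formula_capital: float) -> bool:
    """Flag a firm for supervisory attention once its own funds fall
    within 150% of the standard-formula minimum. Purely illustrative."""
    return own_funds < 1.5 * standard_formula_capital

print(regulator_should_call(own_funds=140.0, standard_formula_capital=100.0))  # True
print(regulator_should_call(own_funds=200.0, standard_formula_capital=100.0))  # False
```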

Bronek acknowledged that this approach may be less “efficient” for individual firms, but it is probably more efficient for the overall economy. Returning to his differential speed limit analogy, a fixed universal speed limit may make life less efficient for individual drivers, but it makes the roads more efficient for all. He cites the Lloyd’s RBC system as one that worked very well, with each managing agent additionally running its own DFA models for individual portfolio assessments, without the need for regulatory involvement.

The trouble with a formula approach to setting regulatory capital is that if it is set at levels that matter to firms (and if it is not, what is the purpose?), then it will influence their decision-making. This encourages decisions optimised to control the result of the formula, not ones that are optimal for the firm. This particularly applies to reinsurance, where factor-based formulas cannot show the benefit of non-proportional covers, often the optimal solution for protecting the business.
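A stylised illustration of that last point, with every number invented: a capital charge keyed to premium volume is blind to an excess-of-loss cover, while a simulated view of the tail shows the cover’s real benefit.

```python
import numpy as np

rng = np.random.default_rng(1)

PREMIUM = 100.0
FACTOR = 0.35  # invented premium-risk factor

def factor_capital(premium: float) -> float:
    # A charge keyed to premium is identical gross or net of a
    # non-proportional cover (ignoring the modest premium spent on it).
    return FACTOR * premium

# Simulated gross annual losses (parameters invented for illustration).
gross = rng.lognormal(mean=4.0, sigma=0.8, size=100_000)

# A 100 xs 100 excess-of-loss cover, crudely applied to the annual
# aggregate to keep the sketch short.
recoveries = np.clip(gross - 100.0, 0.0, 100.0)
net = gross - recoveries

for label, losses in (("gross", gross), ("net of XoL", net)):
    print(f"{label}: simulated 1-in-200 loss = {np.percentile(losses, 99.5):,.0f}, "
          f"factor-formula capital = {factor_capital(PREMIUM):,.0f}")
```

The simulated tail shrinks materially net of the cover; the factor charge does not move.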

In Need of an Overhaul

Solvency II has tried to get around this by making the standard formula more and more complex, but it still fails the test: it is neither fish nor fowl, and not fit for purpose. Unfortunately the European Commission is uncomfortable with judgment; everything must be codified, described in writing in a variety of languages, and there is little room for flexibility or pragmatism.

My personal preference would be an approach similar to that currently prevailing in the UK, where firms are asked to determine their own capital requirement. This then forms part of the process leading to final capital guidance, a more subjective decision that takes into account factors beyond just the results of the model.

Let us hope that Bronek’s very public statement of Hiscox’s position opens up the debate again. The debate will not be welcomed in Brussels or Frankfurt (home of EIOPA), where too much political capital has been expended over the last ten years to admit that Solvency II has been heading down the wrong track for years. Arguably, the only way to get to a more logical system will be for Solvency II to collapse under the weight of its own shortcomings – a result that, whilst still unlikely, looks increasingly possible.


This post was originally published March 6, 2013.

 

  • Richard Fein (http://www.rifconsulting.com)

    Great article and a solid warning on the extent of reliance on models. I have built models to assist in adding some structure to uncertainty. I have grown to appreciate the limitations of models as decision makers, as opposed to their real value as decision support. In the present case it is the unintended consequences of the former that can add to the risk.

  • Jostein Amdal

    A lot of wise words here, and I couldn’t agree more on the futility of calculating a precise 1/200 number, or on the question of why 1/200 is a more important return period to care about than any other. Modelling resources are scarce, and one unintended effect of S-II is that so much of these resources are now directed at getting models approved, catering to all the whims of regulators who are scared of the consequences of leaving the decision on/calculation of the regulatory capital requirement to the companies themselves, instead of actually working to improve the decision-making capabilities of the insurance companies.

    Part of the blame for the regulators’ fear also lies with the financial sector at large, where banks until the last couple of years seemed to be on a mad drive to leverage their companies as much as possible, tweaking their models to achieve this, thinking it was the best way to create shareholder value.