One reason so few people predicted anything like the financial crisis of 2007-2008 was the common wisdom that housing markets in different parts of the U.S. moved independently of one another, so that price changes in one region had little bearing on prices in other regions.
Historical data tended to show that regional prices did not move at the same time or at the same rate; the risk of a truly nationwide crisis was largely ignored. Until it happened.
In the context of enterprise risk management, potential interdependence of risks should never be ignored: It’s built into the very name—the enterprise-wide nature—of the discipline.
But how different risks relate to one another is a difficult question, and often there is insufficient information to quantify the degree of interdependence.
Independent vs. Interdependent Risks
Independence of risks is the opposite of interdependence. Totally interdependent risks must simply be added together, while the combined effect of totally independent risks is much less than the sum of the parts.
The key to proper management of a variety of risks is the understanding of the dependency characteristics of the risks. Many key practices of ERM rely heavily on this understanding of interdependence—especially risk measurement and the determination of risk capital.
Impact on Risk Strategy
Businesses are sometimes advised to concentrate on the one or two things that they are able to do best. For an insurer, that approach can lead to dangerous concentrations of risk. Insurers use the concept of risk interdependence when considering strategic alternatives.
The question about any new proposed action plan would be “Does it increase concentration where we are already highly concentrated, or does it increase our diversification of risk?”
The best strategy for many insurers is to spread their risk exposures among the less interdependent strategic choices where they have the skills and experience to perform well. Careful consideration of interdependence often requires measurement.
We can approach the question of measuring interdependence statistically, or structurally—or using other methods we’ll describe in a moment.
Statistical methods seek to use historical data series to assess the correlation between different risks. One major challenge is that there is rarely enough data to be statistically credible.
Another challenge is that even when we have a wealth of data, it stretches over a significant time horizon and so must be adjusted to reflect current conditions.
A common way to express the interdependency among several sources of risk is to create a correlation matrix.
While the term “correlation” has a specific mathematical meaning (in fact, there is more than one kind of mathematical correlation), we’ll leave that to the side for a moment and just work with the intuitive idea that:
- two risks that move perfectly in lockstep are assigned a correlation value of +1
- two risks that move exactly opposite to one another have correlation -1
- two risks whose movement is completely unrelated have correlation 0
- other possibilities fall along this spectrum accordingly
In fact, we don’t have to fill out the whole matrix, since the correlation between (say) auto liability and Workers’ Compensation is the same as the correlation between Workers’ Compensation and auto liability.
And filling out the diagonal is easy; since every risk is perfectly correlated to itself, the diagonal is all 1’s.
This indicates a strong positive relationship (+0.854, very close to +1) between auto liability and Workers’ Compensation, whereas auto physical damage has a significant negative relationship (-0.404) to Workers’ Compensation, and the homeowners line seems to be moderately correlated to everything except auto physical damage. It’s difficult to derive any overarching patterns from these results.
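These bookkeeping rules (mirrored off-diagonal entries and a diagonal of 1’s) can be sketched in a few lines of Python; the line names and correlation values below are purely illustrative, not the figures from any study:

```python
# Assemble a full correlation matrix from pairwise estimates.
# Line names and correlation values here are illustrative only.
lines = ["auto_liability", "workers_comp", "auto_physical_damage"]

# Only one triangle of the matrix needs to be specified by the analyst.
pairwise = {
    ("auto_liability", "workers_comp"): 0.50,
    ("auto_liability", "auto_physical_damage"): 0.10,
    ("workers_comp", "auto_physical_damage"): 0.05,
}

def correlation_matrix(lines, pairwise):
    """Build the full symmetric matrix: 1.0 on the diagonal (every risk
    is perfectly correlated with itself), and each off-diagonal entry
    mirrored so that corr(a, b) == corr(b, a)."""
    n = len(lines)
    m = [[1.0] * n for _ in range(n)]
    for (a, b), rho in pairwise.items():
        i, j = lines.index(a), lines.index(b)
        m[i][j] = m[j][i] = rho
    return m

matrix = correlation_matrix(lines, pairwise)
```

Specifying only one triangle and letting the code mirror it also prevents the inconsistency of entering two different values for the same pair.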
A Tillinghast study for the Actuaries Institute in Australia, performed by Robyn Bateup and Ian Reed, suggested somewhat more intuitive results. They found, broadly speaking, that “casualty type” lines such as professional liability, general liability, and Workers’ Compensation are moderately correlated (+0.25) with each other, that “property type” lines are more weakly correlated with each other, and that there is very little correlation between property and casualty types.
Diversification benefit is allowed for in many recent regulatory risk-based capital initiatives such as Solvency II in Europe. Nested correlation matrices are baked into the Solvency II standard formula.
For example, risk capital for market risk, default risk, life insurance risk, health insurance risk and non-life insurance risk is aggregated using correlation factors of zero, +0.25 or (in one case) +0.5. Within non-life insurance, premium and reserve risk, lapse risk and catastrophe risk again have correlations of either 0 or +0.25.
Then within each of these there are further correlation matrices; for example, within premium and reserve risk each of 12 classes of business is correlated at either +0.25 or +0.5 with the other lines. A degree of geographic diversification benefit is also allowed.
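The way such a correlation matrix turns standalone capital amounts into a diversified total is the familiar square-root aggregation formula, sketched below with illustrative figures (not the official Solvency II calibration):

```python
import math

def aggregate(scr, corr):
    """Square-root aggregation used by correlation-matrix formulas:
    SCR_agg = sqrt(sum over i, j of corr[i][j] * scr[i] * scr[j])."""
    n = len(scr)
    total = sum(corr[i][j] * scr[i] * scr[j]
                for i in range(n) for j in range(n))
    return math.sqrt(total)

# Illustrative standalone capital amounts for three risk modules and an
# illustrative correlation matrix (NOT the regulatory calibration).
standalone = [100.0, 80.0, 60.0]
corr = [
    [1.00, 0.25, 0.25],
    [0.25, 1.00, 0.00],
    [0.25, 0.00, 1.00],
]

diversified = aggregate(standalone, corr)   # about 164.3
undiversified = sum(standalone)             # 240.0: full interdependence
# The gap between the two quantifies the diversification benefit.
```

With these numbers the diversified figure is roughly 164 against a simple sum of 240, which is why smaller companies writing a narrow book in one country see much less benefit from the formula.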
Overall the diversification benefit is considerable, but smaller companies writing limited lines of business in one country are disadvantaged.
While the calibration of such matrices is a challenge, they can be quite useful in simulation.
However, the exact method used to implement the interdependence in the simulation is crucial. Behavior in extreme years may not follow the same pattern as in more normal years—just as we saw in the crisis of 2007-2008.
Non-linear dependency structures, such as copulas, can allow greater correlation at extremes than in “normal” years.
But do we have the data to properly determine which copula is appropriate, still less to parameterize it? In reality a heavy dose of expert opinion, explicit or implicit, is required, and the resulting assumptions are difficult for the layman to interpret.
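A rough sketch of the copula idea, using a Gaussian copula in pure Python (all parameters are illustrative; a t copula, which has genuine tail dependence, would produce even heavier joint extremes):

```python
import math
import random

def normal_cdf(x):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def gaussian_copula_pair(rho, rng):
    """One draw of two uniform variables linked by a Gaussian copula
    with correlation rho, via a 2x2 Cholesky factorization. Each uniform
    could then be mapped to any marginal loss distribution."""
    z1 = rng.gauss(0, 1)
    z2 = rho * z1 + math.sqrt(1 - rho * rho) * rng.gauss(0, 1)
    return normal_cdf(z1), normal_cdf(z2)

rng = random.Random(42)
n = 50_000
count = 0
for _ in range(n):
    u, v = gaussian_copula_pair(0.6, rng)  # rho = 0.6 is an assumption
    if u > 0.99 and v > 0.99:              # both risks in their worst 1%
        count += 1
# Under independence we would expect about n * 0.01 * 0.01 = 5 joint
# exceedances; positive dependence typically produces many times more.
```

Even this toy example shows why the choice and parameterization matter: the joint-tail frequency is driven almost entirely by assumptions that historical data can rarely pin down.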
Structural analyses seek to understand the reasons why different risks might move in tandem. One type of structural analysis is the use of scenario analysis, which is the focus of a separate blog post.
Briefly, the idea is to identify events or scenarios that could affect multiple risks, and then test the quantum of loss that could arise from such events. This can be used to drive a common shock simulation.
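A minimal common shock simulation might look like the following; the lines of business, distributions, shock probability and shock loadings are all invented for illustration:

```python
import random

def simulate_year(rng, shock_prob=0.05):
    """One simulated year: each line has its own independent noise, plus
    a shared shock that hits every line at once in bad scenario years.
    All figures are illustrative assumptions."""
    shock = 1.0 if rng.random() < shock_prob else 0.0
    base = {"property": 100.0, "liability": 80.0, "workers_comp": 60.0}
    loading = {"property": 50.0, "liability": 30.0, "workers_comp": 40.0}
    return {
        line: base[line] * (1 + 0.1 * rng.gauss(0, 1)) + loading[line] * shock
        for line in base
    }

rng = random.Random(7)
years = [simulate_year(rng) for _ in range(10_000)]
# In shock years all three lines are elevated together, producing the
# joint extreme losses that an independence assumption would miss.
```

The shared shock is what induces positive correlation between otherwise independent lines, and each shock can be traced back to a named scenario, which is the transparency advantage discussed below.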
Compared with matrix-based correlation methods, common shock models are less opaque. (We had trouble interpreting a 6 x 6 matrix above; imagine how much more difficult it is to interpret a matrix, or nested matrices, linking dozens or hundreds of risks!) Even so, it can be challenging to create the set of “shocks” to be applied in the simulation.
When a problem is very complex, sometimes a simple-minded approach is best. This at least has the advantage that we cannot let the complexity of our model fool us into thinking we know more than we do.
One such approach is used in the Lloyd’s market. Querying experts for their opinion, we can create a non-numeric correlation matrix by describing the correlation between any two risks simply as “high,” “medium,” or “low.” For conservatism, this approach does not permit negative correlations.
Then, the user can assign numerical values – perhaps 0.60, 0.20, and 0.05 – and test the sensitivity of the resulting output.
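A sketch of this label-based approach, with hypothetical risk names, capital amounts and label assignments; the mappings from label to number are the ones to vary in the sensitivity test:

```python
import math

# Experts rate each pair of risks with a label rather than a number.
# Risk names, amounts and labels here are illustrative assumptions.
labels = {
    ("property", "liability"): "low",
    ("property", "motor"): "medium",
    ("liability", "motor"): "high",
}
standalone = {"property": 100.0, "liability": 80.0, "motor": 60.0}

def aggregate(mapping):
    """Square-root aggregation with labels translated to numbers
    via `mapping`, e.g. {"high": 0.60, "medium": 0.20, "low": 0.05}."""
    total = sum(v ** 2 for v in standalone.values())
    for (a, b), label in labels.items():
        total += 2 * mapping[label] * standalone[a] * standalone[b]
    return math.sqrt(total)

base = aggregate({"high": 0.60, "medium": 0.20, "low": 0.05})
stressed = aggregate({"high": 0.75, "medium": 0.30, "low": 0.10})
# If base and stressed are close, the answer is robust to the exact
# numbers chosen; if not, the label-to-value mapping needs more care.
```

Running the aggregation under two or three candidate mappings turns a fuzzy expert judgment into a concrete range for the aggregate result.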
Testing the Extremes
An even simpler approach is to begin by testing the extremes. First, assume all risks are perfectly correlated—when anything goes wrong, everything goes wrong—and calculate your preferred aggregate risk metric.
Then, assume all risks are completely independent of one another and recalculate. How great is the gulf between these two results? That provides useful information, since the true answer almost certainly lies between those two extremes.
One can then linearly interpolate between the extremes: What happens if you go halfway between the two, or 75% of the way towards perfect correlation? At what point along the range are crucial business thresholds (such as depleting a certain percentage of capital, or dropping to a lower rating) breached?
Suppose that assuming no interdependence at all yields a risk value of 300, whereas assuming total interdependence and simply adding up all the risks produces a risk value of 500. Then we could review the interpolated values: 350 at 25% of the way toward total interdependence, 400 at 50%, and 450 at 75%.
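The interpolation itself is simple arithmetic, sketched here with the illustrative figures above plus a hypothetical 400 units of available capital:

```python
# Illustrative figures: aggregate risk of 300 under full independence,
# 500 under full interdependence (simple addition of all risks).
independent, interdependent = 300.0, 500.0

def interpolated(weight):
    """Linear interpolation: weight = 0 is full independence,
    weight = 1 is full interdependence."""
    return independent + weight * (interdependent - independent)

capital = 400.0  # hypothetical available risk capital
# The interpolation weight at which capital would be exhausted:
breach = (capital - independent) / (interdependent - independent)  # 0.5
```

Here capital is exhausted halfway along the range, so if management believes bad years push the business closer to full interdependence than that, the analysis argues for more mitigation, reinsurance or capital.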
While it’s very unlikely that all the risks are either completely independent of one another or move absolutely in lockstep, the company might judge that “when it rains, it pours” and so in bad years things are more likely to go wrong together—i.e. closer to total interdependence than to no interdependence.
If the company has access to only 400 units of risk capital, this simple analysis might suggest additional mitigation, additional reinsurance, or additional capital is called for.
Assessing the results of such simple approaches in light of expert judgment and other benchmarks may prove even more useful than a highly complex algorithm.
This article was authored with David Simmons