In a previous blog I discussed how a reinsurer might arrive at a price to protect an insurer’s retained portfolio of property business against individual large losses. The so-called “exposure rating” method is a useful tool for determining non-proportional reinsurance pricing, but it is by no means the only one.
Reinsurance training programmes and textbooks will always stress that there is no single right or wrong answer when it comes to excess of loss rating. Indeed, if you ask several reinsurers to provide quotations for an excess of loss reinsurance contract, you may well receive several different prices. These will depend on a number of factors, including:
- The reinsurers’ views on the adequacy of the original rates the insurer is charging
- The loss curve each reinsurer uses
- The reinsurers’ loadings for fluctuation, uncertainty, their own expenses and profitability requirements
- Prevailing market conditions
Both insurance and reinsurance are businesses that are driven by statistics. To a certain degree they use the experience of the past as a guide to the future. However, that can be a dangerous game unless we accept the possibility that the unexpected will occur. Exposure rating adopts a very generalised view of the distribution of losses by size, in relation to the overall loss burden of a type of business.
The so-called “First Loss Scale” illustrated in the earlier blog was in fact derived from the historical experience of the Lloyd’s market. Other reinsurers have built similar scales based on their own observations, some of which may include data covering a century or more of claims experience. It is hardly surprising then that several reinsurers, given identical information, will quote different rates for the same level of cover.
“Experience rating” also attempts to use past claims experience to price reinsurance contracts, but here the claims experience is specific to the account in question and usually covers a period of 10 years or less, rather than reflecting the observation of many similar risks over many years.
Smaller losses are much more frequent than larger ones. The higher numbers of individual losses make the statistics more reliable, so it stands to reason that experience rating works better for the lower layers of an excess of loss programme. For the higher layers, reinsurers will often use some sort of loss curve (typically a “Pareto” curve for the mathematically minded), calibrated by the experience-rated lower layer.
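To make the Pareto extrapolation concrete, here is a minimal sketch of how an experience-rated lower layer might be used to read off a price for a higher layer. The layer limits, the expected loss figure and the Pareto parameter (alpha) are all illustrative assumptions, not market values; real pricing curves are calibrated far more carefully.

```python
def pareto_layer_factor(attach: float, exhaust: float, alpha: float) -> float:
    """Relative expected loss to an excess of loss layer under a
    single-parameter Pareto severity curve (alpha > 1). Only ratios
    of this factor between layers are used, so the scale constant
    of the Pareto distribution cancels out."""
    return (attach ** (1 - alpha) - exhaust ** (1 - alpha)) / (alpha - 1)

# Suppose experience rating suggests an expected loss of $700,000 to a
# $1m xs $1m layer; extrapolate to $2m xs $2m with an assumed alpha of 1.8.
low = pareto_layer_factor(1_000_000, 2_000_000, alpha=1.8)
high = pareto_layer_factor(2_000_000, 4_000_000, alpha=1.8)
expected_high = 700_000 * high / low  # roughly $402,000
print(f"Expected loss to the higher layer: ${expected_high:,.0f}")
```

Note the design property being exploited: under a Pareto curve, doubling both the attachment and exhaustion points scales the expected layer loss by a fixed factor (here 2 to the power 1 − alpha), which is what lets the calibrated lower layer anchor the whole programme.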
Experience rating typically employs a calculation of the Burning Cost of a layer of protection.
If you work in a reinsurance environment, you will probably have come across this term. But what does it mean? Quite simply, the burning cost of an excess of loss layer is the incurred losses to the layer divided by the premium income of the protected portfolio, expressed as a percentage.
So if a layer suffers losses of $1,000,000 and the Premium Income of the protected portfolio is $10,000,000 the pure Burning Cost is 10%.
What does that tell us? It means that if we had charged the Reinsured 10% of his income for this layer of reinsurance, the premium the reinsurers receive would be equal to the claims they have to pay. On its own, that’s not much use.
The Reinsurers would lose money on that basis, because there would be nothing to pay their own costs or to generate a profit. They need to add a loading factor in order to build expenses and profit expectation into their rates. The most common loading factor is 100/70th. Our 10% rate then becomes 14.29%.
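The arithmetic above can be sketched in a few lines. The figures are the ones from the worked example; the 100/70th loading is the common market convention the text describes.

```python
def burning_cost(incurred_losses: float, premium_income: float) -> float:
    """Pure burning cost: incurred losses to the layer as a
    fraction of the protected portfolio's premium income."""
    return incurred_losses / premium_income

def loaded_rate(pure_bc: float, loading: float = 100 / 70) -> float:
    """Apply the reinsurers' expense-and-profit loading."""
    return pure_bc * loading

bc = burning_cost(1_000_000, 10_000_000)  # 0.10, i.e. 10%
rate = loaded_rate(bc)                    # ~0.1429, i.e. 14.29%
print(f"Pure burning cost: {bc:.2%}")
print(f"Loaded rate:       {rate:.2%}")
```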
How is this used? There are two ways of using Burning Cost to calculate rates: Retrospective and Prospective.
Sometimes an insurer needs a layer of protection that is almost certain to produce losses for the reinsurers on a regular basis. Reinsurers will want to charge a premium that will cover the expected losses, plus their own margins. They can achieve this by charging a variable rate that is calculated by loading the eventual losses to the contract by (typically) 100/70th.
This “Loaded Burning Cost” is then capped by expressing it as a maximum rate on the declared premium income. The idea of the cap is that once the losses reach a certain level (an abnormal result for the year), the Reinsured starts to benefit from the cover. The contract premium may be expressed something like this:
Minimum and Deposit Premium $1,000,000 adjustable on expiry at 100/70th of the incurred losses to this agreement, subject to a minimum rate of 2.5% and a maximum rate of 12.5% of the Reinsured’s Gross Net Premium Income (as defined).
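A swing-rated clause like the one above can be sketched as a small function. The GNPI figure of $40m is an assumption chosen so that the 2.5% minimum rate reproduces the $1,000,000 Minimum and Deposit Premium; the loading and rate collar follow the sample wording.

```python
def adjusted_premium(incurred_losses: float, gnpi: float,
                     loading: float = 100 / 70,
                     min_rate: float = 0.025,
                     max_rate: float = 0.125) -> float:
    """Swing-rated premium: incurred losses loaded by 100/70th,
    with the resulting rate collared between the minimum and
    maximum rates on Gross Net Premium Income (GNPI)."""
    rate = loading * incurred_losses / gnpi
    rate = min(max(rate, min_rate), max_rate)
    return rate * gnpi

gnpi = 40_000_000  # assumed income: 2.5% of this equals the $1m M&D premium
print(adjusted_premium(0, gnpi))          # loss-free year: minimum, $1,000,000
print(adjusted_premium(2_000_000, gnpi))  # 100/70th of losses, ~$2,857,143
print(adjusted_premium(9_000_000, gnpi))  # heavy year: capped at 12.5%, $5,000,000
```

The cap is where the Reinsured starts to benefit: once loaded losses exceed 12.5% of GNPI, every further dollar of loss is borne by the reinsurers.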
This form of rating a contract is extremely useful because it can be used for any class of business. There are of course alternative structures that can achieve a similar result for a smaller premium outlay, and we shall discuss these in a later blog.
For a low-layer catastrophe cover, the historical losses to the layer (adjusted for inflation to current values) will be expressed as a percentage of the premium income over the same period (adjusted for inflation and for movements in original rates). This will give a pure Burning Cost which, again, the reinsurers would typically load by 100/70th.
Care must be taken to set the upper limit of this layer at a realistic level. If the upper limit is set too high (where there is little or no historical loss activity), a distorted rate will be produced that effectively gives free cover, above the normal level of loss activity.
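The prospective calculation might be sketched as follows. All of the figures — the five years of losses, premium income and inflation factors — are invented for illustration, and in practice the premium would also be adjusted for original rate movements.

```python
# Hypothetical five-year history, restated to current values.
inflation = {2019: 1.30, 2020: 1.25, 2021: 1.18, 2022: 1.10, 2023: 1.05}
losses = {2019: 400_000, 2020: 0, 2021: 900_000, 2022: 250_000, 2023: 600_000}
gnpi = {2019: 8_000_000, 2020: 9_000_000, 2021: 10_000_000,
        2022: 11_000_000, 2023: 12_000_000}

# Inflate each year's losses and premium to current values, then sum.
adj_losses = sum(losses[y] * inflation[y] for y in losses)
adj_gnpi = sum(gnpi[y] * inflation[y] for y in gnpi)

pure_bc = adj_losses / adj_gnpi   # pure burning cost over the period
loaded_bc = pure_bc * 100 / 70    # with the usual 100/70th loading
print(f"Pure burning cost:   {pure_bc:.2%}")
print(f"Loaded burning cost: {loaded_bc:.2%}")
```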
Having established the experience rate for this low layer of cover, underwriters next need to determine the Probable Maximum Loss (the catastrophe PML), which will typically be estimated with the help of catastrophe modelling software.
The experience rate and the PML, combined with a “suitable” loss curve, give the underwriters sufficient reference points to quote the higher layers of the programme, essentially by reading the price from the curve. It is a complex process, so I shall leave that one for another time.