The re/insurance industry used catastrophe risk models long before Hurricane Katrina, but Katrina challenged the standards of these models. It called into question the quality of exposure data, how the models were used and their suitability for various business applications.
Ten years on from Hurricane Katrina, Willis Re’s Prasad Gunturi speaks with Dr. Jayanta Guin of AIR Worldwide and Dr. Robert Muir-Wood of Risk Management Solutions (RMS) to discuss how catastrophe models have evolved since Katrina, and what influence this major loss event has had on the development of hurricane risk models.
Below is a summary of the podcast and a snapshot of answers to some of the questions put to the panelists.
We can confidently say that quality of exposure data has improved significantly since Hurricane Katrina. Companies are taking a more rational approach to model-based business decisions and many multinational re/insurance companies and intermediaries, including Willis, have since invested heavily in model research and evaluation. There is also closer scrutiny of the assumptions and science behind the models, resulting in better informed decisions.
Can you briefly describe the state of hurricane models before and after Hurricane Katrina?
The risk associated with U.S. hurricanes is probably the best understood of all the natural hazard perils, thanks to the long historical record and the wealth of available claims data. Katrina did not fundamentally change our approach to modeling the hurricane peril. Still, there were lessons to be learned. The main focus fell on the quality of exposure data, which has since improved significantly. Katrina also paved the way for enhancements in the way storm surge, and flooding in general, is modeled.
Before Katrina, hurricane modeling remained strongly influenced by Hurricane Andrew, which sat at one extreme of the spectrum, with an unusually small proportion of storm surge losses. Katrina was the opposite of Andrew, creating more loss from flood than from wind.
The flooding of New Orleans was itself a secondary consequence of the hurricane and became a catastrophe in its own right – what we now call a ‘Super-Cat’ when the secondary consequences become larger than the original catastrophe.
What impact did Katrina have on how you develop hurricane risk models, and how has the modeled risk to the Gulf coast changed since Katrina?
The understanding at the time was that intense storms at low latitudes were relatively small. Katrina, however, was enormous. That led us to make revisions in some of our assumptions.
Katrina also revealed insights into the vulnerability of commercial structures. A good example is the large number of casinos built on barges along the Mississippi coast. Today, there is much better recognition of the wide array of buildings that companies are insuring and our view of the vulnerability of commercial assets has increased as a result. In fact, I would say that overall our view of hurricane risk along the Gulf coast has increased.
The biggest change in the modeling agenda after Katrina was the recognition that storm surge is not just an add-on to a hurricane loss model, generating perhaps an additional 5% of loss, but that in terms of ground-up losses storm surge can be just as important as the wind.
The storm surge losses are also far more concentrated than the wind losses, which gives much more opportunity to employ modeling. This approach has been well validated in recent events such as Hurricane Ike and Superstorm Sandy, which further refined elements of our storm surge flood modeling capability, in particular around underground space.
Katrina is one of the key benchmark events for the quantification of storm surge risk to coastal properties. How have storm surge models improved since Katrina?
Storm surge modeling has improved very significantly. It is true that prior to Katrina, it did not get the attention it deserved because storm surge risk was not thought to be a major driver of overall hurricane losses. We’ve since learned otherwise, not only from Katrina, but from storms like Ike and Sandy.
So at AIR we’ve brought to bear new science in terms of numerically-based hydrodynamic modeling, the computer power necessary to handle high-resolution elevation data, and exhaustive analysis of detailed claims data, to ensure that the model, the localized nature of the hazard, and improved exposure data combine in such a way as to validate well against datasets from multiple storms—not just one or two. We, as developers of models, need to be cautious and avoid over-calibrating to a single headline event; doing so would result in a model that will not validate well across an entire 10,000-year (or larger) catalog of events.
The old ways of modeling storm surges simply did not work. In the Gulf of Mexico, storm surges at landfall are commonly much higher than the near-shore SLOSH model would suggest, because many storms lose intensity in the two days leading up to landfall and the surge generated while the storm was stronger offshore still arrives at the coast. To capture the storm surge at landfall, one has to model the wind field, and the surface currents and waves it generates, over far more of the storm’s life than just the period immediately before landfall. FEMA has identified only two coupled ocean-atmosphere hydrodynamic models good enough to generate storm surge hazard information along the U.S. coastline: the ADCIRC model and MIKE 21, developed by DHI.
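The point about modeling the storm’s full life, rather than a landfall snapshot, can be sketched with a deliberately simplified numerical toy. Everything in it (the surge-per-knot coefficient, the response lag, the wind track) is invented for illustration and bears no relation to how ADCIRC or MIKE 21 actually work:

```python
# Toy illustration only (not a real hydrodynamic model): the surge a storm
# delivers reflects its wind history, not just its intensity at landfall.
# Here surge relaxes toward a hypothetical "equilibrium" surge for the
# current wind with a time lag, so a storm that weakens on approach still
# arrives carrying surge built up while it was stronger offshore.

SURGE_PER_KNOT = 0.05  # hypothetical equilibrium surge (m) per knot of wind

def surge_at_landfall(wind_history_kt, hours_per_step=6.0, lag_hours=24.0):
    """Integrate surge over the storm's life with a lagged response."""
    surge = SURGE_PER_KNOT * wind_history_kt[0]  # assume spun up to initial wind
    alpha = hours_per_step / lag_hours           # fraction of adjustment per step
    for wind in wind_history_kt[1:]:
        surge += alpha * (SURGE_PER_KNOT * wind - surge)
    return surge

# A Katrina-like track: very intense offshore, weakening before landfall
weakening = [140, 140, 130, 115, 100]      # peak winds (kt) at 6-hourly steps
snapshot = SURGE_PER_KNOT * weakening[-1]  # surge implied by landfall wind alone
history = surge_at_landfall(weakening)
print(history > snapshot)  # True: time-integrated surge exceeds the snapshot
```

For a steady storm the two approaches agree; for a weakening one, the snapshot systematically underestimates the surge, which is the failure mode described above.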
Katrina underscored the issue of certain components of non-modeled losses, such as damage due to polluted storm surge water, mold, tree fall and riots. How do your models account for the amplifying impact on claims from the indirect effects of hurricanes?
Mold and the toppling of trees during hurricanes are, of course, nothing new. The model cannot be expected to resolve whether a particular tree topples at a particular location. But because the losses that arise from these events are present in the claims data used to calibrate the model’s wind and storm surge damage functions, it is reasonable to say that such sources of loss are captured implicitly.
However, we make no attempt to model other secondary sources of loss, such as rioting or pollution clean-up. The ability to model these sources of loss explicitly is highly questionable because of the inability to distinguish them in claims data.
The experience of Katrina triggered a revolution in our thinking about additional factors that drive up loss, from which emerged the structure of post-event loss amplification or PLA. In this structure we can identify four factors that tend to push up loss beyond the simple hazard exposure loss equation of Cat modeling.
First there is ‘economic demand surge’ – when excess demand leads to price increases in materials and labor.
Second there is ‘deterioration vulnerability’ – as seen widely in houses abandoned in New Orleans after Katrina. Even where a property was not flooded, if it had a hole in the roof, after a few weeks the whole interior was contaminated with mold.
Third there is ‘claims inflation’ when insurers are so overwhelmed with claims that they let through claims below some threshold without checking.
Fourth there is ‘coverage expansion’, when, typically under political pressure, insurers pay beyond the terms of their policies – waiving deductibles, ignoring limits, and covering perils like flood. When the level of disruption is so high that urban areas are evacuated, so that business interruption (BI) losses simply run and run, as seen in the Christchurch 2010 and 2011 earthquakes, we call this ‘Super-Cat’.
In terms of our broader modeling agenda, we focus on trying to capture economic demand surge and claims inflation, and recommend stress tests or add defaults around coverage expansion. We also apply Super-Cat factors to the largest loss events affecting cities that could be prone to local evacuations.
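The four factors above compound on top of the modeled ground-up loss. A minimal sketch of that multiplicative structure, with hypothetical factor values and a function name of my own invention (this is not RMS’s actual PLA implementation):

```python
# Hypothetical sketch of post-event loss amplification (PLA), not RMS's
# actual implementation: the four factors compound multiplicatively on
# top of the modeled ground-up loss. All factor values are illustrative.

def amplified_loss(ground_up_loss: float,
                   demand_surge: float = 1.0,       # economic demand surge
                   deterioration: float = 1.0,      # vulnerability from delayed repair
                   claims_inflation: float = 1.0,   # unchecked claims below threshold
                   coverage_expansion: float = 1.0  # paying beyond policy terms
                   ) -> float:
    """Apply the four PLA factors to a modeled ground-up loss."""
    return (ground_up_loss * demand_surge * deterioration
            * claims_inflation * coverage_expansion)

# Example: a $100M modeled loss with 10% demand surge and 5% claims inflation
loss = amplified_loss(100e6, demand_surge=1.10, claims_inflation=1.05)
print(f"${loss:,.0f}")  # $115,500,000
```

Any factor left at 1.0 simply drops out, which mirrors the stress-test usage: coverage expansion and Super-Cat factors are switched on only for the scenarios that warrant them.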
With thanks to Dr. Jayanta Guin, and Dr. Robert Muir-Wood
Dr. Jayanta Guin, AIR Worldwide
Dr. Jayanta Guin is AIR’s Executive Vice President, responsible for strategic management of the AIR Research and Modeling group. Under his leadership, the group has developed a global suite of catastrophe models and continues to enhance modeling techniques. Jayanta also provides strategic input into AIR’s product development and consulting work for insurance-linked securities. With more than 17 years of experience in probabilistic risk analysis for natural catastrophes worldwide, he is well recognized in the insurance industry for his deep understanding of the financial risk posed by natural perils. His expertise includes a wide range of natural and man-made phenomena that drive tail risk.
Jayanta is currently a member of the governing board for the Global Earthquake Model (GEM) initiative. He also contributes to the Research Advisory Council of the Insurance Institute for Business & Home Safety (IBHS).
Dr. Robert Muir-Wood, Risk Management Solutions
Robert Muir-Wood has been head of research at RMS since 2003, with a mission to explore enhanced methodologies for natural catastrophe modelling and develop models for new areas of risk. He has been technical lead on a number of catastrophe risk securitizations, was lead author on Insurance, Finance and Climate Change for the 2007 IPCC Fourth Assessment Report, and was a lead author for the 2011 IPCC ‘Special Report on Managing the Risks of Extreme Events and Disasters to Advance Climate Change Adaptation’.
He is Vice-Chair of the OECD High Level Advisory Board of the International Network on Financial Management of Large Catastrophes, and is a visiting professor at the Institute for Risk and Disaster Reduction at University College London. He has published six books, written scientific papers on earthquake, flood and windstorm perils, and published more than 200 articles.
Listen to the podcast or download it below.