Have we become complacent about managing catastrophe risk? Will the next major disaster cause significant disruption in the industry? As we enter another hurricane season in the Atlantic and Eastern Pacific basins, it is a good time for insurers and reinsurers—especially those in leadership positions—to reflect on these questions.
A Better Way to Manage Risk
Make no mistake: the industry's ability to manage global catastrophe risk over the past 30 years has been a true success story. Prior to Hurricane Andrew in 1992, there was only a limited perspective on the magnitude of the losses a hurricane or earthquake could cause.
Over the two decades prior to Andrew, there had not been a significant insured catastrophe loss, yet exposure concentration had increased dramatically in high-hazard areas. This led to the conventional wisdom that the worst that could happen was perhaps a USD 7 billion loss to the industry. Hurricane Andrew, of course, showed the folly in that thinking. Despite not making a direct hit on Miami, it caused more than USD 16 billion in loss, and at least 12 insurers went out of business.
Catastrophe models did exist prior to Hurricane Andrew, but their value was not widely appreciated and adoption was limited. That all changed after Andrew; the use of catastrophe models took off, and the insurance and reinsurance industry rapidly improved its catastrophe risk management practices. Fast forward to 2011, and the industry was once again hit with significant loss, this time due to an aggregation of major events: the Tohoku earthquake in Japan, major severe thunderstorm losses across the U.S., earthquakes in New Zealand, and floods in Thailand. All told, the global insured losses from these events exceeded USD 110 billion. Despite the scale of loss, only one insurer became insolvent.
That is tremendous improvement over a nearly 20-year span. Most importantly, after these dramatic events the industry stood ready to pay out claims, helping individuals and businesses recover and get back on their feet quickly.
A False Sense of Security?
These were, however, hard-fought gains. The industry had dealt with a number of challenging years in 2004, 2005, and 2008. Catastrophe models themselves had evolved and improved, as had the industry's focus on collecting higher quality exposure data and on managing portfolio risk with discipline and vigilance. Unfortunately, a number of factors at work today significantly raise the likelihood that the next major event could be a rude surprise to many.
To start, the industry hasn't been tested much in the past decade. Even taking 2011 into account, catastrophe losses over the last 10 years have been far below the long-term average. Since 2006, the average annual industry loss has been about USD 50 billion, whereas AIR's global suite of catastrophe models produces a long-term average annual loss closer to USD 75 billion1. We are also in the longest recorded drought of major U.S. landfalling hurricanes, a drought we all know will end at some point.
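The gap between recent experience and modeled expectation can be made concrete with a toy year loss table. The sketch below uses entirely made-up loss figures (not output from AIR's models or any real catalog) to show how a long-term average annual loss and an exceedance probability fall out of a simulated catalog of annual losses:

```python
import random

random.seed(42)

# Hypothetical year loss table: a simulated total industry loss (in USD
# billions) for each year of a 10,000-year catalog. The distribution here
# is purely illustrative; a real catastrophe model derives these values
# from physically simulated events run against exposure data.
num_years = 10_000
year_losses = []
for _ in range(num_years):
    # Most years are relatively quiet; a small fraction are severe
    # catastrophe years with an additional heavy-tailed loss.
    loss = random.expovariate(1 / 40)   # baseline, mean ~USD 40B
    if random.random() < 0.05:          # rare severe year
        loss += random.expovariate(1 / 500)
    year_losses.append(loss)

# Average annual loss (AAL): the mean loss over all simulated years.
aal = sum(year_losses) / num_years

# Exceedance probability: the share of simulated years whose loss meets
# or exceeds a threshold (here, a 2011-scale USD 110B aggregate year).
threshold = 110
ep = sum(1 for x in year_losses if x >= threshold) / num_years

print(f"Average annual loss: USD {aal:.1f} billion")
print(f"P(annual loss >= USD {threshold}B): {ep:.2%}")
```

The point of the exercise is that a decade of below-average observed losses is entirely consistent with a much higher modeled long-term average: quiet stretches dominate the catalog, while the AAL is pulled up by rare severe years that may simply not have occurred yet.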
Furthermore, industry turnover and expansion mean that many newcomers have never experienced catastrophic losses first-hand. During our recent Envision conference, a quick poll showed that roughly half of the catastrophe modeling professionals in attendance had joined the industry after 2005. If that is indicative of the industry at large, then the lessons of 2004 and 2005 are things of legend that we can only hope have been passed along.
On top of this, the widespread soft market conditions seem, from anecdotal evidence, to be reducing the time spent on assessing the underlying risk. Reports from some quarters indicate exposure data quality is diminishing and higher resolution exposure data sets are no longer being provided as part of the risk transfer discussion.
Back to Basics
At some level, none of this is a great surprise. All of us naturally suffer from recency bias: the tendency to overweight recent experience and assume that the current state will persist into the future. Those of us who deal in catastrophe risk will always be vulnerable to this bias because, by their nature, catastrophes occur with low frequency. When more immediate pressures intrude, or when we have never experienced less tranquil times, it is easy to discount the need to manage such a risk properly.
Nonetheless, this should not be an excuse for inaction. Enough of us in the industry have lived through the lessons of prior catastrophes. So, before the next major event occurs, it is time for every organization to carefully consider how its catastrophe risk management processes stack up. A very useful, albeit lengthy, document produced by the Association of British Insurers provides more detail on best practices in this area. Key questions from this resource that every organization should be asking itself today include:
- What is the current quality of my exposure data? Is it improving?
- How have I selected and validated the models I am using?
- What assumptions am I making when running the analysis?
- What sources of loss are not covered in my modeling process and how am I adjusting for these?
Over the past few months, I’ve been asked what source of risk I think companies need to be focused on today.
My answer is simple—the risk of underestimating the current risk in your portfolio.
In this market, it is quite easy to be lulled into a sense of complacency around your modeling process if you don’t shine a spotlight on it regularly. Focusing time on these four questions can yield great dividends and help you avoid painful discoveries about your actual risk in the aftermath of the next major catastrophe.
1 Losses based on AIR’s current suite of catastrophe models.