October 10, 2018

After a string of low loss years, the devastating 2017 Atlantic hurricane season saw four hurricane landfalls in the United States for a cumulative insured loss exceeding USD 50 billion. Those who have been in the industry long enough know far too well that it is exactly this loss volatility that makes managing extreme event risk so challenging.

As we pass the midpoint of the 2018 Atlantic season on the heels of another catastrophic storm (read more about Hurricane Florence), and with the uncertainty of what lies ahead in the next few months, it may be an appropriate time to reflect upon a few modeling fundamentals. Here are five questions that I think every (re)insurance executive should ask about their hurricane risk management practices.

Am I keeping sight of the basics?

As hurricane models become more sophisticated and take advantage of better data and technology, it is easy to focus on the individual model features and their impact on estimated losses. However, there is a far more fundamental driver of loss output integrity that often needs attention—exposure data quality. High quality exposure data help ensure that the results generated by a catastrophe model accurately reflect the loss potential.

The first step to developing confidence in model output is to address any problems with the completeness and reasonability of the exposure data, including primary and secondary risk characteristics; geographic information; replacement value; and policy, layer, and location terms. The most accurate results require using location-specific exposure characteristics, rather than relying on a model’s assumptions of unknown characteristics.
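A completeness check of this kind can be sketched in a few lines. The field names (`lat`, `replacement_value`, and so on) and the `completeness_report` helper below are illustrative assumptions, not part of any particular modeling platform:

```python
# Sketch of a basic exposure data completeness check over a list of
# location records. Field names are hypothetical.
REQUIRED_FIELDS = ["lat", "lon", "replacement_value", "construction", "occupancy"]

def completeness_report(locations):
    """Return the fraction of records missing each required field."""
    n = len(locations)
    missing = {f: 0 for f in REQUIRED_FIELDS}
    for loc in locations:
        for f in REQUIRED_FIELDS:
            if loc.get(f) in (None, "", "UNKNOWN"):
                missing[f] += 1
    return {f: missing[f] / n for f in REQUIRED_FIELDS}

portfolio = [
    {"lat": 25.76, "lon": -80.19, "replacement_value": 1_200_000,
     "construction": "masonry", "occupancy": "residential"},
    {"lat": 27.95, "lon": -82.46, "replacement_value": None,
     "construction": "UNKNOWN", "occupancy": "commercial"},
]
print(completeness_report(portfolio))
# half the records are missing replacement_value and construction
```

Fields flagged here would fall back to a model's built-in assumptions, which is exactly what the paragraph above cautions against.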

Another basic is a full understanding of your modeling assumptions, which include choice of catalog; appropriate selection of sub-perils; and various resolution, demand surge, and uncertainty options. These modeling assumptions could have a significant impact on your interpretation of modeling results.

What time horizon am I using to express loss probabilities?

When losses are expressed in terms of a one-year timeframe (for example, a “1-in-100-year loss” or a “loss at the 1% exceedance probability”), there is a natural tendency to view them as being more remote than they actually are. Reframing a 1% annual probability loss as a nearly 10% probability loss over the next 10 years can shift your perspective on its likelihood, even though the loss is equivalent.

The result of always thinking of losses for a single upcoming year—in addition to being out of sync with how businesses are run—is that you may be overestimating your risk tolerance. Of course, there is nothing magical about a 10-year timeframe, so pick a length of time that makes sense for your business.
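The reframing above follows from treating years as independent: the probability of at least one exceedance over n years is 1 − (1 − p)^n. A minimal sketch (the `multi_year_probability` helper is just an illustrative name):

```python
def multi_year_probability(annual_p, years):
    """Probability of at least one exceedance over `years` years,
    assuming years are independent."""
    return 1 - (1 - annual_p) ** years

# A 1% annual exceedance probability over a decade:
p10 = multi_year_probability(0.01, 10)
print(round(p10, 4))  # 0.0956, i.e. nearly 10%
```

Swapping in a different horizon is just a matter of changing the `years` argument, which is the point of picking a timeframe that matches how your business actually plans.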

Taking it one step further, you may want to account for the increasing value of exposures. For example, assuming an annual average growth of 6%, a hurricane today that causes USD 25 billion in loss will cause more than USD 40 billion in loss 10 years from now. And the probability of a year with USD 150 billion or more in insured hurricane losses over the next decade goes from roughly 10% to nearly 17%. Your loss analysis should reflect the expected change in the number and value of exposures in your own book of business.
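The growth figures in that example come from simple compounding. A sketch, using the 6% annual growth assumption stated above (the `grown_loss` helper is an illustrative name):

```python
def grown_loss(loss_today, annual_growth, years):
    """Project a loss forward assuming compound exposure growth."""
    return loss_today * (1 + annual_growth) ** years

# USD 25 billion today at 6% annual exposure growth:
future = grown_loss(25e9, 0.06, 10)
print(round(future / 1e9, 1))  # 44.8, consistent with "more than USD 40 billion"
```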

Am I evaluating the right loss measures?

Modeled losses can be calculated on an aggregate or occurrence basis. Just as in nature, each year can have zero, one, or multiple damaging events, and aggregate losses take into consideration all loss-causing events in each simulated year.

Occurrence losses, on the other hand, are based only on the largest loss in each simulated year, so if two identical losses occur within the same year, only one contributes toward the occurrence loss. For lower-frequency perils such as earthquake, occurrence losses are often similar to aggregate losses. For hurricanes, on the other hand, where conditions can be conducive to high or low activity and clustering is a real phenomenon, aggregate losses can be much higher than occurrence losses if multiple events impact your portfolio. While it may be useful to examine losses on an occurrence basis for certain scenarios, aggregate modeled losses usually provide a more meaningful risk metric.
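The distinction is easy to see on a toy simulated catalog. The year and event figures below are made up purely for illustration:

```python
# Hypothetical simulated catalog: each inner list holds the event losses
# (USD millions) occurring in one simulated year.
sim_years = [
    [],                 # quiet year with no loss-causing events
    [120.0],            # single landfall
    [80.0, 80.0, 40.0], # active season with multiple landfalls
]

# Aggregate basis: every event in the year counts.
aggregate = [sum(y) for y in sim_years]
# Occurrence basis: only the largest event in the year counts.
occurrence = [max(y, default=0.0) for y in sim_years]

print(aggregate)   # [0, 120.0, 200.0]
print(occurrence)  # [0.0, 120.0, 80.0]
```

In the active third year the aggregate loss (200) is well above the occurrence loss (80), which is the hurricane-clustering effect described above.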

Similarly, while national portfolios are often broken down by region, only analyzing regional losses may not provide a comprehensive picture of the total risk. Loss metrics for different regions cannot simply be added to calculate the combined loss metric. For example, the year corresponding to the 1% loss exceedance probability in one region is very unlikely to correspond to the same year in another region. Focusing solely on the region with the highest loss can underestimate the risk, whereas adding the 1% EP loss across regions would overestimate the risk. The most comprehensive approach is to run the loss analysis using combined regional portfolios.
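A small simulation makes the non-additivity concrete. The loss distributions below are purely illustrative (independent lognormals, not calibrated to any model), and `ep_loss` is a hypothetical helper:

```python
import random

random.seed(42)
n_years = 100_000

# Toy annual losses for two regions with independent distributions.
region_a = [random.lognormvariate(0, 1) for _ in range(n_years)]
region_b = [random.lognormvariate(0, 1) for _ in range(n_years)]
combined = [a + b for a, b in zip(region_a, region_b)]

def ep_loss(losses, p):
    """Loss at annual exceedance probability p (e.g. 0.01 for 1-in-100)."""
    return sorted(losses)[int((1 - p) * len(losses))]

sum_of_regional = ep_loss(region_a, 0.01) + ep_loss(region_b, 0.01)
combined_ep = ep_loss(combined, 0.01)
# With independent regions: max regional EP < combined EP < sum of regional EPs,
# so the largest region understates the risk and the sum overstates it.
print(ep_loss(region_a, 0.01), combined_ep, sum_of_regional)
```

The combined-portfolio run is the only one of the three numbers that correctly measures the total risk.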

Am I accounting for non-modeled sources of loss?

As we have seen numerous times in the past, some sources of loss during an extreme event may not be explicitly captured by a catastrophe model. These can include non-modeled secondary perils, non-modeled lines of business and coverages, and loss adjustment expenses. To help avoid surprises, executives should ask their modeling teams if possible sources of non-modeled insured loss have been considered and if appropriate efforts are being made to adjust the loss results. Model documentation will clearly indicate what each model covers.

As data collection improves and technology advances, models will be able to capture more sources of previously non-modeled loss. Users should regularly and systematically validate models and assess their assumptions about non-modeled losses using recent loss data.

Do I understand what drives my modeled losses?

Once you have run losses using high-quality data, developed a solid comprehension of your modeling assumptions and non-modeled sources of loss, and selected appropriate risk measures, it is prudent to take a deeper dive into the loss output to determine what is driving your modeled losses.

The factors that can be examined include geography, line of business, and construction and occupancy. Such analyses can be used to shape decisions about which regions present growth opportunities and which are costly relative to the premium income. Regional loss analyses can also be used to guide claims handling decisions and internal communications.

In addition, if losses are disproportionately driven by a particular segment of your portfolio (e.g., wood frame construction), you can further investigate to understand if the premium appropriately reflects the risk, or if there may be inaccurate coding of data or some other data-driven error that is responsible.
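A segment-level comparison of modeled loss to premium can be sketched with a simple grouping. The location records, average annual loss (AAL) figures, and premiums below are hypothetical:

```python
from collections import defaultdict

# Hypothetical per-location modeled AAL and premium (USD thousands).
locations = [
    {"construction": "wood frame", "aal": 12.0, "premium": 10.0},
    {"construction": "wood frame", "aal": 9.0,  "premium": 8.0},
    {"construction": "masonry",    "aal": 4.0,  "premium": 9.0},
]

by_segment = defaultdict(lambda: {"aal": 0.0, "premium": 0.0})
for loc in locations:
    seg = by_segment[loc["construction"]]
    seg["aal"] += loc["aal"]
    seg["premium"] += loc["premium"]

# A modeled-loss-to-premium ratio well above 1 flags segments where premium
# may not reflect the risk -- or where miscoded data is inflating the loss.
for name, totals in by_segment.items():
    print(name, round(totals["aal"] / totals["premium"], 2))
# wood frame 1.17
# masonry 0.44
```

A high ratio is a prompt for investigation, not a conclusion: as noted above, the cause may be inadequate premium or simply inaccurate coding of exposure data.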

Closing Thoughts

While nothing can quite prepare us for the devastation wreaked by hurricanes, year-to-year volatility in Atlantic hurricane losses is something that should be expected by the industry—and putting in place a few modeling best practices can go a long way in helping you get the most out of your hurricane models. The promise of catastrophe modeling, after all, is to offer a stable, long-term view of risk to help companies weather the short-term ups and downs.

If you’re not already certain of your answers to the five questions above, I encourage you to talk to your modeling teams to ensure that you have the highest level of confidence in your modeling results.

by Bill Churney
AIR Worldwide



