By David Lalonde | March 1, 2014

Even as Southern England is cleaning up after devastating floods and the wettest January in nearly 250 years, many of us are likely wondering when and where the next catastrophe will occur.

Since their debut in 1987, catastrophe models have become an integral part of risk management. They help answer questions such as how often significant earthquakes, hurricanes, floods, and other catastrophes occur, where they are most likely to occur, and what the financial consequences will be when they do. Catastrophe models don't, however, provide "the answer" or "the number" but rather a distribution of possible outcomes, each associated with a probability and uncertainty.

The appeal of deterministic scenarios is strong. Much simpler to interpret are answers to questions such as "What would my losses be if Hurricane Andrew were to happen again?" or "What would be the impact on my business should the California ShakeOut earthquake scenario, or Lloyd's Realistic Disaster Scenario for a Japan earthquake, actually occur?" The problem here, of course, is that the probability of any single event occurring in a specific way (for example, a hurricane of a certain size making landfall at a specific location with a specific wind speed footprint) is near zero. We don't know which event will happen, but we have to prepare for and protect against the whole range of future events.

In fact, managing to a small set of deterministic events can lead to a false sense of security if risk managers simply manage around the zones of risk identified by such scenarios. We know that a given level of loss can result from many different scenarios, so any particular set of events may be more or less useful for managing particular portfolios.

Still, scenario testing can provide useful insight into a company's resilience. And catastrophe models (in addition to providing fundamental outputs like exceedance probability curves) offer a wealth of scenarios to test: thousands, in fact, or, depending on the peril, even tens of thousands. For most events that you can dream up, if they are realistic, a similar event can be found within our stochastic catalogs. From within AIR's Touchstone® platform, you can select an event by filtering for peril, location, intensity, and a wide range of other event parameters, run it against your portfolio, and assess the likely losses from a variety of perspectives.

Our quarterly "Megadisaster" series in AIR Currents does some of the legwork for you by examining the impacts of select scenarios that produce 1% exceedance probability losses (the 100-year loss). The series initially focused on "non-peak" events (the first installment considered a magnitude 6.8 earthquake near Bogotá, Colombia), which are perhaps more likely to give rise to unpleasant surprises; going forward, we'll explore a wide range of high-impact but entirely plausible events in both peak and non-peak regions of the world. The next in the series, due for publication in March, will look at a Japan typhoon. Are you ready?
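To make the "1% exceedance probability" idea concrete, here is a minimal illustrative sketch (not AIR's implementation, and using made-up synthetic losses) of how an empirical exceedance probability curve can be built from simulated annual losses, and how the 100-year loss can be read off it:

```python
import random

def ep_curve(annual_losses):
    """Return (loss, exceedance probability) pairs, largest loss first.

    In an n-year simulation, the empirical annual probability of
    equaling or exceeding the i-th largest loss is i / n.
    """
    n = len(annual_losses)
    ranked = sorted(annual_losses, reverse=True)
    return [(loss, (i + 1) / n) for i, loss in enumerate(ranked)]

def loss_at_return_period(annual_losses, years):
    """Smallest simulated loss whose exceedance probability is <= 1/years."""
    target = 1.0 / years
    eligible = [loss for loss, p in ep_curve(annual_losses) if p <= target]
    return min(eligible) if eligible else None

# 10,000 synthetic "years" of heavy-tailed losses, for illustration only
random.seed(0)
losses = [random.paretovariate(1.5) * 1e6 for _ in range(10_000)]

# The "100-year loss" is the loss with a 1% annual exceedance probability
print(f"100-year loss: {loss_at_return_period(losses, 100):,.0f}")
```

The key point the sketch makes is that the 100-year loss is a point on a curve, not a single predicted event: many different simulated years, driven by very different events, can land near that same loss level.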

What's important to remember, however, is that any number of scenarios could give rise to a given level of loss. That's what makes a probabilistic approach to catastrophe risk assessment, one that embraces the entire risk landscape, so valuable.

Is there a downside to using probabilistic models? If you think so, we'd like to hear your thoughts.
