By Ivelin Zvezdov | February 17, 2017

Arriving at the pure technical catastrophe premium for flood insurance is not as straightforward as it might seem. One significant challenge is the availability and fidelity of historical and modeled data. Traditionally, historical flood maps and engineering vulnerability tables have been used to estimate the most likely loss outcomes.

Figure 1. Relative risk varies between properties, both within and just outside of flood zones. (Source: AIR)

Limitations of a Traditional Approach

For insured risks immediately outside the 0.2% (500-year) and 1% (100-year) flood plains, no historical flood intensity information is available. So, when determining the appropriate cat premium for a risk within 300 feet of a 1% (100-year) flood zone, an underwriter has to make a largely arbitrary choice, approximately scaling the hazard intensity by distance. Within flood zones, intensity is also highly uniform across large geographic areas, which makes it difficult to differentiate between properties based on vulnerability and overall riskiness, and in turn to accurately determine an appropriate cat premium (Figure 1). As with risks outside the flood zone, an underwriter must attempt to scale intensities to differentiate between properties at different distances from the underlying hazard.

Furthermore, there is currently no way to accurately account for all of the different types of flooding that a single location might experience. Since flood policies typically cover all types of flooding, underwriters need a solution that captures the risk associated with floods caused by both precipitation and coastal storm surge.

How Catastrophe Modeling Can Help

Consider a commercial property located in a low-lying area near the coast. Imagine we have assessed the risk and have determined that, in any given year, this location has:

  • A 0.1% chance of experiencing 1 foot of coastal storm surge
  • A 0.1% chance of experiencing 1 foot of rainfall

If we, as underwriters, are seeking to understand the total flood risk, is it accurate to say that the annual chance of 1 foot of flooding is 0.2% and that the annual chance of 2 feet of flooding is 0.1%? Surely not: although the location is exposed to both perils, it is highly unlikely to experience such extremes from both types of flood simultaneously. The underwriter must therefore scale down the risk to account for the non-overlapping component of the two flood perils.
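To make the arithmetic concrete, here is a minimal Python sketch of the point, assuming (purely for illustration) that the two perils are statistically independent; a real flood model would capture the actual dependence between surge and rainfall at the location.

```python
# A minimal sketch of the reasoning above. The 0.1% annual probabilities come
# from the example in the text; the independence assumption is illustrative only.
p_surge = 0.001  # annual probability of 1 foot of coastal storm surge
p_rain = 0.001   # annual probability of 1 foot of rainfall

# Naive addition of the two probabilities:
p_naive = p_surge + p_rain                   # 0.2000%

# If the two perils were independent:
p_either = 1 - (1 - p_surge) * (1 - p_rain)  # ~0.1999% -- at least one peril occurs
p_both = p_surge * p_rain                    # 0.0001% -- both extremes in the same year

print(f"naive 'either' estimate : {p_naive:.4%}")
print(f"independent 'either'    : {p_either:.4%}")
print(f"independent 'both'      : {p_both:.4%}")
```

Even under independence, the chance of both extremes occurring in the same year is orders of magnitude below the naive 0.1%; with partial dependence the true figure lies somewhere in between, which is exactly what a model must quantify.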

A catastrophe modeling platform can resolve this type of dilemma by providing all of the data and modeled losses for both types of perils at a high resolution that accounts for variations in intensity across geospatial grid cells. Insured losses can then be aggregated across both types of flood peril. Extreme loss scenarios (i.e., tail losses) are modeled and provide detailed assessments of risk down to the single-location level that are highly sensitive to the geophysical properties of the area (e.g., the distance to nearby bodies of water and the location's coastal elevation) (Figure 2).

Figure 2. Though the overall risk is correlated between these two locations due to their geographic proximity, the tail risk is substantially higher for Risk N, despite its close proximity to Risk 1. (Source: AIR)
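The sketch below illustrates, with invented frequencies and severities, how a simulation-based approach aggregates losses across the two flood perils within each modeled year at a single location. It is not an actual platform workflow, and for brevity it omits the surge-precipitation dependence that a full model would include.

```python
import numpy as np

rng = np.random.default_rng(42)
n_years = 100_000  # simulated years

# Illustrative per-peril annual losses at one location: most years are
# loss-free; flood years draw a lognormal severity (all values invented).
surge_loss = np.where(rng.random(n_years) < 0.002,
                      rng.lognormal(mean=11, sigma=1.0, size=n_years), 0.0)
rain_loss = np.where(rng.random(n_years) < 0.010,
                     rng.lognormal(mean=10, sigma=1.0, size=n_years), 0.0)

# The perils are aggregated within each simulated year, rather than by
# adding their probabilities, giving one combined flood loss distribution.
total_loss = surge_loss + rain_loss

print("mean annual loss:", total_loss.mean())
print("1-in-100 loss   :", np.quantile(total_loss, 0.99))
print("1-in-500 loss   :", np.quantile(total_loss, 0.998))
```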

Traditional approaches that rely solely on historical flood maps cannot provide a complete solution. To accurately assess the risk, all flood-related actuarial and pricing decisions should involve modeling to capture the associated uncertainty and interdependence in probabilistic outcomes. This process starts with defining the risk load on which to base the insurance premium (Figure 3). The underwriter needs a modeled solution to put historical losses into context and quantify the expected variance of future insured losses.

Figure 3. The total necessary premium charged to insure a risk can be determined by two pieces of information provided by the model: the mean loss, to establish a base price charged each year, and the year-to-year variability in losses, which sets the loading factor on top of the base to cover years in which the actual losses exceed the mean loss.
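As a simple illustration of the structure Figure 3 describes, the sketch below prices a single risk with a standard-deviation loading. The loading multiplier k and the simulated losses are hypothetical, and actual risk loads may be calibrated differently.

```python
import numpy as np

def technical_premium(simulated_annual_losses, k=0.5):
    """Mean loss as the base price, plus a loading proportional to the
    year-to-year variability of losses (a standard-deviation risk load)."""
    losses = np.asarray(simulated_annual_losses, dtype=float)
    base = losses.mean()                 # expected annual loss
    risk_load = k * losses.std(ddof=1)   # loading to cover volatile years
    return base + risk_load

# Hypothetical modeled annual losses for one risk
rng = np.random.default_rng(0)
losses = np.where(rng.random(10_000) < 0.02,
                  rng.lognormal(mean=12, sigma=1.0, size=10_000), 0.0)
print(round(technical_premium(losses, k=0.5), 2))
```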

Additional Analyses

Insurers of large commercial and industrial lines face even more complex pricing decisions for single risks, multi-risk accounts, and at the business unit level. For estimating the probability of breaching insurance policy deductibles, and of having to pay out multiple insurance policies in full, the complete simulated insured loss distributions are required. Capital reserving is based on tail risk metrics, which also, by definition, can only be produced using full probabilistic distributions of insured losses. Furthermore, to accurately capture tail loss estimates in multi-risk analyses—for a single account or across an entire business unit—modeling of inter-risk dependencies and correlation structures is essential.
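For example, once a complete simulated loss distribution is available, deductible-breach and full-limit payout probabilities reduce to counting simulated years. The policy terms and losses below are invented for illustration.

```python
import numpy as np

def breach_probabilities(ground_up_losses, deductible, limit):
    """Estimate, from simulated ground-up losses, the probability that the
    deductible is breached and that the full policy limit is exhausted."""
    losses = np.asarray(ground_up_losses, dtype=float)
    p_breach = np.mean(losses > deductible)
    p_full_limit = np.mean(losses >= deductible + limit)
    return p_breach, p_full_limit

# Invented simulated losses and policy terms
rng = np.random.default_rng(3)
losses = np.where(rng.random(20_000) < 0.05,
                  rng.lognormal(mean=10, sigma=1.0, size=20_000), 0.0)
print(breach_probabilities(losses, deductible=25_000, limit=250_000))
```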

Accumulation is another task where average historical losses don’t sufficiently capture the true risk associated with a portfolio. In some cases, such as when risks are truly independent, accumulation workflows (i.e., portfolio roll-ups) can be done simply by adding up the potential losses. However, for many other portfolio accumulations, including solvency metrics like VaR and TVaR, simple summations aren’t appropriate. When risks are correlated because of geography, perils insured, or lines of business being underwritten, interdependencies must be accounted for. Often, these metrics can be challenging to produce, even when using nonlinear accumulation techniques. They can only be derived from a joint insured loss distribution using modeled output for the account, line, or business unit. 
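The sketch below illustrates the point with an invented three-risk portfolio whose losses are correlated through shared flood years. Tail metrics such as VaR and TVaR are read directly off the joint (rolled-up) loss distribution, and they can differ materially from naively summing standalone figures.

```python
import numpy as np

rng = np.random.default_rng(1)
n_years, n_risks = 50_000, 3

# Correlated annual losses: a portfolio-wide flood year can hit every risk,
# plus rarer risk-specific flood years (all frequencies/severities invented).
shared_year = rng.random(n_years) < 0.01
hit = shared_year[:, None] | (rng.random((n_years, n_risks)) < 0.002)
losses = np.where(hit, rng.lognormal(mean=11, sigma=0.8, size=(n_years, n_risks)), 0.0)

portfolio = losses.sum(axis=1)  # joint (rolled-up) annual loss distribution

def var(x, p):
    return np.quantile(x, p)

def tvar(x, p):
    return x[x >= np.quantile(x, p)].mean()

print("portfolio VaR 99%    :", var(portfolio, 0.99))
print("sum of per-risk VaRs :", sum(var(losses[:, i], 0.99) for i in range(n_risks)))
print("portfolio TVaR 99%   :", tvar(portfolio, 0.99))
```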

Finally, for some product structuring and pricing decisions—such as deriving the technical catastrophe premium for multi-risk aggregate umbrella policies—dependencies among risks are critical to understanding the implications of risk aggregation. This is particularly important for policies that cover multiple locations in close proximity, as it is likely that a catastrophe that affects one location will also affect others. Accounting for these correlations is important when setting and maintaining sustainable and competitive commercial insurance premiums. This type of analysis can only be accomplished using a comprehensive suite of robust catastrophe models that provide a full set of loss distributions and account for correlations between locations to determine an appropriate price for an aggregate policy.
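As a final hypothetical sketch, an aggregate umbrella layer can be priced by applying its attachment and limit to the combined annual losses across correlated locations in each simulated year; all structure parameters and loss assumptions here are invented.

```python
import numpy as np

rng = np.random.default_rng(7)
n_years, n_locs = 50_000, 5

# Nearby locations: a regional flood year tends to affect several of them
# at once (frequencies and severities are invented for illustration).
event_year = rng.random(n_years) < 0.02
hit = event_year[:, None] & (rng.random((n_years, n_locs)) < 0.6)
loc_losses = np.where(hit, rng.lognormal(mean=11, sigma=0.7, size=(n_years, n_locs)), 0.0)

annual_total = loc_losses.sum(axis=1)  # combined loss across all covered locations

# Hypothetical aggregate umbrella layer applied to the combined annual loss
attachment, agg_limit = 100_000.0, 500_000.0
layer_loss = np.clip(annual_total - attachment, 0.0, agg_limit)

print("layer expected annual loss:", layer_loss.mean())  # basis for the technical premium
```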

Categories: Best Practices, Flood
