By Stuart Kyle | April 8, 2019

Many carriers want to develop a granular rating plan that will give them a competitive advantage in the marketplace. In high-hazard regions such as Florida and the Gulf Coast, natural catastrophes are a major contributor to the overall average annual insured loss and also pose a major aggregation risk for insurers and reinsurers.

One of the challenges of breaking into a new area, or of writing new risks in existing areas, is that little exposure data may be available. AIR models can analyze any property data record in the model domain for which sufficient information is provided about the property’s location, structural attributes, and values. It is not necessary, however, that the record correspond to an actual structure.

The benefit of modeling these notional structures is that there is an effectively unlimited number of ways to view risk anywhere in our model domain. Clients can examine how loss changes as characteristics vary, from construction type to window protection. This ability to get a customized view of vulnerability is valuable for developing rating plans and for selecting the best new risks.

There are many ways to determine the geographic locations of these base structures, each of which has its own advantages and disadvantages, as described in the following sections.

Overlay a Grid

One common method is to overlay a grid of the desired resolution. Its advantages are that it is easy to generate and covers the entire geographic region to be analyzed. A disadvantage is that it can produce excessive information in areas that are not of interest and insufficient detail in areas that are of major interest.
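As a minimal sketch of this idea, the snippet below generates a regular grid of notional location coordinates over a bounding box. The bounding box, grid step, and field names are illustrative assumptions, not part of any AIR data format.

```python
import numpy as np
import pandas as pd

def make_grid(lat_min, lat_max, lon_min, lon_max, step_deg):
    """Generate notional location points on a regular latitude/longitude grid."""
    lats = np.arange(lat_min, lat_max + step_deg, step_deg)
    lons = np.arange(lon_min, lon_max + step_deg, step_deg)
    lon_grid, lat_grid = np.meshgrid(lons, lats)
    return pd.DataFrame({"latitude": lat_grid.ravel(), "longitude": lon_grid.ravel()})

# Example: a 0.1-degree grid roughly covering peninsular Florida
grid = make_grid(25.0, 31.0, -87.6, -80.0, 0.1)
print(len(grid), "notional locations")
```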

Population-Weighted Exposure

A second method is to use a population-weighted exposure. One of its greatest advantages is the higher level of detail in the results for the areas that have the greatest population. Although there are different methods for producing a population-weighted exposure, the general idea is the same. One way is to take advantage of U.S. census data: census block groups are designed to contain broadly similar numbers of people, so using the coordinates of each block group effectively achieves the desired result. An alternative is to find a map file of the population density in the area of interest and create a variable-resolution grid from it.
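To make the census-based version concrete, here is a sketch that samples notional locations in proportion to population, assuming you already have a table of block-group centroids with population counts. The inline data and column names are hypothetical.

```python
import numpy as np
import pandas as pd

# Hypothetical block-group centroids with population counts
blocks = pd.DataFrame({
    "latitude":   [25.77, 26.12, 27.95, 28.54, 30.33],
    "longitude":  [-80.19, -80.14, -82.46, -81.38, -81.66],
    "population": [3200, 1500, 2800, 2100, 1900],
})

# Draw notional locations with probability proportional to population
rng = np.random.default_rng(seed=42)
weights = blocks["population"] / blocks["population"].sum()
sample_idx = rng.choice(blocks.index, size=1000, replace=True, p=weights)
notional_locations = blocks.loc[sample_idx, ["latitude", "longitude"]].reset_index(drop=True)
```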

While there are numerous advantages to this method, one disadvantage is that it makes it difficult to assess areas with low but growing populations that are exposed to a high degree of hazard.

Hazard-Weighted

Another approach is the hazard-weighted method. Using it, we focus on areas that have the highest hazard and analyze more locations in those areas. If you want to examine hurricane risk in Texas, for example, it is more important to look at relatively lower population-density areas that are right on the coast than it is to look at El Paso. Or, when examining California earthquake, it may not be necessary to analyze the Sacramento area at the same resolution as, say, the East Bay suburbs.
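A hazard-weighted exposure can be built the same way, by substituting a hazard metric for population. In the sketch below the hazard scores are placeholders for whatever measure you have available, such as distance to the coast or a modeled wind speed at a given return period.

```python
import numpy as np
import pandas as pd

# Hypothetical grid cells with a relative hazard score (higher = more hazardous)
cells = pd.DataFrame({
    "latitude":     [29.3, 29.8, 31.8, 27.8, 26.1],
    "longitude":    [-94.8, -95.4, -106.4, -97.4, -97.2],
    "hazard_score": [0.90, 0.60, 0.05, 0.80, 0.85],  # e.g., the El Paso cell scores low
})

# Allocate a fixed budget of notional locations in proportion to hazard
total_locations = 500
cells["n_locations"] = np.rint(
    total_locations * cells["hazard_score"] / cells["hazard_score"].sum()
).astype(int)
```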

Combination of Methods

Finally, to achieve the highest level of sophistication, we can take a combination of these methods. The advantage of considering population and hazard in conjunction is that we are now specifically focused on where our analysis results have the most impact on business while still including the entire geographic area.

This approach yields the most detailed notional exposure set, but it can also be time-consuming and difficult to create. The level of detail required should therefore be assessed up front so that the appropriate granularity of notional data is used for the given workflow.
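One simple way to blend the two views, shown as an assumption-laden sketch below, is to mix normalized population and hazard weights and allocate notional locations from the combined weight; the blending parameter and the inline data are illustrative only.

```python
import numpy as np
import pandas as pd

def combined_weights(population, hazard, alpha=0.5):
    """Blend normalized population and hazard into one sampling weight.

    alpha = 1.0 is purely population-driven, alpha = 0.0 purely hazard-driven;
    the 0.5 default is an arbitrary starting point, not a recommendation.
    """
    pop_w = np.asarray(population, dtype=float)
    haz_w = np.asarray(hazard, dtype=float)
    pop_w /= pop_w.sum()
    haz_w /= haz_w.sum()
    blended = alpha * pop_w + (1.0 - alpha) * haz_w
    return blended / blended.sum()

# Hypothetical cells with both population and hazard information
cells = pd.DataFrame({
    "population":   [3200, 150, 2800, 90, 1900],
    "hazard_score": [0.90, 0.95, 0.30, 0.85, 0.60],
})
cells["weight"] = combined_weights(cells["population"], cells["hazard_score"])
cells["n_locations"] = np.rint(1000 * cells["weight"]).astype(int)
```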

Whichever placement method you use, a notional exposure set lets you perform relativity analysis to isolate the impact of the relative vulnerability of building characteristics or policy terms. By inputting multiple records at the same geographic location and varying only their features, you can measure the impact on vulnerability independent of the hazard. When conducting relativity analysis, AIR can assist with exposure design so that bias is reduced, or even eliminated in the case of double counting. Our models make detailed assumptions about building features by year built and geography, and those assumptions can cloud your understanding of the hazard profile if the features are not input in a uniform manner. For example, if secondary features are left unknown in Miami-Dade or Broward counties, then for newer year-built records you will be evaluating buildings that reflect the strongest building codes in the country and comparing them with relatively weaker buildings on Florida’s Gulf Coast.
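To illustrate the relativity idea, the sketch below builds several notional records at the same coordinates that differ only in window protection, so any difference in modeled loss is attributable to that feature rather than to hazard. The field names and values are illustrative, not an AIR import format.

```python
import pandas as pd

# One fixed location; vary only the window-protection feature
base_record = {
    "latitude": 25.79,
    "longitude": -80.22,
    "construction": "Masonry",
    "occupancy": "Single-family",
    "replacement_value": 400_000,
}

window_protection_options = ["None", "Basic shutters", "Impact-rated glazing"]

records = pd.DataFrame(
    [{**base_record, "window_protection": option} for option in window_protection_options]
)

# Because the location (and therefore the hazard) is identical across records,
# modeled loss ratios for these records can be compared directly.
```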

When using exposure data, even if the particular locations or values do not exist in reality, the ratio of coverages should reflect a realistic policy. Because losses scale with replacement value, the relative weights of the coverages should be reasonable in comparison with an actual policy. This allows flexibility when establishing pricing and lets you account for various mitigation factors or additional secondary characteristics when testing the notional data set, so that it reflects the full range of scenarios possible within your portfolio, which may not be achievable when analyzing only an in-force book.
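As a small illustration of keeping coverage ratios realistic, the snippet below splits a total building value across coverages in fixed proportions. The proportions shown are placeholders, not recommendations.

```python
# Hypothetical coverage split for a notional residential policy,
# expressed as fractions of the Coverage A (building) value
total_building_value = 400_000
coverage_ratios = {
    "building":          1.00,  # Coverage A
    "other_structures":  0.10,  # Coverage B
    "contents":          0.50,  # Coverage C
    "additional_living": 0.20,  # Coverage D
}
coverages = {name: ratio * total_building_value for name, ratio in coverage_ratios.items()}
print(coverages)
```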


Read “The Aesthetics of Probability” for a discussion about making probability easier to envision

