Perspectives

 

February 15, 2012

Of the many uses of catastrophe models, few have more important potential consequences than reporting to rating agencies. Agencies may rate both the claims-paying and borrowing ability of an insurer, and their views heavily influence an insurer's ability to produce business and attract capital. Their standards for evaluating a company's assessment and management of catastrophe risk are broad, yet rigorous—and insurers simply cannot afford to mishandle the use of catastrophe models in making their case. Here are a few questions that insurers must be prepared to answer.

What Models and Analysis Options Should Be Chosen?

While some aspects of a particular natural hazard and of building vulnerability are settled science, modeling firms make independent judgments on innumerable assumptions and parameter choices. Even the application of complex policy conditions can differ. So it is not surprising that results from different modeling firms can vary greatly even when applied to the same exposures. Insurers have real choices about which models to use and which options to select. Rating agencies want a coherent explanation of the choice of any one model, or of several, to assess catastrophe risk.

To assist insurers in understanding the models, a surprising amount of information is readily available and, with little or no additional effort, apples-to-apples comparisons of models can be made. Such efforts ought not to be stifled by the misplaced perception that models are "black boxes." In fact, the validation requirements of insurers and regulators have led to considerable model transparency. The Florida Commission on Hurricane Loss Projection Methodology receives submissions running to several hundred pages, describing each model vendor's approach to modeling U.S. hurricane risk. The Reinsurance Association of America conducts comparisons across models annually and provides its conclusions to its members. Similarly, real-time loss estimates issued by model vendors are publicly available. A little research can go a long way toward justifying model choices for a particular enterprise.

In recent years, alternative "catalogs" representing quite distinct assumptions about hazard have opened up a broader range of analysis options for insurers. In AIR's case, hurricane risk can be assessed with climatological (standard) or conditional (warm ocean temperature) catalogs, and earthquake risk with time-independent or time-dependent catalogs. Rating agencies sometimes urge insurers to take what they interpret as the most conservative view, meaning the options that translate to the highest level of risk. AIR's view is that this is a dangerous oversimplification. What matters is the best answer for the rated entity. Agencies want insurers to review the scientific assumptions in light of their exposures and concentrations. Insurers should run multiple catalogs and multiple models as appropriate, and synthesize a risk assessment that is transparent, defensible, and reflective of the company's own view of risk.

This raises the question of how to "blend" model results from multiple catalogs or event sets from several modeling firms, or even across old and new versions from the same vendor. Statistically, it is not valid to calculate a simple average, because model results represent probability distributions of loss, and the losses themselves account for uncertainty in both the hazard and the damageability of structures. One statistically sound technique is weighted resampling. Ask your modeling firm for guidance and informed technical assistance on this technique and other model blending issues.
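As a purely illustrative sketch (not any vendor's actual methodology), the Python snippet below shows the basic idea behind weighted resampling: whole simulated years are drawn from each model in proportion to an assumed credibility weight, so the blended sample preserves each model's internal loss distribution. All names, weights, and loss figures are hypothetical.

```python
# Hypothetical sketch of blending two models' simulated annual losses by
# weighted resampling. Model names, loss figures, and credibility weights
# are all assumed for illustration; they are not any vendor's actual output.
import numpy as np

rng = np.random.default_rng(seed=42)

# Stand-in simulated annual losses (e.g. in USD millions) from two models
model_a_losses = rng.lognormal(mean=2.0, sigma=1.2, size=10_000)
model_b_losses = rng.lognormal(mean=2.3, sigma=1.0, size=10_000)

weights = {"model_a": 0.6, "model_b": 0.4}  # assumed credibility weights
n_resample = 50_000                         # size of the blended sample

# Draw whole simulated years from each model in proportion to its weight,
# rather than averaging loss values, so every draw remains an internally
# consistent outcome from a single model.
n_from_a = int(round(weights["model_a"] * n_resample))
n_from_b = n_resample - n_from_a
blended = np.concatenate([
    rng.choice(model_a_losses, size=n_from_a, replace=True),
    rng.choice(model_b_losses, size=n_from_b, replace=True),
])

# The blended sample can then be summarized like any single-model result,
# for example the 1% (proverbial "100-year") aggregate exceedance loss:
print(f"Blended 1% exceedance loss: {np.percentile(blended, 99):.1f}")
```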

What to Do When Models Change?

Occasionally, major revisions to model parameters or model design are necessary to respond to emerging scientific consensus or breakthroughs in financial analysis. Change can be difficult, but carefully understanding and managing change will make things easier.

Modeling firms expect to be challenged by insurers and rating agencies as major model updates occur. Companies and rating agencies should consider the implementation of updated models only after thoroughly vetting them at a granular and portfolio-specific level. This should include analysis of the effect of model change on all downstream applications and functional areas of the enterprise. There are several alternatives to a simple go/no go decision; one is a "relay" approach, whereby old and new models are run in parallel and both models are installed in the workflow before passing the baton to the new model of record. Further options include blending old and new model results to generate combined "consensus" metrics installed in the workflow, and perhaps weighted toward new models gradually over time.

It is tempting for both insurers and rating agencies to adopt a new model but "factor up" or down its results to stabilize the impact on downstream applications. This is dangerous for several reasons. First, it can bring illusory comfort and allow avoidance of the proper vetting of the new model's scientific validity as well as its impacts. Second, across-the-board adjustments can obscure highly regional or otherwise granular changes in model results.

Ultimately, if a new model fails to convince after careful vetting, it may be appropriate to increase reliance on another model, or switch altogether. As long as a company can demonstrate why the new model does not reflect its catastrophe risk profile, it should be able to explain to the satisfaction of rating agencies the rationale for such a move.

What Metrics Should the Dialog Focus On?

Rating agencies are deeply concerned with the "tail" of the model's loss distribution, usually dominated by a few extreme events. Insurers and rating agencies must carefully choose the metrics used to evaluate these solvency-threatening scenarios, maximizing robustness and minimizing the sensitivity and uncertainty that can skew an otherwise valuable risk assessment.

With today's range of model outputs, it is needlessly limiting to focus risk assessment on specific percentiles, such as the proverbial "100-year" loss. The entire event-level set of results can easily be converted to more robust metrics such as Tail Value at Risk (TVaR), defined as the average total loss for events that exceed a certain threshold, and the Risk Managed Layer (or Window Value at Risk), defined as the average total loss for events that fall between two probability thresholds, such as between the 2% and 1% annual exceedance probabilities. Actuarial theorists have demonstrated that these more robust measures pass tests of "coherence," having certain common-sense arithmetic properties, whereas simple percentile values do not.
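To make these definitions concrete, the following is a minimal sketch of computing TVaR and a window metric from a vector of simulated annual losses; the data are synthetic and the thresholds simply follow the 2% and 1% example above.

```python
# Minimal sketch of the two tail metrics described above, computed from a
# vector of simulated annual losses. The loss data are synthetic and the
# 2% and 1% thresholds simply follow the example in the text.
import numpy as np

rng = np.random.default_rng(seed=1)
annual_losses = rng.lognormal(mean=2.0, sigma=1.5, size=100_000)  # illustrative

def tvar(losses, exceedance_prob):
    """Average loss over all outcomes beyond the given exceedance probability."""
    threshold = np.quantile(losses, 1.0 - exceedance_prob)
    return losses[losses >= threshold].mean()

def window_var(losses, attach_prob, exhaust_prob):
    """Average loss between two exceedance probabilities, e.g. 2% and 1%."""
    lower = np.quantile(losses, 1.0 - attach_prob)   # e.g. the 2% (50-year) loss
    upper = np.quantile(losses, 1.0 - exhaust_prob)  # e.g. the 1% (100-year) loss
    in_window = losses[(losses >= lower) & (losses < upper)]
    return in_window.mean()

print(f"TVaR at 1% exceedance:  {tvar(annual_losses, 0.01):.1f}")
print(f"Window VaR (2% to 1%):  {window_var(annual_losses, 0.02, 0.01):.1f}")
```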

It is also imperative to consider the time horizon associated with aggregate financial impacts, not simply the impact of single extreme events. Occurrence exceedance probability (OEP) losses can be significantly lower than aggregate exceedance probability (AEP) losses at the same thresholds, especially when multiple regions are exposed to multiple perils at once, and when multiple events can occur with some frequency (such as within a single Atlantic hurricane or U.S. severe thunderstorm season). So it is important to provide both OEP and AEP values.
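The sketch below illustrates the OEP/AEP distinction using a hypothetical simulated event table; it does not reproduce any particular model's output format.

```python
# Sketch of the occurrence (OEP) versus aggregate (AEP) distinction, computed
# from a hypothetical table of simulated events keyed by simulation year.
# Column names, frequencies, and severities are illustrative only.
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=7)
n_years = 10_000

# Stand-in event set: a Poisson number of events per simulated year,
# each with a lognormal loss.
events_per_year = rng.poisson(lam=1.7, size=n_years)
records = [
    {"year": year, "loss": rng.lognormal(1.5, 1.2)}
    for year, n_events in enumerate(events_per_year)
    for _ in range(n_events)
]
events = pd.DataFrame(records)

# OEP reflects the largest single event loss in each year;
# AEP reflects the sum of all event losses in each year.
by_year = events.groupby("year")["loss"]
occurrence = by_year.max().reindex(range(n_years), fill_value=0.0)
aggregate = by_year.sum().reindex(range(n_years), fill_value=0.0)

for prob in (0.04, 0.02, 0.01):
    oep = np.quantile(occurrence, 1.0 - prob)
    aep = np.quantile(aggregate, 1.0 - prob)
    print(f"{prob:.0%} exceedance: OEP = {oep:8.1f}   AEP = {aep:8.1f}")
```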

Rating agencies and chief risk officers are also concerned with multiple-year horizons for planning and risk assessment. In this regard, the design of AIR models allows the outputs to be readily resampled to build multi-year scenarios and probability distributions.
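As a simplified illustration of that idea (not AIR's implementation), a multi-year aggregate distribution can be approximated by resampling simulated annual outcomes, assuming independence between years:

```python
# Simplified illustration of building a multi-year aggregate loss distribution
# by resampling simulated annual outcomes, assuming individual years are
# independent draws. Loss figures and the 5-year horizon are hypothetical.
import numpy as np

rng = np.random.default_rng(seed=3)
annual_losses = rng.lognormal(mean=2.0, sigma=1.3, size=25_000)  # illustrative

horizon_years = 5
n_scenarios = 100_000

# Each multi-year scenario is the sum of independently resampled annual results.
scenarios = rng.choice(annual_losses, size=(n_scenarios, horizon_years), replace=True)
multi_year_totals = scenarios.sum(axis=1)

print(f"{horizon_years}-year aggregate loss at 1% exceedance: "
      f"{np.percentile(multi_year_totals, 99):.1f}")
```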

Finally, it is myopic to use any tail metrics without consideration of uncertainty. Ask your modeling firm about indicators of uncertainty in the results, such as the ability to show confidence percentiles associated with a loss value, and the entire OEP or AEP loss curve with consideration of "secondary uncertainty" in damageability.

How Can Good Exposure Data Handling Be Demonstrated?

Beyond models and outputs, don't forget that the quality of catastrophe risk assessment is inescapably linked to the quality of input data. First, best practices include validating internal consistency and adherence to common sense; for example, ten-story wood frame buildings are unlikely. Second, opportunities for data augmentation should be considered. For example, known weak sources of replacement cost values (such as public tax rolls) can be upgraded with often economical checks against outside knowledge bases. AIR and other vendors provide such resources, including AIR's TruExposure. Third, test the sensitivity of results to missing data (coded as "Unknown") and to default assumptions, comparing them against alternatives.
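As an illustration of what such checks can look like in practice, the sketch below applies two simple rules to a hypothetical exposure file; the field names and thresholds are assumptions, not any vendor's schema.

```python
# Sketch of the kind of internal-consistency checks described above, applied
# to a hypothetical exposure file. Field names, rules, and thresholds are
# assumptions for illustration; real schemas will differ by company.
import pandas as pd

exposures = pd.DataFrame({
    "location_id":      [1, 2, 3, 4],
    "construction":     ["wood frame", "masonry", "wood frame", "steel"],
    "stories":          [2, 5, 10, 40],
    "replacement_cost": [350_000, 1_200_000, 0, 85_000_000],
})

issues = []

# Common-sense check: ten-story wood frame buildings are unlikely.
suspect_height = exposures[(exposures["construction"] == "wood frame")
                           & (exposures["stories"] >= 7)]
issues += [f"Location {row.location_id}: {row.stories}-story wood frame is implausible"
           for row in suspect_height.itertuples()]

# Missing or zero replacement values should be flagged for augmentation
# (e.g. against an external valuation source) rather than silently defaulted.
missing_rcv = exposures[exposures["replacement_cost"] <= 0]
issues += [f"Location {row.location_id}: missing replacement cost value"
           for row in missing_rcv.itertuples()]

for issue in issues:
    print(issue)
```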

Do not overlook the importance of workflow that maximizes data fidelity and minimizes opportunities for error. While AIR models use open, transparent data formats, some models use proprietary data formats. In any case, all models should be loaded in parallel with data directly from the source; data should not be converted from one model-ready form to another as a shortcut, as fidelity will be lost and the resulting "error" can be significant.

Often the availability and quality of data cannot be controlled, and enhancements can be costly, but best practices can still be followed. Insurers have opportunities to discuss rules and workflow and demonstrate best practices in their responses to rating agency questionnaires, as well as in management meetings with agencies.

Conclusion

Rating agencies rightly question the choice of model and model assumptions, the metrics used to assess catastrophe risk, and the handling of input data and model results. The stakes are often raised further when models change. Insurers demonstrating best practices in the areas discussed above can be rewarded with rating agencies' confidence and greater stability in financial ratings, minimizing one source of business uncertainty and easing the path to growth.

In my dialogs with rating agency executives, our shared thinking is that insurers should "own" the models they use. Insurers should take responsibility for understanding key model options, data quality, and interpretation of results—all with the considerable help of their model providers, of course. Best practices in using catastrophe models for any purpose, and specifically in rating agency communications, should be nurtured within the enterprise, supported by the modeling firms, and vigorously overseen.

by Ming Lee
CEO and President
AIR Worldwide

 
