Beyond "The Answer": Embracing Uncertainty in the Management of Catastrophe Risk
April 06, 2011
Rare but severe catastrophic events can impact business performance to the point of insolvency. Insurance executives, in particular, recognize that unpredictable and even improbable events can have a lasting negative impact on their businesses. This drives their desire for simple answers to questions such as how often these significant events will happen, where they are most likely to occur and what the financial impact of an improbable but severe scenario will be.
To prepare for the improbable, companies increasingly rely on output from catastrophe models to help answer these questions. Yet, I continue to be surprised at how many executives adhere to the idea that point estimates out of the model—“The Answer”—are sufficient for making business decisions critical to the performance, and potentially even the survivability, of their organizations.
The fundamental outputs of catastrophe models are not single numbers. They are risk profiles based on a scientifically tenable range of outcomes, each linked to its estimated probability. Models are not black boxes, but those organizations generating simple point estimates from models and inputting them into critical decision processes without consideration of the entire risk profile are treating them as such.
"What's the Number?"
If there were no risk, there would be no need for insurance. Ironically, in an industry founded on the principle of indemnity for uncertain hazards and one rife with expert statisticians, most executives are trained to employ traditional accounting frameworks fed by deterministic inputs.
“What’s the number?” is an all-too-familiar question. Buzz about black swans—highly improbable but consequential events—notwithstanding, psychology, work flow and business transactions still demand use of single numbers rather than distributions of outcomes.
Uncertainty is a fact. Measuring it is difficult and acting on its metrics even more so. Rather than wish uncertainty away, however, leading insurance organizations orient their decisions around it and ultimately profit from it. Proper interpretation and deployment of catastrophe-model results by energetic leaders can help.
Leading organizations encourage healthy debate on such topics as which metrics should be evaluated, how to stress-test conclusions and, most importantly, how to use this knowledge to make critical business decisions. In the end, “The Answer” may still be required, but decisions will be much better informed.
Three Faces of Uncertainty
Actuaries, who tend to be power users of catastrophe model results, like to speak of three classes of uncertainty.
Process risk is the fluctuation of results (e.g., insured losses) from one “trial” or period to the next because the process is subject to a degree of randomness, even if the parameters governing the process are known. Nobody can do much about process risk except increase the number of simulations in a model run. Fortunately, rapid advances in computing power allow ever-more trials in reasonable amounts of time.
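The distinction can be seen in a toy simulation. The sketch below is illustrative only and is not drawn from any catastrophe model: it assumes a compound Poisson loss process with fully known parameters (an assumed event frequency and a lognormal severity), so all remaining fluctuation is process risk. Even then, individual simulated years swing widely; more trials narrow the sampling error of the estimated average but can never remove the year-to-year volatility itself.

```python
import math
import random
import statistics

def poisson(rng, lam):
    """Poisson draw via Knuth's method (the stdlib has no Poisson variate)."""
    limit, count, product = math.exp(-lam), 0, 1.0
    while True:
        product *= rng.random()
        if product < limit:
            return count
        count += 1

def annual_loss(rng, lam=2.0, mu=10.0, sigma=1.5):
    """One simulated year: Poisson event count, lognormal severity per event.
    All parameters here are assumed, purely for illustration."""
    return sum(rng.lognormvariate(mu, sigma) for _ in range(poisson(rng, lam)))

rng = random.Random(42)
# Parameters are fixed and known, yet single years still fluctuate sharply.
few = [annual_loss(rng) for _ in range(100)]
many = [annual_loss(rng) for _ in range(100_000)]
# More trials stabilize the *estimate* of the long-run average loss;
# the volatility of any one year (process risk) is untouched.
print(statistics.mean(few), statistics.mean(many))
```

Parameter risk would enter this picture if `lam`, `mu` and `sigma` themselves had to be estimated from data; model risk, if the Poisson/lognormal form were the wrong description of the process altogether.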
Parameter risk is the additional uncertainty introduced by the fact that the governing parameters are actually estimated rather than known. Users can control parameter risk to a large degree by ensuring the high quality of property exposure data. Input data is one of the biggest sources of uncertainty and increasing numbers of executives are making the assessment and improvement of data quality a high priority. Carefully selecting appropriate analysis settings will also reduce parameter risk.
Model risk is the extra, shadowy form of uncertainty associated with the fact that our basic view of the process, not just the parameters governing it, may be wrong. Model risk is reduced as modeling firms improve the underlying science and engineering of the models, and as users augment results as needed to reflect additional nonmodeled sources of loss.
The inexact endeavor of estimating insured losses from natural disasters has improved steadily and will continue to provide more reliable results. For model users to effectively mitigate losses and to identify business opportunities, it is important to be able to recognize and understand uncertainty—whether inherent in the model or introduced by input exposure data—and to incorporate the most comprehensive and robust view of risk into their decision-making processes.
Beyond "The Answer"
Rather than ignore or lament pervasive uncertainty, leading executives recognize it by explicitly incorporating it in their decision-making. One manifestation is in the choice of analytical metrics. For example, many unit-cost and profit formulae are based on average losses. It’s always treacherous to use averages to measure any process containing many benign and a few disastrous outcomes. Actuaries can attest to this, having endured endless jokes about putting their heads in the oven, feet in the icebox and attaining a pleasant “average” temperature.
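The treachery of averages is easy to demonstrate with a toy book of business (the numbers below are purely illustrative): ninety-nine benign years and one disastrous year produce an "average" loss that describes no year the company will ever actually experience.

```python
import statistics

# Hypothetical annual losses: 99 benign years, 1 catastrophic year
annual_losses = [1_000] * 99 + [10_000_000]

print(statistics.mean(annual_losses))    # 100990.0 - dominated by one year
print(statistics.median(annual_losses))  # 1000.0   - the "typical" year
```

A unit-cost or profit formula fed the mean alone sees neither the long run of quiet years nor the single solvency-threatening one.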
Other decisions, particularly those involving compliance with capital standards and constraints on growth, are often made using percentiles of a loss profile (for example, the 1-in-100-year loss), often mislabeled as “probable maximum losses” or PMLs. These metrics appear as seductively precise values, but can vary widely with just minor tweaks to the assumptions or data used in the model run. Better than a single percentile, many companies use tail value at risk, which incorporates the probability-weighted average of all scenarios above a threshold on the exceedance-probability curve.
Perhaps even more suitable for catastrophe risk management is “window” value at risk, which uses the losses within a range of percentiles (for example, between 1% and 0.1%). This metric generally exhibits less volatility, is less sensitive to model change than other metrics, and using it reflects the reality that it is not economically feasible to protect against extreme tail losses. Chief executives should discuss with their actuaries the most appropriate metric or metrics for their companies.
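The three metrics can be sketched directly from a set of simulated annual losses. The code below is a minimal illustration, not an industry-standard implementation: the function names are mine, and the loss sample is an assumed lognormal stand-in for real model output.

```python
import random
import statistics

def percentile_loss(losses, p):
    """The loss exceeded with probability p (p=0.01 -> the 1-in-100-year loss,
    the kind of percentile often mislabeled a 'PML')."""
    ranked = sorted(losses)
    idx = min(len(ranked) - 1, round((1 - p) * len(ranked)))
    return ranked[idx]

def tail_var(losses, p):
    """Tail value at risk: the probability-weighted average of all scenarios
    at or beyond the p threshold on the exceedance-probability curve."""
    threshold = percentile_loss(losses, p)
    return statistics.mean(x for x in losses if x >= threshold)

def window_var(losses, p_low, p_high):
    """'Window' value at risk: the average loss between two exceedance
    probabilities (e.g. 1% and 0.1%), deliberately ignoring the extreme
    tail it is not economically feasible to protect against."""
    lo = percentile_loss(losses, p_low)
    hi = percentile_loss(losses, p_high)
    return statistics.mean(x for x in losses if lo <= x <= hi)

# Illustrative stand-in for 10,000 simulated annual losses
rng = random.Random(7)
losses = [rng.lognormvariate(12, 2) for _ in range(10_000)]

pml_100 = percentile_loss(losses, 0.01)       # single 1-in-100 percentile
tvar_100 = tail_var(losses, 0.01)             # averages the whole tail
wvar = window_var(losses, 0.01, 0.001)        # averages the 1%-0.1% window
```

Because the window excludes everything beyond the 0.1% point, a change in the model's deepest tail moves `wvar` far less than it moves `tvar_100` — the stability property the text describes.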
A robust deployment of models requires both inputs (data about individual properties, as well as analysis options chosen by the user) and outputs (averages, percentiles, conditional metrics, etc.) to be scenario-tested. What is important is not the elegance of formulae but the impact on consequential decisions. Discerning executives inquire about the quality of the input data used and the assumptions that were made, and ask themselves whether they would have made the same decision under the alternative assumptions and metrics.
This robust approach to employing models can be embedded in both analytical exercises and production activities. Transactional underwriting, product and rate filings, periodic financial reporting and enterprise risk management studies can all benefit from the broader view of model results.
Executives can use catastrophe models, among other tools, to shift the analytical approach toward the entire profile of risk, rather than just statistics representing the lowest common denominator of comfort and understanding. Once the culture is established, it can be embedded in work flow throughout the parts of the organization that deal with hazard risk.
Rating agencies, too, must deal in the realm of uncertainty. The penchant for propagating the worst-case scenario or the most conservative view is counterproductive to running an organization in the business of risk. Companies that embrace uncertainty and properly use appropriate tools will make decisions that are more robust in an environment that is rife with uncertainty in execution, in business assumptions and in the models themselves. When the black swan emerges, these groups will prosper from surprises, rather than suffer from them, relative to their peers.
While catastrophe risk management has undoubtedly come a long way, translating model results into informed decision-making requires that executives not only be aware of the limitations of modeling, but also have a balanced understanding of how decision-making is affected by model risk, by uncertainty in input data, and by the choice of analysis options and output metrics.
Best’s Review: March 2011
Text copyrighted A.M. Best Company, Inc. 2011
All Rights Reserved, Reprinted with Permission