Perspectives

 

October 29, 2010

From the moment we wake and check the weather forecast, we make daily decisions based on metrics we hope are stable. Insurance company chief executives, as they navigate the landscape of catastrophic hazard risks and make myriad operating decisions, may equally desire stable answers to questions like "how often will significant events happen?", "where are they most likely to occur?", and "what is the financial impact of an improbable but severe scenario?" Yet the fact is that the freshest and best measures of catastrophe risk are often a moving target, dependent on continuous advances in science and engineering and on new, detailed information about companies' loss experience. Change is inevitable—preparing for it and profiting from it is optional.

All of us can appreciate the challenge of continuously adopting the best risk and reward indicators without frequent rewrites of guidelines and upsets to incentives and workflow. None of us can control the pace of change in our knowledge of risk, nor would we really want to delude ourselves by clinging to stale information and letting our competitors capitalize on superior understanding. But we can adapt to the flux of our environment by accepting that risk models occasionally change, being familiar with the drivers of changes, objectively determining when changes should be adopted, and quickly synthesizing the results of changes and acting to capture the opportunities they create.

Managing change in a committed, alert, detailed, and complete fashion is possible and, in fact, mandatory for insurers seeking to maintain financial stability and competitive position.

by Ming Lee
CEO and President
AIR Worldwide

Why Bother?

The challenging question for end users of risk models is always: When is new information "better enough" to justify a change in the metrics used in running your business? Catastrophe models add value precisely because they supplement sparse historical data regarding infrequent, severe, and unpredictable occurrences of cat events. Physical science, engineering, and statistical and actuarial techniques are integrated and implemented in a technological platform that offers risk information on demand at macro levels, such as global insurable losses, and micro levels, such as for an individual property or single event. Model results are better estimates than the results of rules of thumb or extrapolations of history, but they are not without uncertainty, and even the best estimates evolve over time. As each embedded discipline matures, a major job for modeling firms is to constantly incorporate new theories, techniques, and observations without rocking the boat with unwarranted, abrupt changes.

New indications can emerge in many ways. Even "settled" historical records are reinterpreted by new methods—famously, Hurricane Andrew was upgraded from Category 4 to Category 5 on the Saffir-Simpson scale ten years after it made landfall in South Florida. Brand-new data sources also emerge as observations become more widespread and instruments more sensitive. Academia is constantly revising basic scientific models in light of both new data and new thinking; hundreds of papers on any narrow topic are published each year.

All aspects of a catastrophe model must be thoroughly validated. Analysis of insurance claims files, which show contractual losses paid relative to the physical hazard observed at the insured locations, is "where the rubber meets the road" in validating risk models. The most elegant algorithm for modeled losses is of little use to decision-makers unless it is credible to stakeholders, and credibility ultimately comes from demonstrating that model results reasonably conform to reality.

Two examples of recent major improvements in risk information are associated with 2009's new generation of earthquake models, and 2010's update of AIR's U.S. hurricane model. The earthquake model upgrades were largely driven by the consolidation of new seismic data and core scientific models of ground motion contained in a next-generation report from the U.S. Geological Survey (USGS). AIR released a comprehensive update of the hurricane model based on new science and data regarding hurricane wind fields and vulnerability of structures, as well as the fruits of analyzing massive numbers of insurance claims filed as a result of U.S. hurricane landfalls in 2004-2008.

Insurers expect and pay for rigorous science from modeling firms. Often, it is appropriate to resist the temptation to fervently embrace the latest findings, knowing that investigation is still in its preliminary stages. However, at watershed moments in the constant flow of scientific and empirical activity, it is essential that modelers incorporate new data and thinking expeditiously. The payoff to the end user is that updated tools should move modeled outcomes closer to the true impact of future events and narrow the range of uncertainty associated with model estimates.

How Insurers Can Get the Jump on Changes in Risk Models

First, maintain a hazard risk center of excellence, where professionals interpret the communications of the modeling firms as they emerge and translate indicated model changes into likely impacts to company risk metrics, workflow, and strategy.

Second, conduct studies as soon as tools are available. Do not stop with the changes in raw model results, but recompute the preferred risk metrics, paying special attention to changes that may trigger contractual or reporting modifications.

Finally, install new models quickly as they become available, and in parallel with existing systems, so that an overlap period may be used to transition calculations and metrics from those based on the previous model results to those using the updated results.
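The three practices above can be sketched as a simple parallel-run comparison during the overlap period. Everything below is illustrative: the metric names, the ten-year simulated loss vectors, and the 10% review threshold are assumptions for the sketch, not actual model outputs.

```python
# Sketch: run the updated model in parallel with the existing one and
# flag metric shifts that may trigger contractual or reporting changes.
# All data and thresholds here are invented for illustration.

def aal(annual_losses):
    """Average annual loss over the simulated catalog."""
    return sum(annual_losses) / len(annual_losses)

def var(annual_losses, p=0.99):
    """Crude empirical value-at-risk at probability p."""
    ordered = sorted(annual_losses)
    idx = min(int(p * len(ordered)), len(ordered) - 1)
    return ordered[idx]

def compare_models(old_losses, new_losses, threshold=0.10):
    """Recompute preferred metrics under both models and flag large shifts."""
    report = {}
    for name, fn in [("AAL", aal), ("VaR99", var)]:
        before, after = fn(old_losses), fn(new_losses)
        change = (after - before) / before
        report[name] = {"old": before, "new": after,
                        "change": change, "review": abs(change) > threshold}
    return report

# Hypothetical simulated annual losses ($ millions) by model version
old = [0, 0, 5, 12, 0, 40, 3, 0, 150, 8]
new = [0, 1, 6, 15, 0, 55, 4, 0, 170, 9]
for metric, row in compare_models(old, new).items():
    print(metric, row)
```

The point of the overlap period is precisely this side-by-side view: each preferred metric is recomputed under both models, and only shifts beyond a tolerance are escalated for review.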

Making the Transition

The often drawn-out nature of scientific inquiry can be a blessing for CEOs who depend on catastrophe models. Periodic model changes that result from the adoption of new science or the availability of new claims data tend to be communicated well in advance. Pre-release, modeling firms extensively analyze the magnitude and direction of likely changes, but major updates tend to produce changes that vary significantly by location and risk type. No firm can divine all client-specific effects or understand how those effects might impact internal, often proprietary strategic planning. Insurers should follow several best practices to capitalize on change with minimal upheaval (see sidebar).

Model vendors dedicate significant service resources to help clients quickly and completely accomplish these goals. Insurers who use the pace of change as an advantage in preparing for its impacts are rewarded with the ability to efficiently adapt goals and workflow, maintain organizational buy-in, and reap the competitive edge of using the state of the art.

Is It Worth It?

Early adopters of next-generation risk models benefit from more accurately classifying and pricing individual risks as well as maximizing returns on enterprise capital.

One dynamic reflects the age-old challenge of managing a risk portfolio: success is driven both by the expected profit of risks in isolation and by their contribution to portfolio results. Put simply, a risk that is "good" by itself could be less good or even "bad" if it contributes disproportionately to high-impact scenarios for the whole book. Conversely, less profitable risks that diversify the portfolio may be worth writing. A corollary for managing model changes is that there is no shortcut—every book must be reviewed against target metrics at both the macro (enterprise-wide) and micro (individual risk) levels. The flip side is that model changes rarely impact portfolios "across the board" in predictable ways, and often reveal new capacity in pockets where there was none before. Proactive insurers seize the advantage rather than bemoan constraints.
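The portfolio dynamic above can be made concrete with a toy calculation. All numbers are invented: five equiprobable scenarios, a crude tail metric defined as the mean of the two worst outcomes, and two hypothetical risks with assumed premiums.

```python
# Sketch: a risk that looks "good" standalone can still hurt the book
# if it loads into the same high-impact scenarios. All scenario losses
# and premiums are illustrative assumptions.

def expected(values):
    return sum(values) / len(values)

def tail_mean(losses, k=2):
    """Mean of the k worst scenario losses (a crude tail metric)."""
    return expected(sorted(losses)[-k:])

# Equiprobable event scenarios; the portfolio already peaks in scenario 4.
portfolio = [10, 20, 15, 90, 5]

def marginal_tail(risk_losses, premium):
    """(Change in tail metric from adding the risk, expected margin)."""
    combined = [p + r for p, r in zip(portfolio, risk_losses)]
    return (tail_mean(combined) - tail_mean(portfolio),
            premium - expected(risk_losses))

# Risk A: healthy standalone margin, but concentrated in the peak scenario.
tail_a, margin_a = marginal_tail([0, 0, 0, 30, 0], premium=8)
# Risk B: thinner margin, but diversifies away from the peak.
tail_b, margin_b = marginal_tail([6, 6, 6, 0, 6], premium=6)
print(tail_a, margin_a)  # large addition to the tail
print(tail_b, margin_b)  # small addition to the tail
```

In this toy book, risk A earns more margin in isolation but consumes far more tail capacity than risk B, which is why every book must be reviewed at both the macro and micro levels after a model change.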

Paradoxically, underwriting workflow often requires that risks be priced in isolation, using accept/reject rules and rating plans that don't keep up as the portfolio evolves, particularly in regulated environments. But even here, capitalizing on model changes is a must—modeled expected losses depend on high-resolution geography and several construction and occupancy features. Updating rates and guidelines to reflect better risk models yields superior risk classification and selection, and sometimes a persistent competitive advantage to the early adopters.
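As a minimal illustration of pricing in isolation, a rating plan might fold model-derived relativities for territory and construction class into a per-risk premium. The base rate, territory names, and factors below are placeholders, not actual rating values.

```python
# Sketch: folding updated model relativities into a rating plan.
# Base rate and relativity factors are hypothetical placeholders.

BASE_RATE = 2.50  # per $1,000 of insured value, illustrative

# Relativities by high-resolution geography and construction class,
# as they might emerge from an updated vulnerability module.
TERRITORY = {"coastal": 3.0, "inland": 1.0}
CONSTRUCTION = {"wood_frame": 1.4, "masonry": 1.0, "engineered": 0.7}

def premium(insured_value, territory, construction):
    """Model-informed premium for a single risk, priced in isolation."""
    factor = TERRITORY[territory] * CONSTRUCTION[construction]
    return insured_value / 1000 * BASE_RATE * factor

print(premium(500_000, "coastal", "wood_frame"))  # high-hazard, vulnerable
print(premium(500_000, "inland", "engineered"))   # low-hazard, resilient
```

When a model update revises the vulnerability of, say, wood-frame construction, refreshing these relativities is what translates better science into better risk classification and selection.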

Finally, a big-picture review of changes in model results may spark fresh strategies in deploying capital to new frontiers— perhaps to new lines of business or geographic areas. Insurers operate in an intensely competitive environment and must constantly search for incremental returns. Updates to risk models often represent the proper moments to plan initiatives, so that new opportunities become lasting capabilities.

The Payoff

New facts and research mean continuous change in the science, engineering, and technology underlying catastrophe risk models, and periodic changes to the models themselves. The goal of model change is to improve modeled outcomes in demonstrable ways, at both aggregate and granular levels. Better risk estimates create advantages for insurers committed to aggressive testing and early adoption into their management metrics.

Prudent CEOs are proactive, creating a culture that is opportunistic rather than frictional when risk indicators evolve, insisting on forward-looking plans for dealing with changes, and comprehensively assessing the impact on deliverables and decisions throughout the enterprise. These leaders assemble and motivate teams and redeploy capital to position their organizations for success—before their competitors do.

 
