Today's catastrophe models are more scientifically robust than ever before. Our understanding of the meteorology and geology of natural catastrophes is continually improving, as is the sophistication of the models we use to represent them. A critical decision for the catastrophe modeler is when to incorporate new scientific theory: if new findings are incorporated before a scientific consensus is reached, the updated model may introduce unnecessary instability into the loss estimates; if they are incorporated after the wider scientific community comes to a general agreement, the modeled losses may still change, perhaps significantly, but the change can be justified.
For the layman, there is sometimes a tendency to regard every new "discovery" or finding in the latest published paper as inviolate fact. In reality, there is rarely a final word. Rather, science is a dynamic process in which researchers not only make new discoveries but also reexamine earlier knowledge and try to improve, build upon, or extend it. This can be done, for example, by reanalyzing old data using new techniques, or by integrating new data on an event or phenomenon with the old data to draw a more complete picture.
In the natural catastrophe realm, these processes of reanalysis and integration may give scientists a clearer picture of precisely how past events (e.g., hurricanes, earthquakes, wind storms) unfolded. Armed with improved estimates of the frequency and severity of historical events, for example, catastrophe modelers are better equipped to assess the probability of future such events.
It is the job of scientists to investigate and posit theories to explain physical phenomena. Competing theories nourish scientific debate, but arriving at a broad consensus can be a lengthy process. Of course, even the consensus view may change over time. Nevertheless, until some general agreement is reached by the community as a whole, the modeler should proceed with caution.
Taking a Closer Look at the Past
Examples of the dynamic nature of science abound in catastrophe modeling. Over the last 10 years, the US Geological Survey (USGS) has twice revisited the series of earthquakes that occurred near New Madrid, Missouri in 1811-1812. As recently as the USGS 1996 Open-File Report, the largest of the series was regarded as the strongest historical earthquake in the continental US, surpassing the 1906 San Francisco earthquake. Because the New Madrid quakes occurred before the invention of modern seismological instrumentation and techniques, their true magnitudes are unknown. To fill this knowledge gap and estimate the quakes' return periods, scientists must blend contemporaneous written accounts of the shaking and damage caused by the quakes with more recent data on the geology and seismology of the region, including the results of paleoseismic studies, which look for evidence of seismic activity over prehistoric timescales, typically on the order of thousands of years.
In its 1996 report, the USGS estimated that the New Madrid event had a return period of about 1,000 years and a magnitude of 8.0. A reexamination of the data began in 2000, and as part of its due diligence, the USGS distributed preliminary findings to the scientific community for discussion and comment. Finally, with the release of the new seismic hazard maps in 2002, the agency concluded that the mean return period should be halved, to about 500 years (that is, the event is twice as frequent as previously thought), but that the magnitude of the historical event (and therefore of any future such event) was also lower, on the order of 7.7. The report also adopted a logic tree approach to the magnitude to account for the uncertainty in the estimate.
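The two quantities at issue here, a logic-tree magnitude estimate and the effect of a halved return period on event probability, can be sketched in a few lines. Note that the branch magnitudes and weights below are purely hypothetical for illustration; the actual USGS logic-tree branches are not given in this article.

```python
import math

def expected_magnitude(branches):
    """Weighted mean over logic-tree branches given as (magnitude, weight) pairs."""
    assert abs(sum(w for _, w in branches) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(m * w for m, w in branches)

def prob_at_least_one(return_period_years, horizon_years):
    """Probability of at least one event within the horizon, assuming events
    arrive as a Poisson process with annual rate = 1 / return period."""
    rate = 1.0 / return_period_years
    return 1.0 - math.exp(-rate * horizon_years)

# Hypothetical logic-tree branches centered on the revised M7.7 estimate:
branches = [(7.5, 0.25), (7.7, 0.50), (7.9, 0.25)]
print(expected_magnitude(branches))  # 7.7 for these illustrative weights

# Halving the return period from 1,000 to 500 years doubles the annual
# rate, which roughly doubles the chance of seeing an event in 50 years:
print(round(prob_at_least_one(1000, 50), 3))  # 0.049
print(round(prob_at_least_one(500, 50), 3))   # 0.095
```

The logic tree simply acknowledges that no single magnitude can be stated with certainty, so each plausible value carries a weight reflecting the community's degree of belief in it.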
As a result of the new consensus view, the New Madrid event dropped to second place behind the M7.9 San Francisco earthquake of 1906.
The results of other scientific reanalysis efforts are less likely to impact modeled losses. A National Hurricane Center (NHC) initiative called the Atlantic Hurricane Database Re-Analysis Project is charged with systematically reevaluating the center's archived data on all known Atlantic hurricanes since 1851. In 2002, armed with a more mature understanding of hurricane eyewall structure, the project scrutinized 1992's Hurricane Andrew, the storm credited with giving rise to the catastrophe modeling industry as we know it, and the cause of the greatest insured loss of any weather-related disaster until Hurricane Katrina. The reanalysis prompted the NHC, in 2004, to revise its estimate of Andrew's intensity at landfall in southern Florida, elevating it from Category 4 to Category 5 and making it one of only three Category 5 hurricanes to strike the US since 1900.
Yet hurricane models' damage functions have been validated against billions of dollars of actual claims data, including claims from Hurricane Andrew. Whether Andrew was a Category 4 or a Category 5 has no effect on the historical claims data and therefore minimal impact on modeled losses.
Taking a Closer Look at the Future
Scientific debate becomes even livelier when the risk landscape itself may be changing, as in the case of climate signals and their impact on hurricane activity. Unable to extrapolate exclusively from past experience, scientists must develop forecast models that are characterized by considerable uncertainty. Since 1995, tropical cyclone activity in the Atlantic basin has been elevated over the long-term, or climatological, average. Scientists at the National Oceanic and Atmospheric Administration (NOAA) have linked this above-average activity to elevated sea surface temperatures (SSTs), which are in turn linked to the positive, or warm, phase of a naturally occurring cycle that oscillates over periods of decades: the Atlantic Multidecadal Oscillation, or AMO.
The current consensus at NOAA is that the warm phase is likely to continue "for years to come." Therefore, it might seem reasonable to assume that hurricane losses will be similarly elevated and that models should adjust accordingly. However, there are significant problems with this argument. One is that, within any given window, such as a near-term (five-year) time horizon, a number of climate signals other than the AMO influence Atlantic hurricane activity, and these may dominate and even counter its impact. The second reason for circumspection is that scientific investigation into climatological influences on tropical cyclones has so far focused primarily on basinwide activity. Making the leap from increased hurricane activity in the Atlantic to increased landfall activity and, ultimately, to the effect on insured losses requires significant additional research before radical changes are made to the model methodology that has provided the industry with reliable results for 20 years.
Catastrophe model users expect and pay for rigorous science from the modelers in at least equal measure to advanced technology. The most important job of the scientists and engineers at AIR is to keep abreast of the scientific literature, evaluate the latest research findings, and conduct original research of their own—to determine whether competing scientific approaches are credible and how much weight to assign to them.
Sometimes, a new scientific consensus will emerge that necessitates model enhancements resulting in significant changes in modeled losses. As a result, the model will better reflect the true risk and enable scientifically sound risk management decisions.
Sometimes, it is appropriate to resist the temptation to fervently embrace the latest findings, knowing that the investigation is still in its preliminary stages. In accordance with this more measured approach, AIR is continuing its research into the relationship between climate signals and hurricane landfalls and is submitting both the methodology and findings for peer review. Ultimately, AIR is committed to bringing not only the most advanced science, but also the most reliable models to market.