AIR Currents

Apr 19, 2021

Editor's Note: This article is the last in our series describing AIR's groundbreaking new framework for climate risk modeling, which will enable our suite of atmospheric peril models to capture climate signals at all scales—from local correlations to global teleconnections—to create a new paradigm in the way the insurance industry quantifies this risk. (Please see Part I, Part II, Part III, Part IV, and Part V.) Part V of this series laid out the details of AIR's new framework—one that allows for the creation of large, physically consistent, and globally correlated catalogs of extreme events. This final article, Part VI, discusses the suitability of the new framework for capturing not only the financial impacts of today’s climate, but also the impacts of climate change.

The motivation to create a new paradigm in the way the insurance industry quantifies financial risk due to atmospheric perils was straightforward: How can we capture not just the physics of local or regional weather systems but also the physics of the larger, planetary-scale circulations that often operate on longer time scales and typically have a strong influence on the evolution of the small-scale weather phenomena that cause the damage? Only by understanding (and modeling) these “climate dynamics” will we achieve a robust representation of the most extreme events across multiple regions and perils. Not only will we be better positioned to capture atmospheric blocking events—such as the one that caused Hurricane Harvey to dump more than 50 inches of rain on Houston in 2017—but we’ll be able to more confidently estimate the future frequency and geographic distribution of Harvey-like events worldwide. Because the physics of the atmosphere are not quick to change, models developed using the new framework should provide a robust view of risk for a time frame of 10 years or more.

But it is not only today’s climate dynamics that concern us. The world finds itself at a tipping point when scientific investments in addressing climate change can still make a material difference. It was therefore important to choose a framework that could also be used to evaluate climate change risk on short- and long-term time horizons.

Moving from the Current to a Future Climate: Short- and Intermediate-Term Time Horizons

Part V described our groundbreaking framework, which blends AIR's current hybrid of physical and statistical approaches with machine learning; the result is that we can reap the benefits of general circulation models (GCMs) while circumventing their computational cost and reducing model bias. The framework makes million-year physically based catalogs possible—global catalogs that capture all types of dependencies, from global teleconnections to local correlations across all weather-related perils and across all regions. For certain perils for which the impact of climate change is more certain and quantifiable, such as extreme precipitation, we can make explicit adjustments to create forward-looking views.

But how do we make the leap from modeling the current climate to a future one? AIR's machine learning algorithm is trained on reanalysis data. While these data represent a statistically, physically, and dynamically consistent recreation of the history of Earth's weather and climate, they are, in the end, historical data. One solution is to create a climate change–conditioned catalog by subsampling from the inventory of simulated years (samples) that make up the catalog representing the current climate. We are already using a subsampling strategy in many of our climate change studies based on existing AIR models. It is an approach that, for a couple of reasons, is particularly appropriate for short- to intermediate-term time horizons (10 to 30 years into the future). First, anthropogenic warming is happening slowly; this means that a climate influenced by it will likely not differ much from the current one for at least the next one to three decades. Second, the new framework will produce a catalog large enough to enable us to choose a sufficient number of samples that could occur in a short-term time horizon, albeit with event frequencies representative of a climate change “target”—that is, a selected future year given a selected greenhouse gas emissions scenario, or Representative Concentration Pathway (RCP).

Peter Sousounis, Ph.D.
Vice President and Director of Climate Change Research

Boyko Dodov, Ph.D.
Vice President and Director of Atmospheric Peril Models

Jayanta Guin, Ph.D.
Chief Research Officer

Edited by Sara Gambrill, CEEM

There are, of course, some differences between using our existing models and using the global catalog produced by our new framework to subsample a climate change–conditioned catalog. For the studies AIR has conducted to date, the future climate catalogs are, like our models, peril- and region-specific. Thus, in creating a climate change target, one has only to be concerned with how a single peril (and associated sub-perils) may change over a single region, such as a country, ocean basin, or continent, but not the entire planet. While there may be a temptation to subsample by randomly selecting individual events that are reflective of our target, the end result may not be physically consistent. For example, it may not make sense to draw hurricane events from both positive and negative Atlantic Multidecadal Oscillation (AMO) indices and combine them in the same year. Other, similarly inconsistent combinations could also result.

It thus makes more sense to draw entire years at a time so that global teleconnections are preserved, although this is also not without complication. For example, currently observed intra-annual correlations between Atlantic hurricane activity and winter storm activity over Europe might be different in a future climate. Because our strategy would apply only to short- to intermediate-term time horizons, however, those correlations are unlikely to have changed materially. The tremendous advantage of the new global catalog is that it would represent the relevant parts of the actual physics of the atmosphere. Thus, in specifying a climate change target for one region and one peril, the other perils for other regions would, by default, appropriately reflect the climate impact as well.
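To make the idea concrete, the sketch below shows one simple way whole simulated years could be resampled so that the subsample's annual event frequency is consistent with a chosen climate change target while within-year dependencies are preserved. This is a minimal illustration, not AIR's production method; the Poisson target, the importance-weighting scheme, and names such as subsample_years and catalog_counts are assumptions made for the example.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(42)

def subsample_years(catalog_counts, target_frequency, n_samples):
    """Importance-resample whole simulated years so that the annual event-count
    distribution of the subsample approximates a Poisson distribution whose mean
    is the chosen climate change 'target' frequency (illustrative only)."""
    counts = np.asarray(catalog_counts, dtype=int)

    # Empirical probability of each annual count in the current-climate catalog
    emp_pmf = np.bincount(counts) / counts.size

    # Importance weight for each year: target probability / catalog probability
    target_pmf = poisson.pmf(np.arange(emp_pmf.size), mu=target_frequency)
    ratio = np.divide(target_pmf, emp_pmf,
                      out=np.zeros_like(target_pmf), where=emp_pmf > 0)
    weights = ratio[counts]
    weights /= weights.sum()

    # Draw entire years (with replacement) so that the dependencies among the
    # events within each simulated year are preserved.
    return rng.choice(counts.size, size=n_samples, replace=True, p=weights)

# Toy example: a 100,000-year catalog with a current-climate mean of 6 events
# per year, resampled toward a hypothetical future target of 7 events per year.
toy_counts = rng.poisson(6.0, size=100_000)
idx = subsample_years(toy_counts, target_frequency=7.0, n_samples=20_000)
print(toy_counts.mean(), toy_counts[idx].mean())
```

Because entire years are drawn, any events that co-occur within a simulated year, and the teleconnections that link them, stay together in the conditioned catalog.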

In addition to creating climate change–conditioned catalogs to reflect short- to intermediate-term climate change targets, subsampling can also be used to reflect climate variability for future climate scenarios. For example, we could create an Atlantic hurricane catalog that reflects the impact of the La Niña phase of the El Niño–Southern Oscillation (ENSO) in conjunction with a positive AMO index, without first having to determine the change in hurricane frequency and then draw years accordingly. That’s because, in our new catalog, large-scale circulations would already be consistent with particular phases of climate oscillations. Indeed, each year in the catalog could be tagged with corresponding indices for a variety of climate oscillations/signals. Presumably, the appropriate hurricane frequencies, intensities, trajectories, etc., would then be captured as well.
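As a toy illustration of such tagging, the snippet below builds a hypothetical catalog index in which each simulated year carries an ENSO phase and an AMO index, then keeps only the years consistent with La Niña and a positive AMO. The column names, and the assumed dependence of hurricane counts on the oscillation state, are placeholders for the example rather than properties of the actual catalog.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Hypothetical catalog index: one row per simulated year, tagged with the
# climate-oscillation state that year's simulation is consistent with.
n_years = 100_000
enso_phase = rng.choice(["el_nino", "neutral", "la_nina"], size=n_years)
amo_index = rng.normal(0.0, 0.2, size=n_years)

# Toy assumption: Atlantic activity is somewhat higher in La Nina years and
# when the AMO index is positive.
mean_count = 6.0 + 1.0 * (enso_phase == "la_nina") + 3.0 * np.clip(amo_index, 0.0, None)
catalog = pd.DataFrame({
    "year_id": np.arange(n_years),
    "enso_phase": enso_phase,
    "amo_index": amo_index,
    "hurricane_count": rng.poisson(mean_count),
})

# A climate-variability view: keep only whole simulated years whose large-scale
# state is consistent with La Nina together with a positive AMO index.
subset = catalog[(catalog["enso_phase"] == "la_nina") & (catalog["amo_index"] > 0)]

print(f"{len(subset)} of {n_years} years selected")
print("mean annual count, full catalog:  ", round(catalog["hurricane_count"].mean(), 2))
print("mean annual count, La Nina & AMO+:", round(subset["hurricane_count"].mean(), 2))
```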

Subsampling can also be used once the “current climate” version of the model has been in operation for 10 years or so. At that point, the extent to which climate change has had an influence on weather systems can be evaluated in much the same way AIR does it now—by analyzing historical data for physically plausible (and statistically significant) trends to quantify the climate change effect. We can then compare the new current climate to the one that the model represents from 10 years or so ago. If we see signals—e.g., storm tracks are showing systematic changes—we can dive into our superset of events and subsample annual seasons that reflect the changes in underlying conditions. This is not an easy task, but relative to current practice we have a physical basis on which to conditionally create a new set of event simulations. This approach is sustainable for many years, circumventing the need to build a new model almost from scratch.
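A minimal version of that kind of trend check, using a synthetic storm-track index and an ordinary least-squares significance test, might look like the sketch below. The index, the 20-year span, and the 5% threshold are all assumptions made for illustration.

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(1)

# Hypothetical diagnostic gathered after a decade or two of model operation:
# an annual index summarizing, say, the mean latitude of observed storm tracks.
years = np.arange(2021, 2041)
storm_track_index = 45.0 + 0.03 * (years - years[0]) + rng.normal(0.0, 0.4, size=years.size)

# Ordinary least-squares trend and its significance.
fit = linregress(years, storm_track_index)

# Only treat the signal as grounds for re-subsampling the catalog if the trend
# is statistically significant (and, in practice, physically plausible too).
if fit.pvalue < 0.05:
    print(f"significant trend: {fit.slope:+.3f} deg/yr (p = {fit.pvalue:.3f})")
else:
    print(f"no significant trend detected (p = {fit.pvalue:.3f})")
```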

Moving from the Current to a Future Climate: Longer-Term Time Horizons

At some point, though, we will likely have to revisit building a new model almost from scratch. In Part V, we described the key components of our new framework as being: 1) a coarse-resolution general circulation model (GCM) debiased to represent the observed climate; 2) a source of high-resolution information representing fine-scale weather features; and 3) a machine learning algorithm trained on the second component to “learn” how fine-scale weather features depend on coarse-scale ones; the rules and dependencies learned are used to debias the GCM. For the model representing the current climate, the second component is reanalysis data.
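The division of labor among the three components can be illustrated with a toy downscaling setup. The sketch below stands in for the machine learning component only: it learns a mapping from coarse-scale predictors to a fine-scale quantity on "reanalysis-like" training data and then applies that mapping to coarse GCM output. The synthetic data, the feature construction, and the choice of a random forest are all placeholders, not AIR's actual algorithm.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(7)

# Stand-ins for the framework's three components:
#   (1) coarse-scale predictors (e.g., area-averaged pressure and temperature),
#   (2) fine-scale "truth" from reanalysis, here a synthetic nonlinear function
#       of the coarse state plus noise representing unresolved local detail,
#   (3) a learner that captures how the fine scales depend on the coarse scales.
n_train = 5_000
coarse_train = rng.normal(size=(n_train, 4))
fine_truth = (1.5 * coarse_train[:, 0]
              - 0.8 * coarse_train[:, 1] ** 2
              + 0.5 * coarse_train[:, 2] * coarse_train[:, 3]
              + rng.normal(0.0, 0.1, size=n_train))

learner = RandomForestRegressor(n_estimators=200, random_state=0)
learner.fit(coarse_train, fine_truth)

# Apply the learned coarse-to-fine relationship to (synthetic) coarse GCM output.
# In the real framework this step adds fine-scale detail to, and helps debias,
# the coarse simulation.
coarse_gcm = rng.normal(size=(1_000, 4))
fine_estimate = learner.predict(coarse_gcm)
print(fine_estimate[:5])
```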

To simulate a future climate, the coarse-resolution physics-based component could still be provided by a GCM, but one that is simulating a future climate. Any one of the models from the Coupled Model Intercomparison Project Phase 5 or 6 would be suitable. But what of the second component? Reanalysis data, which are in essence observational, can be thought of as “ground truth” (ocean and atmosphere truth, as well). But there is no ground truth for a future climate; the observations don’t yet exist.

Hope is not lost, however, if we assume that the bias that exists under current climate conditions between the reanalysis (ground truth at fine scale) and the GCM (modeled truth at coarse scale) is the same for the future climate. For example, if the model runs 0.2° Celsius too warm for the historical period, can we reasonably assume that it will continue to run 0.2° Celsius too warm in the future? The answer is yes. This is a typical strategy employed by thousands of climate scientists: evaluate impacts from climate change by considering the delta(s) between future and current climate GCM runs to make statements such as, “Model X shows that, under RCP 4.5, global atmospheric temperatures will increase by 2° Celsius.”
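The delta strategy described here is simple enough to show as arithmetic. In the sketch below, the observed baseline and the two GCM climatologies are placeholder numbers chosen to match the figures in the text; the point is only that a constant model bias cancels when the future-minus-current difference is applied to observations.

```python
# Illustrative "delta method" arithmetic with placeholder numbers.
observed_baseline_temp = 14.0   # observed global-mean temperature, deg C (assumed)
gcm_current_climate = 14.2      # same quantity in the GCM's historical run (assumed)
gcm_future_climate = 16.2       # same quantity in the GCM's RCP 4.5 run (assumed)

# The GCM runs 0.2 deg C too warm, but that bias cancels in the difference.
delta = gcm_future_climate - gcm_current_climate   # +2.0 deg C
projected_temp = observed_baseline_temp + delta    # 16.0 deg C

print(f"GCM bias: {gcm_current_climate - observed_baseline_temp:+.1f} C")
print(f"Change under RCP 4.5: {delta:+.1f} C -> projected {projected_temp:.1f} C")
```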

In the absence of reanalysis data, however, the same GCM output used to define the coarse-scale component of our framework must also be the source of the fine-scale information. Ideally, we would want high-resolution output comparable to the historical reanalysis data and covering about the same length of time. That means having 40 years’ worth of future climate GCM output at a resolution of 30 km.

At this point one might ask why, if we have that kind of future GCM output, we would even need to build a future climate version of our new framework. The answer is the same as for the present climate. Forty years’ worth of data is simply not enough to capture the full range of possibilities that might occur under that climate. It is the whole motivation for generating 10K, 100K, or even million-year catalogs. Indeed, it is the whole motivation for catastrophe modeling. But while generating 40 years’ worth of high-resolution future GCM output may well be computationally inexpensive in 10 years, that will likely not be the case for generating 10K or 100K (or more) years of such data. The promise of AIR’s new modeling framework is that it circumvents these costs while taking care of any biases.

Our second assumption is that the dependencies between the coarse and fine scales identified by our machine learning algorithm for the current climate also hold true for the future climate. This allows us to, in effect, back out “future reanalysis” data. We can assume, for example, that if there is a large-scale high-pressure ridge over the U.S. Pacific Northwest and deep low pressure over Nova Scotia in winter, then the possibility exists for a powerful Nor’easter to impact the eastern U.S. Again, it’s a reasonable assumption. The Nor’easter may have a different strength in the future than it does in the current climate, it may move faster or more slowly, and the pattern may have a different frequency, but the basic configuration/orientation of large- and small-scale features should be the same. Any biases in the small-scale future climate GCM output may be corrected using information obtained by comparing reanalysis data with output from a high-resolution GCM re-simulation of the historical climate.
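As a concrete, minimal illustration of that last correction step, the sketch below applies a simple mean-bias correction: the fine-scale bias estimated from a historical high-resolution re-simulation against reanalysis is assumed to carry over to the future run. All series are synthetic placeholders, and in practice the correction would be more sophisticated (for example, quantile-based) and applied field by field.

```python
import numpy as np

rng = np.random.default_rng(3)

# Placeholder daily fine-scale series at a single grid point (~40 years each).
n_days = 14_600
reanalysis_hist = rng.normal(10.0, 3.0, size=n_days)                  # fine-scale "truth"
gcm_fine_hist = reanalysis_hist + 0.7 + rng.normal(0.0, 0.5, n_days)  # historical re-simulation
gcm_fine_future = rng.normal(12.0, 3.2, size=n_days) + 0.7            # warmer, same assumed bias

# Assumption carried over from the current climate: the fine-scale bias seen in
# the historical high-resolution re-simulation also applies to the future run.
bias = gcm_fine_hist.mean() - reanalysis_hist.mean()
gcm_fine_future_corrected = gcm_fine_future - bias

print(f"estimated fine-scale bias: {bias:+.2f}")
print(f"future mean before / after correction: "
      f"{gcm_fine_future.mean():.2f} / {gcm_fine_future_corrected.mean():.2f}")
```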

It’s important to note that once we begin evaluating longer-term (>30 years) climate change impacts using our new framework, we must be circumspect. We should not read more into the data than is there or overextend the state of science. It is easy to be seduced by solutions that offer false precision, even while knowing that consensus in the scientific community is transient at best, particularly as it relates to the impact of climate change on individual atmospheric perils.

The Way Forward: Building a GCM that Captures Extreme Events Under Future Climate Conditions

While we may have solved our problem conceptually, much has yet to be done. When we undertook this project nearly two-and-a-half years ago, the goal was to create a model that would, for the first time, capture the planetary-scale atmospheric waves that can drive small-scale local extremes under current climate conditions. When complete, the model will be physically consistent across multiple regions and perils, so stakeholders can evaluate the global risk to their assets and portfolios for the next 10 years. It is also worth noting that in a time horizon of up to 10 years, the occurrence of extreme events will be driven more by natural climate variability than by climate change—a circumstance that will continue to be the case in future decades. Each new decade of data on climate variability, which would be used to update the model, would include the effects of climate change that have already taken place.

But it has become increasingly apparent over the last couple of years that clients want that kind of knowledge now to make business decisions at longer time horizons. Just as we have engaged in a collaborative effort with the scientific community to build a current climate version of our new model, so too will we likely need to rely on support from the community on a grander scale. Right now, AIR is ahead of the curve, but as new studies are conducted and published by academia and other research organizations we will assimilate the findings and capabilities into our evolving plan to build a global climate model that captures extreme weather events from different perils in different locations under future climate conditions. At AIR, we are confident that our new framework will serve a multitude of purposes on a multitude of time scales.

This article is the last in our series about AIR’s new paradigm for modeling atmospheric perils. To read the rest of the articles in this series, please see Part I, Part II, Part III, Part IV, and Part V.
