AIR Currents

Dec 16, 2020

Editor's Note: Part III of our Climate Change series introduced the reader to climate models of various types. These models numerically simulate the atmospheric circulation, often combined with the oceanic circulation and land and ice processes, at scales ranging from local to regional to planetary. They are used for everything from producing today's local weather forecast, to predicting the development and movement of tropical cyclones over the next couple of weeks, to projecting how Earth's climate might respond to accumulating greenhouse gases in the atmosphere over the coming decades.

For this fourth article, we are again honored to have Professor Henk Dijkstra as a co-author, as we did for Part III. Dr. Dijkstra is an internationally renowned scientist, expert in the field of dynamical systems methods to problems in climate modeling, climate variability, and climate change. He is the author of several books and numerous articles covering different aspects of climate dynamics and climate change in the biosphere, hydrosphere, and atmosphere. Dr. Dijkstra is a member of an international team of leading scientists supporting AIR in our effort to build a new generation of catastrophe models capable of providing a global view of all weather-related perils in a manner efficient from a technological point of view.

This article takes a closer look at how catastrophe modelers have employed these models to develop large catalogs of potential extreme events. And we'll look at why the limitations of the models in a catastrophe modeling context have motivated AIR to develop a new framework for climate risk modeling.

Climate Modeling at AIR: A Brief History

At their inception in the late 1980s, catastrophe models were constructed based largely on locally reported observation data. In the case of hurricanes, for example, these data might include central barometric pressure at landfall, forward speed, and angle of storm track. Probability distributions were fit to the data and simulated events were created by randomly drawing storm parameters from these distributions, taking care that the resulting draw was deemed physically plausible by meteorologists. It was a purely statistical exercise; the construction of these simulated storms was otherwise divorced from the global atmospheric and oceanic flows that give rise to the regional climate conditions that spawn and propel actual hurricanes.
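To make the parameter-draw idea concrete, here is a minimal sketch in Python; the distributions, parameter values, and plausibility bounds are purely illustrative assumptions, not AIR's actual model:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical historical landfall observations (illustrative values only)
central_pressure_hpa = np.array([960, 945, 975, 938, 952, 968, 941, 980, 955, 949])
forward_speed_kt     = np.array([12, 18, 9, 15, 22, 11, 14, 17, 10, 13])

# Fit simple parametric distributions to each storm parameter
cp_mu, cp_sigma = stats.norm.fit(central_pressure_hpa)
fs_shape, fs_loc, fs_scale = stats.lognorm.fit(forward_speed_kt, floc=0)

def draw_storm():
    """Draw one simulated storm, rejecting implausible parameter combinations."""
    while True:
        cp = rng.normal(cp_mu, cp_sigma)
        fs = stats.lognorm.rvs(fs_shape, loc=fs_loc, scale=fs_scale, random_state=rng)
        # Crude plausibility screen standing in for meteorological review
        if 880 < cp < 1000 and 2 < fs < 45:
            return {"central_pressure_hpa": cp, "forward_speed_kt": fs}

catalog = [draw_storm() for _ in range(10_000)]
```

In practice the fitted distributions, their dependencies, and the plausibility screening would be developed and reviewed by meteorologists, as described above.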

This approach remains quite useful and largely valid for regions of limited domain where observational data are abundant and for perils that are physically cohesive and well defined, such as hurricanes. It is a less robust approach for regions where data are relatively scarce and for more amorphous weather systems characterized by considerable internal variability at fine scale—systems that cannot be neatly defined by a handful of parameters. Extratropical cyclones and severe thunderstorms are good examples of the latter.

Numerical Weather Prediction (NWP) and Reanalysis Data: A Regional Approach

To overcome the challenges of a purely statistical approach, AIR first introduced climate modeling—and, in particular, numerical weather prediction—into the AIR Extratropical Cyclone Model for Europe.

As we learned in Part III, numerical weather prediction (NWP) models are used for weather forecasting over relatively short time frames—typically, not more than 14 days. Just as the validity of an NWP forecast depends on the accuracy of the inputs, so too does the quality of catastrophe model output depend on the quality of the input data. In building a large catalog of simulated storms, AIR used NOAA reanalysis data of the environmental conditions (sea surface temperature, air temperature, wind speed, humidity, and atmospheric pressure) present at the time of roughly 1,500 historical "seed" storms affecting Europe over the last 40 years. These storms were then perturbed stochastically by employing robust statistical algorithms to create tens of thousands of potential future storms.
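A toy illustration of the seed-and-perturb idea is sketched below; the perturbation scheme, magnitudes, and variables are illustrative assumptions and do not reflect AIR's actual algorithms:

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb_seed_storm(track_lat, track_lon, central_pressure,
                       track_sigma_deg=0.5, pressure_sigma_hpa=5.0):
    """Create one stochastic 'sibling' of a historical seed storm.

    A smooth random offset is applied to the track and a correlated
    perturbation to the central-pressure time series, so each sibling
    stays close to the dynamics of its historical parent.
    """
    n = len(track_lat)
    # Smooth (low-frequency) perturbations: random walk, then re-centered
    lat_noise = np.cumsum(rng.normal(0, track_sigma_deg / np.sqrt(n), n))
    lon_noise = np.cumsum(rng.normal(0, track_sigma_deg / np.sqrt(n), n))
    cp_noise  = np.cumsum(rng.normal(0, pressure_sigma_hpa / np.sqrt(n), n))
    return (track_lat + lat_noise - lat_noise.mean(),
            track_lon + lon_noise - lon_noise.mean(),
            central_pressure + cp_noise - cp_noise.mean())

# One synthetic seed storm perturbed into many siblings
t = np.linspace(0, 1, 48)
seed_lat, seed_lon = 50 + 5 * t, -10 + 25 * t
seed_cp = 990 - 20 * np.sin(np.pi * t)
siblings = [perturb_seed_storm(seed_lat, seed_lon, seed_cp) for _ in range(1000)]
```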

One potential limitation of such an approach is that the resulting catalog comprises storms that are, in effect, “siblings” of their historical counterparts. They are different, but the approach raises the question of whether we can be confident that we have captured all potential extremes. If we perturb the storms too much in an attempt to free ourselves from historical constraints, we may end up with results that no longer consistently and coherently represent the dynamic nature of the atmosphere.

Figure 1. A historical seed storm is perturbed to create a set of possible realizations of such storms. (Source: AIR)

Another limitation of this approach is that NWP models, which currently run at a horizontal resolution of about 10 km, are typically regional in scope; they lack any relationship to other regions. Yet we learned from Part II in this series that planetary-scale motions are the drivers of local weather extremes.

Henk Dijkstra, Ph.D.
Professor of Dynamical Oceanography, Institute for Marine and Atmospheric Research, Utrecht; and Director, Centre for Complex Systems Studies, Department of Physics, Utrecht University

Boyko Dodov, Ph.D.
Vice President and Director, AIR Worldwide

Edited by Sara Gambrill, CEEM

The Promise of General Circulation Models

If the goal is to produce global catalogs that capture all types of dependencies—from global teleconnections to local correlations across all weather-related perils and across all regions—it would seem intuitive that the next step is to use a global general circulation model (GCM). Unfortunately, GCMs come with limitations, too.

While some of the processes in GCMs, as in all numerical models, are based on the laws of physics (the primitive equations, which, as we discussed in Part III, include terms for the conservation of mass, a form of the Navier-Stokes equations governing fluid flow, and thermodynamic terms), other key processes in the model are approximated, and some of these approximations are not based on physical laws. Recall from Part III that the “dynamical core” of a GCM is the part of the model that numerically solves the equations for wind speed and direction, temperature, humidity, and atmospheric pressure. The key to their solution is spatial and temporal discretization using various numerical methods. Depending on the spatial discretization, there are two major types of dynamical cores: (a) spectral, where the discretization is applied to waves of different lengths (i.e., bands in the frequency spectrum); and (b) gridded, working on spatial grids of various geometries. Both types are common in the climate modeling community, and each has its pros and cons in how faithfully it represents the “true” continuous equations.
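The distinction between the two families of dynamical cores can be illustrated with a deliberately simple one-dimensional example: computing the spatial derivative of a periodic field (a step every dynamical core must perform) either spectrally, via a Fourier transform, or on a grid, via centered finite differences. This is a didactic toy, not a real dynamical core:

```python
import numpy as np

n = 128
L = 2 * np.pi
x = np.linspace(0, L, n, endpoint=False)
dx = L / n
u = np.sin(3 * x) + 0.5 * np.cos(7 * x)   # a smooth periodic "wind" field

# (a) Spectral: differentiate by multiplying Fourier coefficients by i*k
k = np.fft.fftfreq(n, d=dx) * 2 * np.pi
dudx_spectral = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))

# (b) Gridded: centered finite differences on the periodic grid
dudx_gridded = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)

exact = 3 * np.cos(3 * x) - 3.5 * np.sin(7 * x)
print("spectral max error:", np.max(np.abs(dudx_spectral - exact)))
print("gridded  max error:", np.max(np.abs(dudx_gridded - exact)))
```

For a smooth field the spectral derivative is accurate to near machine precision, while the gridded derivative carries a truncation error that shrinks only as the grid is refined; real dynamical cores face analogous trade-offs in accuracy, cost, and geometry.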

Climate processes represented by the dynamical core are referred to as being “resolved” by the model, as we discussed in Part III. Because of the relatively coarse spatial and temporal resolutions of GCM grids, however, many important processes in the climate system occur on scales smaller than the model resolution yet contribute significantly to extreme weather on small scales. Examples include thunderstorms, tornadoes, convective clouds, and rainfall (Figure 2). Such unresolved, sub-grid scale processes are represented by “parameterizations,” which are simple formulas based on observations or on derivations from more detailed process models. The parameterizations are “calibrated” or “tuned” to improve the agreement of the GCM’s outputs with historical observations, and the parameterization formulas employed (which vary with the scientist(s) involved) introduce uncertainty and potential bias.
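The following toy example shows, in schematic form, what a parameterization and its tuning look like; the bulk formula, the threshold, and the “observations” are invented for illustration and bear no relation to any operational scheme:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)

# Resolved grid-cell variables (synthetic): relative humidity and temperature
rh   = rng.uniform(0.4, 1.0, 500)        # fraction
temp = rng.uniform(280.0, 305.0, 500)    # K

def subgrid_precip(rh, temp, alpha):
    """Toy bulk parameterization: precipitation triggered above a humidity
    threshold, scaled by a tunable coefficient alpha (mm/day)."""
    return alpha * np.maximum(rh - 0.8, 0.0) * (temp - 273.15)

# Pretend "observed" precipitation used for calibration
obs = subgrid_precip(rh, temp, alpha=2.3) + rng.normal(0, 0.5, rh.size)

# "Tuning": choose the alpha that minimizes the mismatch with observations
result = minimize_scalar(lambda a: np.mean((subgrid_precip(rh, temp, a) - obs) ** 2),
                         bounds=(0.1, 10.0), method="bounded")
print("tuned alpha:", result.x)
```

The scheme returns only an average response for a given grid-cell state; the real sub-grid variability around that average is lost, which is precisely the issue discussed below.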

Figure 2. Resolved (dark blue) and unresolved (light blue) phenomena and processes in a GCM. (Adapted from Climate Change in Australia.)

Considering the potential biases introduced by discretization and parameterization, and the fact that the equations describing the resolved processes are themselves a simplified view of reality, it is important to stress that GCMs can only approximate the physical processes they are designed to represent. While the large-scale dynamics are resolved in a GCM, inaccuracies at smaller scales, and their feedback on larger scales, lead to some of these biases.

There are more than 20 international climate modeling groups, and there are thousands of different choices made in the construction of a GCM (resolution, type of dynamical core, complexity of physics parameterizations, etc.). Each set of choices produces a different model with different sensitivities and, most importantly, different statistics of the model output. Furthermore, different climate modeling groups focus on different interests—for example, long paleoclimate simulations, details of ocean circulations, nuances of the interactions between aerosol particles and clouds, or the carbon cycle. Given these different interests and many others, limited computational resources are directed toward one aspect of simulating the climate system in each case, at the expense of others.

To date, no GCM has been built to simulate the small-scale processes that produce the extreme weather events that the catastrophe modeler is interested in. In fact, most of the parameterizations tend to replace highly non-linear natural processes with their “average” response. As a result, the natural variability of the climate system tends to be lessened, thus missing the extremes. And while the resolution of GCMs has increased greatly over the last 10 years, the computational cost of generating very large (million-year) global catalogs of extreme weather events remains prohibitive.

Biases in GCMs: Examples

As we have described, GCMs have strong biases in simulating large-scale atmospheric phenomena relevant to the genesis of extreme events. Although visually these phenomena may look reasonable in GCMs, their statistics are often incorrect. For example, in evaluating the period 1961-2000, GCMs generally underestimate the frequency of wintertime blocking events over Europe. (Atmospheric blocks were discussed in Part II.) Blocking frequencies at lower latitudes are generally overestimated.

It’s important to note that, despite the inherent model errors and biases, GCMs still do a reasonably good job of simulating general climate behavior: storms develop and move in realistic ways; temperatures change according to time of day and day of year in realistic ways; and precipitation falls where and when it should—generally. But the details—how intense storms will become, exactly where they will track or stall, and how heavy the precipitation will be—are not captured well enough to satisfy the catastrophe modeler.

Regarding the polar jet (between 45°N and 50°N), most GCMs can reproduce the seasonal variations of the jet latitude, but many overestimate the amplitude of the maximal wind speed. Figure 3 compares the daily mean wind speed in the polar jet as simulated by 11 GCMs from the World Climate Research Programme’s Coupled Model Intercomparison Project Phase 5 (CMIP5) to historical (reanalysis) data shown in the bottom right panel. In most cases, the CMIP5 models produce greater variability than the historical data.
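In essence, the comparison in Figure 3 reduces to computing summary statistics of daily mean jet speed for each model and for the reanalysis. A minimal sketch follows; the arrays here are synthetic placeholders rather than actual CMIP5 or ERA40 data:

```python
import numpy as np

rng = np.random.default_rng(7)

# Placeholder daily mean jet wind speeds (m/s); in practice these would be
# extracted from CMIP5 model output and from the ERA40 reanalysis
model_speed      = rng.normal(14.0, 5.5, 25 * 365)   # one model, 25 years
reanalysis_speed = rng.normal(13.0, 4.0, 45 * 365)   # reanalysis record

def boxplot_stats(x):
    """Summary statistics of the kind a boxplot displays."""
    q1, med, q3 = np.percentile(x, [25, 50, 75])
    return {"median": med, "iqr": q3 - q1, "std": np.std(x)}

print("model:     ", boxplot_stats(model_speed))
print("reanalysis:", boxplot_stats(reanalysis_speed))
# A larger model IQR/std than reanalysis indicates overestimated variability
```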

Figure 3. Boxplot of daily mean wind speed (m/s) of the polar jet from simulations of 11 CMIP5 models over the period 1980-2004 (acronyms above each plot) and historical (reanalysis) data over the period 1957-2002 (ERA40 in bottom right panel). (Figure from Iqbal, W., Leung, W.-N., and Hannachi, A. (2018). Analysis of the variability of the North Atlantic eddy-driven jet stream in CMIP5. Climate Dynamics 51:235-247.)

GCMs also do not provide the detail needed for extreme event forecasting on longer time scales, such as the prolonged periods of drought in many parts of Australia ahead of and during the bushfire season of 2019-2020. To capture that detail, regional climate models (RCMs) are often used. As their name suggests, RCMs represent the climate over a limited region (such as Australia), and their resolution is typically much higher (down to 1 km) than a GCM's. These RCMs are connected to (nested within) the coarse-resolution GCM at the boundaries of the region. While they provide more detail over the region of interest, the biases of the GCM cascade through to the RCMs. A GCM bias in the polar jet, for example, has a large effect on the regional atmospheric flow and can destroy the validity of a regional long-term forecast, in particular regarding local extreme events.

In recent years, AIR has employed a hybrid solution for building atmospheric catastrophe models, one that nests a regional NWP model within a GCM. The high-resolution NWP model is connected to the coarser GCM at the boundaries of the region; its output is then downscaled to a very high resolution using statistical algorithms, followed by local climatological adjustment. For scientific questions on climate change, the GCM biases may not be a serious problem, as one is often interested in the difference between a future projection and the current climate simulation. For addressing questions related to the occurrence of extreme events, however, these biases pose a problem, as they can materially influence the spatio-temporal statistics—the patterns—of these events. Such pattern biases are critical in the context of loss occurrence when aggregated at a portfolio level.
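One widely used statistical technique for this kind of climatological adjustment is empirical quantile mapping, sketched below in generic form; this is an illustration of the general idea, not AIR's specific downscaling or adjustment algorithm:

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_future):
    """Empirical quantile mapping: adjust model values so that their
    distribution over the calibration period matches observations."""
    quantiles = np.linspace(0.01, 0.99, 99)
    m_q = np.quantile(model_hist, quantiles)
    o_q = np.quantile(obs_hist, quantiles)
    # Find each value's quantile in the model climatology, then map it
    # to the corresponding observed quantile
    ranks = np.interp(model_future, m_q, quantiles)
    return np.interp(ranks, quantiles, o_q)

# Synthetic example: a model that is too wet with too little variability
rng = np.random.default_rng(3)
obs   = rng.gamma(2.0, 3.0, 5000)          # "observed" daily precipitation
model = 1.2 * rng.gamma(2.0, 2.5, 5000)    # biased model output
corrected = quantile_map(model, obs, model)
print("obs mean:", obs.mean(), "model mean:", model.mean(),
      "corrected mean:", corrected.mean())
```

Note that an adjustment of this kind corrects local distributions but not necessarily the spatio-temporal patterns of events, which is why pattern biases remain a concern at the portfolio level.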

Extreme Event Modeling for a Future Climate

Over the last decade, new ideas to better represent the unresolved sub-grid processes that drive extreme weather have emerged, such as stochastic parameterization (a probabilistic approach to unresolved processes) and super parameterization (building in a simplified high-resolution sub-model for cloud formation, for example). Although these approaches may improve the underlying climate model output, they do not provide an explicit representation of the small-scale processes, which is a key requirement in a catastrophe modeling framework. From a catastrophe modeling perspective, the best approach for developing global simulations within which we can model the extremes may be to use a GCM whose output has been debiased.
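In its simplest form, a stochastic parameterization replaces a deterministic sub-grid tendency with a random draw around it, so that sub-grid variability, and not just its mean, feeds back on the resolved state. The schematic below is illustrative; the tendency formula and noise model are assumptions, not any published scheme:

```python
import numpy as np

rng = np.random.default_rng(11)

def deterministic_tendency(resolved_state, coeff=0.1):
    """Conventional parameterization: the 'average' sub-grid response."""
    return -coeff * resolved_state

def stochastic_tendency(resolved_state, coeff=0.1, noise_amplitude=0.3):
    """Stochastic variant: multiply the deterministic tendency by a random
    factor, so the model retains sub-grid variability instead of only its mean."""
    return deterministic_tendency(resolved_state, coeff) * (1.0 + noise_amplitude * rng.standard_normal())

# Step a single resolved variable forward with each scheme
x_det, x_sto, dt = 1.0, 1.0, 0.1
for _ in range(1000):
    x_det += dt * deterministic_tendency(x_det)
    x_sto += dt * stochastic_tendency(x_sto)
print("deterministic end state:", x_det, "stochastic end state:", x_sto)
```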

Some very recent research in the field of machine learning offers a promising way to compensate for the biases introduced by the missing unresolved climate processes in a GCM, serving as a sophisticated parameterization scheme that can narrow the gap and bring coarse GCM output close to the reanalysis at the GCM resolution. Similarly, recent attempts have been made to use machine learning in downscaling, that is, in explicitly simulating unresolved processes in terms of the resolved ones. These ideas, when combined, have the potential to be implemented in efficient high-resolution climate simulations, and they represent the solution that AIR is developing as the foundational framework for our atmospheric peril models. They will also be the subject of the next article in this series.
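As a highly simplified sketch of the machine-learning debiasing idea, the example below trains a small neural network to map coarse “GCM” predictors toward a “reanalysis” target; the variables, network size, and synthetic data are placeholder assumptions, and this is not AIR's implementation:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)

# Synthetic training data: coarse "GCM" fields (predictors) and the
# corresponding "reanalysis" value at the same grid cell (target)
n_samples, n_features = 5000, 8
gcm_fields = rng.normal(size=(n_samples, n_features))
# Pretend the true relationship is nonlinear, so the raw GCM carries a bias
reanalysis = np.tanh(gcm_fields[:, 0]) + 0.5 * gcm_fields[:, 1] ** 2 - 0.3

# Train a small neural network to map coarse GCM output toward reanalysis
debias_model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
debias_model.fit(gcm_fields[:4000], reanalysis[:4000])

# Apply the learned correction to held-out GCM output
corrected = debias_model.predict(gcm_fields[4000:])
rmse_raw = np.sqrt(np.mean((gcm_fields[4000:, 0] - reanalysis[4000:]) ** 2))
rmse_ml  = np.sqrt(np.mean((corrected - reanalysis[4000:]) ** 2))
print("raw GCM RMSE:", rmse_raw, "ML-debiased RMSE:", rmse_ml)
```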
