AIR Currents

Jun 22, 2020

Editor's Note: This is the first article in a yearlong series about AIR’s motivation for and approach to modeling and quantifying climate change risk.

Many parallels have been drawn between the unfolding crises of COVID-19 and climate change. Of the lessons that can be learned from both, perhaps the most important is that science matters. The countries that have weathered the pandemic best—Singapore, Hong Kong, and South Korea, for example—did so because they understood the threat early on, having battled the SARS and MERS epidemics of 2003 and 2012, respectively. The scientific investments these countries made in the aftermath of those outbreaks better prepared them to deal with COVID-19. Had the rest of the world recognized the inevitability of a global pandemic and invested tens of billions of dollars at the right time, tens of trillions of dollars in economic damage might have been averted, not to mention the enormous societal toll that COVID-19 has inflicted.

For some years, we’ve been at a tipping point where scientific investments in mitigating the impacts of climate change can still make a material difference. We know that the cost will be orders of magnitude higher the longer we delay meaningful action. I’m sure that I am not alone in hoping that our experience with COVID-19 will inspire the world to finally reckon with the implications of inaction on climate change and spur the necessary investments and fundamental policy changes at the global, sovereign, and societal levels.

Here at AIR, we also sense that tipping point and are taking bold steps to fundamentally change the way we quantify insurance and other financial risks due to climate change. This article is the first in a series that will describe our motivation and approach.

A Brief History of Cat Modeling and Climate Change

In the mid-1980s, AIR introduced a fundamental change in the way extreme event risk was quantified: the first stochastic hurricane model for the insurance industry, which initially failed to garner much attention. Hurricane activity had been below normal for the decade prior and the model’s suggestion that the industry could see losses several times larger than had ever been seen before was met with skepticism. Hurricane Hugo, which in 1989 produced the largest hurricane loss to date, opened some minds, but it wasn’t until Hurricane Andrew in 1992 that the industry fully embraced the new technology.

At the time, few outside academia and government were concerned about global warming. Early generation cat models produced robust results by relying on decades of historical observation data (augmented by scientific expertise) and assuming a stationary climate. Those first-generation models, a novelty at the time, were used for some very early sensitivity studies of changes in the frequency and severity of hurricanes, but it’s fair to say such studies were not pursued in earnest. The first real concerns about climate change impacts on hurricanes were expressed after the record-breaking 2005 Atlantic hurricane season, when the National Hurricane Center had to reach deep into the Greek alphabet for storm names. That was also the year that Hurricane Katrina devastated New Orleans and Hurricane Rita achieved not only the lowest central pressure on record in the Gulf but also the largest radius of maximum winds. In response, catastrophe modelers introduced the first “climate conditioned” catalogs a year later.

Still, climate change largely remained an afterthought for the insurance and cat modeling industries. The 10 years that followed without a single Florida hurricane landfall—often referred to as the hurricane “drought”—helped push the issue into the background yet again (despite the possibility that climate change itself may have contributed to the drought). For many years, the interest emanating from the insurance industry was sporadic and mostly reactive to individual events.

Today, large-loss weather events are almost guaranteed to produce headlines attributing them to climate change. And thanks to the relatively new science of event attribution, there is growing justification for doing so. On such occasions we can see with our own eyes the effects of ever-increasing greenhouse gases. Memories are short, however, and the effects remain largely invisible—except, perhaps, in those areas of the coast that are experiencing ever more frequent sunny-day flooding. For most of us, climate change is a long, drawn-out catastrophe unfolding just beyond our line of sight.

This presents a fundamental challenge to maintaining focus on the issue. But all of us, as stakeholders, must overcome the challenge and recognize that we are at another inflection point or “Hurricane Andrew moment,” although a far more momentous one. Whether climate change is truly an “existential” threat may still be debated, but its costs, which we are already beginning to experience and which will only increase over time, can no longer be ignored. It is no longer acceptable to think about climate change only after an extreme event occurs. And we cannot allow market pricing cycles, like the soft market that, in part, resulted from the 10-year hurricane drought, to lower our guard or commitment.

What Needs to Be Done?

We must be guided by the science in our modeling of climate change risk, not rhetoric or headlines. We must recognize the inherent uncertainties in projections of future climate states and navigate them using rigorous analytics. We must thoughtfully explore the known knowns and the known unknowns, and we must at least imagine and speculate about the unknown unknowns.

It is time for us, as model developers, to go beyond a piecemeal approach to addressing climate change, beyond incremental updates or extensions to existing models. As model developers, we must take the best-of-breed science coming out of academia and leading research institutions and translate it into fit-for-business-purpose climate change models and analytics. In a very real sense, we should be thinking of climate change as a new peril, and the models must be capable of answering different kinds of questions.

What Are the Questions that Only a New Breed of Climate Models Can Answer?

Since the introduction of the first cat model, we’ve been trying to answer questions such as, “What is the probability of a Category 4 hurricane making landfall in Texas?” Today, the relevant questions are much more complex, for example: “What is the probability that a Category 4 hurricane will make landfall in Texas, stall over Houston, and drop more than 50 inches of rain?”
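
To make the contrast concrete, the sketch below shows, in Python, how the first kind of question is conventionally answered with a stochastic catalog: count the simulated years in which a qualifying event occurs. All field names, distributions, and thresholds here are hypothetical stand-ins chosen purely for illustration; they are not AIR catalog attributes.

```python
import numpy as np
import pandas as pd

# Hypothetical 100,000-year stochastic catalog. Field names and the
# distributions used to fill them are illustrative only.
rng = np.random.default_rng(42)
n_events = 250_000
n_years = 100_000
catalog = pd.DataFrame({
    "year": rng.integers(1, n_years + 1, n_events),           # simulation year
    "landfall_state": rng.choice(["TX", "LA", "FL", "other"], n_events),
    "category": rng.integers(0, 6, n_events),                  # Saffir-Simpson at landfall
    "stall_hours": rng.exponential(12.0, n_events),             # hours quasi-stationary
    "rain_inches": rng.gamma(2.0, 6.0, n_events),                # storm-total rainfall maximum
})

# Classic question: annual probability of a Category 4+ Texas landfall.
simple = catalog[(catalog.landfall_state == "TX") & (catalog.category >= 4)]
p_simple = simple.year.nunique() / n_years

# Compound question: the same landfall that also stalls and drops 50+ inches of rain.
compound = simple[(simple.stall_hours >= 48) & (simple.rain_inches >= 50)]
p_compound = compound.year.nunique() / n_years

print(f"P(Cat 4+ TX landfall in a year)               ~ {p_simple:.4f}")
print(f"P(Cat 4+ TX landfall, stall, 50+ in. of rain) ~ {p_compound:.5f}")
```

The counting is trivial; the hard part is the catalog itself. The compound question slices the event set so finely that a credible answer requires both many more simulated years and a generator that can produce, with realistic frequency, events it has never observed.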

In fact, AIR’s existing physical-statistical hybrid approach can get us quite close to an answer. What our existing approach cannot answer with confidence is, “What is the probability of that same scenario happening over Mobile, Alabama?” Similarly, we might speculate, “We saw Category 5 Hurricane Dorian stall over the Bahamas; what is the probability of that happening over southeast Florida?”

The only way we can begin to answer these questions is by modeling the physics that gives rise to such occurrences. We must better understand how small-scale features in the atmosphere can have disproportionate impacts on large-scale planetary features (and vice versa), and how these non-linearities and teleconnections between scales and across distances drive weather extremes.

Although the quality and detail of reanalysis data sets for the last 40 years have improved by leaps and bounds (the latest holds more than 500 TB of data), they can only tell us what has happened historically. They cannot answer the question of where and how frequently a break in a planetary wave will set up a large stationary high, such as the one that caused Hurricane Sandy to make its notorious (and anomalous) westward turn into northern New Jersey. They cannot tell us the frequency of the anomalous jet-stream behavior that allows the polar vortex to split and sag southward, bringing frigid temperatures to North America for long stretches, or of the omega blocks that can bring tropical temperatures to Europe and Greenland, as one did in the summer of 2019. Even if we spatially perturb the reanalysis data to create new scenarios, we do so with many unknowns, leaving us with lower confidence in (probably biased) results that no longer consistently and coherently represent the dynamical nature of the atmosphere. And we are unlikely to produce simulated events that surprise us—surprises that we know a changing climate will bring.

Why Is the Problem So Difficult to Solve?

At its simplest, the answer to this question is clear: The atmosphere is chaotic. Weather, which refers to short-term atmospheric conditions experienced at a location over the course of hours or days, is highly variable. The reliability of weather forecasts falls off after about a week, if that long. Climate, on the other hand, refers to the statistics of weather over decades, often over 30 or 40 years. While the climate tends to change quite slowly, we do experience shorter-term fluctuations; El Niño Southern Oscillation (ENSO) and the North Atlantic Oscillation (NAO) are familiar examples of features in our ocean-atmosphere system that drive variability. When discussing “climate variability,” we're describing natural (i.e., not man-made) processes that affect the atmosphere. When we introduce anthropogenic climate change (caused by greenhouse gases), it becomes quite challenging to distinguish its signal from climate’s natural variability, particularly when it comes to short-term weather phenomena such as individual storms.
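
As a toy illustration of why that separation is hard, consider the Python sketch below: an assumed small warming trend is superimposed on ENSO-like interannual variability, and we ask how often records of different lengths even recover a trend of roughly the right size. The numbers are arbitrary assumptions chosen for illustration, not climate estimates.

```python
import numpy as np

# Toy Monte Carlo: a weak trend buried in large year-to-year variability.
# All numbers are illustrative assumptions, not climate estimates.
rng = np.random.default_rng(0)
trend = 0.02    # assumed trend (degrees C per year)
sigma = 0.25    # assumed interannual variability (standard deviation, degrees C)
n_sims = 10_000

def recovery_rate(n_years: int) -> float:
    """Fraction of simulated records whose fitted slope is at least half
    the true trend (a crude stand-in for 'signal recovered')."""
    t = np.arange(n_years)
    hits = 0
    for _ in range(n_sims):
        series = trend * t + rng.normal(0.0, sigma, n_years)
        slope = np.polyfit(t, series, 1)[0]
        hits += slope >= 0.5 * trend
    return hits / n_sims

for window in (10, 20, 40):
    print(f"{window}-year record: trend recovered in {recovery_rate(window):.0%} of simulations")
```

With these assumed numbers, a decade-long record frequently fails to recover the trend, while a 40-year record almost always does; the signal in any individual storm is noisier still.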

The task for the catastrophe model developer is to build large catalogs representing ensembles of future climate states at different timescales. In theory, we might create such an ensemble using general circulation models (GCMs), which have become quite powerful. But while some very recent GCMs are capable of running at high enough resolution to explicitly simulate small-scale features such as hurricanes, the computational cost of running them at that resolution long enough to generate tens of thousands of years of simulated hurricane activity remains prohibitive. Furthermore, no current GCM attempts to capture all the smaller-scale atmospheric processes and their interactions that give rise to the full range of extreme weather events. Processes that are not resolved are parameterized, which can introduce additional bias. Careful analyses of GCM output for the recent climate reveal their limitations in accurately representing the statistics of some of the larger-scale dynamics in the atmosphere that often contribute to the conditions leading to extreme events.

But this leads to another question: If we successfully build new models suitable for different timescales, how do we validate them? We know that history is unlikely to be representative of the future, so what does validation of a future climate state really mean? In fact, validation will necessarily take on a new meaning. The key is to break the problem into smaller constituent parts, develop an approach whereby we can validate the recent past, and then employ those features of future model projections in which we have more confidence. For example, the impacts of climate change on sea level rise, Arctic amplification, and temperature and precipitation patterns are better understood than its impact on tornado activity. But if we can better understand the atmospheric conditions that drive tornado activity, we will have more confidence in what the models tell us about where and how frequently those conditions might arise in the future.
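
At its most basic, “validating the recent past” means checking that a model reproduces the statistics of the historical record before trusting its projections. The sketch below uses placeholder data (randomly generated, purely for illustration) to show the kind of comparison involved; a real validation would span far more statistics, variables, and regions.

```python
import numpy as np

rng = np.random.default_rng(7)

# Placeholder data, purely illustrative: 40 years of "observed" annual event
# counts and 10,000 simulated years intended to represent the same recent climate.
observed = rng.poisson(1.7, 40)
simulated = rng.poisson(1.7, 10_000)

def summarize(counts: np.ndarray) -> dict:
    """A few of the statistics one would compare; tail behavior matters most."""
    return {
        "mean": round(counts.mean(), 2),
        "std": round(counts.std(ddof=1), 2),
        "P(>= 3 events in a year)": round((counts >= 3).mean(), 3),
    }

print("observed :", summarize(observed))
print("simulated:", summarize(simulated))
```

A model that earns trust this way for the recent past can then be queried for those aspects of its future projections in which we have the most physical confidence.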

At AIR, we believe we’ve found a solution, one in which transparency and staying true to the science are key; we cannot allow ourselves to read more into the data than is there or to overextend the state of the science. We should not be afraid of getting something wrong, but we must be prepared to incorporate new knowledge as it becomes available and update our view of the risk accordingly.

So What Is AIR’s Solution?

We see an opportunity to blend our traditional hybrid physical-statistical approaches with a new set of tools from the world of artificial intelligence—specifically, machine learning. Our approach, which reflects the efforts not only of AIR scientists but also of partnerships with MIT in the U.S., Magdeburg University in Germany, and the University of Utrecht in the Netherlands, combines de-biasing of large-scale features in a computationally fast GCM with analysis of fine-scale features in historical data to learn the “rules” of atmospheric behavior that produce weather extremes. Million-year catalogs suddenly become possible—global catalogs that capture all types of dependencies, from global teleconnections to local correlations, across all weather-related perils and all regions.
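
One widely used building block for that kind of de-biasing is quantile mapping: adjust the fast model’s output so that its distribution over a common historical period matches a higher quality reference, then apply the same mapping to new simulations. The Python sketch below is a generic illustration of the idea using assumed, synthetic data; it is not a description of AIR’s actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins: a "reference" field (e.g., reanalysis) and a biased,
# under-dispersed "fast model" field for the same historical period.
reference = rng.normal(0.0, 1.0, 20_000)
model_hist = rng.normal(0.5, 0.7, 20_000)     # warm bias, too little variance
model_new = rng.normal(0.9, 0.7, 20_000)      # a new simulation with the same biases

def quantile_map(x, model_hist, reference, n_quantiles=1_000):
    """Map values from the model's historical distribution onto the
    reference distribution, quantile by quantile."""
    q = np.linspace(0.0, 1.0, n_quantiles)
    model_q = np.quantile(model_hist, q)
    ref_q = np.quantile(reference, q)
    return np.interp(x, model_q, ref_q)

corrected = quantile_map(model_new, model_hist, reference)
print("raw new run mean/std      :", round(model_new.mean(), 2), round(model_new.std(), 2))
print("corrected new run mean/std:", round(corrected.mean(), 2), round(corrected.std(), 2))
```

Real applications are far more involved (multivariate fields, spatial coherence, and physical consistency across variables all have to be preserved), but the core idea of anchoring a fast model’s statistics to higher quality data carries over.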

The result will be a new framework for climate risk modeling, one that is deeply rooted in high-quality data and deep domain knowledge of weather and climate physics. The framework will allow us to answer not only today’s new climate questions but tomorrow’s as well. The business benefits are substantial, starting with a view of risk for regional perils that is better rooted in science and a new quantification of diversification benefits, leading to better allocation of risk capital. In short, if you are in the insurance industry, you can continue to perform all the key functions you do today, but with greater confidence in how you manage the uncertainties of a changing climate.

AIR undertook this important project two years ago, but there is still much to do on this multi-year journey—a journey that requires patience, tenacity, and commitment. We’ll be publishing a series of articles over the coming year on topics ranging from the fundamentals of climate and climate variability, to what climate models can and cannot do, to the latest thinking on the contributions of machine learning techniques, to what insights we might gain into the potential impacts of climate change on the locations, frequencies, and intensities of extreme events around the globe.

Also critical is your engagement and support in driving the innovation in climate risk analytics that the industry will need for the future. The modeling approach described here represents a long-term, sustainable strategy for managing climate change risk over the coming decades. In the meantime, we are executing on shorter-term strategies that include expanding how we evaluate the impact of climate change on each peril region, developing climate sensitivity event ensembles, and building capabilities into our products so that you can run what-if scenarios and establish your own view of climate risk. Now that this article has provided the motivation for our investment, we hope you will read the upcoming articles in this series, which will provide the details.

Jayanta Guin, Ph.D.
Chief Research Officer

Edited by Sara Gambrill, CEEM
