The headlines are everywhere, from the insurance trade publications to the Wall Street Journal. U.S. insurance companies are scrambling to explain higher quarterly losses than they’ve seen in years. But should the losses from severe weather in the first quarter of 2017 really come as a surprise? Insurance companies are staffed by highly quantitative folks like actuaries and underwriters who, by education and experience, are trained to think in terms of probabilities. But the types of statements we’ve seen in the press in recent weeks would suggest a disconnect.
Those statements include references to unusually high catastrophe ("cat") activity, a troubling upward trend in storm frequency and, in one article, tornadoes running "quadruple the three-year average." Let's unpack some of these comments and see whether we can put them in the proper context, one in which we are thinking probabilistically.
Let's start with the number of tornadoes. As of April 30, 2017, there had been 616 reports of tornadoes in the U.S., which is indeed higher than the very low activity of 2013, 2014, and 2015 and, for that matter, higher than the long-term average. But go back just one more year, to 2012, and we find exactly the same number of tornadoes, 616, by April 30. Go back a bit further and 2008 had seen 705 tornadoes by this date. Hail tells the same story: while significant hail (≥ 2") activity this year has dwarfed the last three seasons, 2008 and 2012 had similarly strong early season hail activity. So we have a classic case of recency bias at work. People tend to form judgments of what is "normal" based on the very recent past. None of us should be shocked by results that have been observed twice before in the past 10 years. Perhaps more importantly, we should all know better than to cite a three-year average for tornadoes or hailstorms as if it were meaningful.
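To see why a three-year average is such a shaky yardstick, consider a quick simulation. This is purely an illustration, not AIR's model or real tornado data: the long-term mean and year-to-year variability below are assumed numbers chosen only to show how wildly a trailing three-year average can swing even when the underlying process never changes.

```python
import random

random.seed(7)

# Hypothetical illustration (assumed figures, not actual tornado
# statistics): treat each year's early-season tornado count as a
# draw from roughly Normal(mean, sd) around a stable long-term mean.
MEAN, SD = 430.0, 120.0

# Simulate 30 years of counts from an unchanging process.
years = [max(0, round(random.gauss(MEAN, SD))) for _ in range(30)]

# Trailing three-year averages from those same 30 years.
three_yr = [sum(years[i:i + 3]) / 3 for i in range(len(years) - 2)]

print(f"overall average: {sum(years) / len(years):.0f}")
print(f"3-year averages range: {min(three_yr):.0f} to {max(three_yr):.0f}")
```

Even with no trend whatsoever, the three-year averages scatter widely around the true mean, so a season that is "quadruple the three-year average" may say more about the last three years than about this one.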
But what matters most to insurers is not the number of events; it is the dollar amount of insured losses. AIR's models can be used not only to produce a probabilistic distribution of event frequency but also, more importantly, a probabilistic view of insured losses. Using the models, let's put this year's Q1 losses into perspective. Severe thunderstorm losses as reported by Property Claim Services® (PCS®) for Q1 were USD 6.05 billion. Our models indicate a 21.6% probability of losses that high or higher in any given year. A 21.6% event is not the most likely outcome, but that hardly makes it unexpected. To put it in context, imagine rolling a fair six-sided die. The probability of any single value is 16.7%. In the first quarter, then, we experienced a result that is more likely than rolling a "6." Would you react with horror if you rolled a die once and saw a 6? No, because you know that sort of thing happens all the time. Yet somehow we forget what probabilities actually mean when we move from the highly tractable world of games to real-world contexts. It's the point Taleb made more articulately than I can in his 2001 book Fooled by Randomness.
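The die comparison above can be made concrete with a few lines of simulation. The 21.6% exceedance probability is the figure quoted in the text; everything else here is just mechanical illustration of what such a probability means over many hypothetical years.

```python
import random

random.seed(1)

# From the text: a 21.6% modeled annual probability of Q1 severe
# thunderstorm losses reaching USD 6.05 billion or more, compared
# with rolling a "6" on a fair die (1/6, about 16.7%).
P_EXCEED = 0.216

N = 100_000  # simulated "years" / die rolls

# How often does a loss at least this large occur, and how often
# does a single die roll come up 6?
loss_years = sum(random.random() < P_EXCEED for _ in range(N))
sixes = sum(random.randint(1, 6) == 6 for _ in range(N))

print(f"years with a Q1 loss this bad or worse: {loss_years / N:.1%}")
print(f"die rolls showing a 6:                  {sixes / N:.1%}")
```

A Q1 like 2017's turns up roughly one year in five, noticeably more often than the die shows a 6, which is exactly why "unlikely" should not be confused with "surprising."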
One final point: high activity early in a season is not a good predictor of the rest of the season. Again, the recent past offers very good examples. The 2012 season started strong, but at about this point in the year it became less active and ultimately landed well below the long-term average. Conversely, 2011 had only USD 2 billion in losses in Q1 but ended up being the costliest season on record, with USD 22 billion in losses in Q2 alone. If you want to dive deeper into this topic, I highly recommend this blog post. None of this suggests that insurers should take their eye off the ball of severe thunderstorm losses and prudent risk management. But to do that well, one needs to view current events in the proper probabilistic context and resist the temptation to spin the flawed narratives our brains love to create. Let's not be fooled by randomness.