By Shane Latchman | October 8, 2015

When it comes to the future, we can be certain about very little. We are all familiar with the success rate (or lack thereof) of weather forecasts. Travel times predicted by GPS navigation devices have improved enormously, but they still can't factor in the latest road closures, and they can't predict accidents.

As a boy, I was fascinated by man's ability to predict events: the precise time of tomorrow's sunrise, the year Halley's Comet will return, and large-scale planetary motions come to mind. There seemed to be so much certainty about these large-scale events, yet meteorologists couldn't figure out if my home in Trinidad would be hit by a much smaller-scale tropical cyclone, even a day ahead of a possible landfall.

I was puzzled by the fact that some things were more foreseeable than others until two important concepts helped me understand the difficulties of prediction: one was a demon, the other chaos.

The Demon

In 1814 Pierre-Simon de Laplace wrote of an "intellect which at a certain moment would know all forces that set nature in motion." Laplace's demon, as this first published articulation of causal or scientific determinism is known, is in essence a thought experiment that shows the limits of modeling. True prediction is only possible with a machine (the demon) able to observe and process all variables. For such a machine, "the future, just like the past, would be present before its eyes."

Reading this, I started to realize that perfect prediction of the future (say, of a weather event) would be impossible, because you could never observe at a fine enough scale the vast amount of data necessary, or have enough processing power to solve all the equations in a reasonable time.


Then, one rainy day in Port of Spain, I picked up The Essence of Chaos. In this influential 1993 book Edward Lorenz wrote about Laplace's demon. He argued that tiny differences in the input to a weather model could lead to huge differences in its forecasts. This helped me understand the sensitivity of a model to inputs (the butterfly effect) and the difficulty of predicting the future.
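Lorenz's point can be seen in a few lines of code. The sketch below uses the logistic map, a standard textbook example of a chaotic system (the choice of map is mine, not from the book): two starting values that differ by one part in a billion stay close for a handful of iterations, then diverge until the two "forecasts" bear no resemblance to each other.

```python
# Minimal sketch of sensitive dependence on initial conditions,
# using the logistic map x -> r*x*(1 - x) with r = 4, a well-known
# chaotic regime. This illustrates the butterfly effect, not any
# particular weather model.

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0 and return the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# Two "measurements" of the same state, differing by one part in a billion.
a = logistic_trajectory(0.400000000)
b = logistic_trajectory(0.400000001)

# Early on, the trajectories are indistinguishable...
print(abs(a[5] - b[5]))    # still a tiny difference

# ...but the gap grows roughly exponentially with each step, and by
# the end of the run the two trajectories have completely decorrelated.
print(max(abs(x - y) for x, y in zip(a, b)))
```

The per-step error growth is bounded by the map's derivative, so after five steps the gap is still microscopic; after fifty, the initial billionth has been amplified past any useful precision. That is exactly why a weather forecast degrades with lead time no matter how good the model is.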

The Best We Have

Shifting to a consideration of catastrophe modeling: what we are doing, or attempting to do, is actually quite incredible. We are estimating financial losses from events that have not yet happened. Think about that for a second: not just the footprints of events, but far more derived, and far more uncertain, results.

More often than not, cat models are able to estimate a range of losses from actual events in real time that will correspond well with reported figures compiled months after the event. That this is possible at all seems to me a modern marvel, but we should always be mindful that models have their limitations.

While we do our best to validate the models and to calibrate them to historical events, tail losses have not been observed or recorded in many regions. When presenting at the AIR Institute in Hyderabad recently, I was asked how I could prove that a model was mathematically correct. I'm sure there are fancy mathematical proofs of a model's correctness, but I feel that the wider concept we need to accept is that our knowledge has limits.

Today's models are the best yet, but as with tail losses, it is not possible to test everything. There are some things we simply cannot prove; some things will remain unknown.

The science of catastrophe modeling continues to advance. Models will continue to change. We need to embrace change and to manage it. For one thing is certain: the future is change.
