By Suilou Huang | July 17, 2017

Try as you may, you can no more lift yourself up by your bootstraps than you can by pulling on your hair. The term "bootstrapping" has come to mean doing something apparently magical or impossible, and it means different things in different fields. In statistics, bootstrapping is any test or metric that relies on random sampling with replacement. It is a computer-intensive resampling technique in which a relatively simple procedure is repeated many times. Bootstrapping was first introduced by Bradley Efron in 1979, and with the availability of ever-more computing power it has become widely used.

Understanding the uncertainty involved in a loss estimate is always important in the financial sector. However, quantifying the uncertainty is often limited by data availability. For example, we generally use all our data to construct an exceedance probability (EP) curve of losses. To put error bars on the EP curve, we either need to know or assume the distribution of the losses (e.g., a normal distribution), or we need to have several similar data sets for the estimation. If neither is available, how do we estimate the upper and lower bounds of each loss value on the EP curve?

Bootstrapping comes in handy for estimating the uncertainty bounds with just one data set; in statistical terminology, this is called one-sample estimation. To illustrate the technique, let's take a look at how we prepared a table for the Florida Commission on Hurricane Loss Projection Methodology when the AIR Hurricane Model for the U.S. was submitted for approval earlier this year. AIR was required to provide the uncertainties of the personal and commercial residential probable maximum loss for various return periods. Table 1 displays the figures taken directly from the submission.

Table 1. Personal and Commercial Residential Probable Maximum Loss for Florida (Annual Aggregate). (Source: AIR)

| Return Period (Years) | Estimated Loss Level (USD Millions) | Uncertainty Interval (USD Millions) |
|---|---|---|
| Top Event | 341,541 | 297,016 to - |
| 1,000 | 150,697 | 137,537 to 158,540 |
| 500 | 120,232 | 111,883 to 127,384 |
| 250 | 97,022 | 93,396 to 101,145 |
| 100 | 64,718 | 61,614 to 67,017 |
| 50 | 44,076 | 42,253 to 45,549 |
| 20 | 22,339 | 21,541 to 23,048 |
| 10 | 11,212 | 10,908 to 11,572 |
| 5 | 4,028 | 3,901 to 4,157 |

The estimated annual losses (Column 2 in Table 1) for various return periods (Column 1) were obtained by using the AIR Hurricane Model for the U.S. along with the AIR 50K stochastic catalog. Note that this catalog is considered one sample in the sense that we only "know" the annual losses in a 50K-year period. If we believe that these 50K values (i.e., one annual aggregate each year from Year 1 to Year 50,000) are all we could get in another 50K-year period, that the years are uncorrelated with one another, and that each value is equally likely, then we can view the population as one large pool with an infinite supply of each of these 50K values.

With these assumptions, we can draw many 50K-year samples (say, 100K of them) from our existing 50K-year sample and compute Column 2 for each of these 100K (bootstrapped) samples. For each return period (say, the 1,000-year return period), we sort the 100K values from low to high and take the 2.5% (i.e., 2,500th) and 97.5% (i.e., 97,500th) values. These become the 95% uncertainty interval (Column 3). Note that because there is an infinite supply of each of the 50K values, some values will likely be drawn many times within a sample; we therefore need to set the sampling scheme to "sampling with replacement." In this way, we construct the distribution of the annual loss for each return period and obtain the 95% confidence values.
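The procedure above can be sketched in a few lines of Python with NumPy. This is a minimal illustration, not AIR's implementation: the losses below are synthetic stand-ins for the 50K-year catalog (the real catalog is proprietary), and only 1,000 bootstrap samples are drawn rather than 100K to keep the run fast.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical annual aggregate losses: a synthetic stand-in for the
# 50K-year stochastic catalog described in the text.
annual_losses = rng.lognormal(mean=8.0, sigma=1.5, size=50_000)

def return_period_loss(sample, return_period):
    """Loss exceeded on average once every `return_period` years,
    i.e., the (1 - 1/T) quantile of the annual aggregate losses."""
    return np.quantile(sample, 1.0 - 1.0 / return_period)

n_boot = 1_000   # the blog uses 100K; fewer here for speed
T = 100          # e.g., the 100-year return period

boot_estimates = np.empty(n_boot)
for b in range(n_boot):
    # Sampling WITH replacement: each catalog value may appear
    # multiple times (or not at all) in a resample.
    resample = rng.choice(annual_losses, size=annual_losses.size,
                          replace=True)
    boot_estimates[b] = return_period_loss(resample, T)

# 95% uncertainty interval: the 2.5th and 97.5th percentiles of the
# sorted bootstrap estimates.
lower, upper = np.quantile(boot_estimates, [0.025, 0.975])
```

The same loop, with `return_period_loss` evaluated at each return period of interest, produces every row of Column 3 in Table 1.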

Questions about uncertainty

Bootstrapping can be used to answer questions about the uncertainty of a loss estimate for any return period without making assumptions about the loss distribution. For example, if an AIR client wants to validate an AIR model against their own 35 years of claims data, we could construct a 35-year EP curve from the AIR model with uncertainty bounds using this method, and then assess whether the client's data fall within those bounds. In addition to annual aggregates, uncertainty intervals for other test statistics (such as the median annual loss, the annual occurrence loss, or even the ratio of the 1,000-year return-period loss to the 100-year return-period loss) can be constructed using the bootstrapping technique.
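Nothing in the procedure is specific to the annual aggregate: any function of the sample can be bootstrapped the same way. As a hedged sketch (again with synthetic losses, not a real catalog), here is the ratio statistic mentioned above, the 1,000-year return-period loss divided by the 100-year return-period loss:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical annual aggregate losses standing in for a real catalog.
losses = rng.lognormal(mean=8.0, sigma=1.5, size=50_000)

def ratio_stat(sample):
    # Ratio of the 1,000-year to the 100-year return-period loss.
    q1000 = np.quantile(sample, 1.0 - 1.0 / 1000)
    q100 = np.quantile(sample, 1.0 - 1.0 / 100)
    return q1000 / q100

# Bootstrap the ratio exactly as before: resample with replacement,
# recompute the statistic, and take the 2.5%/97.5% percentiles.
boot = [ratio_stat(rng.choice(losses, size=losses.size, replace=True))
        for _ in range(1_000)]
lo, hi = np.quantile(boot, [0.025, 0.975])
```

Only `ratio_stat` changes from one statistic to the next; the resampling loop and the percentile step stay the same.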

We do not make any assumptions about the distribution of the test statistic (the annual aggregate in our example); instead, we construct that distribution directly, assuming only that the values in our sample are good representatives of the entire population. Note that bias-correction techniques exist for bootstrap uncertainty intervals, needed when the bootstrap median deviates from the original sample estimate, but this topic is beyond the scope of this short blog. Finally, the bootstrapping technique tends to produce larger (i.e., more conservative) uncertainty intervals than parametric (i.e., distribution-based) methods do. But it's still pretty cool.

