By Robin Wilkinson, Christian Hughes | October 24, 2019

Editor’s note: This article first appeared in Insurance Day on July 12, 2019.

The casualty insurance landscape is constantly evolving as new risks assume greater prominence and new insurance products are developed to help organizations mitigate those emerging risks. One insurance product that arose more than 30 years ago and continues to grow today is employment practices liability insurance (EPLI).

EPLI was originally introduced following a societal shift in attitudes toward, and awareness of, new regulations governing employment practices. Looking at the history of how this area of risk developed and how the insurance industry responded can provide insights into the industry’s options as a new type of global employment risk emerges—one rooted in the intrinsic biases of the supposedly impartial algorithms developed to help employers identify and hire the “right” people.

EPLI was first developed in the late 1980s and early 1990s as a supplement to general liability policies. At the time, most companies did not recognize the need to purchase this type of coverage, likely because they were unaware of their exposure to liability risk stemming from their employment practices.

However, the potential for liability became more evident when several federal laws addressing employment-related issues were enacted, such as the Whistleblower Protection Act of 1989, the Americans with Disabilities Act of 1990, and the Family and Medical Leave Act of 1993. This created an opportunity for insurers, which seized the chance to provide protection for a sector of the market that was substantially underserved.

In 2019 the EPLI market is projected to reach USD 2.7 billion in gross written premium, with the coverage significantly more common among larger firms. Some 41% of firms with more than 1,000 workers report having coverage for sexual harassment and discrimination claims. By contrast, only about one-third of companies with between 500 and 1,000 employees carry such coverage, and as few as 3% of companies with fewer than 50 employees have this type of insurance.

Waking Up

While employers were waking up to the fact that their internal processes for hiring and promoting employees might need to change, machine learning algorithms were being developed to help companies automate many of these same activities, with the aim of speeding up processes, cutting costs, and reducing human error through data-driven decision making. Machine learning algorithms are software programs that learn how to make decisions from data and improve with experience, without additional human intervention.
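To make that concrete, here is a minimal sketch of the kind of screening model an HR team might train. It assumes Python with pandas and scikit-learn; the feature names, data values, and choice of model are hypothetical and purely illustrative, not any vendor's actual system.

```python
# A minimal sketch of how a resume-screening model "learns" from data.
# The feature names, data, and model choice are hypothetical, for illustration only.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Historical hiring records: each row is a past applicant, and "hired"
# records the human decision the model will learn to imitate.
history = pd.DataFrame({
    "years_experience": [1, 3, 5, 7, 2, 8, 4, 6],
    "skills_test_score": [55, 70, 80, 90, 60, 85, 75, 88],
    "hired":             [0,  0,  1,  1,  0,  1,  0,  1],
})

features = ["years_experience", "skills_test_score"]
model = LogisticRegression()
model.fit(history[features], history["hired"])  # learn from past decisions

# Score a new applicant with no further human instruction.
new_applicant = pd.DataFrame({"years_experience": [4], "skills_test_score": [82]})
print(model.predict_proba(new_applicant[features])[0, 1])  # estimated probability of "hire"
```

Note that nothing in this process requires a human to spell out decision rules: the model infers them entirely from whatever past decisions it is shown.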

Human resources departments and recruiting agencies quickly recognized the power these algorithms provided, enabling them to assess large groups of candidates quickly and, it seemed, impartially. Today, these algorithms help companies with everything from advertising open positions, to screening résumés, to testing applicants during the hiring process, and even to identifying internal candidates for promotion. It is estimated that 35% of recruiters in the U.S. use some degree of machine learning technology, a figure expected to grow to 75% by 2021.

Initially, it was thought that the use of these types of algorithms by employers would effectively combat biases in the recruiting, screening, hiring, retention, and promotion of employees.

However, their growing use has begun to raise significant concerns they may not always be playing fair. Machine learning algorithms can inadvertently incorporate the biases of their human creators and, even more so, the biases of the historical data sets on which they are trained. The algorithms continue to learn from these data sets, without the need for additional explicit instruction to guide their decisions. So, if the underlying data sets used for training the algorithm are biased toward a specific gender or other group, the algorithm will naturally provide biased results that mirror the data set it learned from—perpetuating historical discrimination on a mass scale in a way that is nearly invisible.
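This toy simulation (again, not a reconstruction of any real system) shows how biased historical decisions leak into a model's output even when the protected attribute is deliberately excluded from training. It assumes numpy and scikit-learn, and all data are synthetic.

```python
# Toy illustration: historical bias survives even when the protected
# attribute is dropped, because an innocuous-looking proxy carries it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)            # protected attribute (e.g., gender)
skill = rng.normal(0, 1, n)              # genuine qualification signal
proxy = group + rng.normal(0, 0.5, n)    # "neutral" feature correlated with group

# Historical hiring decisions were biased: group 1 was favored regardless of skill.
hired = (skill + 2.0 * group + rng.normal(0, 1, n) > 1.5).astype(int)

# Train only on ostensibly neutral features; the protected attribute is excluded.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)

# The model still selects the historically favored group far more often.
picks = model.predict(X)
for g in (0, 1):
    print(f"group {g} selection rate: {picks[group == g].mean():.2f}")
```

Running this prints starkly different selection rates for the two groups, because the model has learned to weight the proxy feature heavily: the bias in the labels reappears in the predictions, with no one having told the model about group membership at all.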

Systemic Bias

These inherent biases can exist virtually anywhere. A 2014 Carnegie Mellon study found Google's online advertising system showed ads for high-income jobs to men far more often than to women. In 2018, Amazon scrapped an AI recruiting tool after it was found to demonstrate bias against women.

When discriminatory biases that run afoul of federal or state law are uncovered, the algorithms and their training data sets can themselves become evidence of systemic bias, helping to provide the basis for a class action lawsuit against an organization.
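For a sense of how such bias is commonly quantified in U.S. employment-discrimination analysis, consider the EEOC's "four-fifths" guideline from the Uniform Guidelines on Employee Selection Procedures: a selection rate for any group below 80% of the highest group's rate is generally regarded as evidence of adverse impact. The short worked computation below uses made-up applicant counts purely for illustration.

```python
# Illustrative adverse-impact check based on the EEOC's "four-fifths" guideline.
# The applicant and selection counts below are invented for this example.
selected = {"group_a": 48, "group_b": 24}
applied  = {"group_a": 100, "group_b": 100}

rates = {g: selected[g] / applied[g] for g in selected}
highest = max(rates.values())

for g, rate in rates.items():
    ratio = rate / highest
    flag = "potential adverse impact" if ratio < 0.8 else "within guideline"
    print(f"{g}: selection rate {rate:.0%}, impact ratio {ratio:.2f} -> {flag}")
```

Here group_b's impact ratio of 0.50 falls well below the 0.8 threshold, the kind of statistical disparity that, produced at scale by an automated screening tool, could anchor exactly the class action claims described above.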

These types of claims present a new source of risk for providers of EPLI coverage, and it remains to be seen how the insurance industry will react as this formerly latent risk rapidly becomes more visible. Research from AIR estimates that economic losses from claims based on algorithmic hiring bias could reach USD 3 billion over the next 10 years, and that non-economic losses, such as punitive damages and legal costs, could easily be orders of magnitude higher. Concerns about such high loss levels could lead some insurers to try to exclude these claims from their standard EPLI or directors' and officers' policies to protect their balance sheets.

However, just as we saw with the rise of EPLI coverage 30 years ago, with an emerging risk comes a new potential opportunity.

Forward-thinking insurers are instead likely to educate their clients: despite an employer's best attempts to combat bias by using algorithms, unforeseen biases may lurk within those algorithms, leaving the employer exposed to discrimination allegations that would be covered under an EPLI policy.

Insurers may need to adjust their limits, retentions, and pricing to respond to this risk, and focus on making insureds aware of algorithmic bias. EPLI policies will likely play a key role in ensuring that otherwise profitable companies are not faced with outsized losses from inadvertently using biased algorithms in their employment-related decision making, giving these companies time to effectively manage this new and insidious type of discrimination exposure.


Check out Arium: finally, casualty insurers can benefit from modeling risk, just as property insurers have for the past 30 years.



