By Serge Gagarin, Lucian McMahon | April 29, 2021

As companies increasingly look to technology to automate many of their existing business processes, it should come as no surprise that they are also looking to automate parts of their human resources processes. In 2018, more than two-thirds of recruiters and hiring managers reported that algorithms saved them time during the recruitment process.

While these processes have traditionally been very hands-on, ceding some of the tedious parts of hiring to automated systems holds significant appeal. Driven by the desire not only to minimize discrimination but also to save time and drive down costs, employers are increasingly using machine learning algorithms to support recruitment decisions.

Built-In Bias

Introducing man-made algorithms into a decision-making process, however, adds another element of risk for employers. In the past, we’ve discussed how developers, even those with the best intentions, can inadvertently codify biases in the algorithms being developed for advertising jobs, recruiting candidates, and screening resumes. In short, if the data set used to train an algorithm is biased toward a specific gender or group (for example, because it reflects biased historical hiring practices), the algorithm can produce biased results as it seeks to replicate that data set, and thereby discriminate against one or more of these groups.
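
The mechanism is easy to demonstrate. Below is a minimal, purely illustrative Python sketch using synthetic data (it is not any vendor’s actual screening model) of how a classifier trained on biased historical hiring decisions learns to penalize the affected group:

# Hypothetical sketch: a screening model trained on biased historical
# hiring decisions reproduces that bias. All data is synthetic and all
# feature names and rates are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic applicant pool: a qualification score and a gender flag (1 = female).
qualification = rng.normal(size=n)
is_female = rng.integers(0, 2, size=n)

# Biased historical labels: past recruiters hired on qualification,
# but were systematically less likely to hire women.
hired = (qualification - 0.8 * is_female + rng.normal(scale=0.5, size=n)) > 0

# Train a screening model on those historical decisions.
model = LogisticRegression().fit(
    np.column_stack([qualification, is_female]), hired
)

# The learned coefficient on the gender flag is negative: the model has
# codified the historical bias and will replicate it on new applicants.
print("coefficient on is_female:", model.coef_[0][1])

Note that even if the sensitive attribute is removed at prediction time, correlated proxy features can carry the same signal, which is one reason such bias can persist unnoticed.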

These algorithms are typically considered proprietary “trade secrets” and are generally inaccessible or inscrutable to outsiders. In some cases, the algorithm developers themselves may not know why an algorithm makes the decisions it does—so these are truly “black boxes.” Algorithms can therefore potentially perpetuate these biases in a non-transparent way over multiple years.

Because some of these potential biases, such as those regarding gender and race, may have a discriminatory impact and thus violate both federal and state laws, employers could break the law without even knowing it. If the algorithms a company uses are shown to be biased in the candidates they target, the employer could find itself in court facing massive employment discrimination lawsuits that would put its employment practices liability insurance (EPLI) coverage to the test.

The potential for bias may not be limited to a handful of companies. If recruitment algorithms in general are found to exhibit bias, this could give rise to significant risk accumulation, in which discrimination against protected groups occurs (even unwittingly) on a widespread and systemic scale across many companies and many industries.

Scenarios for Estimating Potential Losses

As of this writing, there have been no reports of large-scale litigation or losses due to algorithmic bias; however, to help (re)insurers understand and manage their potential exposure to such an event should it emerge, Arium is developing a set of scenarios to quantify an organization’s potential liability. This group of scenarios estimates the potential losses from gender discrimination in recruitment against women arising from hidden algorithmic bias in the U.S. for the years 2015 to 2019. Given the uncertainty around how the risk may develop, these scenarios provide a range of potential estimated losses, up to several billions of dollars in economic damages. Arium is also developing additional scenarios to estimate potential losses from algorithmic bias in future years.
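
To make the idea of a scenario-based loss range concrete, here is a toy Monte Carlo sketch in Python. The distributions and parameter values are assumptions chosen purely for illustration; they do not reflect Arium’s scenarios or calibration.

# Illustrative scenario aggregation: combine uncertain inputs to produce
# a distribution of industry-wide losses. All parameters are assumptions.
import numpy as np

rng = np.random.default_rng(42)
n_sims = 100_000

# Assumed uncertain inputs for one scenario year:
companies_exposed = rng.poisson(lam=50, size=n_sims)                  # firms using a biased algorithm
claimants_per_co = rng.lognormal(mean=5.0, sigma=1.0, size=n_sims)    # affected applicants per firm
damages_per_claim = rng.lognormal(mean=9.5, sigma=0.8, size=n_sims)   # economic damages per claimant (USD)

aggregate_loss = companies_exposed * claimants_per_co * damages_per_claim

# Summarize the resulting distribution of aggregate losses.
for q in (0.50, 0.90, 0.99):
    print(f"{int(q * 100)}th percentile: ${np.quantile(aggregate_loss, q):,.0f}")

A framework of this kind makes it straightforward to vary assumptions, such as how many companies are exposed or how large awards might be, and to see how the tail of the loss distribution responds.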

Please contact Arium for more information about this and other emerging risk scenarios.
