Artificial intelligence tools are becoming increasingly popular for human capital tasks such as hiring and promoting employees. In fact, more than half of the human resources managers surveyed by CareerBuilder in 2017 revealed that they had plans to make AI a regular tool in their arsenal within the next five years.
However, the popular AI technique of pattern matching, or prioritizing familiar traits, can result in institutional bias that holds women back by emphasizing traditionally masculine traits.
How Do AI Tools Promote Bias?
AI tools in the workplace rely on historical data about hiring, employee evaluations and promotions to recognize patterns and make assessments. In the case of AI human capital tools, a technique called natural language processing can inject bias into the analysis by favoring text that contains words associated with masculinity.
Amazon ran into this kind of AI bias when it explored automating hiring. Computer models were tasked with discovering patterns in previously submitted resumes to determine which metrics led to a hire. The AI recruiting engine would then compare new resumes against the indicators found in previously successful resumes and provide a determination.
Since the majority of the resumes were from men, the tool “learned” that resumes containing words that appeared more commonly on men’s resumes, such as “executed” and “captured,” were more likely to belong to successful candidates. Resumes that included the word “women’s” (as in women’s basketball or women’s studies) appeared less often in the successful category, so the engine scored them lower.
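To make the mechanism concrete, here is a minimal sketch of how a text classifier trained on skewed historical data can absorb gendered word associations. The tiny dataset, the labels and the word choices below are invented for illustration and use scikit-learn’s standard bag-of-words pipeline; Amazon’s actual model and training data were never made public.

```python
# Minimal illustration of how a resume classifier trained on skewed
# historical data can learn gendered word weights. The dataset and
# word choices are invented for illustration only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy "historical" resumes: hires (label 1) skew toward male-coded wording.
resumes = [
    "executed product launch and captured new market share",   # hired
    "executed migration plan, led deployment",                  # hired
    "captured requirements and executed roadmap",               # hired
    "women's chess club captain, built analytics dashboards",   # not hired
    "women's studies minor, managed student research team",     # not hired
    "coordinated volunteer program and tutoring sessions",      # not hired
]
hired = [1, 1, 1, 0, 0, 0]

vectorizer = CountVectorizer()           # bag-of-words features
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Inspect the learned weights: words frequent in the "hired" class get
# positive weight, while "women" (tokenized from "women's") gets a
# negative weight, purely because of the imbalance in the training data.
weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
for word in ("executed", "captured", "women"):
    print(f"{word}: {weights[word]:+.2f}")
```

Nothing in this sketch looks at gender directly; the penalty on “women” emerges solely from which resumes happened to be labeled successful in the past.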
The Root of AI Bias Is Humans
These results may not come as a surprise in the tech industry, which is no stranger to gender gaps. But any industry can be susceptible to this kind of disparity.
Career-related decisions such as hiring, compensation, performance reviews and promotions are all open to bias. These biases can stem from individuals or from the organizational culture itself, and they can be difficult to recognize.
When the data that AI learns from is biased, the results will reflect that bias. Biases carried into the analysis may then go unnoticed, since artificial intelligence is often thought of as inherently unbiased.
Limitations of Machine Learning
Human bias poses a problem for machine-learning algorithms designed to identify patterns in datasets. Bias in the input means bias in the outcome: if the computer is fed data about past hires and human biases have led to hiring disparities, then underrepresented candidates will be underrepresented in the resulting recommendations as well. In Amazon’s case, the AI picked up on a long-term pattern of unbalanced hiring and ran with it, amplifying and perpetuating the gender gap.
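That amplification can be sketched with a rough back-of-the-envelope simulation. The starting share and the strength of the model’s preference for the majority pattern below are assumed values, not figures from the Amazon case; the point is only that recommending candidates who resemble past hires shrinks the minority share a little more with every hiring round.

```python
# Rough simulation (invented numbers) of a pattern-matching feedback loop:
# if the model recommends candidates in proportion to how closely they
# resemble past hires, each round of hiring skews the history further.
share_women_hired = 0.30      # assumed starting share of women among past hires
preference_strength = 1.5     # assumed strength of the model's tilt toward the majority pattern

for round_number in range(1, 6):
    # The minority group's selection rate falls below its current share
    # because the model scores candidates by resemblance to past hires.
    odds = share_women_hired / (1 - share_women_hired)
    biased_odds = odds / preference_strength
    share_women_hired = biased_odds / (1 + biased_odds)
    print(f"after round {round_number}: women are {share_women_hired:.1%} of hires")
```

Under these assumptions the share drops every round, which is the loop the article describes: yesterday’s imbalance becomes tomorrow’s training data.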
Mitigating AI Biases
A number of tools aim to solve the problem of AI pattern-matching bias. Pipeline’s analytical platform demonstrates the financial impact of closing gender equity gaps by detecting costly biases in human capital decisions and then making recommendations that improve the business’s financial outcomes.
Organizations must examine and adapt the way they make human capital decisions. Uncovering the biases in industry and company cultures, as well as in individual employees, is key to ensuring that the AI used to assist in decision making isn’t relying on skewed data that widens the gender equity gap.