Neutral Technology or a Deeper Entrenchment of Human Bias? The Hidden Dangers of Artificial Intelligence
Artificial intelligence is software designed to perform specific tasks: it can rapidly recognize and analyze patterns to make judgment calls, predict outcomes, and draw conclusions, and it can process large data sets at a speed and scale beyond human capability. One area where artificial intelligence is frequently used is employment decision-making. Sophisticated employers capitalize on these abilities by statistically analyzing the traits and behavior of their successful employees, then screening new applicants for those who appear likely to have similarly successful careers. This may seem like an easy way to build a strong workforce; however, it often produces unintended results.
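To make the mechanics concrete, here is a minimal, hypothetical sketch of the screening approach described above. The data, column names, and model choice are illustrative assumptions only, not a description of any employer's actual system.

```python
# A minimal sketch of hiring-by-pattern-matching, using scikit-learn.
# All names and data here are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical historical data: features of past hires plus a label
# marking whether each employee was rated "successful."
history = pd.DataFrame({
    "years_experience": [1, 3, 5, 2, 8, 4],
    "certifications":   [0, 1, 2, 0, 3, 1],
    "successful":       [0, 1, 1, 0, 1, 1],
})

# Train a model to recognize the statistical profile of past successes.
model = LogisticRegression()
model.fit(history[["years_experience", "certifications"]],
          history["successful"])

# Score new applicants: the model predicts who "looks like" prior
# successful employees -- and silently inherits any bias in that history.
applicants = pd.DataFrame({
    "years_experience": [2, 6],
    "certifications":   [1, 2],
})
print(model.predict_proba(applicants)[:, 1])  # predicted success scores
```

The risk is built into the design: the model is rewarded for reproducing whatever patterns, good or bad, shaped the historical data it was trained on.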
Businesses face a complicated web of federal, state, and local workplace regulations. Whether actions are taken by human employees or by other means, employers of all sizes must remain cognizant of the consequences of those actions, intended or not. Some employers treat the incorporation of artificial intelligence into decision-making as a way to avoid discrimination entirely, promote neutral results, and achieve the best outcomes; in reality, artificial intelligence can reflect systemic biases more strongly than its human counterparts, and in ways that go undetected. Automatically sorting, ranking, and eliminating applicants without human oversight carries real risk. For example, a lawsuit alleged that PricewaterhouseCoopers disproportionately screened out older workers by filtering out applicants who did not have “.edu” email accounts, which are common among young students and recent graduates.
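A short, hypothetical sketch shows how a facially neutral filter like the one alleged in that suit can produce a measurable disparate impact. The applicant data and age split below are invented for illustration; the four-fifths benchmark, however, is the EEOC's familiar rule of thumb for detecting adverse impact.

```python
# Hypothetical applicant pool; data is illustrative only.
applicants = [
    {"email": "jsmith@university.edu", "age": 23},
    {"email": "kdoe@college.edu",      "age": 24},
    {"email": "mlee@gmail.com",        "age": 52},
    {"email": "rpatel@yahoo.com",      "age": 47},
    {"email": "tchan@school.edu",      "age": 22},
    {"email": "bgarcia@outlook.com",   "age": 55},
]

# The facially neutral screen: keep only ".edu" addresses.
passed = [a for a in applicants if a["email"].endswith(".edu")]

def selection_rate(group):
    kept = sum(1 for a in group if a in passed)
    return kept / len(group) if group else 0.0

younger = [a for a in applicants if a["age"] < 40]
older   = [a for a in applicants if a["age"] >= 40]

rate_younger = selection_rate(younger)  # 3/3 = 1.00
rate_older   = selection_rate(older)    # 0/3 = 0.00

# Four-fifths rule: a protected group's selection rate below 80% of the
# most-favored group's rate is evidence of adverse impact.
ratio = rate_older / rate_younger if rate_younger else 0.0
print(f"older/younger selection ratio: {ratio:.2f}")  # 0.00, far below 0.80
```

Nothing in the filter mentions age, yet the outcome splits cleanly along age lines; that is precisely the kind of result that goes unnoticed without human review.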
Employment laws were not designed to police the use of artificial intelligence in hiring decisions. Generally, employers cannot adopt policies that result in disparate treatment of, or a disparate impact on, a protected class of individuals; yet machine learning can find correlations in data sets that no human would notice, and it can learn from the results of past discrimination, treating mere correlation as if it were causation. Reflecting these concerns, proposed bills in Illinois would prevent employers from inadvertently using automated machine learning for credit or hiring decisions in a way that correlates with an applicant’s race or zip code.
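The proxy problem these bills target can be shown in a few lines. In the hypothetical (and deliberately exaggerated) data below, the model never sees race, but zip code carries the same information, so excluding the protected attribute does not remove the bias.

```python
# Hypothetical data illustrating a proxy variable; not real demographics.
import pandas as pd

data = pd.DataFrame({
    # Assume zip 60601 is predominantly group A, 60602 predominantly group B.
    "zip_code": ["60601", "60601", "60601", "60602", "60602", "60602"],
    "group":    ["A", "A", "A", "B", "B", "B"],
    # Past hiring outcomes shaped by historical discrimination.
    "hired":    [1, 1, 1, 0, 0, 1],
})

# Zip code perfectly predicts group membership in this toy data:
print(pd.crosstab(data["zip_code"], data["group"]))

# And past hire rates split cleanly along zip code:
print(data.groupby("zip_code")["hired"].mean())
# 60601    1.000000
# 60602    0.333333

# A model trained on zip code alone would learn to prefer 60601,
# effectively encoding group membership without ever seeing it.
```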
Stephen Schwarzman, billionaire founder of the investment firm Blackstone, recently donated the equivalent of $188 million to the University of Oxford to fund research into the ethics of artificial intelligence. Artificial intelligence has many upsides, including the elimination of repetitive tasks, smarter decision-making, and a reduction in human error; however, there is tension between current law and the rapidly evolving artificial intelligence landscape. Employers need to be vigilant in monitoring and analyzing outcomes when artificial intelligence is used, and should ensure there is diversity of thought among the people designing these systems. While a human’s decision to reject a candidate may be traceable, an artificial intelligence tool’s reasoning for rejecting a candidate may be unknown, or even untraceable by the tool’s own developer. How will a court assign fault to technology capable of acting autonomously? There are currently no laws aimed at addressing injuries caused solely by artificial intelligence. If your organization has adopted artificial intelligence capabilities, or plans to adopt them in the future, be sure to consult with an attorney to determine whether your use is likely to comply with the law or expose you to liability for its actions.
If you have any questions about this post or any other related matters, please email Graham Simmons, Co-Chair of the Norris McLaughlin Business Law Group, at gsimmons@norris-law.com.