AI in the Workplace
Apr 28, 2023
AI decision-making systems are becoming regulatory hotspots.
The increasing use of artificial intelligence (AI) to perform various “human resource” tasks is rapidly becoming a focus for the regulatory community. Employers should be aware of federal and state initiatives concerning the use of AI in the workplace, and they should take steps to ensure that their AI programs and policies comply with emerging guidance and regulations.
What is AI?
Artificial Intelligence (AI) is a computer-based system that uses algorithms to perform tasks that would otherwise require human beings. According to the Equal Employment Opportunity Commission (EEOC), Congress defined AI in the National Artificial Intelligence Initiative Act of 2020. That legislation describes AI as a “machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.” For the EEOC, “machine learning, computer vision, natural language processing and understanding, intelligent decision support systems, and autonomous systems” are examples of AI decision-making tools being integrated into today’s workplace environments.
Is AI bias-free?
Integrating AI into the workplace is a significant step for everyone involved. And, as in any transition period, new questions come to light. People make mistakes. Litigation is inevitable. New regulatory ground is being broken.
For example, many automated employment decision-making tools (AEDT) advertise themselves as “bias-free.” Are they? What does that mean, and how do they accomplish it? What about privacy? Where does the data come from? Who has access to data?
The federal government and several states are asking these questions, developing guidance and tools for employers, and passing legislation — at least at the state level. A review of this activity provides a window into how our regulatory agencies are approaching this topic.
Last year, the EEOC issued technical guidance about using “software, algorithms, and artificial intelligence to assess job applicants and employees.” There are concerns at the EEOC about possible violations of Title VII of the Civil Rights Act of 1964, the Americans with Disabilities Act, and several other Equal Employment Opportunity (EEO) laws.
The Biden Administration also issued an AI Bill of Rights, which makes several recommendations to ensure that the “American public is protected from potential harms, and can fully enjoy the benefits, of automated systems.” The recommendations included in the AI Bill of Rights provide a framework for implementing systems that “protect civil rights, civil liberties, and privacy” and “assist governments and the private sector in moving principles into practice.”
Most recently, in late February, the Federal Trade Commission (FTC) Division of Advertising Practices released new guidance about the use of AI in advertising. This addendum to FTC guidance published in 2020 and 2021 warns against AI washing — the sales practice of mislabeling technology and suggesting that the product delivers AI when it doesn’t. The FTC’s 2020 guidance concerns the ethical use of AI and algorithms, consumer transparency, and compliance with the Fair Credit Reporting Act (FCRA) and the Equal Credit Opportunity Act (ECOA). The 2021 guidance focuses on how to use AI’s benefits without “inadvertently introducing bias or other unfair outcomes.” The FTC explicitly warns employers to avoid violations of Section 5 of the FTC Act, which prohibits unfair or deceptive acts or practices, including “the sale or use of racially biased algorithms.”
Washington D.C.’s Stop Discrimination by Algorithms Act of 2021 “would prohibit algorithmic decision-making in a discriminatory manner and require notices to individuals whose personal information is used in certain algorithms to determine employment, housing, healthcare, and financial lending.” While this bill has yet to pass, it likely will soon.
The American Data Privacy and Protection Act, which contains regulatory language similar to Washington D.C.’s bill, has just been introduced in the U.S. House of Representatives.
There is also significant regulatory activity at the state level. Over the past year, Alabama, Colorado, Connecticut, Illinois, Mississippi, Vermont, and Virginia have either enacted, or have pending, privacy legislation governing automated decision-making.
What should employers do?
Given all the regulatory attention that AI is receiving these days, it would be wise for employers to take an inventory of the AI and other computer-based tools they use to make decisions about employees, consumers, or job candidates. This kind of inventory can get complex, particularly the first time through. In larger companies, it is unlikely that any one person will have the entire picture, and many AI systems are a mix of open-source and proprietary components.
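As an illustration only (not legal guidance), an AI-tool inventory can be kept as a set of structured records that capture, for each tool, who supplies it, what decisions it touches, and whether a compliance review is on file. The field names and example entries below are assumptions chosen for the sketch, not fields drawn from any regulation:

```python
from dataclasses import dataclass, field

@dataclass
class AIToolRecord:
    """One entry in an employer's AI-tool inventory (illustrative fields only)."""
    name: str
    vendor: str                      # "internal" for home-grown tools
    decision_area: str               # e.g. "hiring", "promotion", "scheduling"
    data_sources: list = field(default_factory=list)
    claims_bias_free: bool = False   # vendor marketing claim, to be verified
    last_reviewed: str = ""          # date of most recent compliance review

def flag_for_review(inventory):
    """Return names of tools with no documented compliance review."""
    return [tool.name for tool in inventory if not tool.last_reviewed]

inventory = [
    AIToolRecord("ResumeScreen", "AcmeHR", "hiring",
                 data_sources=["applicant resumes"], claims_bias_free=True),
    AIToolRecord("ShiftOptimizer", "internal", "scheduling",
                 last_reviewed="2023-01-15"),
]

print(flag_for_review(inventory))  # tools still awaiting review
```

Even a minimal record like this makes it easier to spot tools whose “bias-free” claims have never been independently verified, and it creates the paper trail discussed below.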
Once the inventory is complete, the next step is assessing risk: identifying the points of possible exposure wherever AI decision-making systems are used. One tool available to help employers perform this analysis is the Assessment List for Trustworthy Artificial Intelligence, developed by the “Independent High-Level Expert Group on Artificial Intelligence” set up by the European Commission. It is designed for self-assessment and may be a useful resource for helping employers manage their AI systems and mitigate risk. Also, as AI decision-making technology evolves, a record of the employer’s efforts to minimize the risk of unlawful discrimination introduced via AI could make a big difference in demonstrating good faith, should the employer become involved in litigation.
If you have any questions or concerns about the AI decision-making tools you are using now — or planning to implement soon — don’t hesitate to contact Orr & Reno for assistance.
About the Authors: Steven L. Winer and Lindsay Nadeau