Managing AI in the Workplace

The federal government is concerned about the increasing use of Artificial Intelligence (AI) technology and other automated systems in the workplace, particularly those used in the hiring process. Several government actions this year have provided a window into how regulatory agencies are approaching the AI phenomenon. The government is actively working to create a common vocabulary for discussing the issue, as well as a methodology for managing the technology to ensure fairness.

Federal agencies are also figuring out their enforcement plans, and employers are being asked to ensure that their automated systems don’t violate the law. Title VII of the Civil Rights Act of 1964, which prohibits employment discrimination based on race, color, religion, national origin, or sex (including pregnancy status, sexual orientation, and gender identity), is front and center when it comes to using AI lawfully in the workplace.

Monitoring Automated Systems

In late April, four federal agencies — the Equal Employment Opportunity Commission (EEOC), the Consumer Financial Protection Bureau (CFPB), the Federal Trade Commission (FTC), and the United States Department of Justice (DOJ) — published a joint statement affirming their authority to monitor automated systems and pledging to use that authority “to protect the public from bias in automated systems and artificial intelligence.” The government defines “automated systems” quite broadly: the phrase covers AI systems and “any software and algorithmic process used to automate workflow and help people complete tasks or make decisions.”

This joint statement from the EEOC, CFPB, FTC, and DOJ highlighted three modes of potential discrimination:

  • Data and Datasets. Automated systems rely on large volumes of data, looking for patterns and correlations and then applying those patterns to new datasets. The potential problem rests in the source and integrity of the underlying data: datasets can have embedded bias, leading to discriminatory outcomes when the learned patterns are applied to new data. (A brief hypothetical sketch of this mechanism follows this list.)
  • Model Opacity and Access. Most automated systems are extraordinarily complex and challenging for anyone — even the developer — to evaluate for fairness. If we can’t see what’s happening, how can we manage it?
  • Design and Use. Developers may design a system based upon flawed assumptions about users, context, and the underlying practices or procedures it may replace.
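
To make the first bullet concrete, here is a minimal, hypothetical sketch in Python of how a pattern learned from skewed historical data can end up rewarding a proxy for a protected characteristic. Every keyword, weight, and resume in the sketch is invented for illustration; it does not depict any actual screening product:

```python
# Hypothetical sketch: how bias embedded in historical training data can
# surface in an automated screening tool. All keywords, weights, and
# resumes below are invented for illustration only.

# Suppose a resume scanner "learned" keyword weights from past hiring
# decisions. If past hires skewed toward one demographic group, keywords
# that merely correlate with that group can pick up positive weight.
learned_keyword_weights = {
    "python": 2.0,           # plausibly job-related
    "project management": 1.5,
    "lacrosse club": 1.2,    # proxy keyword: correlates with demographics,
                             # not with job performance
}

def score_resume(resume_text: str) -> float:
    """Sum the learned weights of keywords found in the resume."""
    text = resume_text.lower()
    return sum(w for kw, w in learned_keyword_weights.items() if kw in text)

# Two candidates with identical job-related qualifications.
candidate_a = "Python developer with project management experience"
candidate_b = "Python developer, project management, lacrosse club captain"

print(score_resume(candidate_a))  # 3.5
print(score_resume(candidate_b))  # 4.7 (ranked higher for a non-job-related reason)
```

The facially neutral scanner never sees a protected characteristic, yet it ranks otherwise identical candidates differently, which is exactly the kind of embedded bias the joint statement warns about.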

Enforcement Focused on Hiring Practices

In early January, the EEOC published a draft of its Strategic Enforcement Plan (SEP) that explicitly notes the agency’s plan to focus “on the use of automated systems, including artificial intelligence or machine learning, to target job advertisements, recruit applicants, or make or assist in hiring decisions where such systems intentionally exclude or adversely impact a protected group.” The EEOC also notes in the SEP that it plans to keep an eye on restrictive employment application processes or systems, including “online systems that are difficult for individuals with disabilities or other protected groups to access.”

It has also been reported that all EEOC staff members are being trained to identify AI-related issues in their enforcement investigations.

Technical Assistance from the EEOC

On May 18, 2023, the EEOC released a technical assistance document to help employers understand how their use of AI and automated systems intersects with Title VII of the Civil Rights Act and to ensure they’re not violating it. How do the critical aspects of Title VII apply to an employer’s use of automated tools and systems?

The EEOC provides several examples of AI tools used in employment hiring procedures that require scrutiny, including:

  • Resume scanners that prioritize applications using specific keywords
  • Employee monitoring software that rates employees based on their keystrokes (or other factors)
  • Virtual assistants or “chatbots” that ask job candidates about their qualifications and reject those who do not meet pre-defined requirements
  • Video interviewing software that evaluates candidates based on facial expressions and speech patterns
  • Testing software that provides “job fit” scores for applicants or employees based on their personalities, aptitudes, cognitive skills, or perceived “cultural fit” based on performance in a game or a more traditional test

The document explores the concepts of disparate treatment and adverse impact, and how seemingly neutral employment policies and practices — applied to all employees and applicants — can sometimes disproportionately exclude women or minorities. The only exception is if the employer can demonstrate that using a particular algorithmic filter is “job-related” and “consistent with business necessity,” a legal standard that has long been applied in other, non-AI employment discrimination contexts.
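
One measure of adverse impact discussed in the technical assistance document is the long-standing “four-fifths rule” of thumb: if the selection rate for one group is less than four-fifths (80%) of the rate for the most-selected group, the disparity may indicate adverse impact. The short Python sketch below illustrates that arithmetic; the applicant counts are invented for illustration:

```python
# Minimal illustrative sketch of the "four-fifths rule" arithmetic.
# All applicant counts below are invented; this is not real data.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants who advance past the selection step."""
    return selected / applicants

# Hypothetical outcomes from an algorithmic screening step.
rate_men = selection_rate(48, 80)    # 0.60 -> 60% advance
rate_women = selection_rate(24, 60)  # 0.40 -> 40% advance

# Impact ratio: the lower group's rate divided by the higher group's rate.
impact_ratio = rate_women / rate_men  # 0.40 / 0.60 ~= 0.67

print(f"Impact ratio: {impact_ratio:.2f}")
if impact_ratio < 0.8:
    print("Below four-fifths (0.80): possible adverse impact, warranting review")
else:
    print("Within the four-fifths guideline")
```

The EEOC document treats the four-fifths rule as a rule of thumb rather than a safe harbor; smaller disparities can still matter in some circumstances, so this check is a screening step, not a legal conclusion.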

Stay Informed

Employers using or considering automated tools in recruitment and hiring are advised to follow this topic as it unfolds. The federal government and numerous state and municipal governments have made their intent to regulate very clear. Employers are encouraged to analyze their automated systems thoughtfully.

If you have any questions or concerns about the automated AI decision-making tools you are currently using, or planning to implement soon, don’t hesitate to contact Orr & Reno for assistance.

About the Authors: Steven L. Winer and Lindsay E. Nadeau
