The Algorithmic Accountability Act of 2019

AI and other advanced technologies are rapidly changing the workplace as well as the marketplace. The ever-increasing sophistication of robotics, the use of biometrics (retinal scans, fingerprint information, facial recognition) for identification, wearable technology (smartwatches and clips, helmets, exoskeletons), autonomous vehicle development, and the use of AI for human resource functions are revolutionizing how businesses are managed today. These technologies have also become an important part of how businesses strategically plan to meet goals, increase profitability, and remain competitive in the future.

For regulators, all these rapidly emerging technologies also raise concerns about privacy, ethics/discrimination, and safety.

An important development in understanding how the federal government is working to develop some guidelines for employers to follow — and also enhance federal oversight of AI initiatives, data management, and data privacy — is the Algorithmic Accountability Act of 2019 (AAA). The AAA was introduced by congressional Democrats last spring and has little likelihood of getting through Congress now, but it does provide a fairly clear sense of what could be on the near horizon in this new regulatory area.

The Algorithmic Accountability Act, as introduced, is very similar to the European Union General Data Protection Regulation (GDPR), which is the EU law on data protection and privacy. The GDPR aims to create a broad, mutually agreed-upon understanding of data management and privacy expectations among EU members in order to simplify the regulatory environment for conducting international business.

Similar to the GDPR, the AAA, as proposed, covers any “automated decision system” that “makes a decision or facilitates human decision-making.” This broad definition can be interpreted to include many business processes — including marketing and management systems, as well as fraud detection and loan processing systems commonly used in the finance industry.

Under the provisions of the AAA, businesses of a certain size and capitalization would be required to conduct an impact assessment that covers the risks associated with the accuracy, fairness, bias, discrimination, privacy, and security of their AI systems. The impact assessment would also cover the decision system’s purpose, how it was designed, and how the data that it uses was collected.

There is currently strong support for AI development at the White House. The Trump Administration issued an executive order earlier this year about remaining a leader in AI and has launched whitehouse.gov/ai to promote the concept. Also, in July, the Office of Management and Budget issued a request for information from government agencies about their AI initiatives and asked for comment about which models and practices are most important in AI research and development (R&D).

States are also starting to act. In 2018, California passed the California Consumer Privacy Act, which goes into effect in January 2020. It is comprehensive and ambitious legislation, but its ultimate scope and impact on business practices remain unclear. That’s something to watch in the months ahead.

As employers increasingly rely on AI and other advanced technologies to remain competitive, we can expect increasing regulatory activity as well. If you have any questions or concerns about your organization’s implementation of new technology — and compliance with emerging regulations — feel free to contact me.

About the Author: James Laboe



Since 1946, Orr & Reno has strived to provide our clients with high-quality, ethical and valued legal services; foster a collegial work environment; support professional and personal balance; and invest in our community.
