All Artificial Hands on Deck
Nov 15, 2023
The President’s recent Executive Order on AI mobilizes the federal government for action, and has implications for employers
President Biden’s recent Executive Order (EO) — entitled “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” — is wide-ranging in scope and ambitious in its goals. It is also a much-needed starting point for framing our rapidly evolving national — and international — discussion on this topic.
The EO, published in the Federal Register on November 1, 2023, is just the latest in a series of initiatives over the past year reflecting concern at the White House and in Congress about AI technology. Last October, the administration released its Blueprint for an AI Bill of Rights, which outlines how the federal government is approaching the regulation of AI from a policy perspective. The intent of the Blueprint is to “help guide the design, use, and deployment of automated systems to protect the American Public.” It is aimed at the corporate developers of AI technology — entities that aren’t accustomed to having restrictions placed on their activities.
This most recent EO focuses on establishing guidelines, standards, and best practices to ensure AI safety and security throughout the development and deployment process while adding details about how federal agencies will accomplish this herculean task in the coming months.
Biden also signed Executive Order 14091 in February 2023, directing federal agencies to root out bias in artificial intelligence technologies used by the federal government and to combat algorithmic discrimination. To accomplish this goal, the Executive Order included $140 million for the National Science Foundation (NSF) to create AI research institutes across the country.
These recently established AI Institutes are currently conducting foundational AI research that promotes the development of “ethical and trustworthy AI systems and technologies.” The AI Institutes are also charged with developing novel approaches to cybersecurity, contributing innovative solutions to climate change, expanding our understanding of the brain, and leveraging AI capabilities to enhance education and public health. In addition, the Institutes are conducting research to support the development of a diverse AI workforce “to address the risks and potential harms posed by AI.”
However, it was the launch of ChatGPT on November 30, 2022 — and its universal (it’s free!) availability — that made the regulation of generative AI systems a central and urgent priority at the White House. With ChatGPT, the concern about AI technology became public in a way it had never been before. Anyone who has experimented with ChatGPT can quickly see both its exciting potential usefulness and its alarming potential for abuse.
A massive regulatory effort
Broadly summarized, the President’s newest EO directs agencies across the federal government to immediately begin working on best practices and regulations to promote AI safety, cybersecurity, privacy, fairness, and competition. Guidance for how the White House expects agencies to accomplish this is outlined in eight sections, each focused on a different area of operational concern: ensuring the safety and security of AI technology; promoting innovation and competition; supporting workers; advancing equity and civil rights; protecting consumers, patients, passengers, and students; protecting privacy; advancing the federal use of AI; and strengthening American leadership abroad on this issue.
Over the next year, various federal government agencies and departments will be developing guidance on the responsible use of AI in areas like criminal justice, education, health care, housing, and labor. Agencies will also establish guidelines that AI developers must follow as this evolving technology is built and deployed. New reporting and testing requirements for AI development companies are on the near horizon.
The whole thing feels like an “all hands on deck” approach to getting up to speed on this issue. Government mobilization efforts on this scale are akin to President Franklin Roosevelt’s mobilization of the country in the years leading up to World War II.
Below are a few of the highlights from each operative section of the President’s most recent EO:
Ensuring the Safety and Security of AI Technology
- In accordance with the Defense Production Act, developers of the most powerful AI systems must now share their safety test results and other critical information with the government. More specifically, the EO directs the Secretary of Commerce to use the Defense Production Act to require companies based in the United States to report on the development of “dual-use” AI foundation models or “large-scale computing clusters.”
- The National Institute of Standards and Technology will establish rigorous standards for extensive “red team” testing to ensure safety before public release. In this context, the “red team” is a group that pretends to be an enemy, attempts a digital intrusion against an organization at the direction of that organization, and then reports back so that the organization can improve its defenses.
- The Department of Homeland Security is directed to establish an AI Safety and Security Board to advise on security improvements related to AI usage in critical infrastructure. Heads of all Sector Risk Management Agencies are directed to assess AI risks associated with operational failures, physical attacks, and cyber attacks.
Promoting Innovation and Competition
- The Secretary of the Department of Health and Human Services (HHS) is directed to prioritize grantmaking and cooperative agreement awards to support responsible AI development and use in the healthcare sector, health data quality, and health equity.
- The Secretary of Commerce and the United States Patent and Trademark Office are directed to develop and disseminate guidance on patentability and copyright issues related to AI. The Secretary of the Department of Homeland Security is directed to investigate AI-related intellectual property theft incidents and develop enforcement protocols.
- The EO charges all government authorities to work together to expand the ability of highly skilled immigrants and nonimmigrants with expertise in critical areas to study and work in the United States.
Protecting Privacy
The President calls on Congress to pass bipartisan data privacy legislation to protect all Americans, especially children. The EO also directs several related federal actions, including:
- Strengthening research into privacy-preserving technologies, such as cryptographic tools, by funding Research Coordination Networks to facilitate rapid development and breakthroughs in this area.
- Evaluating how agencies collect and use commercially available information now — including any information procured from data brokers — and strengthening privacy guidance. All federal agencies are now required to account for possible AI risks.
- Developing guidelines for federal agencies to use in evaluating the effectiveness of any privacy-preserving technique, including those techniques used in AI systems.
Supporting Workers
- Of particular interest to employers and employees, the Department of Labor is directed to publish best-practice guidance for employers to address various AI-related issues, including job displacement, workplace equity, health and safety, and employee data collection. The EO specifically warns against “the dangers of increased workplace surveillance, bias, and job displacement.”
- The Department of Labor is directed to produce a report on AI’s potential labor-market impacts and to identify options for strengthening federal support for workers facing labor disruptions.
Advancing Equity and Civil Rights
- In another set of actions with potential direct impact on workplaces, the Attorney General is directed to evaluate and report on how existing laws address AI-related civil rights violations and discrimination. The Attorney General must also meet with the heads of federal civil rights offices to discuss comprehensive prevention of AI-related discrimination.
- The Attorney General is directed to prepare a report on how AI is currently used in the criminal justice system and to identify best practices for its use in that context.
- The Secretary of Labor is directed to publish guidance for federal contractors about nondiscrimination when using AI hiring systems. In addition, the directors of the Federal Housing Finance Agency, the Consumer Financial Protection Bureau, and the Department of Housing and Urban Development are asked to “use their authorities to address discrimination and bias in AI tools used in the housing and consumer financial markets.”
Protecting Consumers, Patients, Passengers, and Students
- To advance the responsible use of AI in healthcare and in the development of affordable, life-saving drugs, the Department of Health and Human Services is charged with monitoring these activities, including establishing a safety program to receive, evaluate, and investigate reports of unsafe healthcare practices involving AI.
- The Department of Education is charged with creating resources to support educators deploying AI-enabled educational tools.
- Independent regulatory agencies are charged with addressing various possible risks associated with AI, including risks to financial stability. The EO directs independent regulatory agencies to clarify responsibilities, conduct due diligence, monitor any third-party AI services in use, and clarify expectations and requirements related to the transparency of AI models.
- The Secretary of Transportation is charged with directing the Nontraditional and Emerging Transportation Technology Council (NETT), which will assess the need for information, technical assistance, and guidance about the use of AI technology in our nation’s transportation infrastructure. The Council will also support existing and future initiatives that pilot transportation-related applications of AI and will establish a new Department of Transportation Cross-Modal Executive Working Group to coordinate all AI-related projects.
- The Federal Communications Commission (FCC) is charged with studying how AI affects communication networks and consumers, including evaluating the potential of AI to improve spectrum management.
Advancing Federal Government Use of AI
- The Director of the Office of Management and Budget (OMB) has been instructed to convene and chair an interagency council to coordinate the development and use of AI throughout the federal government. The OMB will be issuing guidance to govern the use of AI at government agencies, advance AI innovation, and manage risks.
- The EO directs all federal agencies to hire and retain AI talent. The Office of Science and Technology Policy and Office of Management and Budget will establish AI hiring priorities and recruitment plans.
- The Assistant to the President and Deputy Chief of Staff for Policy will convene an AI and Technology Talent Task Force, and the Secretary of Defense will prepare a report identifying gaps in AI talent for national defense.
Strengthening American Leadership Abroad
- Establish an international framework for understanding and managing the risks and benefits of AI — and encourage our allies to make voluntary commitments similar to those made by businesses in the United States.
- Develop a plan for global engagement on AI standards, covering topics like AI terminology, best data practices, trustworthiness of AI systems, and AI risk management.
- Create a Global Development Playbook for AI Technology, incorporating the AI Risk Management Framework principles and applying them in various international scenarios. Working closely with our international partners will help ensure appropriate “translations” of certain concepts.
- Establish a Global AI Research Agenda to guide the objectives and execution of AI-related research beyond the United States. The Global Agenda will address the safety, responsibility, benefits, sustainability, and labor market implications of AI adoption in different international contexts.
Are you overwhelmed yet?
This most recent Executive Order on AI presents a sweeping set of provisions for managing and benefiting from a technology most of us are still struggling to comprehend. Safety and security are emphasized throughout. Can the government successfully support innovation while simultaneously taking effective precautionary action? We shall see.
Employers — and concerned individuals everywhere — must pay close attention as this AI regulatory drama unfolds in the months ahead. Different directives have different timelines, but most of the work articulated in this EO is to be completed and reported out by October 2024.
“I don’t think ever in the history of human endeavor has there been as fundamental potential technological change as is presented by artificial intelligence,” Biden said at a news conference in early October. “It is staggering. It is staggering.”
If you have any questions or concerns about the AI decision-making tools you are using now — or planning to implement soon — don’t hesitate to contact Orr & Reno for assistance.