Executive Order on Artificial Intelligence Opens Door to Possible Regulations
On October 30, President Joe Biden issued an Executive Order (EO) on the "Safe, Secure, and Trustworthy Use of AI" to position the United States as a trailblazer in AI, while managing the challenges this new technology presents. While the EO has garnered support from major players in the tech industry, it also leaves many questions unanswered, creating uncertainty about its long-term impacts.
Derek Leben, Associate Teaching Professor of Business Ethics at the Tepper School of Business, and an expert on the ethics of emerging technologies, reviewed the EO and shared his insights.
The EO identifies several key areas of focus, including what Leben categorizes as impacts on individuals (deceptive practices, non-discrimination, equity), data ownership and security (privacy, cybersecurity, intellectual property), and impacts on national security and the economy (market competition and safeguards against replacing human labor).
Leben said that among the crucial points addressed in the EO is preventing deception and misinformation generated by AI systems that produce realistic audio, video, images, and text. The EO suggests developing standards for "watermarking" and verifying such content, but this approach raises two big questions. First, there is the technical challenge of developing effective methods for stamping or verifying AI-generated content. Leben noted that OpenAI's decision to discontinue its "AI classifier" due to low accuracy, and Google's acknowledgment of the difficulty of verifying AI-generated content, highlight these challenges. Second, the EO leaves open the question of how rules and regulations will be enforced, raising speculation that enforcement may mirror how the FTC oversees "false advertising."
The EO further calls for developing standards and practices regarding discrimination and equity in AI-based decision-making. However, Leben made it clear that defining and measuring terms like "discrimination" and "equity" is a contentious issue. There are a range of standards that the federal government could apply to AI discrimination cases, from minimal (e.g., simply omitting protected features like gender and race from training data and inputs) to more extensive (e.g., enforcing some level of equality in approval rates across protected groups).
If the more extensive standards are applied, then many AI systems currently deployed in areas like hiring and lending will be classified as discriminatory. This is because AI systems are trained on historically unequal datasets and so replicate historical inequalities across categories such as race and socioeconomic status. This is what critics like Albert Fox Cahn, founder and executive director of the Surveillance Technology Oversight Project, mean when they say, "A lot of the AI tools on the market are already illegal."
The question is: Which legal standard of discrimination and equity does the federal government plan on enforcing?
Leben also pointed out that the EO remains silent about the conditions for using publicly available data in training AI systems.
“The EO overlooks the vast amount of data accessed without explicit consent from users,” he said. “Strict requirements on data collection raise concerns about the industry's sustainability.”
While Google and OpenAI have extensively used public online data for training their AI models, including content protected by copyright and intellectual property (IP) rights, the EO calls upon federal agencies responsible for IP enforcement to propose standards for IP usage in training data.
Leben said that while the EO outlines broad objectives, it primarily directs federal government branches to produce a set of policy proposals within relatively short timeframes, ranging from 90 to 240 days. Its historical significance will largely depend on the specific proposals that emerge from each federal government department.
The EO builds on the National Institute of Standards and Technology (NIST) framework and the White House AI Bill of Rights, both of which were introduced over the past year. However, Leben underscored that any changes will be incremental. While regulators have a clear direction in mind, how they plan to impose restrictions on companies remains uncertain.