
Six Leading AI Experts Weigh in on the White House Executive Order

The Biden administration issued an Executive Order (EO) establishing rules and guidelines on the use of artificial intelligence. Our experts weigh in.
November 21, 2023

Last month, the Biden administration issued an Executive Order (EO) establishing rules and guidelines on the use of artificial intelligence. Through an assortment of benchmarks, appropriations, pilot projects, and reporting requirements, the order’s stated goals are to preserve privacy, protect vulnerable groups, promote competition, and advance civil rights, among other ideals.

Our directors of research, health, life sciences, and education and law, along with a professor of public policy, reviewed the order and shared their perspectives. The summary? The EO is a step in the right direction — and particularly necessary in the areas of health and life sciences — but it lacks the legal, legislative, or budgetary pressure needed to protect the best interests of vulnerable groups and the environment.

The EO could, for example, have taken more concrete measures to advance ethical cornerstones like fairness, explainability, robustness, transparency, and privacy. That its subsection on equity and civil rights focuses narrowly on discrimination in housing, healthcare, and justice—to the exclusion of other domains, such as education or employment—reflects an incomplete view of civil rights and of governments' mandate to protect them. As it stands, the executive order largely places the onus of safe and responsible use on developers.

“I wish this directive came with more teeth in it. It’s not enough to just direct agencies to pay attention to something or develop standards. It’s more important to have a budget allocation with a forcing function that translates to work, achievements, and projects where people will be held accountable. I think that was definitely within the remit of the White House. Don’t get me wrong; I am grateful that we are thinking about this, establishing thought leadership, and urging agencies to do the right thing, but I think we could have required them to do more through budgetary directives and funding-based requirements.”

-  USAMA FAYYAD, EXECUTIVE DIRECTOR AT THE INSTITUTE FOR EXPERIENTIAL AI

“This executive order is a good wish list to improve the responsible use of AI. However, it needs concrete actions and a significant budget to move the needle. Regarding standards for AI safety and security, I applaud the requirement of sharing safety test results. However, this should apply to all AI and non-AI systems used by people, not just the “most powerful.” Advancing equity and avoiding discrimination is a must in the U.S., although only two use cases are mentioned: housing and justice. They also promote the advance of responsible AI in health, but why only health? We should use responsible AI in any application that can affect human lives.”

-  RICARDO BAEZA-YATES, DIRECTOR OF RESEARCH AT THE INSTITUTE FOR EXPERIENTIAL AI

“This EO is also of geopolitical significance, in that it gives the United States a position among major national powers in their efforts to regulate AI at the national and international levels. Recently passed regulations in China and the likely soon-to-see-passage EU AI Act threaten to leave the U.S. in the dust in the race to regulate AI. The EO is nowhere near national legislation in terms of durability, but it was very likely the best geopolitical move available to the administration in this moment of governmental disarray.”

-  MICHAEL BENNETT, DIRECTOR OF EDUCATION PROGRAMS, AI LAW AND POLICY AT THE INSTITUTE FOR EXPERIENTIAL AI

“Healthcare is a highly-regulated industry, as it should be. If AI is going to play a major role in advancing healthcare, it needs to be regulated, as well. How to go about this? That’s the big question. While I believe that regulation is a good thing, we have to be sure that it won’t create bottlenecks in the development, deployment and utilization of AI technology for health and healthcare. AI can give us groundbreaking tools to advance health. It’s all going to be about striking a balance between safety and ethical considerations and the advancement of the technology itself.”

-  EUGENE TUNIK, DIRECTOR OF AI + HEALTH AT THE INSTITUTE FOR EXPERIENTIAL AI

“From precision oncology and targeted gene therapy to pandemic prevention and high-density agriculture, artificial intelligence is already powering a second golden-age in biology. President Biden's recent executive order recognizes the potential of AI to transform the biosciences, while acknowledging the potential risks associated with bioengineering. By integrating our ethical and responsible AI practice into our life sciences work, the Institute for Experiential AI is poised to deliver on the promise of AI + life sciences while mitigating the risks.”

-  SAM SCARPINO, DIRECTOR OF AI + LIFE SCIENCES AT THE INSTITUTE FOR EXPERIENTIAL AI

“While there’s a lot to like here, we have to ask: Are we focusing so much on the risks that we are failing to invest in and maximize the potential for AI to do good? To be sure, the executive order mentions positive goals such as promoting American competitiveness. The order alludes to the fact that AI can transform education through personalized tutoring, and offers the promise of increased productivity. But the narrative surrounding AI — and most of this 111-page executive order — is cautionary.”

-  BETH NOVECK, CORE FACULTY MEMBER AT THE INSTITUTE FOR EXPERIENTIAL AI AND DIRECTOR OF THE BURNES CENTER FOR SOCIAL CHANGE