Recap: Rob Reich on the Urgent Need for a Code of Ethics in AI

By: Tyler Wells Lynch 

The unfortunate thing about ethics is that something has to go wrong before people start paying attention to it. It’s a familiar pattern found in the history of biotechnology, medical research, even nuclear physics: A technology materializes in academia or along the fringes of the private sector, private capital swoops in to commercialize the product, the market consolidates around a few key players, and once the tech reaches a critical threshold of users, “negative externalities” begin to emerge. These unintended consequences stir a crisis, awakening regulators, who then attempt to steer further development of the technology.

For Rob Reich, professor of political science and director of the Center for Ethics in Society at Stanford University, that’s where artificial intelligence is right now. The professional norms that would govern the responsible use of AI are, according to him, “wildly immature.”

Professor Reich is not alone in viewing AI as one of the two most revolutionary and consequential scientific developments of the 21st century, the other being bioengineering through gene editing. A Distinguished Lecturer in the Fall Seminar Series at EAI, Reich explained how both of these technologies call into question what it means to be human, and the stakes could not be higher.

“That's the task that we collectively face,” Reich said. “It's a task that means we have to embrace more than just a technical or scientific orientation. We need to embrace a skill set that comes from outside of technology itself.”

Ethics in the Workplace


Ethics is the only skill set that stands a chance of bridging that gap. More than a moral framework, ethics encompasses a range of philosophical distinctions that directly affect the chances of success in applied AI. Setting aside the legal and legislative questions, Reich targets professional ethics as perhaps the most robust area of opportunity.

Professional ethics refers to the standards that bind together the people of a given profession. In that way, it’s a form of self-governance occasionally backstopped by actual laws and regulations. Contrast that with, say, personal ethics, which focuses on individual moral agency, or social ethics, which is more concerned with political intervention.

The challenge with professional ethics is that public policy tends to trail well behind scientific innovation, and professions often govern their behavior according to what’s strictly legal. Reich pointed to the example of the 1996 Telecommunications Act, which was purposely designed to create a “regulatory oasis” for tech companies, ascribing next to no liability for the content on their platforms. The idea was, in the words of the Clinton administration, “to win the race to pave the information superhighway.”

Regulatory indifference may have kindled innovation in Silicon Valley in the 1990s, but it doesn’t serve the interests of people today who are affected by biased algorithms, addictive social platforms, and the spread of misinformation. There’s simply too much money to be made in AI and not enough laws governing its use. Is it any wonder that professional ethics, too, trail behind?

Whether you blame markets or legislative torpor, the point is that ethical standards in AI are long overdue, and professional norms represent fruit ripe for the picking. So where to begin?

Taking a Page from Biomedical Science


Reich pointed once again to the field of biomedical ethics, where the long-standing Hippocratic oath instructs practitioners to do no harm. Institutional review boards (IRBs) govern the scope and conduct of research, and a single regulatory agency, the FDA, prohibits rogue experimentation. There are also professional licensure requirements, the violation of which can bar professionals from practicing healthcare. Finally, the highly developed institutional footprint of biomedical ethics spans universities, hospitals, committees, and corporations.

Now what does computer science have in comparison? For starters, there’s no equivalent of an IRB. Data scientists allegedly don't work with human subjects, so they’re more or less free to do whatever kind of testing they want—no permission required.

Reich implicates a divide between carbon- and silicon-based ethics: “[In biomedical ethics,] the main object of concern is a human being, a patient. But for silicon ethics, we don't have patients. We have users. Instead of being subjects of care, the people are most often objects of prediction, and there is no equivalent of the Hippocratic oath—no kind of ritual that tries to bind people together as professionals.”

As to how to steer the ship of professional ethics, an instructive example can be found in bioengineering, specifically the revolutionary gene-editing tool CRISPR. Jennifer Doudna, one of the co-discoverers of CRISPR, realized early on how powerful the tool she helped create could be. She brought together some of the top minds in the biomedical community to determine the most prudent ways for the technology to develop, ultimately publishing a paper calling for a voluntary ban on the use of CRISPR on human embryos.

That ban was widely embraced by the scientific community, and a few years later, when a single scientist used CRISPR on a human being, no journal would publish his research and he was disinvited from professional conferences. Here, Reich raised an intriguing question for AI researchers: “Can you think of a single individual in AI who did something beyond the scope of responsible conduct and suffered a professional cost for it?”

The Future of AI Ethics


It’s not enough to simply adopt standards and hope for the best. Professional norms that serve the public but go against the grain of private interests must carry meaningful repercussions for those who violate them.

“If we have a hundred different AI ethics frameworks for one hundred different companies, functionally we have no AI ethics,” Reich explained.

The toothlessness of the company-by-company approach can be seen in the example of Clearview AI. Amazon, Microsoft, and IBM have all adopted standards against selling facial recognition technology to police departments and law enforcement agencies, but when Clearview AI came along offering to do just that, the race to the bottom was all but assured.

There are other examples of toothless standards with minimal professional repercussions, but as long as some companies are willing to do what others will not, the standards might as well not exist; the outcome is the same.

So what needs to happen? Reich sees a few paths for professional ethics in AI:

  • Some kind of ritual or oath for people graduating with a technical degree
  • Greater repercussions for violation of standards like the ACM Code of Ethics
  • An ethics review for research at the time of publication
  • Regulation by the FTC
  • Ethics and policy modules within all core courses of computer science majors

There are other options as well. In August, for example, the Institute for Experiential AI launched the AI Ethics Advisory Board, an on-demand, multidisciplinary team of more than 40 AI experts. The Board gives organizations that want to use AI tools in an ethically sound manner access to the counsel of experts with decades of combined experience in AI practice and ethics. Coupling this approach with a robust regulatory and cultural framework, such as exists in the biomedical sciences, offers the best chance of AI becoming a valuable tool for humanity rather than a harmful burden.

Watch a replay of Rob’s talk here and register for upcoming seminars here.