RAI Radar: On Deepfakes and Discontent

By Ricardo Baeza-Yates, Director of Research
March 25, 2024

This week in our Responsible AI (RAI) Radar series, we’re talking about the increasing prevalence of deepfake technology and its broader societal impact. There have been several recent examples of deepfake technology being used for malicious and often illegal ends. Here, we’ll examine a few such cases and explore how organizations can implement responsible and ethical AI practices to avoid similar situations.

Deep(ly) Fake

Deepfake technology refers to the use of deep learning to train a model that generates an output, often a picture or video, that closely mimics the real people or scenes it was trained on. While similar in concept to tools such as Photoshop or Snapchat filters, deepfakes differ in that they can manipulate images and video far more realistically and flexibly. This often makes it difficult, if not impossible, to identify whether the images are fake.
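The core idea behind many deepfake pipelines is an autoencoder: a network trained to compress an input to a small latent code and then reconstruct it, so that the decoder learns to produce convincing imagery from that code. The toy sketch below (not from this article; all names and numbers are illustrative) shows only that reconstruction core, using a linear autoencoder on synthetic 8-dimensional "images" rather than real faces:

```python
import numpy as np

# Toy autoencoder sketch: compress 8-dim inputs to a 2-dim latent code,
# then reconstruct them. Real deepfake systems use deep convolutional
# encoders/decoders on face images; this only illustrates the principle.
rng = np.random.default_rng(0)

# Synthetic data that truly lies on a 2-D subspace of 8-D space,
# so a 2-D latent code can represent it faithfully.
basis = rng.normal(size=(2, 8))
X = rng.normal(size=(200, 2)) @ basis  # 200 samples, 8 features each

W_enc = rng.normal(scale=0.1, size=(8, 2))  # encoder weights
W_dec = rng.normal(scale=0.1, size=(2, 8))  # decoder weights

lr = 0.01
for _ in range(2000):
    Z = X @ W_enc        # encode: 8-D input -> 2-D latent code
    X_hat = Z @ W_dec    # decode: 2-D latent code -> 8-D reconstruction
    err = X_hat - X
    # Gradient descent on mean squared reconstruction error
    grad_dec = Z.T @ err / len(X)
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

mse = float(np.mean((X @ W_enc @ W_dec - X) ** 2))
print(f"reconstruction MSE: {mse:.4f}")
```

Classic face-swap deepfakes exploit this structure by training one shared encoder with a separate decoder per identity: encoding person A's face and decoding it with person B's decoder yields B's face in A's pose and expression.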

Deep(ening) Fakes - Swift, Carlin, and Hong Kong

Due to their large media presence, celebrities tend to be targets of deepfakes. Whether it is the use of George Carlin’s likeness for an AI-generated comedy special or the AI-generated nude photos of Taylor Swift that prompted legislative action, deepfakes imitating the likeness of others raise questions about a person’s rights to their own identity and how laws should protect individuals in such scenarios.

The use of deepfake technology has spread to other industries as well. Recently, a multinational finance firm was the target of a $25 million scam by individuals who used deepfake technology to pose as the company’s chief financial officer and other senior executives. The increasing sophistication of deepfake and AI technology is not always a cause for celebration. While it can be used in educational and commercial settings for more immersive learning, such advances can also be misused for malicious purposes, like scams and AI-generated robocalls.

Deep(er) Looks into Responsible AI Use

So how do organizations adopt Responsible AI practices to combat potential breaches of individual privacy or rights?

Deepfake technology has many positive use cases. To improve accessibility, for instance, it can be used in ads or educational material to offer content in different languages, or to generate synthetic audio to aid individuals with disabilities. But deepfakes can also be misused. Any company developing or deploying these systems must take action to avoid the potential negative impact they can cause.

Groups that use AI to generate art or media must always be cognizant of, and take appropriate measures against, the possible harm that may come to individuals whose likeness is portrayed. Additionally, there must always be a coordinated, multi-team effort to understand how these portrayals can be used. Companies always need governance strategies to ensure the ethical use of generative models such as the autoencoders behind deepfakes.

Ricardo Baeza-Yates, director of research at the Institute for Experiential AI (EAI), believes that “This will be the year of deepfakes, as we have elections in more than 70 countries. We have never had more people voting on Earth, and they will be the main target of political deepfakes.”

While the prospects of sophisticated deepfake and AI-generated technology may be exciting, it is important to consider the ethical implications of utilizing the likeness of individuals, especially without consent.

At the Institute for Experiential AI, our Responsible AI team works through these quandaries with our academic and industry partners. Whether establishing governance strategies for an entire organization or performing project-specific diligence checks, let our team be your partner in all of your AI projects.

Looking to educate your organization’s leadership on RAI? Check out our Executive Education series here!