Recap: AI Needs to Earn Our Trust; Emilia Gomez on Creating a Regulatory Framework

By: David Bolton

Ask the average person if they trust artificial intelligence and the answer will likely involve some element of suspicion. A more prudent line of inquiry is whether they believe AI can be trustworthy at all. A positive response doesn’t mean that people aren’t distrustful of AI; rather, it means they see some potential for it to become reliable in the future. And while AI has made waves across a variety of sectors, from healthcare and autonomous vehicles to holiday shopping and supply chains, the concept of ethical, responsible, or trustworthy AI still prompts serious debate.

According to Emilia Gómez, a principal investigator on Human and Machine Intelligence (HUMAINT) at the European Commission’s Joint Research Centre (JRC), wider acceptance of AI applications will require a coordinated effort from a range of stakeholders. Those efforts are most likely to center on the level of risk an application poses and the impact of algorithms on human behavior.

AI remains largely unregulated; it is not subject to the same oversight we apply to physical products. Over the last couple of years, however, the discussions and controversies that AI has attracted have undoubtedly generated interest in more ethical approaches.

Regulations Can Improve Trust

Speaking as a Distinguished Lecturer in the Fall Seminar Series at EAI, Gomez argued that the current efforts of the European Union to address the ethical, legal, and, by association, trustworthy elements of AI within a regulatory framework could be a key step towards jurisdictional standardization. The challenge, she said, will come from legal uncertainties, inconsistent enforcement, regulatory fragmentation, and societal mistrust.

“The EU approach to trustworthy AI has come from both ethical and legal requirements, which is the current status,” Gomez said. “The challenges that policymakers are trying to address are linked to some or certain characteristics that AI systems have, such as complexity, opacity, the unpredictability of approaches or models, autonomy, and, finally, the use of data.” 

We are still in the early days of AI. Trust in it depends not only on the tasks that we set for it but also on its capacity to anticipate human behavior. For example, we always expect smartphones and apps to do what we want, when we want it. If we apply the same logic to AI, then the degree to which we can trust a human-designed algorithm directly influences the chance of successful societal integration.

Europe Is Focused on Risk Management

AI is a ubiquitous yet often unseen presence in our lives. This subliminal interaction raises a lot of questions for which we have few answers. We should also think about what we mean by “trustworthy” and whether it would be better to characterize AI as less harmful or more dependable.

Gomez said there must be two pillars of focus for trustworthy AI. The European approach is to build an ecosystem of excellence (R&D, testing and experimentation, digital innovation, skills, and talent) alongside an ecosystem of trust. The latter is a huge motivator for both the scientific community and the companies that want to make use of AI.

Both of these pillars should take into account seven key requirements for trustworthy AI. All of the requirements need to be continuously evaluated and addressed throughout an AI system’s lifecycle:

  • Human agency and oversight
  • Technical robustness and safety
  • Privacy and data governance
  • Transparency
  • Diversity, non-discrimination, and fairness
  • Societal and environmental well-being
  • Accountability

The EU has been working on a regulatory framework governing AI for some time. In 2019, the High-Level Expert Group on AI convened by the European Commission published the “Ethics Guidelines for Trustworthy AI,” but the proposed EU AI Act would be the first piece of legislation to directly address the risks AI poses to public health, safety, and well-being.

The act identifies three risk levels with social impact: “unacceptable,” “high,” and “low.” Unacceptable uses of AI would include subliminal manipulation, exploitation, and social scoring, as well as “real time biometric identification for law enforcement purposes,” all of which are prohibited as incompatible with the fundamental rights and values set out in the EU treaties and the Charter of Fundamental Rights.

High-risk AI systems would be subject to regulatory processes, Gomez explained. These would include anything deemed to be a safety component of an already regulated product, such as medical devices, machinery, or hazard detection in motor vehicles. In addition, certain systems (which Gomez referred to as “stand-alone”) would also fall under this risk category: biometric identification and categorization of natural persons, law enforcement, management and operation of critical infrastructure, and more.
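To make the tiered structure easier to picture, here is a minimal, purely illustrative sketch of how a team might label systems by risk level. The tier names follow the ones Gomez described; the example use cases and the `classify` helper are hypothetical and are not drawn from the legal text.

```python
from enum import Enum


class RiskLevel(Enum):
    """Risk tiers as described in the talk (illustrative only)."""
    UNACCEPTABLE = "unacceptable"  # prohibited uses, e.g. social scoring
    HIGH = "high"                  # subject to regulatory requirements
    LOW = "low"                    # minimal risk / transparency obligations


# Hypothetical mapping of use cases to tiers; a real assessment would
# follow the legal text and the system's intended purpose, not a lookup table.
EXAMPLE_CLASSIFICATION = {
    "social scoring of citizens": RiskLevel.UNACCEPTABLE,
    "hazard detection in a motor vehicle": RiskLevel.HIGH,
    "biometric identification of natural persons": RiskLevel.HIGH,
    "spam filtering": RiskLevel.LOW,
}


def classify(use_case: str) -> RiskLevel:
    """Return the illustrative tier for a known use case, defaulting to LOW."""
    return EXAMPLE_CLASSIFICATION.get(use_case, RiskLevel.LOW)


if __name__ == "__main__":
    for case in EXAMPLE_CLASSIFICATION:
        print(f"{case}: {classify(case).value}")
```

In practice, as Gomez emphasizes later in the talk, the assessment hinges on the system’s intended purpose and context rather than a simple lookup.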

Stakeholders or developers of AI systems would need to establish and implement a risk management plan. In addition, they would be required to define the intended purpose of their AI system. Risk management would, therefore, include:

  • High-quality training, validation, and testing data
  • Technical documentation and logging capabilities
  • An appropriate degree of transparency and the ability to provide users with information on the capabilities and limitations of the system
  • Human oversight
  • Robustness, accuracy, and cybersecurity

All of the above requirements are critical to putting trustworthy AI into practice. However, the success of the risk management plan depends on the legal text, standards, and engineering practices that underpin the AI systems themselves.
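As a rough illustration of how those requirements might surface in engineering practice, the sketch below models the checklist as a simple data structure. The field names mirror the bullet points above; the class name and the completeness check are hypothetical conveniences, not anything prescribed by the proposal.

```python
from dataclasses import dataclass, fields


@dataclass
class RiskManagementPlan:
    """Illustrative checklist mirroring the requirements listed above."""
    intended_purpose: str
    quality_training_validation_test_data: bool = False
    technical_documentation_and_logging: bool = False
    transparency_and_user_information: bool = False
    human_oversight: bool = False
    robustness_accuracy_cybersecurity: bool = False

    def outstanding_items(self) -> list[str]:
        """Return the requirements that have not yet been addressed."""
        return [
            f.name for f in fields(self)
            if isinstance(getattr(self, f.name), bool) and not getattr(self, f.name)
        ]


if __name__ == "__main__":
    plan = RiskManagementPlan(
        intended_purpose="hazard detection in a motor vehicle",
        human_oversight=True,
    )
    print("Still to address:", plan.outstanding_items())
```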

“Most AI systems will not be high risk,” Gomez noted. “But there will be what we consider to be minimal risk or a transparency obligation. The ones that communicate with people, for example—we need to notify humans that they are interacting with an AI system.”
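That transparency obligation is straightforward to picture in code. The following hypothetical sketch wraps a chat reply so the user is told up front that they are interacting with an AI system; `generate_reply` is a stand-in for whatever model or service actually answers, not a real API.

```python
def generate_reply(message: str) -> str:
    """Stand-in for the model or service that actually produces the answer."""
    return f"(placeholder reply to: {message!r})"


def chat(message: str, disclosed: bool = False) -> tuple[str, bool]:
    """Prepend an AI disclosure the first time a user interacts with the system.

    Returns the reply text and the updated disclosure state.
    """
    reply = generate_reply(message)
    if not disclosed:
        reply = "Note: you are chatting with an AI system.\n" + reply
        disclosed = True
    return reply, disclosed


if __name__ == "__main__":
    disclosed = False
    for msg in ["Hello", "What are your opening hours?"]:
        answer, disclosed = chat(msg, disclosed)
        print(answer)
```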

Consider the Context

Gomez cited four areas that regulators will seek to address within the confines of the act and its framework: autonomous vehicles, facial recognition/processing, children, and music. 

Her research has shown that children, for instance, can have regular interactions with conversational AI and recommendation systems. This has a direct impact on both the child’s cognitive processes and their ability to self-learn without an adult present. Music producers are also incorporating machine learning, with some creators reportedly cloning the voices of popular singers rather than licensing them.

All of these identified sub-sectors influence whether we can consider an AI system trustworthy. For the moment, however, we should think about what we want AI to be. 

“The idea is that we are not going to address AI in a general way but on what we call the intended outcomes or purpose in the context of the application,” she said. “This is in line with us thinking about AI as socio-technical systems. So we also need to consider the context in which they are applied.”

For Gomez, we are at the start of a long regulatory road towards what she refers to as trustworthy AI. Over the long term, the goal must be to align trust and dependability in how AI systems are designed and deployed. In addition, there will be a need to alleviate societal mistrust. As she noted, AI is often thought of as a nascent technology, but its integration is happening in real time.

By putting a regulatory framework in place that promotes risk management, we can standardize the approach and get a clear picture of how AI should work. Trust has to be earned, irrespective of whether you are talking about a machine or a human being. And that should always be part of any conversation that relates to societal good.

To find out more about Gomez and her work with HUMAINT and the regulatory challenges that lie ahead for AI systems, check out this replay of the talk.  

You can also find out more about the proposed EU AI Act and its impact on AI applications here.