
Workshop Charts the Course for Responsible AI

October 24, 2023

The Institute for Experiential AI’s workshop aimed to establish a consensus on the essential elements of a Responsible AI framework and the research priorities associated with it.

What should be the goals of a framework on the responsible development and use of AI? What are the necessary elements of a responsible AI framework? How should such frameworks be incorporated into the innovation workflow?

Those were some of the topics of deliberation for a diverse group of experts in responsible AI at a workshop hosted by the Institute for Experiential AI on October 17.

The day-long workshop, titled “Shaping Responsible AI: From Principles to Practice,” brought together more than 40 experts from across the globe to define the most important elements of robust responsible AI (RAI) frameworks, best practices, and “grand challenges.”

“We think it is important to identify the essential elements of responsible AI frameworks and workflows so that we can define the standards of responsible AI practice and determine what counts as an adequate RAI framework,” Institute Director of Responsible AI Practice Cansu Canca, co-lead of the workshop, said. “We don’t think there will be one framework to rule them all. There will always be a plurality in terms of how we embed responsible AI and ensure that it’s implemented as an integral part of the innovation process, but there should be a minimum standardization for any RAI framework that organizations use. We wanted to discuss with experts what these standard elements would look like.”

Institute Director of Research Ricardo Baeza-Yates, who co-led the workshop with Canca, believes the event was an important step toward prioritizing research on responsible AI at the Institute and beyond.

“From today’s input, we have a more holistic view of the most relevant parts of a responsible AI framework,” Baeza-Yates said. “We have more things to think about and the problem is harder in some ways, but at the same time now we have a glimpse into the gaps to focus on.”

The invitation-only workshop, which preceded the Institute’s flagship AI Business Leaders conference “Leading with AI Responsibly,” featured panel discussions, breakout sessions, and open conversations about the biggest issues in responsible AI. Attendees moved from open discussions to intimate roundtables in the well-lit room on the top floor of Northeastern’s East Village Building, collaborating on some of the industry’s most difficult topics.

Questions included how ethics frameworks should work in the context of accelerating innovation and how to implement ethical frameworks into organizational workflows.

Attendees came from academia and industry as well as from advocacy and research organizations. Many commented on the wide-ranging perspectives in the room. In total, 22 nationalities were represented, with participants evenly distributed across age groups and genders.

“We wanted to bring together leading voices, as well as voices that haven’t been heard, in a structured way to avoid overlooking perspectives,” Canca said.

The workshop also served as the official launch of Northeastern University’s AI Literacy Lab, an initiative to spur understanding and collaboration at the intersection of communication and artificial intelligence.

“Terms like responsible AI are vague, but at events like this people work to develop specific ways of thinking to guide professionals, government policy makers, professors, and students,” said Ben Shneiderman, a computer science pioneer and emeritus professor at the University of Maryland who attended the event. “There are a lot of audiences for the important work happening here.”

The organizers plan to co-author a whitepaper with the workshop participants summarizing the findings from the event.

“We usually work in laboratories or in specific use cases that are isolated from society, and until you scale these technologies massively, it’s difficult to anticipate these issues,” said Eduard Fosch-Villaronga of the eLaw Center for Law and Digital Technologies at Leiden University, who attended the event. “As we work towards solutions, we should be trying to think about ways in which we can address and mitigate some of the risks that these technologies pose. It gives me positivity and also hope that there are events like this.”

Canca also believes that the event served to strengthen the Institute’s ability to drive responsible AI practices.

“These are the people who we think are doing some of the leading work in the field of responsible AI, so having collaborative relationships with these leaders is also one of our goals,” Canca said. “We want to extend these collaborations and impact the field of responsible AI in partnership with them. It was also encouraging to see participants who have already thought deeply about these topics say that the workshop was very informative. It shows that we’re heading in the right direction.”

As new partnerships and research collaborations continue to blossom from the gathering, Baeza-Yates sees the workshop as a standalone success for helping attendees broaden their thinking.

“There were many points of view that expanded my understanding of responsible AI — things other people were emphasizing that I hadn’t thought as much about — and other attendees said the same thing,” Baeza-Yates said. “It was a confirmation that when you put smart people from different backgrounds together in a room, good things happen. This was confirmed by plenty of positive feedback that we received after the workshop.”

Stay tuned for more insights from our incredible week of “Leading with AI Responsibly” programming!