By: Tyler Wells Lynch
As a philosopher, Cansu Canca quibbles with the idea that there is an “intersection” between AI and ethics. As the Director of the Responsible AI (RAI) practice at the Institute for Experiential AI, she participates in a variety of conferences on different aspects of AI and ethics, and the topic comes up quite a lot.
“I'm hoping that it is clear by now that any AI system—traditional, predictive, generative, whichever one you pick—comes with its own ethical questions and risks,” she says. “Once we start developing an AI system, we are making ethical decisions whether or not developers are aware of the decisions. If we don't recognize and understand these decisions, then we are more likely to make mistakes.”
Ethical risks only multiply as you move from development to deployment. So, rather than thinking about AI ethics as its own separate category, AI developers and stakeholders would be wise to learn the “grammar” of RAI within their given field—to infuse technical, ethical, and regulatory decision-making and problem-solving into the innovation process. Call it a Responsible AI framework.
Ethical Gaming Grammar
To understand what Cansu means, let’s look at the video game industry. Last month, she attended the European Conference on AI and delivered a keynote speech at the Ethics of Game AI workshop.
The idea for the event was to bring together researchers interested in applying a Responsible AI approach to gaming—an industry that is increasingly dependent on AI techniques from both a creative and a technical perspective. Cansu focused on two questions: how to integrate AI ethics into the game design process, and how much information game developers should give players regarding the ethical implications of AI use.
“There's always this tension,” Cansu explained. “You don't want to know everything about a game, because it loses the excitement. We want games to be fun and engaging! But you do want to know what you're getting yourself into in terms of data sharing, in terms of what kind of information to provide, in terms of potential emotional manipulation that the game may employ, and in terms of what kind of environment you are stepping into.”
Gaming is a unique ethical environment, as the industry has never been far from controversy, whether you’re talking about content, character representation, or the subtle ways in which developers can manipulate players. Games are by definition supposed to be engaging—but engagement, Cansu points out, exists on a sliding scale with addiction. AI allows developers to supercharge that addictive potential by using biotrackers and personalization to zero in on the brain’s reward centers.
“When we ask, for example, how much a developer should be responsible, that is both a technical and ethical question,” Cansu says. “What should be the responsibility of a developer? How much control do they have as they use generative AI? And what are the RAI tools that should be available to them as they try to reduce ethical risks in the games they develop?”
Most addictive products, whether it’s alcohol or gambling, are heavily regulated. So the question isn’t so much whether AI-driven games should be regulated—but how. And with recent advances in generative AI, brain mapping, and network science, the need for an RAI foundation is becoming all the more obvious.
Ethics and Policy
Gaming is just one industry among many that make use of AI. From the life sciences to healthcare to climate, AI’s potential scales in proportion to the amount of data used. At the United Nations last month, Cansu participated in a high-level meeting titled Governing AI for Humanity—a much broader discussion focused on national efforts and international cooperation in AI regulation.
Of note was the fact that every country participating in the meeting agreed there is a need to regulate AI. That need, Cansu explained, is predicated on the assumption that we want to continue creating systems that benefit society. It’s an important way to frame the debate, because it highlights the essential value of AI systems. The problem is that there’s little agreement on how to go about regulating them, especially as their capabilities grow rapidly with new developments in generative AI—lots of questions, few concrete answers.
That being said, the participation of high-level representatives from governments, corporations, universities, and other stakeholders suggests movement on an issue that didn’t really exist until recently. And the fact that the interest is high-level—spanning industrial and academic borders—indicates a nuanced understanding of what progress looks like in this area.
As the Director of Responsible AI Practice, Cansu argues that the best way to move forward and create AI systems that benefit us is to involve all stakeholders from academia, industry, and government, and to foster a multidisciplinary discourse. It’s a key reason why the Institute for Experiential AI’s next event features a workshop, business conference, and career fair, with a core aim of highlighting the ethical, technical, and political aspects of RAI in a way that crosses old academic and industrial borders.
Interested in participating? Register or learn more here.