by Tyler Wells Lynch
Artificial intelligence (AI) offers plenty to be excited about but also reason to pause. AI helps decide which loan applications should be granted, which prisoners deserve parole, and which way an autonomous vehicle should turn. When a family’s financial situation, an incarcerated person’s freedom, or a pedestrian’s safety is on the line, companies have a responsibility to minimize harm, and that requires more than just a pledge to be good. Organizations need to take meaningful action that protects customers, stakeholders, and the general public.
CascadAI serves as a hub for organizations that want to do just that. A partnership between Mangrove, the government of Quebec, and the Institute for Experiential AI (EAI) at Northeastern University, the annual pan-Canadian conference aims to move the AI community beyond theory with actionable guidelines centered on community engagement, resource allocation, open communication, and human-centered planning.
This year’s conference drew over 100 attendees, with distinguished speakers representing a broad swath of industries from across North America. Together, they offered a roadmap for companies interested in using AI responsibly.
What Is Responsible AI?
Every company seems to have a different idea of what “responsible AI” means. Most turn on a few key terms like fairness, transparency, and accountability, but there’s no standard for what is or isn’t “responsible.”
Is that a problem? Perhaps. Vague definitions can lead developers to build systems with ethics as an afterthought. In fact, that habit is behind a lot of the controversies that sparked interest in responsible AI in the first place — that, and a prevailing attitude that those who make technology get to shape technology.
But let’s look at who’s actually making these technologies: By and large, it’s STEM-educated white men with similar socioeconomic backgrounds. According to Katrina Ingram, founder of Ethically Aligned AI and ethics advisor for the city of Edmonton, Alberta, that’s a textbook definition of a monoculture. The exclusion of diverse perspectives, whether intentional or not, limits the kinds of questions that can be asked as well as the range of problems that can be targeted.
So let’s start with some definitions: What is “responsible” AI, and is that term preferable to “ethical” or “trustworthy”?
For Cansu Canca, research associate professor at EAI and founder of the AI Ethics Lab, the answer is both yes and no. Insofar as any of these terms imply that AI itself can be moral, they are unhelpful. “Responsible AI,” though, is perhaps the least problematic.
Consider the example of an elevator that works 99 percent of the time: Is such a success rate enough to warrant trust? Probably not. So why would it be any different with AI? If there’s any trust to be found in an AI system, Canca says, it’s in the regulatory structures and cultural practices that govern its usage, not the system or even the individual who programmed it.
As citizens of a democracy, we expect those regulatory structures to have democratic inputs. Tracey Lauriault, assistant professor of critical media and big data at Carleton University, cited the idea of “technological citizenship,” stressing the need to reframe our understanding of data to encompass the full suite of human experience, not just its technical aspects. That means reimagining technology as one tool within a battery of political, economic, and cultural institutions, all of which are accountable to one another.
“We cannot solve complex social, economic, and environmental problems with technological solutionism,” Lauriault said. “Mathematics, statistics, engineering, and science alone is not enough. It’s part of the problem-solving equation but not the only one.”
Against AI Solutionism
The belief that societal problems have technological solutions is common enough in tech circles. It’s even somewhat forgivable given the tremendous change technology has brought over the last few decades. But dismissing the other parts of the equation — what Lauriault referred to as “ethical, political, spatial, philosophical” ways of thinking — shifts power toward the owners of AI while excluding those most affected by it.
A similar trick gets pulled when we view AI as a competitor rather than a collaborator — as a tool for automating and replacing human processes rather than assisting them. That approach isn’t just lazy; it’s dangerous, because it cedes human judgment to the unpredictable quirks of data sets and algorithms.
But that doesn’t mean AI can’t impart some social benefits.
In his keynote address, Ben Shneiderman, founder of the Human-Computer Interaction Lab at the University of Maryland, offered an optimistic but realistic guide to how AI can improve human lives. His approach, dubbed human-centered AI (HCAI), aims to increase the amount of automation while ensuring meaningful human oversight. As one might imagine, finding that sweet spot is the hard part, and it will almost certainly require new governing structures and design principles to get right.
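Shneiderman did not prescribe an implementation, but one common pattern for balancing automation with oversight is confidence-based routing: the system acts on predictions it is sure about and escalates everything else to a person. The sketch below is purely illustrative; the threshold, names, and actions are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    action: str   # "auto_act", "auto_hold", or "human_review"
    score: float  # model confidence in [0, 1]

def route(score: float, auto_threshold: float = 0.95) -> Decision:
    """Automate only high-confidence predictions; escalate the rest to a person.

    The threshold is a policy choice, not a model property: raising it trades
    automation volume for more human oversight.
    """
    if score >= auto_threshold:
        return Decision("auto_act", score)
    if score <= 1 - auto_threshold:
        return Decision("auto_hold", score)
    return Decision("human_review", score)

# Three hypothetical predictions: only the unambiguous ones are automated.
for s in (0.99, 0.60, 0.02):
    print(route(s))
```

Where that threshold sits is a governance decision, not a modeling one, which is exactly why new structures and design principles matter.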
Those design principles matter because, as many of the speakers pointed out, AI systems are rarely designed with ethical principles in mind from the start. Var Shankar, an attorney and director of policy at the Responsible AI Institute, argued that translating responsible AI into practice is especially difficult given the range of potential applications: there are simply too many contingencies to unify the development phase under a single, ethical, context-sensitive umbrella. Meeting that challenge calls for a cultural shift, one that sustains a thoughtful approach throughout the development lifecycle. And that shift has to begin with education.
One of many engaging CascadAI sessions. Photo courtesy of Cuneyt Gurcan Akcora, Assistant Professor of Computer Science and Statistics, University of Manitoba, Canada.
Whatever the degree of human involvement, AI systems need to have clear modes of cooperation. Kate Larson, professor of computer science at the University of Waterloo, argued that AI needs to go a step further in developing actual cooperative intelligence. Once again, the challenge here comes down to context: “Cooperation” can mean vastly different things depending on the practitioners, stakeholders, beneficiaries, and jurisdictions in question.
Let’s take a look at an example.
Responsible AI in Practice
Neil Ternowetsky is the CTO and product manager at TRAINFO Corporation. His company designs and deploys a variety of sensors, machine learning (ML) models, and data integration tools to visualize and predict railroad crossing blockages. Government agencies can use TRAINFO’s AI platform to predict and manage traffic and keep supply chains running smoothly.
With so much data, some of which are used to direct emergency responders, the company cannot afford to let algorithms run the show. So, in training its models, TRAINFO employs people to vet and review algorithms prior to deployment. The company views its data as a tool to assist rather than replace emergency dispatchers.
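TRAINFO’s internal process isn’t spelled out, but the pattern Ternowetsky describes (human sign-off before a model ships) can be made concrete as a simple release gate. A minimal sketch, where every name, threshold, and reviewer record is hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Review:
    reviewer: str
    approved: bool
    notes: str = ""

@dataclass
class ModelCandidate:
    name: str
    holdout_accuracy: float
    reviews: list = field(default_factory=list)

def ready_to_deploy(model: ModelCandidate,
                    min_accuracy: float = 0.90,
                    min_approvals: int = 2) -> bool:
    """A candidate ships only if it clears a metric bar AND enough humans sign off."""
    approvals = sum(1 for r in model.reviews if r.approved)
    return model.holdout_accuracy >= min_accuracy and approvals >= min_approvals

candidate = ModelCandidate("crossing-blockage-v3", holdout_accuracy=0.94)
candidate.reviews.append(Review("dispatch_lead", True, "checked rush-hour false alarms"))
print(ready_to_deploy(candidate))  # False: the metric bar is met, but only one reviewer has approved
```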
But any attempt to make an AI system more cooperative or context-sensitive requires access to the right kind of data, and that does not necessarily mean more data.
Graham Erickson, lead machine learning developer at AltaML, knows that human biases will invariably show up in data, so the only way to minimize harm is to design and analyze systems through the lens of bias and fairness.
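Erickson didn’t walk through AltaML’s tooling, but one common starting point for that kind of analysis is simply comparing a model’s favorable-outcome rate across groups. A minimal demographic parity check, sketched with made-up data:

```python
from collections import defaultdict

def positive_rate_by_group(predictions, groups):
    """Share of favorable (positive) predictions within each group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical loan-approval predictions (1 = approved) and applicant groups.
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rates = positive_rate_by_group(preds, groups)
gap = max(rates.values()) - min(rates.values())
print(rates)                      # {'A': 0.6, 'B': 0.4}
print(f"parity gap: {gap:.2f}")   # a large gap is a prompt to investigate, not proof of harm
```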
How does a company searching for an AI vendor go about avoiding these pitfalls? Sergey Bukharov, chief customer officer at SkyHive, ended the day with some advice, warning there’s no perfect prepackaged ethical solution. Companies can only expect ethical outcomes if they’re invested in ethical outcomes from the jump. Thankfully, though, more and more companies appear to be moving in this direction.
As recently as 2020, Fortune 500 companies were consulting AI vendors with hardly any questions related to the ethical or responsible use of AI. Since then, a paradigm shift has led more and more companies to ask increasingly sophisticated questions about how their AI systems are deployed and to what end. We’re even starting to see some forward-thinking companies perform fairness audits and A/B impact tests of their vendors’ AI platforms. We can only expect this trend to continue.
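Those audits and A/B impact tests don’t come in one standard form, but at their simplest they compare an outcome metric between people served by the vendor’s AI system and a control group, overall and within each demographic group. A hedged sketch using a standard two-proportion z-test on hypothetical counts:

```python
from math import sqrt

def two_proportion_z(success_a, n_a, success_b, n_b):
    """z-statistic for the difference between two proportions (pooled standard error)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical audit: favorable-outcome counts with the vendor's model (A)
# versus the incumbent process (B).
z = two_proportion_z(success_a=460, n_a=1000, success_b=430, n_b=1000)
print(f"z = {z:.2f}")  # |z| > 1.96 is the conventional 5% significance cutoff
```

Repeating the same comparison within each subgroup is what turns a generic A/B test into an impact audit: an overall lift can mask harm to a smaller group.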
Watch key CascadAI conversations here.