
How to Bridge Science and Philosophy: Ask the Right Questions

May 22, 2024

It’s been a couple hundred years since science and philosophy went their separate ways, but the two fields still have a lot in common. They both rely on logic and reason, and they’re both interested in discovering fundamental truths about reality. However, with so many disciplines branching off over so many years, it’s difficult to find real-world situations where the two fields intersect. Scientists, after all, have little interest in rational inquiry without empirical evidence, and most philosophers have little interest in narrowing their inquiries to those with quantitative inputs.

With the advent of AI, though, it’s never been clearer how much the distant cousins of science and philosophy need each other. And for Matthew Sample, a philosopher working at the intersection of AI and ethics, that need reveals itself in the siloing of everyday professionals.

“If you have a new tech being built,” Matthew says, “philosophers should be asking why it is being built, but they're often distracted by theory. Scientists and engineers should also be asking why the tech is being built, but they're often constrained by whoever paid them or their institutional context.”

How to overcome that divide is one of the central questions about AI and its impact on society, and Matthew sees the trajectory of his career as finding and creating spaces where those kinds of questions can be asked. It hasn’t always been easy. When he was a professor of responsible research and innovation at Leibniz University Hannover, Matthew lamented how little incentive there was for professors to solve real-world problems, since the focus there was on teaching and publishing.

“My ambition was to bring philosophy out of the university space,” he says, “to get in there with real problems that organizations and communities are facing right now.”

From the Jump

Matthew’s move to the Institute for Experiential AI, with its focus on experiential learning and industry partnerships, was natural. Now, as a member of the Responsible AI Practice, Matthew works with industry partners to support their AI development and deployment. The projects run the gamut from training employees on the basics of Responsible AI to submitting AI models for fairness evaluations to designing concrete, actionable ethics tools.

A common challenge, however, is the tendency of some organizations to treat ethics post hoc, approaching the Responsible AI Practice with a finished product and looking for approval. That doesn’t always work. AI can be used internally or externally, as an operational tool or as a product for sale, but in either case, Matthew explains, ethical considerations need to be present from the jump.

“Companies with experience dealing with complex tech products know that sometimes you have to pivot your design even midway through,” he says. “If someone in industry decides that they really don't want to take Responsible AI seriously, that's not really the partner we want to have anyway. If we can't make a benefit or we can't make a positive impact on their work, it's not worth the engagement.”

It’s also important to recognize problems that are not quantitative in nature and thus not amenable to automation. Because of the hype around AI, many companies approach Matthew and his team looking for ways to use AI to replace a vital human element, such as in hiring, business strategy, or medical diagnosis.

“Diagnosis is a complex human process where you take not only data but also values, which you have to establish between the doctor and the patient,” Matthew explains. “How worried are you? How bad are the symptoms? All those questions dial into a certain level of sensitivity that’s required.”

AI, in this context, is better suited to a support function, such as processing medical records, highlighting radiological data, or reminding clinicians of patient allergies. What you don’t want is for AI to replace the critical bond between patient and doctor.

Just Asking Questions

In Matthew’s mind, asking how AI could support diagnosis is one of the more straightforward questions we can ask. More broadly, he calls for a greater willingness to ask ourselves what societal role we want technology to play and what function we want it to provide. Those aren’t questions we’ve been conditioned to ask in this age of “move fast and break things.”

“We’ve kind of been sleepwalking for the last 150 years,” he says. “I think we're overdue for a check-in to ask, where are the places in life where we want a data-driven approach, and where are the places where we feel like there's something irreducibly human about the situation?”

Of course, there are plenty of reasons to doubt our capacity to ask such deep questions, especially given the money that’s at stake. So is Matthew Sample optimistic?

“I'm optimistic that there are real improvements that can be made and they're achievable,” he says. “I'm pessimistic in the sense that I think there's a lot of inertia around the way we do technological development in society right now, and that it's not a good system and it will actually inhibit our ability to make tech serve the public interest. We have to change the basic model of tech development.”

But even there, Matthew points to the work his team is engaged in as a glimmer of hope. With such multidisciplinary talent, the Responsible AI team at the Institute for Experiential AI is tasked with identifying those areas where we, in the broadest sense, “can do better.” And that begins with integration—or, perhaps, reintegration. For centuries, science and technology have progressed with ethics as an afterthought. But Responsible AI is putting those human, philosophical questions back into their rightful place at the very heart of research and innovation.

“Ethics means not only thinking about the stakes and the consequences of our technological decisions, but also adapting our practices so that we can respond to those things,” he says. “If people get anything from our work, I would hope it’s to think more about ethics, not just about naming the problems, but about integrating ethics into their practice so that they proactively think about harms and how to make a positive impact.”

Learn more about Matthew Sample’s work on the Responsible AI team at the Institute for Experiential AI.