by Tyler Wells Lynch
The Institute for Experiential AI welcomed Lorena Jaume-Palasí, founder of the nonprofit Ethical Tech Society, to speak about ethics and computational systems and how the lenses of design justice, machine bias, and fairness fail to contextualize moral conflicts in technology. The lecture is part of IEAI’s Distinguished Lecturer Series. Watch the full replay above or read on for an event summary.
The Problem With Algorithms
Algorithms are increasingly part of our daily lives, impacting our health and well-being in often visible ways. Computational systems direct social services, credit scores, healthcare decisions, and myriad other high-impact domains, all of which have social and ethical consequences. But are these systems equipped to deal with complex ethical issues and dilemmas? For Lorena Jaume-Palasí, the answer is a resounding no — at least not in their current state.
Latent bias, ruthless efficiency, and historical ignorance have codified ethics within the isolated domain of mathematics. Divorced from their social, historical, and humanist contexts, computational ethics systems have come to distort rather than mirror their humanist forebears.
To illustrate the problem, Palasí points to the branching of the scientific disciplines, which began in the late Middle Ages as experts became more specialized within their fields. While this specialization was foundational to the dictates of good science, disciplinary esteem further ramified the “tree of science” to the point where distant fields had trouble communicating with one another. Eventually, some of the gaps became unbridgeable, such as that between the social or “soft” sciences and the so-called “hard” sciences (e.g., physics, chemistry, mathematics). Crossing fundamental borders of methodology risked infringing upon what Timnit Gebru called the “hierarchy of knowledge.”
But Palasí implores us to rethink the wisdom of this separation, pointing to the rich ethical tradition of the humanities as a fountain of context for machine learning systems. Looking backward, we can see that even the social dimension of technology is not new. In the early 20th century, an international elite of engineering communities emerged and understood that their field shaped society by creating highly planned infrastructures.
What’s Past is Prologue
Palasí cites sociologist Hans Freyer who wrote in 1933, “If the immanent utopia of technology is the transformability of all materials and forces into each other, then the immanent utopia of planning is the transformability of all historical situations into each other.”
At first, those physical infrastructures were so sophisticated that technologists embraced an almost utopian vision for the promise of mathematical planning. But the actual history of the 20th century challenged all notions of finding utopia in the indifferent heart of cold computation.
Nevertheless, the same thinking persists in the world of Artificial Intelligence. Even if the initial idea of AI was philosophical and humanistic, modern ethical machine learning quandaries show the AI community has lost touch with what Palasí calls “the mother and father of the scientific approach.”
Toward a Just Future
For algorithms to adapt to genuine ethical inputs, they must contextualize personal, societal, and institutional relations. Rather than optimizing for efficiency, computational systems must optimize for context.
Returning to the hierarchy of disciplines, the challenge here is bridging the gap between rigid mathematical expectations and humanistic vagaries. “Context” doesn’t lend itself well to datafication, but there is already a rich trove of research at the intersection of ethics and technology. For decades, scientists like Donna Haraway, Richard Lewontin, Helen Nissenbaum, and Deborah Johnson have been addressing epistemological concerns in the way we teach natural sciences.
One conclusion to be drawn is that many of the so-called “hard” sciences are deeply embedded with ideological biases and unrealistic expectations of objectivity. Worse yet, assumptions about their objectivity only fortify the power structures that emerge out of them.
In his 1950 book, The Human Use of Human Beings, Norbert Wiener laid out how the mathematization of the world through technology was not simply a matter of planning but, in the end, a form of control.
“Information is a name for the content of what is exchanged with the outer world as we adjust to it, and make our adjustment felt upon it. The process of receiving and of using information is the process of our adjusting to the contingencies of the outer environment, and of our living effectively within that environment. The needs and the complexity of modern life make greater demands on this process of information than ever before, and our press, our museums, our scientific laboratories, our universities, our libraries and textbooks, are obliged to meet the needs of this process or fail in their purpose. To live effectively is to live with adequate information. Thus, communication and control belong to the essence of man’s inner life, even as they belong to his life in society.”
As Wiener argues, humanity is always fighting nature’s tendency to degrade organized systems, resisting its drift toward entropy. Humanity, in turn, seeks to control nature by optimizing it through ruthless efficiency. We see this in rotor designs that mimic maple seeds, energy grids modeled as “hive minds,” whale-fin-shaped wind turbines, mechanical forests in Singapore, and robots that resemble human or animal forms.
Ethicists and sociologists might go a step further, arguing that computational networks go well beyond the individual services they are assumed to provide. As infrastructures in their own right, they shape the very societies we live in, intensifying their impact with scale.
Palasí argues that the optimization of nature and the resulting control networks are a form of hyper-efficiency fueled by past data. In that sense, progress, at least in a technological sense, is very much about trying to create a future that is based on the past. Let’s take a closer look at how.
Biased Algorithms: A Primer
In many ways, the entire field of ethics in AI blew open with the publication of Joy Buolamwini’s 2016 paper “In the Beginning Was The Coded Gaze.” The study revealed how computer vision systems were biased along racial and gender lines, favoring white men to the detriment of women and people of color. Since then, interest in the ethics of algorithms has surged.
In 2016, a ProPublica investigation of the COMPAS algorithm, which is used extensively in the U.S. court system, unearthed coded biases that negatively impacted Black and racialized communities. Specifically, the report analyzed recidivism risk scores, concluding that Black defendants were “almost twice as likely as whites to be labeled a higher risk but not re-offend.” The algorithm makes the opposite mistake among white defendants, who are much more likely than Black defendants to be labeled lower-risk yet go on to commit other crimes.
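The disparity ProPublica described is, at bottom, a difference in group-conditional error rates. Below is a minimal sketch of that comparison; the records and field layout are invented for illustration and are not ProPublica’s data or code.

```python
# Minimal sketch of the error-rate comparison behind the COMPAS findings.
# The records below are made up for illustration only.

records = [
    # (group, labeled_high_risk, reoffended)
    ("black", True,  False),
    ("black", True,  True),
    ("black", False, True),
    ("white", False, True),
    ("white", True,  True),
    ("white", False, False),
]

def error_rates(records, group):
    rows = [r for r in records if r[0] == group]
    # False positive rate: labeled high risk among those who did NOT re-offend.
    no_reoffend = [r for r in rows if not r[2]]
    fpr = sum(r[1] for r in no_reoffend) / len(no_reoffend) if no_reoffend else 0.0
    # False negative rate: labeled low risk among those who DID re-offend.
    reoffend = [r for r in rows if r[2]]
    fnr = sum(not r[1] for r in reoffend) / len(reoffend) if reoffend else 0.0
    return fpr, fnr

for g in ("black", "white"):
    fpr, fnr = error_rates(records, g)
    print(f"{g}: false positive rate={fpr:.2f}, false negative rate={fnr:.2f}")
```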
Other research, such as Latanya Sweeney’s 2013 study of the Google ad platform, found that searches for Black-identifying first names were significantly more likely to yield ads containing the word “arrest” in the ad text. Attempts to redress these issues have involved adding more categories, more attributes, and more data, but other researchers have highlighted the flaws of merely making data more inclusive.
In her 2019 book Race for Profit, Keeanga-Yamahtta Taylor coined the term “predatory inclusion.” It refers to federal housing policies that explicitly targeted or “included” African Americans but had the effect of juicing real estate profits while failing to address housing inequality.
What’s missing from these algorithms is not an abundance of data but the context around that data. Adding to the challenge is a cultural perception that technology is neutral, rational, and objective, and therefore an ally in the struggle to de-bias algorithms. The question then arises: can we develop metrics capable of translating fairness, justice, and ethical principles into mathematical terms?
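For concreteness, here is a minimal sketch of two metrics commonly used for that translation, demographic parity and equal opportunity, computed over toy predictions. The data and names are illustrative assumptions; Palasí’s point is precisely that such metrics capture only a thin slice of what fairness means.

```python
# Two common group-fairness metrics, computed from (prediction, outcome) pairs.
# The toy data below is invented purely for illustration.

def rate(pairs):
    """Fraction of positive predictions in a list of (prediction, outcome) pairs."""
    return sum(p for p, _ in pairs) / len(pairs) if pairs else 0.0

def demographic_parity_diff(group_a, group_b):
    """Difference in positive-prediction rates between two groups."""
    return abs(rate(group_a) - rate(group_b))

def true_positive_rate(pairs):
    positives = [(p, y) for p, y in pairs if y == 1]
    return rate(positives)

def equal_opportunity_diff(group_a, group_b):
    """Difference in true positive rates (one component of 'equalized odds')."""
    return abs(true_positive_rate(group_a) - true_positive_rate(group_b))

# Toy predictions: (model_prediction, true_outcome) per person, split by group.
group_a = [(1, 1), (1, 0), (0, 1), (0, 0)]
group_b = [(1, 1), (0, 0), (0, 1), (0, 1)]

print("demographic parity difference:", demographic_parity_diff(group_a, group_b))
print("equal opportunity difference:", equal_opportunity_diff(group_a, group_b))
```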
A recent study by Ben Green revealed a problem with this approach. It showed how human oversight of algorithmic recommendations, particularly within the justice system, imbues those systems with additional bias. The study also found that people are not well equipped to evaluate the quality of algorithmic predictions.
Recidivism risk is a clear example of the problem: human interactions with algorithmic predictions tend to center risk itself as the predominant factor in the decision to grant parole. But there are numerous other legal, social, even biological factors that, whether for technological or cultural reasons, are not factored into the recommendation algorithm. What is considered instead are reams of data that purport to render a profile of risk based on inputs from the past.
How to Contextualize Data
The omission of context from judicial decision-making contravenes widely accepted notions of justice. In the occidental world, justice is grounded in concepts of social stability, equality of dignity, and human rationality that can be thought of as operationalizing fairness and enshrining ethical order in legal infrastructures. But the irony is that the moment a law is enshrined, justice becomes external to the legal system, standing outside it as a form of comparison.
That means that not everything that occurs within a legal system is ethical, nor can it be. As John Rawls put it, the structure of the legal system becomes “the primary subject of justice because its effects are so profound and present from the start.” Therefore, any idea of fairness in developing an ethical system must depend on a fluid, conversational, and societal perspective as a point of comparison.
When we think about algorithmic systems and automation technologies, we make many presuppositions about their use, chief among them the primacy of the data that feeds into them. Wearing the same mathematical lens worn by the utopian planners of the early 20th century, we assume the information is unbiased. Data, which largely stems from the past, carries with it an assumption of an ideal, unbiased provenance, and so can serve as a norm or proxy for the future. This use allows the algorithms to be obscured and protected from external review, and it declares a desire for the future to be like the past.
Why is this wrong? Computational systems depend on stable natures to perform as planned. They strip away idiosyncrasies and outliers to build general profiles of human beings. But ethical systems depend on contextual thinking, using specific facts to distinguish between general rules and exceptions. Ideally, ethical systems avoid making judgments based on what similar profiles have done in the past.
A (Tongue-in-Cheek) Theorem
Palasí’s theorem is that “if a process is algorithmically fair, it probably is socially unfair.” But, as she points out, the theorem should be taken in jest, because ethical systems are inherently fluid; attempting to systematize them effectively destroys them. As we have seen, algorithms cannot contextualize, and contextualization is one of the main points of fairness. And to the extent that they do contextualize, they are often freighted with historical bias.
Algorithms, in their current form, are being used to optimize personal services, which means individuals are evaluated not on their own particularities but on the profile to which they belong. For judging individuals, they are useless.
A better path may be to develop systems, assumptions, and recommendations based on relationships rather than individual input nodes. Self-organizing principles are a good example. A 2017 paper addressing the problem of equal-headway instability in public transportation showed how self-organizing methods better regulate service schedules through adaptation, as the sketch below illustrates. Another example is indigenous approaches to wildfire management, which use periodic hazard-reduction burns and treat fire as a natural process rather than a disruption to efficiency that must be resisted.
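As an illustration of the self-organizing idea behind the headway example, here is a minimal simulation sketch: buses on a loop adjust their speed using only the gap to the vehicle ahead, and roughly equal headways emerge without a central schedule. The rule and parameters are assumptions for illustration, not the 2017 paper’s actual method.

```python
# Minimal sketch of self-organized headway regulation on a circular route.
# Each bus reacts only to the gap to the bus ahead: catching up -> slow down,
# falling behind -> speed up. No central schedule is imposed.
import random

ROAD_LENGTH = 1000.0          # length of the loop (arbitrary units)
N_BUSES = 5
BASE_SPEED = 10.0             # nominal distance covered per time step
GAIN = 0.05                   # how strongly a bus reacts to headway error
STEPS = 2000
TARGET_GAP = ROAD_LENGTH / N_BUSES  # the equal headway the system should reach

# Start buses at uneven random positions to create initial bunching.
positions = sorted(random.uniform(0, ROAD_LENGTH) for _ in range(N_BUSES))

def gap_ahead(i, positions):
    """Distance from bus i to the next bus ahead of it on the loop."""
    ahead = positions[(i + 1) % N_BUSES]
    return (ahead - positions[i]) % ROAD_LENGTH

for _ in range(STEPS):
    new_positions = []
    for i in range(N_BUSES):
        gap = gap_ahead(i, positions)
        # Local rule: speed up when the gap ahead exceeds the target,
        # slow down when it is smaller. Buses cannot move backwards.
        speed = max(0.0, BASE_SPEED + GAIN * (gap - TARGET_GAP))
        new_positions.append((positions[i] + speed) % ROAD_LENGTH)
    positions = new_positions

gaps = [round(gap_ahead(i, positions), 1) for i in range(N_BUSES)]
print("final gaps:", gaps, "target:", TARGET_GAP)
```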
In that sense, solutions may be found in attempts to model nature. If you look at a forest, its success depends not on the proliferation of individual trees but on the root systems and mycelial networks that relay information in the form of energy. These systems collaborate and react to one another, fostering an invisible network of constant adaptation. The success of such a forest is not its ability to optimize all available resources but its ability to contextualize its surroundings.
Q: Do you believe it’s reasonable to ask or expect a team of technicians or scientists at a company to learn the history and appreciate the global phenomena of colorism, racism, misogyny, or heteronormativity? Or is that a pipe dream? Or is it good enough to simply deconstruct the illusion of overcoming what you call the subjectivity of governance?
Lorena Jaume-Palasí: I think that is part of the conversations that we all need to have right now, because our future is entangled in the past, and in the past there is no such thing as objectivity. What we have is what we consider to be objective, but that objectivity is always correlated to the culture and the position of the speaker and the person observing and creating. But I think there is some momentum in terms of questioning colonialist thinking that systematizes through mathematics. What worries me is the cold war rhetoric that we have right now in the U.S. and the European Union regarding this race for machine learning and quantum computing and where no one wants to lag behind. That thinking negates the fact that we are all connected to each other. Technology is already an international affair. It starts with mineral resources that are located, for the most part, in regions that are not receiving the benefits of the technologies they supply. Often these countries are exploited for those very resources, and negating that reality is an impediment to realizing the interconnectedness of both human and technological systems. But I think any progress will have to be bottom-up. It’s about all you people who are working in this field. It’s on your capacity to reflect, and your demand for change. And it’s on people from different disciplines to recognize they may be wrong. The same thinking has to exist at Google, Facebook, NGOs, political organizations, and elsewhere. But the energy has to come from the bottom.
Q: In regards to your theory that if a process is algorithmically fair it is probably socially unfair, is it possible that this theory is also biased?
LJP: The theorem was a fun way of formulating the idea that fairness is, ironically, not something you can systematize. The moment you systematize it you kill it. That’s the reason why, in justice, we say that a judge does not impart justice; a judge applies law. We say that because the judge knows that some laws might be fair while others are not. Some judgments are strictly legal instruments that a judge needs to apply. But laws require some kind of societal review, otherwise they would not evolve. What was considered fair 100 years ago no longer applies. Because of that, fairness escapes every type of categorization, be it mathematical or legal. So it’s sort of like an indefinable shadow. We all have our own idea of fairness, which is part of our societal conversation and part of what confirms public opinion. When judges come to a specific decision, they can try to prove whether that decision would be considered fair, and that serves as a proxy to understand where we are going legally: Is it in the right direction, or are there things we need to rethink?
Q: One of our philosophies in the Institute for Experiential AI is that, while we recognize there are all these problems, we still have to live with them. Briefly, can you summarize how we can take a better approach?
LJP: In Spain, we have a saying that you cannot demand pears from an oak. You can only get pears from a pear tree. The way we’re using computational infrastructures is, ironically, a child of our thinking, but it doesn’t fit the expectations we’ve put on it. So I think we first need to realize what these systems are fit for, before we keep trying to make a pear tree from an oak tree.
About Lorena Jaume-Palasí:
Lorena Jaume-Palasí is a consultant and scientist whose work sits at the intersection of digital technology and ethics. As an expert, she works for the European Parliament and the European Commission, and the government of her home country, Spain, appointed her to the National Council on Artificial Intelligence. She remains active in the nonprofit organization she founded, The Ethical Tech Society, where she explores how ethics and legal philosophy can be reconciled with digitization. As a co-founder of the AlgorithmWatch initiative, she received the Theodor Heuss Medal in 2018 “for her contribution to a differentiated view of algorithms and their mechanisms of action.”