By: David Bolton
A digitally based workforce is a key reason why state-of-the-art technologies such as AI are both increasingly prevalent and a potential income stream for online workers.
Saiph Savage, an assistant professor in the Khoury College of Computer Sciences and director of the Civic A.I. Lab at Northeastern University, believes the integration of humans into the nascent AI pipeline raises important questions about this growing virtual labor market. Speaking at the final Expeditions in Experiential AI seminar of the fall semester, she presented research findings from studying the impact of AI in society, and revealed a new “AI for Good” framework.
“We are living in a time where we have AI creating futuristic realities like self-driving cars and voice assistants that can respond to anything we ask,” Savage said. “These are signs that AI can be good for society. However, we also need to recognize that AI is creating a new global underclass.”
For workers in the nascent AI pipeline, unrealistic employer expectations, pay inequality, limited potential for personal improvement, a lack of fair or reasonable assessments, and end user-generated feedback loops are all real-world concerns. For context, Savage said, researchers predict that around 50 percent of the U.S. population will have been involved in AI-based jobs by 2027.
Workers are performing tasks behind many of the AI advances already augmenting our personal and professional lives. This labor force is transcribing audio, moderating social media content, labeling images and more. All of these inputs are a vital part of the AI pipeline, but Savage is concerned that the majority of these workers earn less than minimum wage, and that their workplaces have not been designed for measurable social good or continuous improvement.
Savage argued that while AI is creating millions of jobs, there is evidence that digital platforms are responsible for a growing community of “invisible workers.” In addition, there is a lack of visibility for worker success, especially when it comes to tracking time spent on various activities, completion rates and actual earnings.
“We have no idea what is going on with the workers or how they are being treated,” Savage noted. “We could try to get insights through interviews, but that data might be tainted. There might also be bias, and I want to be able to quantify what exactly is happening inside these digital labor markets.”
Savage argued we must consider the value of these invisible or “ghost workers,” and the systemic problems they experience, as they interact within the digital labor market.
Savage’s research lab has created AI-based tools that “go under the hood” and take a deeper dive into elements such as the time spent completing tasks, searching for work and the difference in digital labor requirements for specific tasks. These activity algorithms are a critical part of being able to assess where the gaps in end user experience and employer oversight occur.
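The kind of accounting such tools perform can be illustrated with a minimal sketch. The field names and sample figures below are hypothetical, not taken from Savage’s lab; the point is that a worker’s effective hourly wage drops sharply once unpaid activity, such as searching for tasks, is counted alongside paid task time.

```python
from dataclasses import dataclass

@dataclass
class WorkSession:
    # All times in minutes; fields are illustrative, not the lab's actual schema.
    paid_task_minutes: float   # time spent on tasks that actually paid out
    search_minutes: float      # unpaid time spent finding and qualifying for work
    earnings_usd: float        # what the worker was actually paid

def nominal_hourly_wage(sessions):
    """The rate a platform might advertise: earnings over paid task time only."""
    paid = sum(s.paid_task_minutes for s in sessions)
    earned = sum(s.earnings_usd for s in sessions)
    return 60 * earned / paid if paid else 0.0

def effective_hourly_wage(sessions):
    """Earnings divided by *all* labor time, paid and unpaid."""
    total = sum(s.paid_task_minutes + s.search_minutes for s in sessions)
    earned = sum(s.earnings_usd for s in sessions)
    return 60 * earned / total if total else 0.0

sessions = [WorkSession(40, 25, 6.00), WorkSession(55, 30, 7.50)]
print(f"nominal:   ${nominal_hourly_wage(sessions):.2f}/hr")    # $8.53/hr
print(f"effective: ${effective_hourly_wage(sessions):.2f}/hr")  # $5.40/hr
```

In this invented example, a rate that looks acceptable on paper falls well below the U.S. federal minimum wage once search time is included, which is the gap this style of measurement is designed to expose.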
To advance her research goals, Savage relies on a capability framework grounded in existing social justice theories. To design AI for invisible workers, she uses quantitative methods to identify where disconnects exist and where workers can benefit from other stakeholders. And by applying value-sensitive design and aligning with social context, the framework is able to surface values and improve labor conditions “behind the scenes in the AI industry.”
For instance, so-called “super workers” are inevitably familiar with the platform and its requirements. These individuals have thrived within the system and are able to make a good living from continued engagement. That makes them good candidates for the design of AI-based coaching models, especially if they can share strategies for success.
“When you are thinking about justice and social good, you shouldn’t just think about whether you are giving people access to an equal amount of resources, but rather questioning what those resources are being used for. What can some of them do that others can’t? Why can certain stakeholders achieve success? That way you start to untangle the critical problems that workers are facing,” Savage said.
To date, Savage’s team has used web plug-ins to track not just activity and earned wages but also engagement and AI-based coaching opportunities across different sections of society. Rural workers, for instance, are one group that can benefit from measurable changes in the digital labor market. Public libraries are also ideal locations for people to become involved in a labor platform that uplifts the AI workforce to a new level, she notes.
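At its core, a plug-in of this kind reduces to aggregating a log of timestamped activity events into per-category time totals. The sketch below is an assumption about that aggregation step, not the team’s actual plug-in; the event names and timestamps are invented for illustration.

```python
from collections import defaultdict

# Hypothetical event log a browser plug-in might record:
# (timestamp in seconds, activity the worker switched to at that moment)
events = [
    (0,    "searching"),
    (180,  "working"),
    (900,  "searching"),
    (1020, "working"),
    (1800, "idle"),
]

def time_per_activity(events, end_time):
    """Sum the seconds spent in each activity between consecutive events."""
    totals = defaultdict(float)
    # Each activity runs from its event's timestamp to the next event's.
    for (start, activity), (nxt, _) in zip(events, events[1:]):
        totals[activity] += nxt - start
    # The final activity runs until the session ends.
    last_start, last_activity = events[-1]
    totals[last_activity] += end_time - last_start
    return dict(totals)

print(time_per_activity(events, end_time=2400))
# {'searching': 300.0, 'working': 1500.0, 'idle': 600.0}
```

Totals like these are what make it possible to compare paid task time against unpaid search time across groups of workers, such as the rural workers Savage mentions.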
The underlying issue is not about giving people access to the same level of resources, she said, but questioning whether the tools or platforms themselves are contributing to a disconnect. If the invisible workers are to remain motivated within the AI pipeline, they must feel valued. Systemic problems can be solved, but it requires the input of not only digital platform owners and employers but also the workers themselves.
“I argue that we need to understand what the values of the different stakeholders are,” Savage says. “It’s also important that we connect with the social justice theories that help us better understand the problems that they face. Through this, we can create better technological solutions that are more focused on AI for social good.”
To learn more about Savage and her continued research into the challenges that exist within the digital workforce and the AI pipeline, check out this replay of her talk. And watch all of our fall semester replays here.