Wearables Industry & AI Ethics: Promise and Peril
The wearables industry holds incredible promise for the future of human health. The power to fulfill that promise lies within the data wearables collect and the analytics and artificial intelligence that turn those data into insights. The potential of wearable technology to dramatically change lives for the better, however, goes hand in hand with the inherent dangers of large-scale data collection and analysis. That’s why wearables makers need a comprehensive and practical approach to Responsible AI.
“Properly designed and implemented wearable, nearable, and ingestible sensors, paired with well-designed artificial intelligence algorithms, have the potential to transform proactive health & wellness, and healthcare in significant ways,” says Eugene Tunik, Director of AI + Health at the Institute for Experiential AI (EAI) at Northeastern University. But, he cautions, “There is a growing fracture in trust between consumers and data-driven devices, especially fueled as tech giants reduce their ethics teams, increasing opacity on how data is used and shared.”
The use of artificial intelligence compounds these concerns by introducing risks tied to bias magnification, trustworthiness, and explainability. Unlocking the promise of wearable technology requires a thorough understanding of its risks — and a plan to navigate them.
Broad and Dynamic Concerns
Wearable technology has the potential to revolutionize the healthcare industry, and in some areas it already is doing so. However, the increasing use of wearables by healthcare providers raises significant ethical concerns about data privacy, security, and ownership. Without safeguards, this can mean opaque collection and sharing of patient data, as well as potential discrimination based on health status.
Tunik cites photoplethysmography (PPG) as an illustrative example of how bias can harm a particular population. PPG is widely used because it is an effective, low-cost technique for collecting valuable biophysical information such as blood oxygen saturation, heart rate, and blood pressure. PPG sensors can be found in wearable devices such as smartwatches as well as in familiar medical equipment like blood pressure cuffs.
“However,” he says, “individuals with darker skin have higher levels of melanin, which absorbs light. Therefore, the reflected signal in darker skinned individuals is weaker compared to the signal recorded from someone with lighter skin. The same effect occurs in people who are obese and have more adipose (fat) tissue.
“The ramifications of these biases are significant,” says Tunik. “For example, individuals with either darker skin or obesity may be less likely to receive care compared to lighter skinned or lean individuals.”
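To make the mechanism concrete, here is a minimal sketch of the effect Tunik describes: a PPG-like waveform whose reflected amplitude is reduced, as it would be by higher melanin or adipose absorption, until a naive heart-rate estimator starts mistaking sensor noise for heartbeats. The signal model, amplitudes, and thresholds are all synthetic, hypothetical values for illustration, not a real device pipeline.

```python
# Illustrative only: how a weaker reflected PPG signal can degrade
# heart-rate estimation. All values are hypothetical and synthetic;
# real PPG pipelines are far more sophisticated.
import numpy as np
from scipy.signal import find_peaks

FS = 50                       # sampling rate in Hz
t = np.arange(0, 30, 1 / FS)  # 30 seconds of signal
TRUE_HR = 72                  # true heart rate in beats per minute

def synthetic_ppg(amplitude, noise_sd=0.05, seed=0):
    """PPG-like pulse train: a sinusoid at the cardiac frequency plus sensor noise."""
    rng = np.random.default_rng(seed)
    pulse = amplitude * np.sin(2 * np.pi * (TRUE_HR / 60) * t)
    return pulse + rng.normal(0, noise_sd, t.size)

def estimate_hr(signal):
    """Naive estimator: count peaks above the signal's own standard deviation."""
    peaks, _ = find_peaks(signal, height=signal.std(), distance=int(FS * 0.4))
    return len(peaks) / (t[-1] / 60)  # detected beats per minute

for label, amplitude in [("strong reflectance", 1.0), ("weak reflectance", 0.06)]:
    hr = estimate_hr(synthetic_ppg(amplitude))
    print(f"{label}: estimated HR = {hr:.0f} bpm (true HR = {TRUE_HR})")
```

With strong reflectance, the peak detector recovers the true rate; once the pulse amplitude sinks toward the noise floor, spurious peaks inflate the estimate. This is exactly the kind of population-dependent error a Responsible AI review should surface before a product ships.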
Beyond the collection of data, companies face numerous complex ethical challenges in how that data is used and analyzed. Today, artificial intelligence is a necessary component of any product pipeline that uses data as a critical input. Wearables makers will need AI to deliver on the patient and customer experience they promise and to stay ahead of the competition, all while ensuring that product updates do not compromise their adherence to their own terms of service. Additionally, because so many wearables assist or even encourage behavior modification (as anyone who has become addicted to "getting steps in" can testify), the effect on users' lives must be factored into any new initiative.
“These devices can be capable of knowing the user in a very intimate and personal manner,” says Cansu Canca, Ethics Lead at EAI. “If there is no good explanation (which means, an understandable, accessible, and concise explanation) of how their data is safeguarded, then they should assume that their data is not safe.”
Expert in the Loop
While the wearables user is a human in the loop in the sense that they receive outputs from an AI system and supply it with data, they are not prepared to monitor the system for risks or to adjust it if harm is being done. Some wearables are developed with a specific population in mind. For instance, wearables can enable at-home care for the elderly through smart monitoring of vital signs, automatic fall alerts, and other biomarker measurements. To serve this population responsibly, wearables makers must understand how it interacts with technology and design a strategy that ensures accessibility and protects mental health.
Tunik points out that in many healthcare organizations, pre-existing governance models can be applied to data and AI capability development. “Healthcare already has a mature and widely recognized code of ethics. With the help of Responsible AI experts, many of these principles can be applied to AI,” he says.
Wearables makers must be sure that an expert in the loop is present to look out for red flags. Knowing where to place this expert, and which red flags to look for, is not easy. Teams must combine human-health expertise with data and AI expertise to build and maintain an agile framework, one that responds in real time and evolves alongside technologies and populations.
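As a hedged illustration of where such an expert might sit, the sketch below routes model outputs that are low-confidence or physiologically implausible to a human review queue. The thresholds, field names, and the `needs_expert_review` rule are hypothetical placeholders for whatever a cross-functional team would actually define.

```python
# Illustrative only: a minimal "expert in the loop" escalation rule.
# Thresholds and fields are hypothetical assumptions, not clinical guidance.
from dataclasses import dataclass

@dataclass
class Reading:
    user_id: str
    heart_rate: float        # estimated beats per minute
    model_confidence: float  # 0-1 score from the estimation model

def needs_expert_review(r: Reading) -> bool:
    """Route low-confidence or physiologically implausible outputs to a human."""
    implausible = not (30 <= r.heart_rate <= 220)
    low_confidence = r.model_confidence < 0.6
    return implausible or low_confidence

readings = [
    Reading("u1", 74.0, 0.95),   # normal: stays automated
    Reading("u2", 150.0, 0.40),  # low confidence: flagged
    Reading("u3", 260.0, 0.90),  # implausible value: flagged
]
for r in readings:
    route = "expert review" if needs_expert_review(r) else "automated"
    print(f"{r.user_id} -> {route}")
```

The point is architectural rather than algorithmic: the red-flag criteria live in one reviewable place, so clinicians and AI practitioners can tune them together as models and populations change.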
“Preserving privacy often comes with a trade-off: It might reduce health outcomes for the individual or for the target population or limit accuracy,” says Canca. “These trade-offs cannot be resolved with generic rules but rather they are case-based. The ethics expertise helps to decide on how to resolve these trade-offs and integrate these decisions into the design and development of the technology.”
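One concrete, standard instance of the trade-off Canca describes is differential privacy, sketched below on synthetic data: Laplace noise calibrated to a query's sensitivity protects any individual wearer, and the privacy parameter epsilon makes the cost to accuracy explicit. The cohort, bounds, and epsilon values here are illustrative assumptions, not recommendations.

```python
# Illustrative only: differential privacy as one concrete privacy-accuracy
# trade-off. The cohort, bounds, and epsilon values are synthetic assumptions.
import numpy as np

rng = np.random.default_rng(42)
resting_hr = rng.normal(65, 8, size=1_000).clip(40, 120)  # synthetic cohort

def dp_mean(values, epsilon, lower=40, upper=120):
    """Mean with Laplace noise calibrated to the query's sensitivity."""
    clipped = np.clip(values, lower, upper)
    sensitivity = (upper - lower) / len(clipped)  # one user's max influence
    noise = rng.laplace(0, sensitivity / epsilon)
    return clipped.mean() + noise

print(f"true mean: {resting_hr.mean():.2f} bpm")
for eps in (0.01, 0.1, 1.0):  # smaller epsilon = stronger privacy, more noise
    print(f"epsilon={eps:>4}: reported mean = {dp_mean(resting_hr, eps):.2f} bpm")
```

Smaller epsilon means stronger privacy and a noisier reported statistic; choosing where to set it is exactly the kind of case-based judgment Canca assigns to ethics expertise.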
The wearables industry is expected to grow to $186 billion by 2030. It is a broad category, spanning fitness trackers like Fitbit and Whoop, AR/VR headsets, AI-powered hearing aids, vital sign monitors, and much more. Because the purposes of these devices vary greatly, so will the approaches to addressing their ethical challenges. To understand and navigate those challenges properly, wearables makers must build cross-functional teams of experts, guided by a human-centric strategy.
Tunik believes we have arrived at an inflection point. “Collectively, all of these factors position us, as a society, on two potential tracks,” he says. “On one hand, if we don’t rein in these challenges, the technology will experience incremental advancements and uptake.
“On the other hand, we have an opportunity to reflect on how to path-correct: to collect more reliable data, to be transparent in data use, to incorporate consumers and healthcare teams into the design, manufacturing, and data sharing principles, and to solicit feedback for keeping things on track.”
For more, watch our Sensing & AI Ethics: Applications in Health webinar.
Works Cited
Hill, S. (2019, February 13). Should Big Tech Own Our Personal Data? Wired.
Sedaris, D. (2014, June 23). Stepping Out. The New Yorker.
Grand View Research. (2021, October). Wearable Technology Market Size: Industry Report, 2020-2027.
Jang, J., & Lee, E. (2019). Social, ethical and ecological issues in wearable technologies. Journal of Medical Systems.
American Physical Therapy Association. (n.d.). APTA digital health transparency campaign.
Wired. (n.d.). Wearing your intelligence: The ethics of wearable tech.
Deloitte Insights. (n.d.). Wearable technology in health care: Getting better all the time.