Responsible AI
A comprehensive approach to the ethical and responsible use of AI.
A Multidisciplinary Research Area Aiming for AI Systems that Benefit Individuals, Societies, and the Environment
Encompassing the ethical, legal, and technical aspects of developing and deploying beneficial AI technologies, our approach helps ensure that AI systems do not cause harm, interfere with human agency, discriminate, or waste resources.
Establishing Responsible AI Guidelines for Developing AI Applications and Research
Our interdisciplinary team of AI ethicists, responsible AI leaders, computer scientists, philosophers, legal scholars, sociologists, and psychologists collaborates to make meaningful progress, translate ethics into practice, and shape the future of technology. We work with academic and industry partners across domains including health, criminal justice, finance, and social media to:
Establish guidelines for AI ethics governance mechanisms
Translate abstract values into practical guiding principles
Establish ethics training for AI practitioners
Produce concrete, action-guiding tools for developing AI applications and research
How We Help Partners
We are a multidisciplinary team of experts with experience translating ethics into practice. We draw on Northeastern's strengths and existing collaborations with academic and industry partners to address the field's most pressing questions and the context-specific challenges related to fairness, bias, autonomy, diversity, transparency, explainability, and privacy.
Our work with academic and industry partners involves:
Translating abstract values into practical guiding principles
Establishing ethics training for AI practitioners
Producing concrete, actionable tools for developing AI applications