AI Ethics and Responsible AI aim to create AI systems that benefit individuals, societies, and the environment. The field encompasses the ethical, legal, and technical aspects of developing and deploying beneficial AI technologies, including ensuring that AI systems do not cause harm, interfere with human agency, discriminate, or waste resources.
AI Ethics and Responsible AI is a multidisciplinary research area in which computer scientists, philosophers, legal scholars, sociologists, psychologists, and many other experts work together to make progress and shape the future of technology.
The institute draws on Northeastern's strengths and existing interdisciplinary collaborations to translate ethics into practice. We work with academic and industry partners to establish AI ethics governance mechanisms, translate abstract values into practical guiding principles, and deliver ethics training for AI practitioners.
The body of ethics resources we are building addresses context-specific challenges related to fairness, bias, autonomy, diversity, transparency, explainability, and privacy. Drawing on a close working relationship with the Northeastern Ethics Institute, EAI produces concrete, action-guiding tools for AI development and research across domains such as health, law and criminal justice, finance, and social media.