By: Zach Winn
Note: We received an overwhelming number of questions during this webinar and will address them in a follow-up post.
In a webinar hosted by the Institute for Experiential AI (EAI) at Northeastern on April 21, three pioneers in data and AI explained how companies can use generative AI to responsibly boost productivity.
The experts cut through the hype of generative AI to demystify the technology and zero in on its real value. Along the way, they discussed its problems and gave reasons why companies need to establish frameworks for using generative AI ethically and responsibly.
Usama Fayyad, executive director of EAI, began the webinar by explaining how generative AI tools like ChatGPT work. He also outlined the many limitations of the models, noting, for instance, that humans help create and review responses as part of ChatGPT’s training process.
Fayyad also walked through possible applications of generative AI for companies in finance, law, medicine, manufacturing, and more, including simple and relatively safe uses like debugging code and extracting data from documents.
Next up was EAI senior principal research scientist Kenneth Church, who framed his presentation around a question he hears often: Will AI solve my business problem? The answer, unfortunately, is that it’s complicated. Some business use cases are more promising than others.
Church outlined the types of problems large language models are well equipped to solve and explained how companies should adopt those models.
Church also discussed the problem of hallucinations, in which large language models make things up. Such fabrications are a serious concern because models can generate alternative facts faster than people can fact-check them.
“This is a very dangerous failure mode because people are likely to believe these alternative facts,” Church said. He pointed to a mistake recently aired on the program 60 Minutes as an example. “This is a really serious error. One error like that can get your product canceled.”
Ricardo Baeza-Yates, director of research at EAI, then listed several bad use cases of generative AI, emphasizing the importance of using AI responsibly.
Baeza-Yates also gave a darkly comic example of ChatGPT’s limitations: When someone asked the model to list top AI researchers who had recently passed away, his own name appeared. Curious, Baeza-Yates asked the model follow-up questions and received further incorrect details about his supposed death.
“There is no knowledge base behind these answers, so you get these incoherent answers,” he told the audience. “The [models] not only hallucinate — they believe what you say.”
Baeza-Yates has thought deeply about how to safeguard against these problems: He recently coauthored a paper for the Association for Computing Machinery that set forth nine principles for responsibly developing and deploying algorithmic systems. In the presentation, he explained how organizations can implement those principles in their adoption of AI systems.
The presentation tied back to Baeza-Yates’ work on responsible AI at EAI, which provides expertise and training to help organizations put these ideas into practice. Last year, EAI established the world’s first AI Ethics Advisory Board, which helps institutions solve these problems.
Baeza-Yates concluded by stating that, because of the potential consequences for businesses, deploying AI responsibly is no longer optional.
To get all the insights and watch the full presentation, click here.