Tech giants on red alert. Universities reimagining core curricula. Governments fretting over a new era of automated misinformation. Writers, coders, and paralegals left out in the cold. ChatGPT can produce streams of text that appear to have been written by an intelligent person. But is the panic justified?
ChatGPT’s biggest flaws are AI’s biggest flaws: It struggles with critical and creative thinking, can’t explain its behavior, and has only a surface-level grasp of ethical conduct. This critique isn’t new. At EAI, researchers, ethicists, engineers, and scientists have been arguing for a more human-centric approach to AI—not just in the development of new language models, but also in their deployment in the real world.
“GPT is good at regurgitating well-established knowledge if it is included (with repetition across multiple documents) in its training data. For a well-chosen training corpus, ChatGPT can effectively summarize or fuse together multiple sources. But that is also its weakness, since it has little semantic understanding of what it is fusing. Where it struggles is building on multiple inquiries, applying innovative thinking to a problem, thinking critically about the answers it supplies, and tracking the sources of its information. It also lacks common-sense reasoning, which produces clearly silly statements about topics that are ‘obvious’ to humans. This emphasizes the need to keep the human in the loop as we leverage these impressive new capabilities to speed up our writing, coding, researching, and understanding of information.”
—USAMA FAYYAD, EXECUTIVE DIRECTOR
“The claim that ChatGPT is a force for democratization was also made for the internet, but the inequality of the world keeps increasing. Democratization is not uniform. Language models favor the people who have the knowledge, time, and money to use them. In fact, [ChatGPT’s] answers are not coherent across languages, meaning the knowledge is not consistent.”
—RICARDO BAEZA-YATES, DIRECTOR OF RESEARCH
“The ethical challenge isn’t just that we lack sufficient regulation, that practitioners don’t have sufficient concrete guidance, that there is a dearth of interdisciplinary AI ethics scholars who can help navigate tricky issues raised by AI, that there is insufficient training and education around AI ethics, or that many key stakeholders lack sufficient understanding of how these technologies work and how they interact with core values. The problem is all of these things.”
—JOHN BASL, ASSOCIATE PROFESSOR OF PHILOSOPHY
“It will allow for a new type of therapeutic intervention, where speech and language practice are necessary for improvement. Chatbots and voice bots might also be useful to compensate for impairments that may exclude the individual from participating in educational or vocational opportunities. The reality is that some of this technology is not quite ready for the average user and requires more tuning to ensure that responses of the AI are accurate and pertinent to the conversation. I see these capabilities as a step function change in the technology, but not as something that we need to fear or treat as if it came out of nowhere.”
—RUPAL PATEL, CORE FACULTY
“Unlike humans, large language models can ‘read’ fast enough to keep up with the onslaught of new papers in the life sciences. However, current versions are still insufficient for academic use. Principally, this is because the kind of model used by ChatGPT cannot cite its sources. Although a master at ‘summarizing’ complex bodies of literature, ChatGPT can’t tell you where it got a specific idea from, nor where you can look for more information. Not only does its inability to cite sources raise ethical considerations, it is the major barrier to more widespread adoption in the sciences. But, this will change. I predict that within the next two years, there will be large language models that can summarize data with the speed of ChatGPT and the accountability of Wikipedia. A ChatGPT that can cite its sources will change the face of research in the life sciences.”
—SAM SCARPINO, DIRECTOR OF AI + LIFE SCIENCE
“Any new technology that can affect lives brings the same danger of being misused through human ignorance and human greed. These are probably the two biggest problems that AI faces right now. Hence, it is important to educate the general public about the benefits and dangers that the use of AI can bring. It is important that regulating bodies put in place regulations that will curb how unscrupulous some organizations can be. Over the centuries, every new technology has been abused to gain power and wealth, and AI will be no exception unless we ensure that its use is regulated.”
—AYAN PAUL, POSTDOCTORAL FELLOW