The Truth About ChatGPT


Tech giants on red alert. Universities reimagining core curricula. Governments fretting over a new era of automated misinformation. Writers, coders, paralegals left out in the cold. ChatGPT’s generative capabilities produce streams of information that appear to have been written by an intelligent person. But is the panic justified?

ChatGPT’s biggest flaws are AI’s biggest flaws: It struggles with critical and creative thinking, can’t explain its behavior, and has only a surface-level grasp of ethical conduct. This critique isn’t new. At EAI, researchers, ethicists, engineers, and scientists have been arguing for a more human-centric approach to AI—not just in the development of new language models, but also in their deployment in the real world.

Here are six things to know about ChatGPT, from EAI experts.

“GPT is good at regurgitating well-established knowledge if it is included (with repetition in multiple docs) in its training data. For a well-chosen training corpus, ChatGPT is able to effectively summarize or fuse together multiple sources. But that is also its weakness, since it has little semantic understanding of what it is fusing. Where it struggles is building on multiple inquiries, applying innovative thinking to a problem, thinking critically about the answers it supplies, and tracking the sources of its information. This includes a lack of common-sense reasoning, which results in clearly silly statements about topics that are “obvious” to humans. This emphasizes the need to keep the human in the loop as we leverage these new, impressive capabilities to speed up our writing, coding, researching, and understanding of information.”

—USAMA FAYYAD, EXECUTIVE DIRECTOR

“The claim that ChatGPT is a force for democratization was also made for the internet, but the inequality of the world keeps increasing. Democratization is not uniform. Language models favor the people that have the knowledge, time, and money to use them. In fact, [ChatGPT’s] answers are not coherent across languages, meaning the knowledge is not consistent.”

—RICARDO BAEZA-YATES, DIRECTOR OF RESEARCH

“The ethical challenge isn’t just that we lack sufficient regulation, that practitioners don’t have sufficient concrete guidance, that there is a dearth of interdisciplinary AI ethics scholars who can help navigate tricky issues raised by AI, that there is insufficient training and education around AI ethics, or that many key stakeholders lack sufficient understanding of how these technologies work and how they interact with core values. The problem is all of these things.”

—JOHN BASL, ASSOCIATE PROFESSOR OF PHILOSOPHY

“It will allow for a new type of therapeutic intervention, where speech and language practice are necessary for improvement. Chatbots and voice bots might also be useful to compensate for impairments that may exclude the individual from participating in educational or vocational opportunities. The reality is that some of this technology is not quite ready for the average user and requires more tuning to ensure that responses of the AI are accurate and pertinent to the conversation. I see these capabilities as a step function change in the technology, but not as something that we need to fear or treat as if it came out of nowhere.”

—RUPAL PATEL, CORE FACULTY

“Unlike humans, large language models can ‘read’ fast enough to keep up with the onslaught of new papers in the life sciences. However, current versions are still insufficient for academic use. Principally, this is because the kind of model used by ChatGPT cannot cite its sources. Although a master at ‘summarizing’ complex bodies of literature, ChatGPT can’t tell you where it got a specific idea from, nor where you can look for more information. Not only does its inability to cite sources raise ethical considerations, it is the major barrier to more widespread adoption in the sciences. But, this will change. I predict that within the next two years, there will be large language models that can summarize data with the speed of ChatGPT and the accountability of Wikipedia. A ChatGPT that can cite its sources will change the face of research in the life sciences.”

—SAM SCARPINO, DIRECTOR OF AI + LIFE SCIENCE

“Any new technology that can affect lives brings the same danger of being misused through human ignorance and human greed. These are probably the two biggest problems that AI has as of now. Hence, it is important to educate the general public about the benefits and dangers that the use of AI can bring. It is important that regulating bodies put in place regulations that will contain how unscrupulous some organizations can be. Over the centuries, every new technology has been abused to gain power and wealth, and AI will be no exception unless we ensure that its use is regulated.”

—AYAN PAUL, POSTDOCTORAL FELLOW

 

Read more from EAI and see our events. Learn about our AI Solutions Hub, Responsible AI Services, and on-demand AI Ethics Advisory Board, and discover business opportunities.
