Recap: Language Models Are Notoriously Deceptive. Why? What Can Be Done About It? 

 

by: Tyler Wells Lynch

October 25, 2022

The dangerous thing about AI is how good it is at appearing intelligent. Deep nets and language models (LMs) display impressive degrees of linguistic and syntactic fluency, but because they lack critical features of language that are innate to humans—such as temporal logic, object association, discursive planning, and plain old common sense—they’re prone to a lot of bias and untruthfulness. 

In fact, LMs will outright fabricate information in a way that’s fluent enough for people to believe it. Fake news, conspiracy theories, and “alternative facts” are just a few of the side effects.

This tenuous grasp on truth may be the most pressing concern in LM research and perhaps even a reason to return to the drawing board. But as Ken Church, Senior Principal Research Scientist at the Institute for Experiential AI (EAI), explains, the drawing board has to incorporate an evolving landscape of cultural, ethical, and technical concerns.

In a lecture for EAI’s Fall Seminar Series, Ken introduced a new programming language called GFT (General Fine-Tuning) that makes deep nets look like a statistics package. Guided by the adage “less is better,” GFT is an accessible language that even non-programmers can use to fit a regression or classification model.
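GFT’s own syntax isn’t shown in the talk recap, but the spirit of “fitting a deep-net task the way a statistics package fits a regression” can be sketched with familiar tools. The snippet below is purely illustrative: it uses scikit-learn rather than GFT, and the example sentences and labels are made-up placeholder data.

```python
# Illustrative only: scikit-learn standing in for the "statistics package"
# style Ken describes; this is NOT GFT's actual syntax.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Placeholder training data invented for this sketch.
texts = ["great movie", "terrible plot", "loved it", "waste of time"]
labels = [1, 0, 1, 0]  # 1 = positive sentiment, 0 = negative

# One call assembles the whole "classification model": vectorize the text,
# then fit a logistic regression on top of it.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["loved the plot"]))
```

The appeal of this style is that the user declares what to fit rather than how to train it, which is the accessibility GFT aims for with deep nets.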

Ken also took aim at the current state of LM research, identifying a number of weaknesses and pitfalls, one being an overreliance on collocation in predictive language modeling. (In linguistics, collocation refers to words that frequently appear next to each other, as in “strong coffee” or “heavy rain.”) 
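To make the collocation idea concrete, here is a toy sketch (not any production system, and the tiny corpus is invented for illustration) of what purely collocation-driven prediction looks like: the “model” knows nothing except which word most often follows the current one.

```python
# Toy illustration of collocation-style next-word prediction.
from collections import Counter, defaultdict

# Made-up miniature corpus for demonstration purposes.
corpus = "heavy rain fell and strong coffee helped . heavy rain again".split()

# Count bigram collocations: how often each word follows each other word.
followers = defaultdict(Counter)
for left, right in zip(corpus, corpus[1:]):
    followers[left][right] += 1

def predict_next(word):
    """Return the most frequent collocate, with no regard for truth or context."""
    candidates = followers[word]
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("heavy"))  # -> "rain", purely from co-occurrence counts
```

Fluency here comes entirely from frequency statistics, which is why such models can produce text that sounds right without any check on whether it is true.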

“[Collocation models] hallucinate—that is, they basically make up alternative facts faster than you can fact check,” Ken explained. “And this is a dangerous problem. People are likely to believe some of the stuff they make up.”

A standard criticism of LMs involves ethical questions like bias and fairness. Ken referred to these questions as “Risk 1.0.” However, an emerging set of risks, which he dubbed “Risk 2.0,” turns on how machine learning and social media combine in a way that is addictive, dangerous, deadly, and insanely profitable.

These concerns go well beyond bias. He cited a recent example in which a reporter from the MIT Technology Review tried to talk to two Facebook directors about the plethora of hate speech and fabrications on their platform, only to be stymied and redirected to the narrower problem of biased algorithms.

“Just as you wouldn't expect the tobacco companies to try to sell fewer cigarettes,” Ken said, “you probably can't expect the companies that are making lots of money to stop making lots of money.”

So the problem of ethics in language models is not just a technical one; it’s cultural and economic as well. Reaching anything approaching a truly “intelligent” language model will require diverse, interdisciplinary approaches. That means taking seriously an evolving risk environment: a world where dangerous and addictive LMs are also highly profitable for enterprises. To help address some of these challenges, EAI created the AI Ethics Advisory Board, an on-demand, multidisciplinary team of 40+ experts in AI ethics and practice.

Ken touched upon a number of other topics in his lecture, which you can watch here. Register for upcoming AI seminars here.