ChatGPT Isn’t ‘Hallucinating’—It’s Bullshitting!

ARTIFICIAL INTELLIGENCE-AI, 29 Jul 2024

Joe Slater, James Humphries & Michael Townsen Hicks | Scientific American – TRANSCEND Media Service


It’s important that we use accurate terminology when discussing how AI chatbots make up information.

17 Jul 2024 – Right now artificial intelligence is everywhere. When you write a document, you’ll probably be asked whether you need your “AI assistant.” Open a PDF and you might be asked whether you want an AI to provide you with a summary. But if you have used ChatGPT or similar programs, you’re probably familiar with a certain problem—it makes stuff up, causing people to view things it says with suspicion.

It has become common to describe these errors as “hallucinations.” But talking about ChatGPT this way is misleading and potentially damaging. Instead call it bullshit.

We don’t say this lightly. Among philosophers, “bullshit” has a specialist meaning, one popularized by the late American philosopher Harry Frankfurt. When someone bullshits, they’re not telling the truth, but they’re also not really lying. What characterizes the bullshitter, Frankfurt said, is that they just don’t care whether what they say is true. ChatGPT and its peers cannot care, and they are instead, in a technical sense, bullshit machines.

We can easily see why this is true and why it matters. Last year, for example, one lawyer found himself in hot water when he used ChatGPT in his research while writing a legal brief. Unfortunately, ChatGPT had included fictitious case citations. The cases it cited simply did not exist.

This isn’t rare or anomalous. To understand why, it’s worth thinking a bit about how these programs work. OpenAI’s ChatGPT, Google’s Gemini chatbot and Meta’s Llama all work in structurally similar ways. At their core is an LLM—a large language model. These models all make predictions about language. Given some input, ChatGPT will make some prediction about what should come next or what is an appropriate response. It does so through an analysis of enormous amounts of text (its “training data”). In ChatGPT’s case, the initial training data included billions of pages of text from the Internet.

From those training data, the LLM predicts, from some text fragment or prompt, what should come next. It will arrive at a list of the most likely words (technically, linguistic tokens) to come next, then select one of the leading candidates. Allowing for it not to choose the most likely word each time allows for more creative (and more human-sounding) language. The parameter that sets how much deviation is permitted is known as the “temperature.” Later in the process, human trainers refine predictions by judging whether the outputs constitute sensible speech. Extra restrictions may also be placed on the program to avoid problems (such as ChatGPT saying racist things), but this token-by-token prediction is the idea that underlies all of this technology.
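The temperature mechanism described above can be sketched in a few lines of Python. This is a simplified illustration, not how any particular chatbot is implemented: the toy vocabulary, scores and function name are invented for the example, and real systems work over tens of thousands of tokens with scores produced by a neural network. The idea is the same, though: scores are converted to probabilities, and the temperature controls how often a less likely token gets picked.

```python
import math
import random

def sample_next_token(scores, temperature=1.0, rng=None):
    """Pick the next token from a {token: score} dict.

    Dividing scores by the temperature before converting them to
    probabilities (a "softmax") controls how much deviation from the
    top candidate is permitted: near zero, the most likely token is
    chosen almost every time; higher values flatten the distribution
    and allow more varied, "creative" choices.
    """
    rng = rng or random.Random(0)
    scaled = {tok: s / temperature for tok, s in scores.items()}
    # Softmax: turn scores into probabilities (shift by max for stability).
    m = max(scaled.values())
    exps = {tok: math.exp(s - m) for tok, s in scaled.items()}
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}
    # Draw one token according to those probabilities.
    r = rng.random()
    cumulative = 0.0
    for tok, p in probs.items():
        cumulative += p
        if r < cumulative:
            return tok
    return tok  # fallback for floating-point rounding

# Toy scores a model might assign after "The cat sat on the ..."
scores = {"mat": 5.0, "sofa": 4.0, "moon": 1.0}
low_t = [sample_next_token(scores, 0.1, random.Random(i)) for i in range(100)]
high_t = [sample_next_token(scores, 2.0, random.Random(i)) for i in range(100)]
```

At a low temperature the sampler almost always returns "mat", the top candidate; at a high temperature "sofa" and even "moon" show up regularly. Nothing in this procedure consults the world — it only consults the scores.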

Now, we can see from this description that nothing about the modeling ensures that the outputs accurately depict anything in the world. There is not much reason to think that the outputs are connected to any sort of internal representation at all. A well-trained chatbot will produce humanlike text, but nothing about the process checks that the text is true, which is why we strongly doubt an LLM really understands what it says.

So sometimes ChatGPT says false things. In recent years, as we have become accustomed to AI, people have started to refer to these falsehoods as “AI hallucinations.” While this language is metaphorical, we think it’s not a good metaphor.

Consider Shakespeare’s paradigmatic hallucination in which Macbeth sees a dagger floating toward him. What’s going on here? Macbeth is trying to use his perceptual capacities in his normal way, but something has gone wrong. And his perceptual capacities are almost always reliable—he doesn’t usually see daggers randomly floating about! Normally his vision is useful in representing the world, and it is good at this because of its connection to the world.

Now think about ChatGPT. Whenever it says anything, it is simply trying to produce humanlike text. The goal is simply to make something that sounds good. This is never directly tied to the world. When it goes wrong, it isn’t because it hasn’t succeeded in representing the world this time; it never tries to represent the world! Calling its falsehoods “hallucinations” doesn’t capture this feature.

Instead we suggest, in a June report in Ethics and Information Technology, that a better term is “bullshit.” As mentioned, a bullshitter just doesn’t care whether what they say is true.

So if we do regard ChatGPT as engaging in a conversation with us—though even this might be a bit of a pretense—then it seems to fit the bill. As much as it intends to do anything, it intends to produce convincing humanlike text. It isn’t trying to say things about the world. It’s just bullshitting. And crucially, it’s bullshitting even when it says true things!

Why does this matter? Isn’t “hallucination” just a nice metaphor here? Does it really matter if it’s not apt? We think it does matter for at least three reasons:

First, the terminology we use affects public understanding of technology, which is important in itself. If we use misleading terms, people are more likely to misconstrue how the technology works. We think this in itself is a bad thing.

Second, how we describe technology affects our relationship with that technology and how we think about it. And this can be harmful. Consider people who have been lulled into a false sense of security by “self-driving” cars. We worry that talking of AI “hallucinating”—a term usually used for human psychology—risks anthropomorphizing the chatbots. The ELIZA effect (named after a chatbot from the 1960s) occurs when people attribute human features to computer programs. We saw this in extremis in the case of the Google employee who came to believe that one of the company’s chatbots was sentient. Describing ChatGPT as a bullshit machine (even if it’s a very impressive one) helps mitigate this risk.

Third, if we attribute agency to the programs, this may shift blame away from those using ChatGPT, or its programmers, when things go wrong. If, as appears to be the case, this kind of technology will increasingly be used in important matters such as health care, it is crucial that we know who is responsible when things go wrong.

So, next time you see someone describing an AI making something up as a “hallucination,” call it bullshit!

_____________________________________________________

Joe Slater is a lecturer in moral and political philosophy at the University of Glasgow.

James Humphries is a lecturer in political theory at the University of Glasgow.

Michael Townsen Hicks is a lecturer in philosophy of science and technology at the University of Glasgow.

Go to Original – scientificamerican.com

