
Are LLMs Liars, Hallucinators, or Just Bullshitters?

[Image: a bull in a llama costume looking back at you]

Harry Frankfurt’s 2005 book, On Bullshit, defines “bullshit” as speech intended to persuade without any regard for the truth. Frankfurt, a Professor Emeritus of Philosophy at Princeton, distinguishes bullshit from lying: liars care about concealing the truth, whereas bullshitters simply don’t care whether their statements are true or false.

Michael Townsen Hicks, a postdoctoral researcher at the University of Glasgow, and his co-authors apply Frankfurt’s concepts to generative AI in their colorfully titled paper, “ChatGPT is Bullshit”. The authors argue that characterizing the outputs of LLMs as bullshit, rather than as hallucinations, more accurately reflects the nature of these systems and offers better guidance for public and policy understanding. Hicks et al. distinguish between “hard” and “soft” bullshit: the former involves an intention to deceive the audience about the nature of the enterprise, while the latter involves mere indifference to truth. They argue that LLMs produce soft bullshit at a minimum, and potentially hard bullshit if one considers the design and purpose behind their creation.

In this episode, Steven Baker, a Technology Advisor with AIFoundry.org, and I talk about why LLMs bullshit, the problems this poses, and what can be done about it. It’s a really fun episode, and I highly recommend watching the video above.

If you’d like to catch some of our future podcasts, subscribe to our calendar below: