Large Language Models and Their Big Bullshit Potential

Powerful large language models (LLMs) have recently burst onto the scene, with applications across a wide range of functions, including chat, customer service, and internet search. Accordingly, we can expect to encounter artificially generated text in growing volumes and with growing frequency. Yet some commentators have complained that LLMs are essentially bullshitting: generating convincing outputs with no regard for the truth. If correct, that would make them distinctively dangerous discourse participants, since bullshitters undermine not only the norm of truthfulness (by saying false things) but also the very value of truth itself (by treating it as entirely irrelevant). So can LLMs really be bullshitting? In addressing this question, and answering it in the affirmative, I will arrive at a definition of bullshitting that improves on existing approaches in the philosophical literature.