AI is bullshit

What is being sold to us all is bullshit and bullshit is now coming at us from more angles.

The bullshit bombardment is broadening.


On Monday (June 10, 2024) we discussed research that points to modest expectations of AI gains.

Earlier (here and here), we also commented on the concept of 'bullshit' and the academic definition of that word.

Well, in an academic paper posted on Friday (June 8, 2024), three academics have now also concluded that 'bullshit' is a better word than 'hallucinations' for describing AI 'mistakes'.

('Hallucinations' is the word that has become the norm for AI's nonsense, mistakes, or - sometimes - really dangerous answers.)

From Michael Townsen Hicks, James Humphries and Joe Slater's paper:

"We argue that these falsehoods, and the overall activity of large language models, is better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs. We distinguish two ways in which the models can be said to be bullshitters, and argue that they clearly meet at least one of these definitions. We further argue that describing AI misrepresentations as bullshit is both a more useful and more accurate way of predicting and discussing the behaviour of these systems."

Business Implications


While these types of discussions and analyses may seem trivial, we have to recognise that almost everyone - including most organisations - is now using (or starting to use) AI extensively.

So, there is danger.

You still have to be very careful with the information you receive from these services.

How careful?

Well, we all know that guy, Joe, in the office who is both a font of wisdom and a real bullshitter when it suits his peculiar sense of humour to be one.

So, this is exactly how careful we should be: as careful as we are when Joe gives us an answer.

Wikipedia warns us too


Or, to be even more exact in terms of what this may mean, let's look at Wikipedia again:

"In philosophy and psychology of cognition, the term "bullshit" is sometimes used to specifically refer to statements produced without particular concern for truth, clarity, or meaning, distinguishing "bullshit" from a deliberate, manipulative lie intended to subvert the truth."

The Wikipedia article then states:

"In business and management, guidance for comprehending, recognizing, acting on and preventing bullshit, are proposed for stifling the production and spread of this form of misrepresentation in the workplace, media and society."

Keep all of this in mind.

We are dealing with supremely 'intelligent' bullshitters when we deal with modern AI services.


[Ed's note] The brilliant Hannah Arendt captures the risks of this condition so perfectly in her 1971 piece Lying in Politics, where in part she wrote:

"There always comes the point beyond which lying becomes counterproductive. This point is reached when the audience to which the lies are addressed is forced to disregard altogether the distinguishing line between truth and falsehood in order to be able to survive. Truth or falsehood—it does not matter which any more, if your life depends on your acting as though you trusted; truth that can be relied on disappears from public life and with it the chief stabilizing factor in the ever-changing affairs of men." - Hannah Arendt