How LLM hallucinations can be used by marketers

“LLMs are concept retrieval systems, not fact retrieval systems.” - Tim Hwang

The always insightful Substack account 'Why is this interesting?' recently posted a fascinating video of a talk given by Tim Hwang.

Tim's talk was about large language models (LLMs): what they are good at, and what they are not so good at.

His argument is that the very thing LLMs have become somewhat ridiculed for - making things up, commonly known as hallucinating - is actually a feature of the technology, one that makes them very useful to marketers and those who build brands for a living.

Even though LLMs are currently being used by Silicon Valley for things like search, in reality, and somewhat ironically, this technology is not very good at dealing with facts and logic.

It is, however, quite good at handling concepts and combining them in interesting ways - and that is where it becomes a useful tool for marketers to play around with.
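To make that a little more concrete, here is a minimal sketch of what "combining concepts" could look like in practice, using the official OpenAI Python client. The model name, the brand, and the pair of concepts being blended are all illustrative placeholders of mine, not anything from Tim's talk.

```python
# A minimal sketch of using an LLM for concept blending rather than fact retrieval.
# Assumes the official OpenAI Python client (`pip install openai`) and an
# OPENAI_API_KEY set in the environment; model name and prompts are illustrative.
from openai import OpenAI

client = OpenAI()

def blend_concepts(brand: str, concept_a: str, concept_b: str) -> str:
    """Ask the model to mash two unrelated concepts together for a brand.

    The prompt deliberately leans on the model's associative strengths and
    steers it away from the factual claims it might hallucinate.
    """
    prompt = (
        f"You are a brand strategist for {brand}. "
        f"Combine the concepts of '{concept_a}' and '{concept_b}' into "
        f"three short, unexpected campaign ideas. No statistics or claims "
        f"that would need fact-checking - just creative directions."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any chat model would do
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,  # a higher temperature encourages looser associations
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(blend_concepts("a running-shoe brand", "urban gardening", "retro arcades"))
```

The point is in the framing of the prompt: nothing in it asks the model for verifiable facts, only for associations between concepts - which, on Tim's argument, is the part of the technology that actually works.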

Here's Tim's talk in full:


More:

"The Hallucinations Edition: On AI, creativity, and what computers are and aren’t good at" - Why is this interesting?