The LLMentalist Effect: how chat-based Large Language Models replicate the mechanisms of a psychic's con
In "The LLMentalist Effect," Baldur Bjarnason explores how chat-based large language models (LLMs) mirror the mechanisms of a psychic's con. Despite the common belief that these models possess intelligence, Bjarnason argues that they operate through statistical tricks akin to the cold reading techniques used by mentalists. He suggests that the illusion of intelligence is largely projected by users themselves rather than being an intrinsic property of the models, drawing parallels with psychic scams that exploit subjective validation and the Forer effect. Bjarnason is skeptical about the reliability and ethical implications of LLMs, cautioning against their use in critical domains because of their propensity for error and the misleading impression of intelligence they create.