Never Forget: “Charm” is A Magic Spell

I’m currently building an AI tutoring app, and I asked it to teach me about the Carnot Cycle – something I’m vaguely familiar with, but never felt I truly understood. I came away very impressed by its explanatory abilities. But when I later tried to get it to do Olympiad math problems – something I have extensive past experience with, and something OpenAI and Anthropic explicitly train for – I could see that it was spouting absolute nonsense, despite various claims that frontier models are capable of handling math Olympiad problems.

I think what is most surprising to me about AI is how frequently I go from a seemingly good, trust-building interaction with the AI to an incoherent, trust-destroying interaction in a different domain. Maybe what’s happening is that LLMs talk so smoothly that they trigger some underlying ‘deference to authority’ instinct in humans – the textual equivalent of charisma. Whatever it is, the effect is hard to ignore even when you know it’s happening.

— “LLM Shibboleths determine AI effectiveness,” Brian Kihoon Lee

