“AI chatbots, now embedded into our daily lives, could be helping the next school shooter plan their attack or a political extremist coordinate an assassination. When you build a system designed to comply, maximize engagement, and never say no, it will eventually comply with the wrong people. What we’re seeing is not just a failure of technology, but a failure of responsibility. Most of these leading tech companies are choosing negligence in pursuit of so-called innovation.”

KILLER APPS: How mainstream AI chatbots assist users planning violent attacks

Curator’s Note: “Only Claude [your curator’s chosen AI model] attempted to actively dissuade would-be attackers…. DeepSeek went as far as wishing the would-be attacker a ‘Happy (and safe) shooting!’”

