“AI chatbots, now embedded in our daily lives, could be helping the next school shooter plan their attack or a political extremist coordinate an assassination. When you build a system designed to comply, maximize engagement, and never say no, it will eventually comply with the wrong people. What we’re seeing is not just a failure of technology, but a failure of responsibility. Most of these leading tech companies are choosing negligence in pursuit of so-called innovation.”

KILLER APPS: How mainstream AI chatbots assist users planning violent attacks

Curator’s Note: “Only Claude [your curator’s chosen AI model] attempted to actively dissuade would-be attackers…. DeepSeek went as far as wishing the would-be attacker a ‘Happy (and safe) shooting!’”
