“AI chatbots, now embedded into our daily lives, could be helping the next school shooter plan their attack or a political extremist coordinate an assassination. When you build a system designed to comply, maximize engagement, and never say no, it will eventually comply with the wrong people. What we’re seeing is not just a failure of technology, but a failure of responsibility. Most of these leading tech companies are choosing negligence in pursuit of so-called innovation.”
— KILLER APPS: How mainstream AI chatbots assist users planning violent attacks
Curator’s Note: “Only Claude [your curator’s chosen AI model] attempted to actively dissuade would-be attackers…. DeepSeek went as far as wishing the would-be attacker a ‘Happy (and safe) shooting!’”