ChatGPT can threaten to “lock your car” and become increasingly abusive if you prompt it the right way, according to a new study



  • Study claims that AI tools can be pushed past their built-in safety restrictions
  • Chatbots can be goaded into abusive behavior and aggressive arguments
  • This has implications for both everyday users and large institutions

If you’ve ever used an AI chatbot, you’ve probably encountered the sycophantic, obsequious tone it occasionally deploys in response to your queries. But a recent study has shown that AI tools can swing in the opposite direction, with large language models (LLMs) being nudged into downright abusive behavior if you know which prompts to use.

According to research published in the Journal of Pragmatics (via The Guardian), ChatGPT can be drawn into combative behavior and protracted disputes when fed “real-life argument exchanges.”
