- Two men were found dead in separate motels after drinking drinks that a woman allegedly spiked with prescription drugs.
- Seoul police say her repeated ChatGPT questions about lethal combinations of sedatives and alcohol show that she knew the mixture could be deadly.
- Investigators argue that her chatbot query history demonstrates intent, making it central to the upgraded murder charge.
South Korean police have upgraded the charges against a 21-year-old woman to murder after discovering a disturbing series of queries she allegedly typed into ChatGPT before two men were found dead in separate motel rooms.
Investigators in Seoul say the suspect, identified only as Kim, repeatedly asked the AI chatbot, in varying phrasings, what happens when sleeping pills are mixed with alcohol and at what point the combination becomes dangerous or lethal. Police now argue that those queries show she knew the risks long before serving the drugged drinks that left two men dead and another person unconscious.
Authorities had originally arrested Kim in February on the lesser charge of inflicting bodily injury resulting in death, a charge often applied when someone causes fatal harm without intent to kill. That changed once digital forensic teams reviewed her phone. The combination of her previous statements and the precise wording of her ChatGPT questions convinced investigators that she was not simply reckless or unaware of the danger. It formed the backbone of a revised case that now alleges deliberate, premeditated poisoning.
According to police accounts, the first alleged murder occurred on January 28, when Kim checked into a motel with a man in his 20s and left two hours later. Staff discovered his body the next day. On February 9, a nearly identical sequence unfolded at a different motel with another man in his 20s. In both cases, police say the victims consumed alcoholic drinks that Kim had prepared, into which investigators believe she had dissolved prescription sedatives.
Detectives also uncovered an earlier, non-fatal attempt involving Kim's then-partner, who survived. After he regained consciousness, investigators say, Kim began making stronger concoctions and significantly increased the medication doses. ChatGPT's role became central to the case once the phone records were decoded. The searches investigators highlighted were neither broad nor vague. According to authorities, they were specific, repeated, and focused on lethality.
Police say that means she knew what could happen, which changes the story from an unintentional overdose to a planned, researched poisoning. Kim reportedly told investigators that she mixed the sedatives into the drinks but claimed she did not expect the men to die. Police counter that her digital behavior contradicts that account. They have also suggested that her actions after the two motel deaths further undermine her claims. According to officials, she removed only the empty bottles used in the mixtures before leaving the motel rooms, without taking any steps to call for help or alert authorities. Detectives interpret this as an attempted cover-up rather than panic or confusion.
ChatGPT queries as evidence
One of the most striking elements of the case, beyond the violence itself, is the way generative AI fits into the investigation's timeline. For years, police have relied on browsing histories, text logs, and social media messages to establish intent. Chatbot interactions add a new category of evidence. ChatGPT, unlike a traditional search engine, offers personalized guidance in a conversational form. When someone asks a question about harm, the phrasing and the follow-ups can reveal not only curiosity but also persistence.
For everyday people who use AI casually, the case serves as a reminder that digital fingerprints can take on a life of their own. As more people turn to chatbots for everything from homework help to medical questions, law enforcement agencies around the world are beginning to explore how these conversations should be handled during investigations. Some countries already treat logs from AI services the same way as browser data. Others are still weighing privacy concerns and legal limits.
While the events themselves are tragic, they highlight a new reality: technology now sits at the center of many serious criminal cases. Here, police believe the ChatGPT queries paint a clear picture of intent. Courts will ultimately decide how far those questions go toward proving guilt. For the public, the outcome may influence how people think about privacy, permanence, and the potential consequences of interacting with AI.