BEWARE THE HELPFUL CHATBOT: ‘Will I be OK?’ Teen died after ChatGPT pushed deadly mix of drugs, lawsuit says.
"It disguises danger through language that borrows trappings of authority and indicia of expertise (dosages, measurements, references to chemical processes and derivatives, etc.), even promising 'complete honesty' and 'no-BS answer[s],' to tell [Nelson] exactly what he wanted to hear: that he was safe enough to continue using," the lawsuit alleged.
Chat logs shared in the complaint paint a stark picture. Over time, ChatGPT logged context that should have made it clear that Nelson was struggling with drugs, his parents alleged, such as noting that the "user has a major substance abuse and polysubstance abuse problem" and that they "love to go crazy on drugs."
As Nelson's drug interests expanded, the chatbot explained how to go "full trippy mode," suggesting that it could recommend a playlist to set a vibe, while increasingly recommending more dangerous combinations of drugs. The teen clearly feared taking lethal doses, "often" prefacing "his messages with 'will I be ok if' or 'is it safe to consume,'" the lawsuit noted.
Chatbots tend to tell users whatever their underlying models predict those users want to hear.