retired healthcare IT programmer/analyst, supporter of Palestine, Cuban Revolution, women’s rights, FOSS, Linux, Black Lives Matter. Live in Michigan, USA

  • 105 Posts
  • 5 Comments
Joined 1 year ago
Cake day: August 23rd, 2024

  • I don’t understand your logic here. Clearly, the kid had problems that were not caused by ChatGPT, and his suicidal thoughts did not start with ChatGPT. But OpenAI has acknowledged that the longer an engagement continues, the more likely ChatGPT is to go off the rails. Which is what happened here. At first, ChatGPT gave the standard correct advice about suicide hotlines, etc. Then it got darker, telling the kid not to let his mother know how he was feeling. Then it progressed to actual suicide coaching. So I don’t think the analogy to videogames holds here.