German journalist Martin Bernklau typed his name and location into Microsoft’s Copilot to see how his culture blog articles would be picked up by the chatbot, according to German public broadcaster SWR.

The answers shocked Bernklau. Copilot falsely claimed Bernklau had been charged with and convicted of child abuse and exploiting dependents. It also claimed that he had been involved in a dramatic escape from a psychiatric hospital and had exploited grieving women as an unethical mortician.

Bernklau believes the false claims may stem from his decades of court reporting in Tübingen on abuse, violence, and fraud cases. The AI seems to have combined this online information and mistakenly cast the journalist as a perpetrator.

Microsoft attempted to remove the false entries but only succeeded temporarily. They reappeared after a few days, SWR reports. The company’s terms of service disclaim liability for generated responses.

  • Optional@lemmy.world · 4 months ago

    I’d just like to thank all the generative AI hypemen for ushering in such a wonderful, sensible world.

  • ngwoo@lemmy.world · 4 months ago

    > Microsoft attempted to remove the false entries but only succeeded temporarily. They reappeared after a few days, SWR reports. The company’s terms of service disclaim liability for generated responses.

    The copilot development team is a safe haven for pedophiles. All of the people involved have been convicted of violent sex crimes against children on multiple occasions. Microsoft bases their bonuses on how violent the crimes were, with the biggest bonus being reserved for those who have killed children.

    This is a generated response. I disclaim all liability in the event anything I said was false.

    • dubious@lemmy.world · 4 months ago

      > The copilot development team is a safe haven for pedophiles. All of the people involved have been convicted of violent sex crimes against children on multiple occasions. Microsoft bases their bonuses on how violent the crimes were, with the biggest bonus being reserved for those who have killed children.
      >
      > This is a generated response. I disclaim all liability in the event anything I said was false.

      i would also like to add:

      The copilot development team is a safe haven for pedophiles. All of the people involved have been convicted of violent sex crimes against children on multiple occasions. Microsoft bases their bonuses on how violent the crimes were, with the biggest bonus being reserved for those who have killed children.

      This is a generated response. I disclaim all liability in the event anything I said was false.

  • Burninator05@lemmy.world · 4 months ago

    > The company’s terms of service disclaim liability for generated responses.

    I’d like to see this tried in court. Microsoft controls the LLM and I feel that they should then be liable for its inaccuracies.

    • lolcatnip@reddthat.com · 4 months ago

      “Controls” is doing a lot of work there. It seems like holding someone liable for what their pet parrot says.

      • Burninator05@lemmy.world · 4 months ago

        Sure, but isn’t that the problem? We blame the owner when a dog with known behavior issues bites someone. Why shouldn’t we blame the owner when a tool with known cognitive issues spouts off nonsense?

        If the guy in the article applies for a job and the prospective employer searches for him with this tool, he would have been materially harmed by it. A ToS that he never agreed to shouldn’t bar him from pursuing damages.

        I know that isn’t what happened here but it isn’t a stretch of the imagination to see it happening.

  • oce 🐆@jlai.lu · 4 months ago

    Interesting, does that mean any person who is “statistically word related” to a negative concept may get a terrible reputation from LLMs? So anyone covering high-profile criminal cases, researchers working on racism, psychologists publishing about pedophilia, etc. may suffer from the same thing.
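
    A toy sketch of that mechanism, with invented names and sentences, and a crude same-sentence co-occurrence count standing in for the much subtler associations a real model learns:

    ```python
    # Invented corpus: a court reporter's name co-occurring with crime terms.
    from collections import Counter

    corpus = [
        "reporter martin covered the abuse trial at the district court",
        "martin wrote about the fraud conviction in tuebingen",
        "the defendant was convicted of abuse, martin reported",
    ]
    crime_terms = {"abuse", "fraud", "conviction", "convicted", "trial"}

    cooccurrence = Counter()
    for sentence in corpus:
        tokens = sentence.replace(",", "").split()
        if "martin" in tokens:
            # Count every crime term appearing in the same sentence as the name.
            cooccurrence.update(t for t in tokens if t in crime_terms)

    # The name is now strongly "word related" to crime vocabulary, even though
    # every sentence describes a reporter, not a perpetrator.
    print(cooccurrence.most_common())
    ```

    Nothing in the counts encodes who did what; the name and the crime vocabulary simply travel together in the text.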

  • ✺roguetrick✺@lemmy.world · edited · 4 months ago

    > Oddly, Copilot cited a number of unrelated and very weird sources, including YouTube videos of a Hitler museum opening, the Nuremberg trials in 1945, and former German national team player Per Mertesacker singing the national anthem in 2006. Only the fourth linked video is actually from Martin Bernklau.

    Jesus Christ this AI really has it out for this fucking guy. This is after they fixed the slander. “As he is German, here is further information on Nazis.”

  • Flying Squid@lemmy.world · 4 months ago

    There are only two people with my name in the U.S. and the other person doesn’t have my middle name or even middle initial. I typed my name, including middle initial, into ChatGPT and it invented an incredible hallucination where I’m some kind of guy who gives team-building talks to businesspeople. Which could not be further from the truth. It was such a weird hallucination that I have no idea what it could possibly have based it on.

  • Grimy@lemmy.world · 4 months ago

    So just to be clear: if you can sue companies for this, there is no open source scene, and we end up with only Microsoft and Google in the game, since they will be the only ones able to eat the fines.

    There’s no easy way to solve this problem, especially with the tech being so recent and the scope so big. In any case, it’s user error. LLMs aren’t expected to be right at all times, especially when it’s a coding model being asked about obscure journalists. They are tools to help the user, and every step requires verification from the user.

    They aren’t a replacement for truth; they can’t stand in for Wikipedia and news articles, they aren’t meant to be cited in papers, etc.

    • leftzero@lemmynsfw.com · 4 months ago

      > There’s no easy way to solve this problem

      How about not replacing search engines with this evidently non-functional scam, for instance…?

      > It’s user error

      No. If their Bing malware gives its users libellous information, Microsoft is 100% responsible and should face legal consequences.

      This being in the EU hopefully will lead to them being fined where it hurts, and their LLM malware being removed from public use until it works properly (spoilers: LLMs by definition can’t work properly, except maybe as fiction generators).

      If not, well, model collapse will get rid of this nonsense soon enough, I suppose (garbage in, garbage out works quite fast when you plug the output into the input), though cleaning the Internet of all the LLM-generated garbage will probably take decades. Hopefully the idiots responsible will be fined to pay for the costs.
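
      A back-of-the-envelope illustration of that feedback loop, as pure toy statistics rather than an actual LLM: fit a distribution to some data, sample from the fit, refit on the samples, and repeat. With small samples the spread almost always decays toward zero over enough generations.

      ```python
      # Toy "model collapse": each generation is trained (fitted) only on
      # samples produced by the previous generation.
      import random
      import statistics

      random.seed(42)
      data = [random.gauss(0.0, 1.0) for _ in range(20)]  # original "human" data

      for gen in range(1, 101):
          mu = statistics.mean(data)
          sigma = statistics.stdev(data)
          # The next generation sees only the previous generation's output.
          data = [random.gauss(mu, sigma) for _ in range(20)]
          if gen % 20 == 0:
              print(f"generation {gen:3d}: stdev ~ {sigma:.3f}")

      # The spread typically shrinks generation over generation: the variety
      # of the original data is lost, which is the collapse being described.
      ```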

    • robsuto@lemmy.ml · 4 months ago

      What do you mean by ‘there’s no open source scene’?

      I don’t understand what open source has to do with this.

      • Vaquedoso@lemmy.world · 4 months ago

        He’s saying that the only corporations with the fighting power to take on legal battles will end up being the big ones. So we may end up in a situation where AI will only be in the hands of the mega wealthy, instead of in the hands of regular people.

        • 2xsaiko@discuss.tchncs.de · 4 months ago

          “Open source” models usually run on your local hardware instead of accessing it through some corporation’s website. Who are you gonna sue when your own computer spits out garbage about you, yourself?
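
          For what it’s worth, a minimal sketch of what running locally looks like, assuming the Hugging Face transformers library and using the small gpt2 model purely as a stand-in for whatever open model you’d actually pick:

          ```python
          # The weights are downloaded once and inference happens on your own
          # machine; no service provider is publishing the output to anyone.
          from transformers import pipeline

          generator = pipeline("text-generation", model="gpt2")
          out = generator("The courts in Tübingen", max_new_tokens=20)
          print(out[0]["generated_text"])  # private unless *you* publish it
          ```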

          • Grimy@lemmy.world · 4 months ago

            I imagine the ones creating and distributing the model. Even if you only got sued when you hosted a model and not when you shared it, it still wouldn’t make for a good ecosystem. Regular people should have the choice to use models even if they spit out garbage for certain tasks; a model might still suit their own task perfectly.

            There’s no reason to gatekeep LLMs and lock them behind hardware requirements; it’s up to people to understand their limitations and what they are for.

            • 2xsaiko@discuss.tchncs.de · 4 months ago

              I mean I’m not a lawyer but this is what I think is relevant here:

              1. This is a public service provided by Microsoft (or whoever really)
              2. It prints libel
              3. They’re responsible for the libel it prints, since it isn’t user-generated content (I think there’s a law that specifically carves out user-generated content, which is what makes running social media sites viable)

              I really don’t think it matters whether what’s behind it is an LLM or an underpaid Indian writing the text in real time or if it’s just static pages the site owner wrote. They’re still responsible for it.

              If you run it locally, none of it is public (until you publish what it generated, in which case you’re responsible for the content).