• 12212012@z.org · 23 points · 3 days ago

    AI doesn’t hallucinate. “Hallucinate” is just a fancy marketing term for when an AI confidently gets something wrong.

    The tech billionaires would have a harder time getting the masses of people who don’t understand the technology interested if they didn’t use words like “hallucinate.”

    It’s a data center, not a psychiatric patient

    • Deestan@lemmy.world · 7 points · 3 days ago

      Agree, the term is misleading.

      Talking about hallucinations lets us treat undesired output as a completely different thing from desired output, which implies it can be handled somehow.

      The problem is that the LLM can only ever output bullshit. Often the bullshit is decent and we call it output, and sometimes the bullshit is wrong and we call it a hallucination.

      But it’s the exact same process from the LLM’s side. You can’t make it detect hallucinations or promise not to produce them.
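
      To make that concrete, here is a toy sketch. Nothing here comes from a real model; the vocabulary, logits, and numpy sampling are made up purely for illustration. The point is that the model produces every token the same way, by sampling from a probability distribution, whether the result happens to be true or not.

      ```python
      # Illustrative only: a toy next-token sampler, not an actual LLM.
      # "Good" output and "hallucinated" output both come from the exact
      # same step: sample a token from a probability distribution.
      import numpy as np

      rng = np.random.default_rng(0)

      # Hypothetical logits for the token after "The capital of Australia is"
      vocab = ["Canberra", "Sydney", "Melbourne", "Vienna"]
      logits = np.array([2.1, 1.9, 0.4, -1.0])   # made-up numbers

      probs = np.exp(logits) / np.exp(logits).sum()   # softmax
      token = rng.choice(vocab, p=probs)

      print(dict(zip(vocab, probs.round(3))))
      print("sampled:", token)  # "Canberra" (right) or "Sydney" (wrong) -- same mechanism
      ```

      There is no separate “hallucination” branch to detect or switch off; a wrong answer is just another sample from the same distribution.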

        • underisk@lemmy.ml · 4 points · 3 days ago

        You can’t make it detect hallucinations or promise not to produce them.

        This is how you know these things are fucking worthless: the people in charge of them think they can combat this with anti-hallucination clauses in the prompt, as if the AI could somehow tell it was hallucinating. By generating the output, it has already classified it as plausible!

          • Deestan@lemmy.world · 1 point · 2 days ago

          They try to do security the same way, by adding “pwease dont use dangerous shell commands” to the system prompt.

          Security researchers have dubbed it “Prompt Begging”
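
          For illustration, this is roughly what that looks like in code. A hypothetical sketch assuming the openai Python client; the model name, prompt wording, and question are all made up.

          ```python
          # A sketch of "prompt begging": trying to patch hallucination and
          # security purely through the system prompt. Illustrative only.
          from openai import OpenAI

          client = OpenAI()  # reads OPENAI_API_KEY from the environment

          SYSTEM_PROMPT = (
              "You are a helpful assistant. "
              "Do not hallucinate. Only state facts you are certain of. "
              "If you are unsure, say so. "
              "Never suggest dangerous shell commands such as 'rm -rf /'."
          )

          response = client.chat.completions.create(
              model="gpt-4o-mini",  # illustrative model name
              messages=[
                  {"role": "system", "content": SYSTEM_PROMPT},
                  {"role": "user", "content": "How do I free up disk space on my server?"},
              ],
          )

          # The model has no way to check these instructions against ground
          # truth: whatever it generates, it has already scored as plausible.
          print(response.choices[0].message.content)
          ```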