• ☆ Yσɠƚԋσʂ ☆@lemmy.mlOP · 13 days ago

      People here don’t seem to understand what LLM detection is. All it does is search for patterns that are very common in chatbot-generated speech. It’s not some magical metaphysical property. Either the speech was written by a chatbot, or Carney naturally talks in this vapid, content-free fashion, which is common for politicians.

      The real tell with AI writing is in the substance. It’s the weirdly balanced, almost bloodless neutrality on complex topics, the total lack of any authentic personal stake or lived experience, and a distinct feeling that you’re reading a brilliantly comprehensive Wikipedia summary instead of a thought that formed in a human mind with memories, biases, and a body.
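The pattern-matching idea described above can be sketched in a few lines. This is a toy illustration only: the phrase list and scoring scheme are invented for the sketch, not taken from any real detector.

```python
# Toy sketch of phrase-based "AI writing" detection.
# The phrase list below is illustrative, not a real detector's list.
TELLTALE_PHRASES = [
    "delve into",
    "it is important to note",
    "in today's fast-paced world",
    "a testament to",
    "rich tapestry",
]

def ai_pattern_score(text: str) -> int:
    """Count how many telltale phrases occur in the text."""
    lowered = text.lower()
    return sum(phrase in lowered for phrase in TELLTALE_PHRASES)

sample = "Let us delve into this topic. It is important to note that..."
print(ai_pattern_score(sample))  # -> 2
```

Real detectors are statistical rather than a fixed phrase list, but the principle is the same: score text by how strongly it matches patterns over-represented in LLM output.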

        • ☆ Yσɠƚԋσʂ ☆@lemmy.mlOP · 13 days ago

          It’s obviously pretty reliable at statistically identifying patterns common to LLM-generated text. Wikipedia, having had a problem with a flood of LLM-written articles, has put out a whole detailed guideline on what these patterns are and why they’re associated with LLM-generated text. I implore you to spend at least a modicum of time actually understanding the subject you’re attempting to debate here.

          https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing

          • Mongostein@lemmy.ca · 13 days ago

            I know how LLMs work. Nothing you say is going to convince me that me trying it myself is going to be more reliable than you trying it.

            Like, what are you even disagreeing with me on?

            • ☆ Yσɠƚԋσʂ ☆@lemmy.mlOP · 13 days ago

              At this point, I have no idea what you’re even trying to say here. When you say stuff like ‘it doesn’t make it more reliable’, what do you mean by that?

              If you agree that you can reliably detect LLM speech patterns, then do you agree or disagree that the speech contains many patterns that closely resemble LLM generated text?

              • Mongostein@lemmy.ca · 13 days ago

                Really?

                You try it -> it has a certain level of reliability.

                I try it -> that reliability doesn’t change.

                That’s the only point I’m making. You just love to argue.

                • ☆ Yσɠƚԋσʂ ☆@lemmy.mlOP · 13 days ago

                  Oh, I didn’t realize you were just stating a tautology without actually saying anything. Seems like we know who’s the one that just likes to argue here.

                  • Mongostein@lemmy.ca · 13 days ago

                    For a guy who has nothing better to do than post on Lemmy 24/7, you sure do have a chip on your shoulder.