
  • LLMs don’t solve problems. That’s the point being made here. Many other algorithms do indeed solve problems, but those are very niche, since the algorithms were explicitly designed for those situations.

    While yes, humans excel at pattern recognition, sometimes to the point of it being a problem, there are many things we do that have little to do with patterns beyond being tangentially involved. Emotions, for instance, don’t inherently follow patterns. They can, but they aren’t directly tied to them. Exploration also doesn’t come from pattern recognition.

    If you need an example of why people flat out say LLMs aren’t solving problems, look at the recent “how many r’s in strawberry” failure, which has admittedly been “fixed” in many models.
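    For what it’s worth, the counting task itself is a one-liner in code, which is exactly why the failure is telling: the model isn’t running an algorithm over letters, it’s pattern-matching over tokens. A minimal sketch (the variable names are mine, for illustration):

```python
# Counting letters is trivial for an actual algorithm. A tokenizer-based
# LLM never "sees" the individual letters, which is one common explanation
# for the famous miscount.
word = "strawberry"
count = word.count("r")  # counts every occurrence of "r" in the string
print(count)  # -> 3
```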

  • That… isn’t telling you what you want to hear.

    LLMs are literally just complex autocorrect. They don’t weight their responses based on what a user wants to hear (unless explicitly instructed to); they simply return the most statistically likely continuation they can find.
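    The “complex autocorrect” loop boils down to picking the highest-probability next token. A minimal sketch, assuming a toy probability table standing in for a real model’s output (the words and numbers are invented for illustration):

```python
# Greedy next-token selection: the core of the "autocorrect" behavior.
# A real model produces a probability distribution over its vocabulary;
# this toy dict stands in for that distribution.

def next_token(probs: dict) -> str:
    """Return the most likely next token, regardless of what anyone 'wants'."""
    return max(probs, key=probs.get)

# Hypothetical distribution after a prompt like "The earth is an oblate ..."
toy_probs = {"spheroid": 0.91, "sphere": 0.06, "pancake": 0.03}
print(next_token(toy_probs))  # -> spheroid
```

    Note the function never consults the user’s preferences; changing the answer requires changing the input distribution, i.e. the prompt.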

    Tell it to talk like a pirate and it will pattern-match to pirate talk. It’s not doing it because you want it to, but because you gave it a pre-prompt to talk like a pirate, and it produced the most likely continuation of that.

    Yes, this can seem like telling you what you want, but go ask it what shape the Earth is. Then tell it you want the Earth to be flat, and ask the question again. Both times the answer will be an oblate spheroid, because it doesn’t know nor care what you want.

    Now, if you say “Imagine the world is flat” first, yeah it’ll tell you it’s flat. Not because you want it to, but because you’re explicitly handing it “new information” that you want it to incorporate into its response.