

It is depressing to me how many people are so effectively fooled by what can best be described as a glorified parlor trick. It's not intelligence; it's a statistically probable output for a novel input. The hype around LLMs is like watching a magician pull a rabbit out of his hat and then trying to hire him to help you start a rabbit farm. That's not what's happening, and you should know better.
LLMs aren't worthless; they're great at language manipulation, because that's what they are: language models. But just because a string of characters forms a valid sentence in a language doesn't mean the sentence is valid in the real world.
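To make "probable output" concrete, here's a toy sketch in plain Python. The probability table is made up by hand and stands in for a real model's learned weights, so don't read it as anyone's actual architecture. The point is the loop: generation is just repeatedly sampling the next token from a conditional distribution, and nothing in it ever checks the result against reality.

```python
import random

# Hypothetical hand-written next-token distributions, standing in for
# a trained model. Keys are token prefixes; values are probabilities
# over the next token.
NEXT_TOKEN_PROBS = {
    ("the",): {"rabbit": 0.6, "hat": 0.3, "farm": 0.1},
    ("the", "rabbit"): {"is": 0.7, "farm": 0.3},
    ("the", "rabbit", "is"): {"profitable": 0.5, "fluffy": 0.5},
}

def generate(prompt, max_tokens=3):
    tokens = list(prompt)
    for _ in range(max_tokens):
        dist = NEXT_TOKEN_PROBS.get(tuple(tokens))
        if dist is None:
            break  # no continuation known for this prefix
        choices, weights = zip(*dist.items())
        # Sample the next token in proportion to its probability.
        # No step here verifies truth, only plausibility.
        tokens.append(random.choices(choices, weights=weights)[0])
    return " ".join(tokens)

print(generate(["the"]))  # e.g. "the rabbit is profitable"
```

The output is fluent, grammatical, and confidently delivered, and at no point did anything ask whether rabbits are actually profitable. That's the gap between a valid sentence and a valid claim.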