It’s everywhere. I was just trying to find some information on starting seeds for the garden this year and I was met with AI article after AI article just making shit up. One even had a “picture” of someone planting some seeds and their hand was merged into the ceramic flower pot.
The AI fire hose is destroying the internet.
I fear the day they learn a different layout. Right now they are usually obvious, but soon I won't be able to tell slop from intelligence.
You will be able to tell slop from intelligence.
However, you won’t be able to tell AI slop from human slop. Human slop has been around a long time and was already overwhelming, but its volume is nothing compared to LLM slop.
In fact, reading AI slop text reminds me a lot of human slop I’ve seen, whether it’s ‘high school’-style essay writing or the word padding of a clickbait article.
One could argue that if the AI response is not distinguishable from a human one at all, then they are equivalent and it doesn’t matter.
That said, current LLM designs have no ability to do that, and so far every effort to improve them beyond where they are today has made them worse at it. So I don’t think any tweaking or fiddling with the model will ever get it closer to what you’re describing, except possibly a different, but equally cookie-cutter, way of responding that may look different from the old output but will be much like all the other new output. It will still be obvious and predictable shortly after we learn its new tells.
The reason they can’t make it any better is that they are trying to do so by giving it ever more information to consume, in the misguided notion that once it has enough data it will be smarter overall. That isn’t true, because it has no way to distinguish good data from garbage, and the models have already read and consumed the whole Internet.
Now, when they try to consume more new data, a ton of it was already generated by an LLM, maybe even the same one, so it contains no new information but still takes more compute to read and process. That redundant data also reinforces what the model thinks it knows, counting its own repetition of a piece of information as another corroboration that the information is accurate. It treats conjecture as fact because it saw a lot of “people” say the same thing. It could have been one crackpot talking nonsense that was then repeated as gospel on Reddit by 400 LLM bots. 401 people said the same thing; it MUST be true!
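To make that last point concrete, here is a toy sketch (purely illustrative, nothing like a real training pipeline, and the posts and numbers are made up) of why counting raw repetitions as corroboration goes wrong: one piece of nonsense echoed by 400 bots outweighs a single accurate source unless duplicates are collapsed.

```python
# Toy illustration only: naive "corroboration" counting treats every repetition
# of a claim as independent evidence, so one crackpot post amplified by 400 bots
# looks more credible than a single accurate source.

from collections import Counter

posts = (
    ["tomato seeds germinate best at 5C"] * 401    # one bogus claim, echoed by bots
    + ["tomato seeds germinate best at 21-27C"]    # one accurate source
)

def naive_support(posts):
    """Count raw repetitions as if each were an independent corroboration."""
    return Counter(posts)

def dedup_support(posts):
    """Count each distinct wording only once, ignoring sheer volume."""
    return Counter(set(posts))

print(naive_support(posts).most_common(1))  # the echoed nonsense "wins" 401 to 1
print(dedup_support(posts))                 # deduplicated, each claim counts once
```

Real pipelines do deduplicate, of course, but paraphrased bot output is much harder to collapse than exact copies, which is the worry here.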
I think the point is rather that it is distinguishable for someone knowledgeable on the subject, but not for someone who is not. That makes it harder to evolve from the latter to the former.