doodledup@lemmy.world to Technology@lemmy.world • Meta addresses AI hallucination as chatbot says Trump shooting didn't happen
Human beings are not infallible either.
AI doesn’t know what’s right or wrong. It “hallucinates” every answer in exactly the same way; some of those answers just happen to be correct. It’s up to a human supervisor to determine which is which.
Mathematically verifying the correctness of these models’ outputs is a hard problem. That’s by design: it’s the trade-off for their incredible efficiency.
Besides, a model can only “know” what it has been trained on. It shouldn’t be surprising that it cannot answer questions about the Trump shooting. Anyone who thinks otherwise simply doesn’t know how to use these models.
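For anyone wondering what “knowing how to use these models” means in practice: a model can’t report on events after its training cutoff unless you put that information into the prompt yourself. A minimal sketch of the difference, assuming an OpenAI-compatible Python client; the model name, file name, and article are placeholders, not anything Meta’s chatbot actually uses:

```python
# Minimal sketch (hypothetical setup): ask the same question with and without
# recent context via an OpenAI-compatible chat API. The point is that the model
# only draws on its training data unless newer information is supplied in the prompt.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

question = "What happened at the Trump rally in Butler, PA in July 2024?"

# 1) No extra context: a model trained before the event can only guess or refuse.
bare = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": question}],
)

# 2) With context: paste a news article into the prompt and the same model can answer.
article = open("butler_rally_article.txt").read()  # hypothetical local file
grounded = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "Answer only from the provided article."},
        {"role": "user", "content": f"Article:\n{article}\n\nQuestion: {question}"},
    ],
)

print(bare.choices[0].message.content)
print(grounded.choices[0].message.content)
```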
The point is that they can use that data for further training. They want to build a monopoly, the way Google has one for search.
This account is a China propaganda shill. Half of its posts are about China and how bad Europe and the US are.