AI is so amazing we just need to hire entire departments to make sure it isn’t completely full of shit!
Waow, look at all the money we saved with layoffs. Buy our stock!
Waow, look at all the growth we’re experiencing, we have to hire more developers. Buy our stock!
Sadly, this is exactly what’s happening.
We fire humans to prop up AI. But AI is so not there yet that we need humans to double check.
🙃
Yeah but the new humans are cheaper
🎶 it’s the ciiiircle of liiife… 🎶
After brainwashing, here comes slopwashing by Google’s clank engineers.
AI doesn’t hallucinate. “Hallucination” is a fancy marketing term for when AI confidently outputs something wrong.
The tech billionaires would have a harder time getting the masses of people who don’t understand the tech interested if they didn’t use words like “hallucinate”.
It’s a data center, not a psychiatric patient
It’s also not intelligence, just stochastic language models.
Agree, the term is misleading.
Talking about hallucinations lets us treat undesired output as a completely different thing from desired output, which implies it can be handled somehow.
The problem is the LLM can only ever output bullshit. Often the bullshit is decent and we call it output, and sometimes the bullshit is wrong and we call it hallucination.
But it’s the exact same thing from the LLM. You can’t make it detect hallucinations or promise not to produce them.
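To put that in code terms, here’s a toy next-token sampler (everything here is made up for illustration; a real LLM works over tens of thousands of tokens, not three words):

```python
import random

# Toy next-token distribution for "The capital of France is ___".
# The model only has probabilities. There is no "is this true?" flag anywhere.
next_token_probs = {
    "Paris": 0.6,    # plausible and true
    "Lyon": 0.3,     # plausible but false
    "banana": 0.1,   # implausible
}

def sample(probs):
    """Pick one token weighted by probability; identical mechanism for every token."""
    tokens = list(probs)
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

# A correct answer and a "hallucination" come out of the exact same call.
print(sample(next_token_probs))
```

Whether you get “Paris” or “Lyon”, nothing in the generation step marks one as a hallucination; that label only exists in the reader’s head.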
This is how you know these things are fucking worthless: the people in charge of them think they can combat this by adding anti-hallucination clauses to the prompt, as if the AI would know how to tell it was hallucinating. It already classified it as plausible output by creating it!
They try to do security the same way, by adding “pwease dont use dangerous shell commands” to the system prompt.
Security researchers have dubbed it “Prompt Begging”
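Roughly the difference between begging and an actual control, in a made-up tool runner (the prompt text, function names, and allowlist are all hypothetical, not any real product’s API):

```python
# "Prompt begging": a polite request in the system prompt that the model
# is entirely free to ignore. Illustrative string, not a real guardrail.
SYSTEM_PROMPT = "You are a helpful assistant. Please don't use dangerous shell commands."

# An actual control lives outside the model: the runtime checks every
# proposed command against an allowlist before anything executes.
ALLOWED_BINARIES = {"ls", "cat", "grep"}

def run_tool_call(command):
    """Execute a model-proposed shell command only if its binary is allowlisted."""
    binary = command.split()[0]
    if binary not in ALLOWED_BINARIES:
        return f"refused: {binary!r} is not on the allowlist"
    return f"ok: would run {command!r}"

print(run_tool_call("ls -la"))    # ok: would run 'ls -la'
print(run_tool_call("rm -rf /"))  # refused: 'rm' is not on the allowlist
```

The begging version and the allowlist version fail very differently: one depends on the model’s cooperation, the other doesn’t care what the model wants.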
Or, you know, you could just build a good search engine and let users scroll 15 seconds in the first result to find what they’re looking for.
I’m not anti-AI at all, but their LLM definitely isn’t ready for the top of a google search as if it is real information. Of course, posting promoted search results at the top of the searches as if it was a real result already devalued them. They at least need the LLM result to be an opt-in option with caveats. I would probably opt-in but I would like off to be the default.
It’s trained on their SERPs that have been steadily getting more useless for 20 years. Of course its answers suck.
They didn’t have people doing that?
I’ve seen 100 shitty job postings for rating AI results. It’s rather complicated and pays pennies.
Everybody must jump onto the AI train no matter how often it derails!
So who profits?
The unholy alliance of tech giants and government.
Who loses?
Everybody else. This is US tax money being thrown into a money burning machine.
wasted money is bad enough, but the environmental cost is the one we should be rioting about
I always wanted to be a Turk.
You know what? A search engine business where you as a site owner
- pay a little registration fee
- fill out a webform
and the engine does a little background check to make sure it’s not spam.
Where the users can sort and filter the available fields however they want.
No spiders or crawling.
Would that work?