

They try to do security the same way, by adding “pwease dont use dangerous shell commands” to the system prompt.
Security researchers have dubbed it “Prompt Begging”


Agree, the term is misleading.
Talking about hallucinations lets us treat undesired output as a completely different thing from desired output, which implies it can be handled somehow.
The problem is that the LLM can only ever output bullshit. Often the bullshit is decent and we call it output, and sometimes the bullshit is wrong and we call it hallucination.
But it's the exact same thing from the LLM. You can't make it detect hallucinations or promise not to produce them.


Wow, so next time I have a burning need for agentic experiences in my life, I know a product exists to serve my need.
Don’t be so negative! It’s also found a huge market in scams, both for stealing celebrity likenesses and for making pictures and video of nonexistent products.