Debugging AI Hallucination: How Exactly Models Make Things Up
Posted by derjanni@reddit | programming | View on Reddit | 9 comments
Unfair-Sleep-3022@reddit
This is completely the wrong question though. The real one is how they manage to get it right sometimes.
Dragon_yum@reddit
Is it? AI is naturally pretty good at picking up patterns, and most pieces of code aren't especially unique or special, particularly the boilerplate stuff. In most cases it's good enough to do 80% of the work.
Unfair-Sleep-3022@reddit
Yes, if you know how it works, the fact that it's right sometimes is magical
nphhpn@reddit
If you actually know how it works, the fact that it's right sometimes is very normal.
stumblinbear@reddit
That is an interesting question in its own right, but I suspect you're not asking an actual question and are just hating to hate
Unfair-Sleep-3022@reddit
No, I really mean it. Knowing how it works, the fact that it can do it sometimes is magical.
NuclearVII@reddit
Bingo.
Everything a generative model produces is a hallucination. That those hallucinations sometimes land on what we'd recognise as truth is a quirk of natural language.
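For what it's worth, here's a minimal sketch of why that's the case: a decoding loop only ever turns logits into a probability distribution and samples the next token from it. The `model` below is a hypothetical stand-in, but the point is that nothing in this loop can see whether a continuation is factually grounded; fluent text and made-up text come out of the exact same mechanism.

```python
import numpy as np

def sample_next_token(logits: np.ndarray, temperature: float = 1.0) -> int:
    # Softmax over the vocabulary, then sample. "Truth" is not a quantity
    # this function has access to; it only sees relative token scores.
    scaled = logits / max(temperature, 1e-8)
    scaled -= scaled.max()  # numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return int(np.random.choice(len(probs), p=probs))

def generate(model, prompt_ids, max_new_tokens=32, temperature=0.8):
    # `model` is hypothetical here: anything that maps a token sequence to
    # next-token logits. Whether the sampled continuation is correct or a
    # confident fabrication, the loop is identical.
    ids = list(prompt_ids)
    for _ in range(max_new_tokens):
        logits = model(ids)
        ids.append(sample_next_token(logits, temperature))
    return ids
```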
Systemerror7A69@reddit
Circlejerking about AI aside, this was genuinely interesting to read, both the explanation about how AI actually finds / retrieves information as well as how the hallucination happens.
I am not sure about the conclusion that humans can also "hallucinate like AI", though. While obviously humans can make mistakes and think they know something they don't, conflating AI hallucinations with human error is, I feel, not a conclusion someone without a background in the field could make.
Interesting read apart from that though.
Merry-Lane@reddit
Are you often in contact with average humans?