> If an LLM happens to know the answer to your question
You're missing the point. It doesn't "know" anything. The only thing it can "know" is the statistical relationships between tokens in its training data. It doesn't "know" anything about the meaning of those tokens, and it doesn't even "know" whether it "knows" anything or not. The best it can do is: "Here's an autoregressively generated sequence of tokens, each one chosen because it's statistically likely to follow the ones before it, given the training corpus."
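To make that concrete, here's a deliberately tiny sketch of the mechanism I'm describing: a toy bigram model in Python stands in for a real LLM (the corpus and names are made up for illustration), but the generation loop has the same shape, in that it picks a statistically likely next token, appends it, and repeats, with no notion of meaning or truth anywhere.

```python
import random
from collections import Counter, defaultdict

# Toy stand-in for an LLM: a bigram model that only records how often one
# token follows another in a tiny corpus. No meaning, no truth; just counts.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_token(prev):
    """Sample the next token in proportion to how often it followed `prev`."""
    tokens, weights = zip(*follows[prev].items())
    return random.choices(tokens, weights=weights)[0]

def generate(start, length=8):
    out = [start]
    for _ in range(length):
        if out[-1] not in follows:  # dead end: this token was never followed by anything
            break
        out.append(next_token(out[-1]))
    return " ".join(out)

print(generate("the"))  # e.g. "the cat ate the mat the cat sat on": fluent-ish, but nothing is "known"
```

A real LLM replaces the counts with a learned neural network over a far larger vocabulary and context window, but the decoding loop is still "pick a likely next token, append, repeat."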
It's Rashomon: you get a confident, plausible account with no guarantee it matches what actually happened. It can point you in the right direction a lot of the time, but there's no getting around the fact that you have to double-check its answers against external sources.
> Or at least this is how I interpret the term.
That's not a very useful interpretation because it's not grounded in technical reality.
> It doesn't "know" anything.
The word "know" is an abstraction I use to avoid going into technical details.
> That's not a very useful interpretation because it's not grounded in technical reality.
My interpretation aligns with what people generally mean by "hallucination", and it's definitely more useful than saying that every output is a hallucination.
The difference is this: what people generally mean by "hallucination" is "the LLM said something wrong as if it were right". What you're adding to that in your previous comments is the question of whether or not the LLM knows the right answer, which it never does. That's where your interpretation and the general one differ.
I'm afraid I don't see how to explain this more clearly, so I'll just say this instead: given that multiple people in this thread are telling you that your understanding of how LLMs work isn't right, please consider that to be at least a possibility and look into it further, rather than digging deeper into your current beliefs.