> It doesn't "know" anything.
The word "know" is an abstraction I use to avoid going into technical details.
> That's not a very useful interpretation because it's not grounded in technical reality.
My interpretation aligns with what people generally mean by hallucination, and it's definitely more useful than saying that all output is hallucination.
The difference is this: what people generally mean by hallucination is "the LLM said something wrong as if it were right". What you are adding to that in your previous comments is the concept of whether or not the LLM knows the right answer, which it never does. That's where your interpretation and the general interpretation differ.
I'm afraid I don't see how to explain this any more clearly, so I'll just say this instead: given that multiple people in this thread are telling you your understanding of how LLMs work isn't right, please consider that at least as a possibility and look into it further rather than digging deeper into your current beliefs.