I think your defense of reusing terms for new phenomena is fair.
But in this specific case, I would say that reusing this particular word for this particular error is still incorrect.
A person's hallucination arises from a many-layered experience of consciousness.
The LLM has nothing of the sort.
It doesn't have a hierarchy of knowledge that it sorts to determine what is correct and what is not. It doesn't have a "world view" built on a lifetime of that sorting.
In fact, it doesn't have any representation of knowledge at all, much less a concept of whether that knowledge is correct.
What it has is a model of what words came in what order, in the training set on which it was "trained" (another, and somewhat more accurate, anthropomorphism).
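To make that concrete, here is a minimal, hypothetical sketch (Python, with a toy corpus I made up) of what "a model of what words came in what order" amounts to at its simplest: a table of next-word counts. Nothing in it represents whether a statement is true.

    from collections import defaultdict, Counter
    import random

    # Toy "training set" (invented for illustration).
    corpus = "the cat sat on the mat the cat ate the fish".split()

    # Record which word followed which word during "training".
    next_word_counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        next_word_counts[prev][nxt] += 1

    def generate(word, length=5):
        # Emit whatever tended to follow the previous word; there is no notion of truth here.
        out = [word]
        for _ in range(length):
            counts = next_word_counts.get(out[-1])
            if not counts:
                break
            words, weights = zip(*counts.items())
            out.append(random.choices(words, weights=weights)[0])
        return " ".join(out)

    print(generate("the"))  # e.g. "the cat sat on the fish" -- syntactically plausible, factually unanchored

The output is fluent because it mirrors the word order seen in training, and it can be false for exactly the same reason.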
So, lacking anything resembling conscious thought, it's not possible for an LLM to do anything even slightly resembling human hallucination.
As such, when the text generated by an LLM is not factually correct, it's not a hallucination; it's just wrong.
To cite integrative-psych:
https://www.integrative-psych.org/resources/confabulation-no...
"...this usage is misleading, as it suggests a perceptual process that LLMs, which lack sensory input, do not possess."
They prefer the word "confabulation", but I would take issue with that as well.
They define confabulation as follows: "the brain creates plausible but incorrect memories to fill gaps".
Since LLMs, just as they lack perception, do not retain anything like memories, I would argue that this term is also inappropriate.
In terms of differentiating error categories, it's straightforward to specify a math error, a spelling error, or a grammatical error when one occurs.
In the case of syntactically correct but factually incorrect output, the word "wrong" describes this specific error category much more accurately than "hallucinate", which carries a host of inaccurate psychological implications.
This also speaks to a main point of my original post, that the use of "hallucinate" is euphemistic.
When we use a s/w tool to input human language questions, with the objective of receiving correct human language answers, a merely syntactically correct answer is not sufficient.
It needs to be emphasized that answers in this category are "wrong"; they are not factually correct.
Using the word "hallucinate" makes an excuse for, and thus obfuscates, the factual errors generated by the s/w tool.