One of the most offensive words in the anthropomorphization of LLMs is: hallucinate.
It's not only an anthropomorphism; it's also a euphemism.
A correct interpretation of the word would imply that the LLM has some fantastical vision that it mistakes for reality. What utter bullsh1t.
Let's just use the correct word for this type of output: wrong.
When the LLM generates a sequence of words that may or may not be grammatically correct, but that implies a state or conclusion that is not factually correct, let's state what actually happened: the LLM-generated text was WRONG.
It didn't take a trip down Alice's rabbit hole; it just put words together into a stream that conveyed a piece of information that was incorrect. It was just WRONG.
The euphemistic aspect of using this word is a greater offense than the anthropomorphism, because it paints some cutesy picture of what happened instead of accurately acknowledging that the s/w generated an incorrect result. It's covering up for the inherent shortcomings of the tech.
When a person hallucinates a dragon coming for them, they are wrong, but we still use a different word to more precisely indicate the class of error.
Not all LLM errors are hallucinations - if an LLM tells me that 3 + 5 is 7, it's just wrong. If it tells me that the source for 3 + 5 being 7 is a seminal paper entitled "On the relative accuracy of summing numbers to a region +-1 from the fourth prime", we would call that a hallucination. In modern parlance "hallucination" has become a term of art for a particular class of error that LLMs are prone to. (Others have argued that "confabulation" would be more accurate, but it hasn't really caught on.)
It's perfectly normal to repurpose terms and anthropomorphizations to represent aspects of the world or systems that we create. You're welcome to try to introduce other terms that don't include any anthropomorphization, but saying it's "just wrong" conveys less information and isn't as useful.
I think your defense of reusing terms for new phenomena is fair.
But in this specific case, I would say the reuse of this particular word, to apply to this particular error, is still incorrect.
A person's hallucination arises from a many-layered experience of consciousness.
The LLM has nothing of the sort.
It doesn't have a hierarchy of knowledge which it is sorting to determine what is correct and what is not. It doesn't have a "world view" based on a lifetime of that knowledge sorting.
In fact, it doesn't have any representation of knowledge at all, much less a concept of whether that knowledge is correct or not.
What it has is a model of what words came in what order, in the training set on which it was "trained" (another, and somewhat more accurate, anthropomorphism).
So without anything resembling conscious thought, it's not possible for an LLM to do anything even slightly resembling human hallucination.
As such, when the text generated by an LLM is not factually correct, it's not a hallucination, it's just wrong.
To cite integrative-psych:
https://www.integrative-psych.org/resources/confabulation-no...
"...this usage is misleading, as it suggests a perceptual process that LLMs, which lack sensory input, do not possess."
They prefer the word "confabulation", but I would take issue with that as well.
They define confabulation: "the brain creates plausible but incorrect memories to fill gaps".
Since LLMs, just as they lack perception, do not retain anything like a memory, I would argue this term is also inappropriate.
In terms of differentiating error categories, it's straightforward to specify a math error, spelling error, or grammatical error when those occur.
In the case of syntactically correct, but factually incorrect output, the word "wrong" describes this specific error category much more accurately than "hallucinate", which carries a host of inaccurate psychological implications.
This also speaks to a main point of my original post, that the use of "hallucinate" is euphemistic.
When we use a s/w tool to take human-language questions as input, with the objective of receiving correct human-language answers, a merely syntactically correct answer is not sufficient.
It needs to be emphasized that answers in this category are "wrong", they are not factually correct.
Using the word "hallucinate" is making an excuse for, and thus obfuscating, this factual error generated by the s/w tool.
I teach an "advanced" shell scripting course with an exam.
I mark "hallucinations" as "LLM Slop" in my grading sheets, when someone gives me a 100-character sed filter that just doesn't work that there is no way we discussed in class/in examples/in materials, or a made up API endpoint, or non-nonsensical file paths that reference non-existent commands.
Slop is an overused term these days, but it sums it up for me. Slop, from a trough, thrown out by an uncaring overseer, to be greedily eaten up by the piggies, who don't care if it's full of shit.
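For a rough, made-up illustration of the sed variety (not an actual exam answer; both commands are invented here), compare a one-liner that works with the kind of plausible-looking flag salad I mean:

    # works: print usernames by stripping everything from the first ':' onward
    sed 's/:.*//' /etc/passwd

    # slop: these flags don't exist in sed (they're borrowed from cut); it only looks plausible
    sed --delimiter=':' --fields=1 /etc/passwd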