I think the overarching theme I glean from LLM critics is some kind of visceral emotional reaction, disgust even, at the very idea of them, which leads to all these proxy arguments and side quests aimed at denigrating them rather than honestly engaging with what they are or why people are interacting with them.
so what if they don't "understand", by your very specific definition of the word "understanding"? the person you're replying to is pointing out that they can say something to their computer in casual human language and get a useful response, where previously that was not possible. whether that fits your suspiciously specific definition of "understanding" doesn't matter a bit.
so what if they're over-confident in areas outside their training data? provide more training data, improve the models, reduce the hallucination. it isn't a problem with the concept, it's a problem with the execution. yes, you'll never reduce it to 0%, but so what? humans hallucinate too. what are we aiming for? omniscience?
You're thinking like an engineer and not a scientist. It's fine, but don't confuse the two.
perhaps I'm thinking like an engineer, but the people I'm referring to are thinking like priests. I don't know who in this debate is playing the role of scientist, but it surely isn't the people parroting irrelevant mock-philosophical quibbles and semantics.