gerdesj 4 days ago

"The fact that we have figured out how to translate language into something a computer can "understand" should thrill linguists."

No, there is no understanding at all. Please don't confuse codifying with understanding or translation. LLMs don't understand their input; they simply act on it according to the way they were trained on it.

"And there's a fact here that's very hard to dispute, this method works. I can give a computer instructions and it "understands" them "

No, it really does not understand those instructions. It is at best what used to be called an "idiot savant". Mind you, people used to describe other people that way - so who is the idiot?

Ask your favoured LLM to write a programme in a less-used language - ooh, let's try VMware's PowerCLI (it's PowerShell-based, so quite popular) - and get it to do something useful. It won't, because it can't, but it will still spit something out. PowerCLI doesn't show up much on Stack Overflow and co, but it is PS-based, so the LLMs will hallucinate madder than a hippie on a new super weed.
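To make that concrete, here is a minimal sketch of the sort of useful task I mean, using cmdlets that do exist in PowerCLI (the vCenter hostname is just a placeholder):

    # Connect to a vCenter server (hostname is a placeholder)
    Connect-VIServer -Server vcenter.example.com

    # List powered-on VMs with their host, CPU count and memory
    Get-VM |
        Where-Object { $_.PowerState -eq 'PoweredOn' } |
        Sort-Object MemoryGB -Descending |
        Select-Object Name, VMHost, NumCpu, MemoryGB

Ask an LLM for something like this and you will often get output with the right general shape, but with invented cmdlets or parameters mixed in, because PowerCLI is thin on the ground in the training data while generic PowerShell is everywhere.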

permo-w 4 days ago

I think the overarching theme I glean from LLM critics is some kind of visceral emotional reaction, disgust even, at the idea of them, which leads to all these proxy arguments and side quests meant to denigrate them without honestly engaging with what they are or why people are interacting with them.

so what if they don't "understand", by your very specific definition of the word? the person you're replying to is talking about the fact that they can say something to their computer in casual human language and it will produce a useful response, where previously that was not true. whether that fits your suspiciously specific definition of "understanding" does not matter a bit.

so what if they are over-confident in areas outside their training data? provide more training data, improve the models, reduce the hallucination. it isn't an issue with the concept, it's an issue with the execution. yes, you'll never reduce it to 0%, but so what? humans hallucinate too. what are we aiming for? omniscience?

suddenlybananas 4 days ago

You're thinking like an engineer and not a scientist. It's fine, but don't confuse the two.

permo-w 3 days ago

perhaps I'm thinking like an engineer, but the people I'm referring to are thinking like priests. I don't know who in this debate is playing the role of scientist, but it surely isn't the people parroting irrelevant mock-philosophical quibbles and semantics.