swores 12 hours ago

But that's not how they actually work.

> "If an LLM happens to know the answer to your question, that answer will have the greatest weight"

An LLM doesn’t “know” anything in the way you’re imagining. It doesn’t have stored facts or indexed knowledge to check against; it just has weights learned over token sequences, and it outputs whichever next token is assigned the highest probability given the prompt and prior context. That might happen to produce a correct answer (and people are obviously working hard to make the models produce right answers as often as possible), but it might just as easily produce a plausible-sounding but wrong one, even if the correct information was in the training data. That information being present doesn't guarantee it gets the highest weighting, let alone the highest weighting across every context of previous tokens and every temperature setting.
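To make that concrete, here's a toy sketch of the selection step with made-up logits (nothing resembling a real vocabulary or real model scores). The point is that the mechanism only scores and samples tokens; nothing in it checks which token is factually right:

```python
import math
import random

def sample_next_token(logits: dict[str, float], temperature: float = 1.0) -> str:
    """Pick the next token from a toy logit table.

    Hypothetical values; a real model produces logits over a vocabulary
    of tens of thousands of tokens at every step.
    """
    if temperature == 0:
        # Greedy decoding: always take the single highest-scoring token.
        return max(logits, key=logits.get)
    # Temperature-scaled softmax: higher temperature flattens the
    # distribution, making lower-scoring tokens more likely to be picked.
    scaled = {tok: score / temperature for tok, score in logits.items()}
    max_score = max(scaled.values())
    exps = {tok: math.exp(s - max_score) for tok, s in scaled.items()}
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}
    return random.choices(list(probs), weights=list(probs.values()))[0]

# A plausible-but-wrong continuation can outscore the correct one,
# even if the correct fact appeared in training. Numbers are invented.
logits = {"Paris": 2.1, "Lyon": 2.3, "Berlin": 0.4}
print(sample_next_token(logits, temperature=0))    # greedy: "Lyon"
print(sample_next_token(logits, temperature=0.8))  # usually "Lyon", sometimes "Paris"
```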

You’re right that hallucinations can sometimes look like “extrapolations” that happen to land correctly, but that’s incidental. It’s still doing the same token-by-token probability selection regardless of whether it ends up right or wrong.

Framing it around “missing knowledge” vs “existing knowledge” is a misleading intuition. It’s better to think in terms of probability distributions over token sequences: training biases the model toward correct sequences more often than incorrect ones, but nothing fundamental in the architecture guarantees that an answer present in the training data will always beat out wrong guesses.
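Here's what I mean by "distribution over sequences", again with invented per-token probabilities rather than a real model: a whole continuation is scored by the product (sum of logs) of its token probabilities, and a wrong answer built from common, plausible-sounding tokens can end up with more total mass than the correct one.

```python
import math

def sequence_logprob(token_probs: list[float]) -> float:
    """Log-probability of a continuation = sum of its per-token log-probs."""
    return sum(math.log(p) for p in token_probs)

# Two hypothetical candidate answers to the same prompt, with made-up
# per-token probabilities. The "correct" one was in the training data,
# but the model still assigns the plausible wrong one a higher score.
correct_answer = [0.6, 0.3, 0.4]
plausible_wrong = [0.7, 0.5, 0.5]

print(sequence_logprob(correct_answer))   # ~ -2.63
print(sequence_logprob(plausible_wrong))  # ~ -1.74, wins despite being wrong
```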

p.s. It's late at night here and I'm about to go to bed, so apologies if I've not explained this well. I gave it to ChatGPT hoping it could tidy things up for me, but it just made a way more confusing version, so I'm posting it as is :D Let me know if my explanation still isn't clear and I can try again, or answer any questions you have, tomorrow.

koakuma-chan 10 hours ago

> An LLM doesn’t “know” anything in the way you’re imagining. It doesn’t have stored facts or indexed knowledge to check against

Neither does your brain, and yet you do "know" something.

> but it might just as easily produce a plausible-sounding but wrong one, even if the correct information was in the training data

If the majority of information that was in the LLM's training data said 1 + 1 = 3, the LLM will tell you that 1 + 1 = 3, even if there was some information that said 1 + 1 = 2, and there's nothing wrong with that because the LLM is not supposed to fact-check.

> the model’s training biases it toward correct sequences more often than incorrect ones

No, the model's training biases it toward sequences that appear more frequently.