johnea 5 days ago

It's a little disconcerting to read this question here.

LLMs do not invent anything. They are trained on sequences of words, and in response to a prompt they produce ordered sequences of words derived from that training data.

There is no concept of knowledge, or of information understood to be correct or incorrect.

There is only a statistical ordering of words in response to the prompt, determined by the order of words in the training data.
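To make that concrete, here is a toy bigram sketch in Python (my own illustration, not how production LLMs are built; they use neural networks over subword tokens, but the next-word principle is the same):

    import random
    from collections import defaultdict

    # Toy "LLM": record, for each word, every word that followed it
    # in the training text, then sample continuations from that record.
    training_text = "the cat sat on the mat and the dog sat on the rug"
    words = training_text.split()
    follows = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        follows[prev].append(nxt)

    def generate(prompt_word, length=6):
        out = [prompt_word]
        for _ in range(length):
            candidates = follows.get(out[-1])
            if not candidates:
                break  # this word never had a successor in training
            # Duplicates in the list make the choice frequency-weighted.
            out.append(random.choice(candidates))
        return " ".join(out)

    print(generate("the"))  # e.g. "the cat sat on the rug"

Every word it emits was observed following the previous word in the training text; nothing outside that statistical record can appear.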

This is why other comments here state that the LLM is completely amoral. That is not the same as immoral: it operates without any concept of, or reference to, morality at all.

mckirk 5 days ago

I'd argue that just because LLMs only reproduce patterns they have seen, this does not mean they are incapable of invention. It is possible that, if they manage to replicate the data at a deep enough level, they start to replicate the underlying reasoning process itself. That would mean they could be capable of putting together different parts of a puzzle to come up with something you could call an 'invention' that was not in their training data.

andrewflnr 5 days ago

Good grief. See my response here: https://news.ycombinator.com/item?id=44085890

johnea 4 days ago

In response, please see my comment here:

https://news.ycombinator.com/item?id=44092296