specialist 4 days ago

Apologies, I don't know enough to articulate my question, which is probably nonsensical anyway.

LLMs (like GPT) and grammars (like Backus–Naur Form) are two different kinds of generative (production) systems, right?
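
For concreteness, when I say a grammar is generative I'm picturing something like this toy (which I made up, so the nonterminals and words are just placeholders):

    import random

    # A made-up BNF-style toy grammar: each nonterminal rewrites to one of
    # its productions until only words (terminals) are left.
    GRAMMAR = {
        "S":  [["NP", "VP"]],
        "NP": [["the", "N"]],
        "VP": [["V", "NP"]],
        "N":  [["cat"], ["dog"]],
        "V":  [["sees"], ["chases"]],
    }

    def generate(symbol="S"):
        if symbol not in GRAMMAR:      # a terminal: just emit the word
            return [symbol]
        production = random.choice(GRAMMAR[symbol])
        return [word for part in production for word in generate(part)]

    print(" ".join(generate()))        # e.g. "the cat chases the dog"

Whereas, as far as I understand, GPT produces its next token by sampling from a learned probability distribution rather than by applying explicit rewrite rules like these.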

You've been (heroically) explaining Chomsky's criticism of LLMs to other noobs: grammars (theoretically) explain how humans do language, which is very different from how ChatGPT (a stochastic parrot) does language. Right?

Since GPT mimics human language so convincingly, I've been wondering if there's any overlap of these two generative systems.

Especially once the (tokenized) training data for GPTs is word-based instead of just snippets of characters.

Because I notice grammars everywhere and GPT is still magic to me. Maybe I'd benefit if I could understand GPTs in terms of grammars.

foobarqux 4 days ago

> Since GPT mimics human language so convincingly, I've been wondering if there's any overlap of these two generative systems.

It's not really relevant whether there is overlap; I'm sure you can list a bunch of ways they are similar. What's important is 1. whether they are different in fundamental ways and 2. whether LLMs explain anything about the human language faculty.

For 1., the most important difference is that human languages appear to obey certain constraints (roughly, that language has a parse-tree/hierarchical structure), and (from Moro's experiments) humans seem unable to learn arguably simpler structures that are not hierarchical. LLMs, on the other hand, can be trained on those simpler structures. That shows the acquisition process is not the same, which is not surprising, since neural networks work on arbitrary statistical data and don't have strong inductive biases.
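
To make the contrast concrete, here is a toy sketch in Python (my own made-up example, not Moro's actual stimuli): a structure-dependent rule refers to a constituent of the parse tree, while an "impossible-language"-style rule refers to linear word positions.

    import random

    # Toy lexicon and a trivial S -> NP VP, VP -> V NP structure.
    LEXICON = {"Det": ["the", "a"], "N": ["dog", "cat", "bird"], "V": ["saw", "chased"]}

    def sentence():
        np1 = [random.choice(LEXICON["Det"]), random.choice(LEXICON["N"])]
        np2 = [random.choice(LEXICON["Det"]), random.choice(LEXICON["N"])]
        return {"NP": np1, "VP": [random.choice(LEXICON["V"])] + np2}

    def negate_hierarchical(tree):
        # Structure-dependent rule: the negation attaches to the VP constituent.
        return tree["NP"] + ["not"] + tree["VP"]

    def negate_linear(tree):
        # Linear-order rule: the negation goes after the 3rd word,
        # regardless of structure (the kind of rule the experiments test).
        words = tree["NP"] + tree["VP"]
        return words[:3] + ["not"] + words[3:]

    t = sentence()
    print(" ".join(negate_hierarchical(t)))  # e.g. "the dog not chased a cat"
    print(" ".join(negate_linear(t)))        # e.g. "the dog chased not a cat"

A generic statistical learner will happily pick up either rule from enough examples; the finding in Moro's experiments is that humans do not treat the two the same way.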

For 2., even if it turned out that LLMs, like humans, couldn't learn the impossible languages, that wouldn't explain anything. For example, you could hard-code the training to fail if it detects an "impossible language"; then what? You've managed to create an accurate predictor, but you don't have any understanding of how or why it works. This is easier to see with non-cognitive systems like the weather or gravity: building a deep neural network that accurately predicts gravity is not the same as coming up with the general theory of relativity (which could in fact be a worse predictor, for example at quantum scales). Everyone argues the ridiculous point that since LLMs are good predictors, gaining understanding of the human language faculty is useless, a stance that wouldn't be accepted for the study of gravity or in any other field.