In my view, the major flaw in his argument is his distinction between pure engineering and science:
> We can make a rough distinction between pure engineering and science. There is no sharp boundary, but it’s a useful first approximation. Pure engineering seeks to produce a product that may be of some use. Science seeks understanding. If the topic is human intelligence, or cognitive capacities of other organisms, science seeks understanding of these biological systems.
If you take this approach, of course it follows that we should laugh at Tom Jones.
But a more differentiated approach is to recognize that science also falls into (at least) two categories: the science we do because it expands our capabilities into something we were previously incapable of, and the science that does not. (We typically do a lot more of the former than the latter, for obvious practical reasons.)
Of course it is interesting from a historical perspective to understand the seafaring exploits of the Polynesians, but as soon as there was a better way of navigating (e.g. by stars or by GPS), investigating this matter was relegated to the second type of science, a more historical kind of investigation. Fundamentally, we investigate things in science because we believe the understanding we gain from them can move us forward somehow.
Could it be interesting to understand how Hamilton was thinking when he came up with quaternions? Sure. Are many mathematicians today concerning themselves with studying this? No, because the frontier has moved far beyond that.*
When you take this view, it's clear that his statement
> These considerations bring up a minor problem with the current LLM enthusiasm: its total absurdity, as in the hypothetical cases where we recognize it at once. But there are much more serious problems than absurdity.
is not warranted. Consider the following, in his own analogy:
> These considerations bring up a minor problem with the current GPS enthusiasm: its total absurdity, as in the hypothetical cases where we recognize it at once. But there are much more serious problems than absurdity. One is that GPS systems are designed in such a way that they cannot tell us anything about navigation, planning routes or other aspects of orientation, a matter of principle, irremediable.
* I'm making a simplifying assumption here that we can't learn anything useful for modern navigation anymore from studying Polynesians or ants; this might well be untrue, but the same goes for learning something about language from LLMs, which according to Chomsky is apparently impossible and not even up for debate.
I came to the comments to ask a question, but considering that the thread is two days old already, I will try to ask you here.
What do you think about his argument about "not being able to distinguish a possible language from an impossible one"?
And why is it inherent in ML design?
Does he assume that there could be some instrument/algorithm that could do that with a higher level of certainty than an LLM/some ML model?
I mean, certainly they can be used to make a prediction/answer to this question, but he argues that this answer has no credibility? An LLM is literally a model, i.e. a probability distribution over what is language and what is not, so what gives?
Current models are probably tuned rather "strictly" to follow existing languages closely, i.e. they would say "no" to some yet-unknown language, but isn't this improvable in theory?
Or is he arguing precisely that this "exterior" is not directly correlated with the "internal processes and faculties", and so cannot be used to make such predictions in principle?
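
To make concrete what I mean by "a probability distribution over what is language and what is not": a rough sketch like the one below is all I have in mind. It assumes the Hugging Face transformers library and the pretrained gpt2 checkpoint, and the example sentences are my own; it only illustrates that a language model can be queried for a graded score, not that this score settles Chomsky's question.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Per-token perplexity under the model (lower = more 'language-like' to it)."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # Passing the inputs as labels makes the model return the mean
        # negative log-likelihood of the sequence.
        loss = model(ids, labels=ids).loss
    return float(torch.exp(loss))

# An ordinary English sentence vs. the same words rearranged by a rule
# (deterministic reversal) unlike anything in attested human languages.
plausible = "the child thinks the story is about a dragon"
implausible = " ".join(reversed(plausible.split()))

print(perplexity(plausible))    # expected: comparatively low
print(perplexity(implausible))  # expected: comparatively high
```

Whether such scores can tell us anything about the human language faculty is, I take it, exactly what he disputes.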