lanfeust6 5 days ago

I'm noticing that leftists overwhelmingly toe the same line on AI skepticism, which suggests to me an ideological motivation.

thomassmith65 5 days ago

Chomsky's problem here has nothing to do with his politics, but unfortunately a lot to do with his long-held position in the Nature/Nurture debate - a position that is undermined by the ability of LLMs to learn language without hardcoded grammatical rules:

  Chomsky introduced his theory of language acquisition, according to which children have an inborn quality of being biologically encoded with a universal grammar
https://psychologywriting.com/skinner-and-chomsky-on-nature-...

js8 4 days ago

I don't see how the two things are related. Whether the acquisition of human language is nature or nurture, it is still learning of some sort.

Yes, maybe we can reproduce that learning process in LLMs, but that doesn't mean LLMs imitate only the nurture part (it might as well just be fine-tuning) and not the nature part.

An airplane is not an explanation for a bird's flight.

thomassmith65 4 days ago

The great breakthrough in AI turned out to be LLMs.

Nature, for an LLM, is its design: graph, starting weights, etc.

Environment, for an LLM, is what happens during training.

LLMs are capable of learning grammar entirely from their environment, which suggests that infants are too. That is bad for Chomsky's position that the basics of grammar are baked into human DNA.
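
Concretely, here is a minimal toy sketch of that split in PyTorch (the model, sizes, and fake "corpus" are illustrative placeholders, nowhere near a real frontier model): everything defined before the loop is the "nature" part, everything the loop does is the "environment" part.

  import torch
  import torch.nn as nn

  torch.manual_seed(0)

  # "Nature": the design -- architecture and random starting weights,
  # all fixed before the model has seen a single word of text.
  class TinyLM(nn.Module):
      def __init__(self, vocab=256, dim=64):
          super().__init__()
          self.embed = nn.Embedding(vocab, dim)
          self.rnn = nn.GRU(dim, dim, batch_first=True)
          self.head = nn.Linear(dim, vocab)

      def forward(self, x):
          h, _ = self.rnn(self.embed(x))
          return self.head(h)

  model = TinyLM()
  opt = torch.optim.Adam(model.parameters(), lr=1e-3)

  # "Environment": whatever the training data does to those weights.
  corpus = torch.randint(0, 256, (8, 33))   # stand-in for real text (byte ids)
  inputs, targets = corpus[:, :-1], corpus[:, 1:]
  for _ in range(10):
      logits = model(inputs)                # predict the next byte
      loss = nn.functional.cross_entropy(
          logits.reshape(-1, 256), targets.reshape(-1))
      opt.zero_grad()
      loss.backward()
      opt.step()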

suddenlybananas 4 days ago

LLMs require vastly more data than humans and still struggle with some of the more esoteric grammatical rules, like parasitic gaps. The fact that grammar can be approximated given trillions of words doesn't explain how babies learn language from a far more modest dataset.

numpad0 4 days ago

I think it does. I think LLMs have shown us the possibility that maybe there is no "language" at all, just a pile of memes plus a supplemental compression scheme we call grammar.

LLMs have really destroyed Chomsky's positions in multiple ways: nothing else performs even close to an LLM in language generation, yet LLMs never grew a UG for natural languages; they did develop a shared logic for non-natural languages and abstract concepts; the training data has to be heavily English-biased for the model to be English-fluent; the parameter count has to be truly massive, multiple hundreds of billions of parameters; and so on.

Those are all circumstantial evidence at best, a random assortment of statements that aren't even appropriate to bring into the discussion, all meaningless - meaningless in the sense that the open hand of a person watching another individual, held out along the line from where that individual stands to the center of the nearest opening in the wall, would be meaningless.

suddenlybananas 4 days ago

>LLMs have really destroyed Chomsky's positions in multiple ways: nothing else performs even close to an LLM in language generation, yet LLMs never grew a UG for natural languages

Do you even understand Chomsky's position?

numpad0 3 days ago

To be honest, I don't, at least not entirely. To me, Noam Chomsky is the patron saint of compilers and an apparent source of quotes used to justify eye-rolling decisions regarding i18n. At least, a lot of his followers' understanding is that the UG is THE UG, a Universal Syntax, and/or a decisive, scientific refutation of the Sapir-Whorf hypothesis as well as of European structuralism - not whatever his later work on UG (which progressively pivoted its definition) or the nature-vs-nurture debates were "meant" to be about.

To me this text looks like his Baghdad Bob moment: silly, but right and noble. What else could it be?

Ironically, these days you can just throw a text like this transcript at ChatGPT and have it debloat or critique it. The results are worse than taking the time to read it yourself, but it gives you validation, if that is what is needed.

thomassmith65 4 days ago

It's not that the invention of LLMs conclusively disproves Chomsky's position.

However, we now have a proof-of-concept that a computer can learn grammar in a sophisticated way, from the ground up.

We have yet to code something procedural that approaches the same calibre via a hard-coded universal grammar.

That may not obliterate Chomsky's position, but it looks bad.

suddenlybananas 4 days ago

That's not the goal of generative linguistics though; it's not an engineering project.

thomassmith65 4 days ago

The problem encompasses not just biology and information technology, but also linguistics. Even if LLMs say nothing about biology, they do tell us something about the nature of language itself.

Again, that LLMs can learn to compose sophisticated texts from training alone does not close the case on Chomsky's position.

However, it is a piece of evidence against it. It does suggest, by Occam's razor, that a hardwired universal grammar is the less parsimonious theory.

suddenlybananas 4 days ago

How do LLMs explain how five-year-olds respect island constraints?

thomassmith65 4 days ago

I don't have the domain knowledge to discuss that.

suddenlybananas 4 days ago

If you don't know what a syntactic island is, perhaps you're not the best judge of the plausibility of a linguistic theory.

thomassmith65 4 days ago

Fantastic, let's have a debate about me /s

Supermancho 5 days ago

> AI skepticism

Isn't AI optimism an ideological motivation? It's a spectrum, not a mental model.

lanfeust6 5 days ago

Whether one expects AI to be powerful or weak should have nothing to do with political slant, but here it seems to inform the opinion. It raises the question: what do they want to be true? The enemy is both too strong and too weak.

They're firmly on one extreme end of the spectrum. I feel as though I'm somewhere in between.

rxtexit 5 days ago

Then you obviously didn't listen to a word Chomsky has said on the subject.

I was quite dismissive of him on LLMs until I realized the utter hubris and stupidity of dismissing Chomsky on language.

I think it was someone asking whether he was familiar with Wittgenstein's Blue and Brown Books - and of course he was, because he was already an assistant professor at MIT when they came out.

I still chuckle at my own intellectual arrogance and stupidity when I think about how dismissive I was of Chomsky on language. I barely know anything, and I was being dismissive of one of the unquestionable titans and historic figures of the field.

hbartab 5 days ago

Chomsky has been colossally wrong on universal grammar.

https://www.scientificamerican.com/article/evidence-rebuts-c...

But at least he admits that:

https://dlc.hypotheses.org/1269#

numpad0 4 days ago

Leftists and intellectuals overlap a lot. To many of them, LLM text must still be full of six-fingered hands.

For Chomsky specifically, the very existence of LLMs, however it's framed, is a massive middle finger to him and a strike-through across a large part of his academic career. As much as I find his UG theory and its supporters irritating, it must feel a bit unfair to someone his age.

protocolture 5 days ago

99%+ of humans on this planet do not investigate an issue; they simply accept a trusted opinion on it as fact. If you think this is a left-only issue, you haven't been paying attention.

Usually what happens is that the information bubble bursts and gets corrected, or it just fades out.

internet_points 4 days ago

This is a great way to remove any nuance and any chance of learning from a conversation. Please don't succumb to black-and-white (or red-and-blue) thinking; it's harmful to your brain.

lanfeust6 3 days ago

You're projecting.

santoshalper 5 days ago

Or an ideological alignment of values. Generative AI is strongly associated with large corporations that are untrusted (to put it generously) by those on the left.

An equivalent observation might be that the only people who seem really, really excited about current AI products are grifters who want to make money selling them. To many, that looks a lot like blockchain.

EasyMark 5 days ago

I think viewing the world as either leftist or right-wing is a rather limiting philosophy and way to go through life. Most people are a lot more complicated than that.

mattw1 5 days ago

I have experienced this too. It's definitely part of the religion, but I'm not sure why, to be honest. Maybe they equate it with "tech is bad, mkay", which, looking at who leads a lot of the tech companies, is somewhat understandable, although very myopic.

santoshalper 5 days ago

I see this as much more of a hackers-vs.-corporations ideological split, which maps imperfectly onto leftism vs. conservatism.

The perception on the left is that once again, corporations are foisting products on us that nobody wants, with no concern for safety, privacy, or respect for creators.

For better or worse, the age of garage-tech is mostly dead and Tech has become synonymous with corporatism. This is especially true with GenAI, where the resources to construct a frontier model (or anything remotely close to it) are far outside what a hacker can afford.

lanfeust6 5 days ago

> I see this as much more of a hackers vs. corporations ideological split.

That framing may be true within tech circles, but not across the broader political divide. "Hackers" aren't collectively discounting and ignoring AI tools, whatever their enthusiasm for open source.

Safety-ism is also most popular among those who see useful potential in AI and who grant a generous enough timeline for AGI.

mattw1 5 days ago

That makes sense, and there's definitely an element of truth to that position. The trouble is that the response is to dissociate from the technology, which is really not a tenable position if you intend to have a meaningful part in... well, anything in the future. What I see -- and this is just my personal experience -- is that leftists tend to want to pretend it isn't happening, or that it won't matter. When in fact nothing matters more.

The deepest of deep ironies: I talk all the time to people who want to usher in an age of post-capitalism while ignoring AI, when I personally can't see how the AI of the next decade and capitalism can coexist, the latter being based on human labor and all. AI is going to be the reason the thing you want happens, so why ignore it?