> AI this decade have not, at least so far, substantially deepened our understanding of our own intelligence and its constitution
I would push back on this a little bit. While it has not helped us to understand our own intelligence, it has made me question whether such a thing even exists. Perhaps there are no simple and beautiful natural laws, like those that exist in Physics, that can explain how humans think and make decisions. When CNNs learned to recognize faces through a series of hierarchical abstractions that make intuitive sense, it became hard to deny the similarities to what we're doing as humans. Perhaps it's all just emergent properties of some messy evolved substrate.
The big lesson from AI development in the last 10 years, for me, has been "I guess humans really aren't so special after all", which is similar to what we've been through with Physics. Theories often made the mistake of giving human observers some kind of special importance, which was later discovered to be the reason those theories failed to generalize.
> The big lesson from AI development in the last 10 years, for me, has been "I guess humans really aren't so special after all"
I would take the opposite view.
How wonderful it is that, with naturally evolved processes and neural structures, we have been able to create what we have. Van Gogh’s paintings came out of the human brain. The Queens of the Skies - hundreds of tons of metal and composites flying across continents in the form of a Boeing 747 or an A380 - were designed by the human brain. We went to space, have studied nature (and run conservation programs for organisms we have found to need help), took pictures of the Pillars of Creation that are so incredibly far away… all with such a “puny” structure a few cm in diameter? I think that’s freaking amazing.
"I guess humans really aren't so special after all"
This is a crazy take to me. As compared to what? The machines that we built?
Until we discover comparably intelligent life in the universe I think it's fair to say that we are indeed very special.
I am pleasantly surprised that David Hume's writings have been mentioned. I love his works.
I have to confess this is the only essay of his I know, though it's an all-time favorite. What other Hume pieces would you recommend?
What initially drew me to David Hume was a quote from his discussion of miracles in "An Enquiry Concerning Human Understanding" (the chapter is titled "Of Miracles").
That said, I began with "A Treatise of Human Nature" around the age of 17, translated into my native language (his works are not an easy read in English, IMO), due to my interest in both philosophy and psychology.
If you haven't read them yet, I would certainly recommend them. I would recommend the latter even if you are not interested in psychology (but may be interested in epistemology, philosophy of mind, and/or ethics), as he goes into detail about his "impressions" vs. "ideas".
Additionally, he is famously known for his "problem of induction" which you may already know.
It's like saying:
Ah, but these wizards created a magical entity that can also do magic! Wizards must not be so special after all...
You know how many old sci-fi settings pictured aliens as bipedal furry animals or lizards? Even to go from that to realistically-intelligent swarms of insects is already difficult.
(Of course, there’s plenty of sci-fi where conscious entities manifest themselves as abstract balls of pure energy or the like; except for some reason those balls still think in the same way we do, get assigned the same motivations, sometimes even speak our language, etc., which makes it, in a way, even less realistic than the walking and talking human-cat hybrid you’d see in Elder Scrolls.)
Whenever we ponder questions of intelligence and consciousness, the same pitfall awaits.
Since we don’t have an objective definition of consciousness or intelligence (and in all likelihood we can’t have one, because any formal attempt wouldn’t get very far, being attempted by the same thing that’s being defined), the only definition that makes sense is, in crude language, “something like what we are”. There’s a vague feeling that it has to do with free will, self-awareness, etc. However, all of this is also shaped by us all being parts of some big figurative anthill: assuming your sense of self only arises as you model yourself against the other (starting with your parents/caretakers and on), a standalone human that evolved in an emptiness without others could not be self-aware in the way we are, i.e., it would not possess human intelligence. This is supported by our natural-scientific observations rejecting the possibility of a being of this shape and form ever evolving in the first place.
In other words, the more different some kind of intelligence is from ours, the less it would look like intelligence to us—which makes the search for alien intelligence in space somewhat tragically futile (if it exists, we wouldn’t recognize it unless it just happens to be like us), but opens up exciting opportunities for finding alien but not-too-alien intelligence right on this planet (almost Douglas Adams style, minus dolphins speaking English).
There’s an extra trick when it comes to LLMs. In the case of alien life, the possibility of a radically different kind of consciousness producing output that closely mimics our own is almost nil (if our prior assumption is correct, then for all intents and purposes a truly alien, non-meatbag-scale kind of intelligence might not be able to recognize ours in the first place, just as we wouldn’t recognize alien intelligence). LLMs, however, are designed to mimic the most social aspect of our behavior: our communication aimed at fellow humans. So when an LLM produces sufficiently human-like output, even if it has a very different kind of consciousness[0] or no consciousness at all (more likely, though as we concluded above we can’t distinguish between the two cases anyway), our minds are primed to see it as a manifestation of what would have to be human-like intelligence. Nothing suggests such intelligence judging by the way an LLM is created (which is radically different from the way we’ve been creating intelligent life so far, wink-wink) or by the substrate it runs on, if not by the way it actually works (which, per our conclusion above, we might never be able to conclusively determine about our own minds without resorting to unfalsifiable philosophical assumptions for at least some aspects of it).
So yes, I’d say humans are special, if nothing else then because by the only usable (if somewhat circular) definition of what we are there’s absolutely nothing like us around, and in all likelihood can never be. (That’s not to say that something not like us isn’t special in its own way—I mean, think of the dolphins!—but given we, due to not being it, would not be able to properly understand it, it just never hits the same.)
[0] Which if true would be completely asocial (given it neither exists in groups nor depends on others for survival) and therefore drastically different from ours.
In Star Trek the whole humanoids everywhere thing is an obvious practicality in producing episodes, though.
They spent the whole budget on the salt vampire and never recovered.
Well, most sci-fi still fits the bill. Vinge is a bit interesting in that he plays around with the idea with the Tines, where an “individual” (in the human sense) is a pack of 5 of them[0], or with civilizations that “transcend” and then no one has any idea what they are about anymore, and with how a bunch of civilizations evolved from humans, which explains how they all just happen to operate at an equivalent human meatbag scale.
[0] Genuinely not unlike how a congregation of gelled-together humans is an entity that can achieve much more than an individual human.
"Brain_s_". I find we (me included) generally overlook/underestimate the distributed nature of human intelligence, included in the AI field. That's why when I first heard of mixture of experts I was thrilled about the idea and the potential. (One could also see similarities in random forest). I believe a path to AGI(tm) would be to reproduce the evolution of human intelligence artificially. Start with small models training bigger and bigger models and let the bigger successfull models (insert RL, genetic algos, etc.) "reproduce" and teach newer models from scratch. Having different model architecture cohabit could maybe even lead to the kind of specializations we see in parts of the brain
> Perhaps there are no simple and beautiful natural laws, like those that exist in Physics, that can explain how humans think and make decisions.
Isn't Physics trying to describe the natural world? I'm guessing you are taking two positions here that cause my confusion with your statement: 1) that our minds can be explained strictly through physical processes, and 2) that our minds, including our intelligence, are outside the domain of Physics.
If you take 1) to be true, then it follows that Physics, at least theoretically, should be able to explain intelligence. It may be intractably hard, like it might be intractably hard to have physics describe and predict the motions of more than two planetary bodies.
I guess I'm saying that Physical laws ARE natural laws. I think you might be thinking that natural laws refer solely to all that messy, living stuff.
I think their emphasis is on simple and beautiful; not that human intelligence is outside the laws of physics, but that there will never be a “Maxwell’s equations” modelling the workings of human intelligence, it will just be a big pile of hacks and complex interactions of many distinct parts; nothing like the couple of recursive LISP macros people of the 1960s might have hoped to find.
I think it is important to realize that we need to understand language on our own terms. The logic of LLMs is not unlike alien technology to us. That being said, the minimalist program of Chomsky led nowhere, because just like programming, it found edge case after edge case, reducing it further and further, until there was no program anymore that resembled a real theory. But it is wrong to assume that the big progress in linguistics is in vain, for the same reason it is wrong to assume Prolog, theorem provers, type theory, and category theory are in vain when we have LLMs that can produce everything in C++. We can use the technology of linguistics to ground our knowledge, and in some dark corner of the LLM it might already have integrated this. I think the original divide between the sciences and the humanities might be deeper and more fundamental than we think. We need linguistics as a discipline of the humanities, and maybe huge swaths of Computer Science are just that.
I agree with you. I think the fundamental problem is we don't have a good unified theory of fuzzy reasoning. We have a lot of different formal approaches but they all have flaws.
Now LLMs made a big breakthrough: they showed we can do decent fuzzy reasoning in practice. But at the cost of nobody understanding the underlying process formally.
If we had a good unified (formal) theory of fuzzy reasoning, we could build models that reason better (or at least more predictably). But we won't get a better theory by scaling the existing models, I think Chomsky is right about that.
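To make "formal approaches with flaws" concrete, here is a minimal Python sketch of one classical formalism, Zadeh-style fuzzy logic (the membership functions and cutoffs are made up for illustration). The flaw shows up immediately: several incompatible definitions of "and" are all equally admissible:

```python
# Zadeh-style fuzzy logic in miniature. Membership functions map a
# measurement to a degree of truth in [0, 1]; the cutoffs are arbitrary.

def tall(height_cm: float) -> float:
    return min(1.0, max(0.0, (height_cm - 160) / 30))  # 160cm -> 0, 190cm -> 1

def heavy(weight_kg: float) -> float:
    return min(1.0, max(0.0, (weight_kg - 60) / 40))   # 60kg -> 0, 100kg -> 1

# Three standard t-norms for fuzzy conjunction. All satisfy the axioms,
# and they disagree -- there is no principled way to pick one.
def and_min(a, b):  return min(a, b)             # Zadeh
def and_prod(a, b): return a * b                 # product logic
def and_luk(a, b):  return max(0.0, a + b - 1)   # Lukasiewicz

t, h = tall(180), heavy(80)
print(f"tall={t:.2f} heavy={h:.2f}")
print(f"tall AND heavy: min={and_min(t, h):.2f}, "
      f"product={and_prod(t, h):.2f}, lukasiewicz={and_luk(t, h):.2f}")
```

The same person comes out 0.50, 0.33, or 0.17 "tall and heavy" depending on which operator you choose, which is the sort of thing a unified theory would have to settle.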
We lack the goal, not the means. If I am asking LLM a question, what answer do I want? A playfully creative one? A strictly logical one? A pleasingly sycophantic one? A harshly critical one? An out of the box devil's advocate one? A beautiful one? A practical one? We have no clue how to express these modes in logical reasoning.
By way of analogy, the result of the theorem prover is usually actionable (i.e. we can replace one kind of expression with its proven equivalent for some end like optimizing code-size or code-run-time), but mathematicians _still_ endeavor to translate the unwieldy and verbose machine-generated proofs into concise human-readable proofs, because those readable proofs are useful to our understanding of mathematics even long after the "productive action" has been taken.
In a way, this collaboration between the machine and the human is better than what came before, because now productive actions can be taken sooner, and mathematicians do not have to doubt whether they are searching for a proof that exists.
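As a toy illustration of that gap (hypothetical, in Lean 4; neither proof is taken from an actual prover run): both of these close the same goal, but the first reads as an appeal to a known fact while the second resembles the mechanical case-splitting a search procedure tends to emit:

```lean
-- The concise, human-readable proof: cite the named lemma.
example (a b : Nat) : a + b = b + a := Nat.add_comm a b

-- The kind of proof mechanical search tends to produce: induction
-- plus rewriting, correct but opaque about *why* the fact holds.
example (a b : Nat) : a + b = b + a := by
  induction a with
  | zero => simp
  | succ n ih => simp [Nat.succ_add, Nat.add_succ, ih]
```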
> That being said, the minimalist program of Chomsky led nowhere, because just like programming, it found edge case after edge case, reducing it further and further, until there was no program anymore that resembled a real theory
As someone who has worked in linguistics, I don't really see what you're talking about. Minimalism is not full of exceptions (please elaborate on a specific example if you have one). Minimalism was created to make the old theory, Government and Binding, simpler.
Yes, and the project can be criticised for reducing until there's no value anymore. Well-known instances of this process:
- Predicate Fronting in Free Relatives: in sentences like “What John saw was a surprise,” labeling the fronted predicate is not without problems; Merge doesn’t yield a clear head.
- Optional Verb Movement in Persian: yes-no questions where verbs can optionally move (e.g., “Did you go?” vs. “You went?”) mess up feature-checking’s binary mode.
- Non-Matching Free Relatives with Pied-Piping: structures like “In whichever city you live, you’ll find culture” mess up standard labeling and need extra stipulations.
- Some Subjects in Finnish: nominative vs. non-nominative subjects (e.g., “Minua kylmä” [me-ACC cold]) complicate Minimalist case assignment.
But we don't have LLMs that can "produce everything in C++".
We have LLMs that can get some boilerplate right if you use them in a greenfield project, and that will repeatedly mess up your code once it grows enough for you to actually need assistance grokking it.
Neuroscientist here:
> Perhaps there are no simple and beautiful natural laws, like those that exist in Physics, that can explain how humans think and make decisions... Perhaps it's all just emergent properties of some messy evolved substrate.
Yeah, it is very likely that there are no laws that will do this; it's the substrate. The fruit fly brain (let alone the human one) has been mapped, and we've figured out that it's not just the synapse count, but the 'weights' that matter too [0]. Mind you, those weights adjust in real time while a living animal is out in the world.
You'll see in the literature that there are people with some 'lucky' form of hydrocephalus where their brain is as thin as paper. But they vote, get married, have kids, and for some strange reason seem to work in mailrooms (not a joke). So we know it's something about the connectome that's the 'magic' of a human.
My pet theory: we need memristors [2] to better represent things. But that takes redesigning the computer from the metal on up, so it is unlikely to occur any time soon in this current AI craze.
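For concreteness, here is a rough numerical sketch of the standard linear-drift memristor model (after Strukov et al., 2008; the parameter values are ballpark, for illustration only). It shows the property that makes these devices attractive as synapses: the "weight" is physical state that the device itself updates as current flows through it, rather than a number fetched, modified, and written back:

```python
# Linear-drift memristor model (after Strukov et al., 2008), toy parameters.
R_ON, R_OFF = 100.0, 16e3   # resistance when fully doped / undoped (ohms)
MU, D = 1e-14, 1e-8         # dopant mobility (m^2 / V s), film thickness (m)

def step(x, v, dt):
    """Advance the doped fraction x (the device's 'weight') under voltage v."""
    r = R_ON * x + R_OFF * (1 - x)       # present resistance
    i = v / r                            # Ohm's law
    dx = (MU * R_ON / D**2) * i * dt     # dopant drift moves the boundary
    return min(1.0, max(0.0, x + dx))

x = 0.1  # initial state
for _ in range(500):             # a 0.5 s positive pulse train strengthens
    x = step(x, v=1.0, dt=1e-3)  # the "synapse"; negative v would weaken it

conductance_uS = 1e6 / (R_ON * x + R_OFF * (1 - x))
print(f"doped fraction x = {x:.3f}, conductance = {conductance_uS:.1f} uS")
```

The update rule lives in the physics of the device, which is the contrast with today's hardware, where weights are stored in one place and updated in another.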
> The big lesson from AI development in the last 10 years, for me, has been "I guess humans really aren't so special after all", which is similar to what we've been through with Physics.
Yeah, biologists get there too, just the other way around, with animals and humans. Like, dogs make vitamin C internally, and humans have that gene too; it's just dormant, ready for evolution (or genetic engineering) to reactivate. That said, these neuroscience issues between us and the other great apes are somewhat large and strange. I'm not big into that literature, but from what little I know, the exact mechanisms and processes that get you from tool-using orangutans to tool-using humans seem to be a bit strange and harder for us to grasp. Again, not in that field though.
In the end though, humans are special. We're the only ones on the planet that ever really asked a question. There's a lot to us and we're actually pretty strange in the end. There's many centuries of work to do with biology, we're just at the wading stage of that ocean.
[0] https://en.wikipedia.org/wiki/Drosophila_connectome
[2] https://en.wikipedia.org/wiki/Memristor
>You'll see in the literature that there are people with some 'lucky' form of hydrocephalus where their brain is as thin as paper. But they vote, get married, have kids, and for some strange reason seem to work in mailrooms (not a joke). So we know it's something about the connectome that's the 'magic' of a human.
These cases seem totally fascinating. Do you have any links to examples or more information? (I'm also curious about the detail of them tending to work in mailrooms.)
It is possible that we simply haven't yet discovered those natural laws for "emergent behavior" from the "messy substrate".
> it has made me question whether such a thing even exists
I was reading a reddit post the other day where the guy lost his crypto holdings because he input his recovery phrase somewhere he shouldn't have. We question the intelligence of LLMs because they might open a website, read something nefarious, and then act on it. But here we have real humans doing the exact same thing...
> I guess humans really aren't so special after all
No they are not. But we are still far from getting there with the current LLMs and I suspect mimicking the human brain won't be the best path forward.
> But here we have real humans doing the exact same thing...
I'd wager that a motivation in designing these systems is that they do not make these mistakes. Otherwise what's the point, really?
I think a system that is too perfect will not show any creativity. Maybe wild new ideas require taking risks, which means a system that can invent new things will end up making bad choices.
Our own, and other people's mistakes shape us, and our understanding. What even is perfect, anyway?
I'm not interested in navel gazing. I'm interested in getting my taxes done properly.