I don't find it plausible that you didn't understand my corrections, since current AIs already do. So I'm exiting the conversation.
https://chatgpt.com/share/683a3c88-62a8-8008-92ef-df16ce2e8a...
Ok, this is interesting indeed and I'll investigate more into it. But I think my points still stand. Let me elaborate.
An LLM only learns through input text. It doesn't have a first-person 3D experience of the world. So it can't execute physical experiments, or even understand them. It can understand texts about them, but it can't visualize them, because it doesn't have visual experience.
And ultimately our physical world is governed by physical processes. So at the fundamentals of physical reality, LLMs lack understanding, and will therefore stay dependent on humans educating and correcting them.
You might get impressively far with all kinds of techniques, but you can't cross this barrier with just LLMs. If you want to, you have to give them senses like humans have, so they get an experience of the world, and make them understand those experiences. And sure, people are already working on that, but that is a lot harder to create than a comprehensive machine learning algorithm.
You're doing this thing again where you say tons of things that aren't true.
> An LLM only learns through input text.
This is false. There already exist LLMs that understand more than just text. Relevant search term: multi-modality.
> It doesn't have a first-person 3D experience of the world.
Again false. It is trivial to create such an experience with multi-modality. Just set up an input device which streams that.
> So it can't execute physical experiments, or even understand them.
Here you get confused again. It doesn't follow from perceptual modality that someone can't do or understand experiments. Helen Keller could be blind and still run an experiment.
Beyond just being confused, you also make another false claim. Current LLMs already have the capacity to run experiments and do so. Search terms: tool usage, ReAct loop, AI agents.
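For concreteness, here's a rough sketch of what a ReAct-style tool-use loop looks like. Everything in it (`llm_complete`, `run_experiment`, the "Action:"/"Final:" convention) is a made-up placeholder for illustration, not any specific framework or vendor API:

```python
# Sketch of a ReAct-style tool-use loop (illustrative placeholders only).

def llm_complete(prompt: str) -> str:
    # Stand-in for a real LLM call; scripted so this demo actually runs.
    if "Observation:" in prompt:
        return "Final: the sample melted at roughly the expected temperature"
    return "Action: run_experiment(heat sample to 80C)"

def run_experiment(instruction: str) -> str:
    # Stand-in for real actuation: a lab robot, a simulator, a shell command, ...
    return f"executed '{instruction}', sensors report melting"

TOOLS = {"run_experiment": run_experiment}

def react_loop(task: str, max_steps: int = 5) -> str:
    transcript = f"Task: {task}\n"
    for _ in range(max_steps):
        step = llm_complete(transcript)
        if step.startswith("Final:"):              # the model decides it is done
            return step.removeprefix("Final:").strip()
        if step.startswith("Action:"):             # parse "Action: tool(args)"
            call = step.removeprefix("Action:").strip()
            name, _, args = call.partition("(")
            observation = TOOLS[name.strip()](args.rstrip(")"))
            transcript += f"{step}\nObservation: {observation}\n"
        else:
            transcript += step + "\n"              # plain reasoning step
    return "no answer within max_steps"

print(react_loop("check the melting point of the sample"))
```

The only point is that the model's text output can trigger actions in the world and the results flow back into its context; real agent frameworks are more elaborate, but it's the same loop.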
> It can understand texts about them, but it can't visualize them, because it doesn't have visual experience.
Again, false!
Multi-modal LLMs currently possess the ability to generate images.
> And ultimately our physical world is governed by physical processes. So at the fundamentals of physical reality, LLMs lack understanding, and will therefore stay dependent on humans educating and correcting them.
Again false. The same sort of reasoning would claim that Helen Keller couldn't read a book, yet braille exists. Acquiring information from outside one's umwelt is exactly the kind of capability that intelligence enables.
You come up with very interesting points, and I'm thankful for that. But I also think you're missing the crux of my message. LLMs don't experience the world the way humans do, and they don't think the same way either. So you can train them very far with enough input data, but there will always be a limit to what they can understand compared to a human. If you want them to think and experience the world the same way, you basically have to create a complete human.
My example about visualization was just one example to prove a point. What I ultimately mean is the whole, complete human experience. And besides, if you give it eyes, what data are you going to train it on? Most videos on the internet are filmed with one lens, which doesn't give you a 3D view. So you would have to train it like a baby growing up, by trial and error. And then we're still only talking about vision.
Helen Keller wasn't born blind, so she did have a chance to develop her visual brain functions. Most people can visualize things with their eyes closed.
Chess engines cannot see the way a human can. When they think, they don't necessarily use the same method a human uses. Yet train a chess engine for long enough and it can end up understanding chess better than a human does.
I do understand the points you are attempting to make. The reason you're failing to prove your point is not that I am failing to understand the thrust of what you're trying to argue.
Imagine you were talking to a rocket scientist about engines, and your understanding of engines was based on your experience with cars. You start making claims about the nature of engines; they disagree with you, argue with you, and point out all the ways you're wrong. Are they doing this because they're unable to understand your points? Or is it more likely that their experience with engines very different from the ones you're used to gives them a different perspective, one that forces them to think about the problem in a different way than you do?
Well, chess has a very limited set of rules and a limited playing field. And the way to win at chess is to think forward through how all the moves could play out and pick the best one. It's relatively easy to create an algorithm for this that surpasses humans. That is what computers are good at: executing specific algorithms very fast. A computer will always beat a human at that.
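To make concrete what "think forward through the moves and pick the best one" means, here's a bare-bones minimax sketch over a toy take-away game. The game interface (`legal_moves`, `make_move`, `evaluate`) is made up for illustration; real chess engines add alpha-beta pruning, transposition tables, and tuned or learned evaluation on top of this idea:

```python
# Bare-bones look-ahead search (minimax). Illustrative sketch only; the game
# below is a toy stand-in, not how a real chess engine is structured.

def minimax(state, depth, maximizing, legal_moves, make_move, evaluate):
    """Search `depth` plies ahead and return (score, best_move)."""
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state), None
    best_score = float("-inf") if maximizing else float("inf")
    best_move = None
    for move in moves:
        score, _ = minimax(make_move(state, move), depth - 1, not maximizing,
                           legal_moves, make_move, evaluate)
        if (maximizing and score > best_score) or (not maximizing and score < best_score):
            best_score, best_move = score, move
    return best_score, best_move

# Toy game: 7 stones, players alternately take 1-3, whoever takes the last stone wins.
def legal_moves(state):
    stones, _ = state
    return [n for n in (1, 2, 3) if n <= stones]

def make_move(state, move):
    stones, to_move = state
    return (stones - move, 1 - to_move)

def evaluate(state):
    stones, to_move = state
    if stones == 0:                       # terminal: the player who just moved won
        return 1 if to_move == 1 else -1  # +1 means good for player 0 (the maximizer)
    return 0                              # non-terminal cutoff: call it even

score, move = minimax((7, 0), 7, True, legal_moves, make_move, evaluate)
print(score, move)  # player 0 can force a win here, e.g. by taking 3
```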
So such algorithms can replace certain functions of humans, but they can't replace the human as a whole. And the same goes for LLMs. They save us time on repetitive tasks, but they can't replace all of our functions. In the end, an LLM is a comprehensive algorithm constantly updated with machine learning. It's very helpful, but it has its limits. The limit keeps being pushed further, but it will never replace a full human. To do that, you need to do a whole lot more than build a comprehensive machine learning algorithm. They can get very close to something that looks like a human, but there will always be something lacking. Which can then again be improved upon, but you never reach the same level.
That is why I don't worry about AI taking our jobs. They replace certain functions, which will make our job easier. I don't see myself as a coder, I see myself as a system designer. I don't mind if AIs take over (certain parts of) the coding process (once they're good enough). It will just make software development easier and faster. I don't think there will be less demand for software developers.
It will change our jobs, and we'll have to adapt to that. But that is always what happens with new technology. You have to grow along with the changes and not expect that you can keep doing the same thing for the same value. But I think that for most software developers that isn't news. In the old days people were programming in assembly, then compiled languages came, and then higher-level languages. Now we have LLMs, which (when they become good enough) will just be another layer of abstraction.