gitaarik 5 days ago

You bring up very interesting points, and I'm thankful for that. But I also think you're missing the crux of my message. LLMs don't experience the world the same way humans do, and they don't think in the same way either. So you can train them very far with enough input data, but there will always be a limit to what they can understand compared to a human. If you want them to think and experience the world in the same way, you basically have to create a complete human.

My visualization example was just one example to make a point. What I ultimately mean is the whole, complete human experience. And besides, if you give it eyes, what data are you gonna train it on? Most videos on the internet are filmed with a single lens, which doesn't give you 3D vision. So you would have to train it like a baby growing up, by trial and error. And then we're still only talking about vision.

Helen Keller wasn't born blind, so she did have a chance to develop her visual brain functions. Most people can visualize things with their eyes closed.

JoshCole 5 days ago

Chess engines cannot see the way a human can. When they think, they don't necessarily use the same methods a human does. Yet train a chess engine for long enough and it can end up understanding chess better than a human does.

I do understand the points you are attempting to make. The reason you're failing to prove your point is not that I'm failing to understand the thrust of your argument.

Imagine you were talking to a rocket scientist about engines, and your understanding of engines was based on your experience with cars. You start making claims about the nature of engines; they disagree with you, argue with you, and point out all the ways you're wrong. Is this person doing that because they're unable to understand your points? Or is it more likely that their experience with engines very different from the ones you're used to gives them a different perspective, one that forces them to think about the world differently than you do?

gitaarik 5 days ago

Well, chess has a very limited set of rules and a bounded playing field. And the way to win at chess is to think ahead, work out how all the moves could play out, and pick the best one. That is relatively easy to write an algorithm for, one that surpasses humans, because it is exactly what computers are good at: executing specific algorithms very fast. A computer will always beat a human at that.
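
To be concrete about what I mean by "thinking ahead": below is a minimal sketch of game-tree search (minimax), the core idea behind classic chess engines. A real chess engine won't fit in a comment, so this uses a toy take-away game I made up purely for illustration (take 1-3 stones, whoever takes the last stone wins); the minimax and best_move helpers are just names for this sketch. Real engines add alpha-beta pruning and hand-tuned or learned evaluation functions on top, but the skeleton is the same.

    # Minimax on a toy take-away game: enumerate every legal move, play
    # out the consequences, and score positions for the maximizing
    # player (+1 = win, -1 = loss). A toy stand-in for chess search,
    # not a chess engine.

    def minimax(stones: int, maximizing: bool) -> int:
        """Score the position for the maximizing player by playing out
        every possible continuation."""
        if stones == 0:
            # The previous player took the last stone and won,
            # so the side now to move has lost.
            return -1 if maximizing else 1
        scores = [
            minimax(stones - take, not maximizing)
            for take in (1, 2, 3) if take <= stones
        ]
        return max(scores) if maximizing else min(scores)

    def best_move(stones: int) -> int:
        """Pick the move leading to the best reachable outcome for us."""
        moves = [take for take in (1, 2, 3) if take <= stones]
        return max(moves, key=lambda take: minimax(stones - take, maximizing=False))

    print(best_move(7))  # -> 3: leaving 4 stones is a lost position for the opponent

That's all the "thinking" is: brute enumeration plus a scoring rule, executed far faster than any human could manage.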

So such algorithms can replace certain functions of humans, but they can't replace the human as a whole. The same goes for LLMs. They save us time on repetitive tasks, but they can't replace all of our functions. In the end an LLM is a comprehensive algorithm, constantly updated through machine learning. It's very helpful, but it has its limits. Those limits keep getting pushed back, but it will never replace a full human. To do that you need a whole lot more than a comprehensive machine learning algorithm. LLMs can get very close to something that looks like a human, but there will always be something lacking. That lack can then be improved upon in turn, but you never reach the same level.

That is why I don't worry about AI taking our jobs. It replaces certain functions, which makes our jobs easier. I don't see myself as a coder; I see myself as a system designer. I don't mind if AIs take over (certain parts of) the coding process, once they're good enough. It will just make software development easier and faster. I don't think there will be less demand for software developers.

It will change our jobs, and we'll have to adapt to that. But that is what always happens with new technology. You have to grow along with the changes and not expect to keep doing the same thing for the same value. But I think that for most software developers that isn't news. In the old days people programmed in assembly, then compiled languages came along, and then higher-level languages. Now we have LLMs, which (once they become good enough) will just be another layer of abstraction.