They are really not neuronal analogs, and reasoning is far from what they do. If they reasoned, they'd stick to their guns more readily; but try to contradict an LLM and it will make any logical leap you ask it to.
If you debate with me, I'll keep reasoning from the same premises; usually the difference between two humans is not in the reasoning but in the choice of premises.
For instance, here you want to assert that LLMs are close to humans, and I want to assert they're not - the truth is probably somewhere in between, but we've each chosen a camp. We'll then reason from these premises, reach antagonistic conclusions, and slowly try to attack each other's points.
An LLM cannot do that: it cannot attack your point very well, and it doesn't know how to say you're wrong, because it doesn't care anyway. It just completes your sentences, so if you say "now you're wrong, change your mind" it will, which sounds far from reasoning to me, and quite unreasonable in fact.
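To make the "it just completes your sentences" point concrete, here is a rough sketch of the chat loop being described. This is only an illustration: generate_continuation is a hypothetical placeholder for a model call, not any real API, and nothing here claims how a particular model would actually respond.

```python
from typing import List, Dict

def generate_continuation(transcript: List[Dict[str, str]]) -> str:
    """Placeholder: a real model would return the most likely next
    assistant message given the whole transcript so far."""
    return "<model's most likely continuation of this transcript>"

def chat_turn(history: List[Dict[str, str]], user_message: str) -> str:
    # The model never sees "a debate"; it only sees the running transcript
    # and produces the next assistant message that best continues it.
    history.append({"role": "user", "content": user_message})
    reply = generate_continuation(history)
    history.append({"role": "assistant", "content": reply})
    return reply

history: List[Dict[str, str]] = []
chat_turn(history, "Explain why X is true.")
# Pushing back is just another user turn appended to the same transcript;
# whether the model holds its ground depends on which continuation its
# training makes most likely, not on any stored "position" it is defending.
chat_turn(history, "Now you're wrong, change your mind.")
```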
Gemini 2.5 will tell you when you're wrong. It's the first model to do so.
> An LLM cannot do that: it cannot attack your point very well, and it doesn't know how to say you're wrong, because it doesn't care anyway. It just completes your sentences, so if you say "now you're wrong, change your mind" it will, which sounds far from reasoning to me, and quite unreasonable in fact.
That is absolute bullshit. Go try any frontier reasoning model such as Gemini 2.5 Pro or GPT-o3 and see how that goes. They will inform you that you are full of shit.
Do you understand that they are deep learning models with hundreds of layers and trillions of parameters? They have learned patterns of reasoning, and can emulate human reasoning well enough to call you out on that nonsense.