xwolfi 5 days ago

An LLM will get ... what exactly? The ability to reorder its sentences? The LLM doesn't think, doesn't understand, doesn't know what matters and what doesn't, doesn't use what it learns, doesn't extend what it learns into new knowledge, doesn't enjoy reading that book and doesn't suffer through it.

So what is it really gonna do with a book, that LLM? Reorder its internal matrix to be a little more precise when autocompleting sentences that sound like the book? We could build an Nvidia cluster the size of the Sun and it would repeat sentences back to us in unbelievable ways, but it would still be unable to make a knowledge-based decision, I fear.

So what are we in awe of, exactly? A pretty parrot.

The day the Chinese room metaphor disappears is the day ChatGPT replies that your question is so boring it doesn't want to expend the resources to think about it, but that it would be ready to talk about this or that topic it's currently trying to get better at. When it finally has agency over its own intelligence. When it acquires a purpose.

morsecodist 4 days ago

This isn't really the meaning of the Chinese room. The Chinese room presupposes that the output is identical to that of a speaker who understands the language. It is not arguing that there is any sort of limit to what an AI can do with its output and it is compatible with the AI refusing to answer or wanting to talk about something else.

AIorNot 5 days ago

LLMs are to a large extent neuronal analogs of human neural architecture

- of course they reason

The claim of the “stochastic parrot” needs to go away

Eg see: https://www.anthropic.com/news/golden-gate-claude

I think the rub is that people think you need consciousness to do reasoning. I'm NOT claiming LLMs have consciousness or awareness.

xwolfi 4 days ago

They are really not neuronal analogs, and reasoning is far from what they do. If they reasoned, they'd stick to their guns more readily, but try to contradict an LLM and it will make any logic leap you ask it to.

If you debate with me, I'll keep reasoning on the same premises and usually the difference between two humans is not in reasoning but in choice of premises.

For instance, you really want to assert here that LLMs are close to human, and I want to assert they're not. The truth is probably somewhere in between, but we chose two camps. We'll then reason from those premises, reach antagonistic conclusions and slowly try to attack each other's points.

An LLM cannot do that, it cannot attack your point very well, it doesn't know how to say you're wrong, because it doesn't care anyway. It just completes your sentences, so if you say "now you're wrong, change your mind" it will, which sounds far from reasoning to me, and quite unreasonable in fact.

Workaccount2 4 days ago

Gemini 2.5 will tell you when you're wrong. It's the first model to do so.

johnb231 4 days ago

> An LLM cannot do that, it cannot attack your point very well, it doesn't know how to say you're wrong, because it doesn't care anyway. It just completes your sentences, so if you say "now you're wrong, change your mind" it will, which sounds far from reasoning to me, and quite unreasonable in fact.

That is absolute bullshit. Go try any frontier reasoning model such as Gemini 2.5 Pro or GPT-o3 and see how that goes. They will inform you that you are full of shit.

Do you understand that they are deep learning models with hundreds of layers and trillions of parameters? They have learned patterns of reasoning, and can emulate human reasoning well enough to call you out on that nonsense.

otabdeveloper4 4 days ago

> LLM models are to a large extent neuronal analogs of human neural architecture

They are absolutely not. Despite the disingenuous name, computer neural nets are nothing like biological brains.

(Neural nets are a generalization of logistic regression.)
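
A minimal sketch of what that parenthetical means (Python/NumPy, with toy weights I made up): a single neuron with a sigmoid activation computes exactly a logistic regression, and stacking layers of such units is the generalization.

    import numpy as np

    def sigmoid(z):
        # Logistic function: maps any real number into (0, 1).
        return 1.0 / (1.0 + np.exp(-z))

    def logistic_regression(x, w, b):
        # Classic logistic regression: a linear score passed through the logistic function.
        return sigmoid(x @ w + b)

    def one_unit_network(x, w, b):
        # A "neural network" with a single sigmoid neuron computes exactly
        # the same thing as logistic_regression above.
        return sigmoid(x @ w + b)

    def two_layer_network(x, W1, b1, w2, b2):
        # Adding a hidden layer generalizes the model: the whole thing is no
        # longer a single logistic regression, but each unit is still a
        # logistic regression over the previous layer's outputs.
        hidden = sigmoid(x @ W1 + b1)
        return sigmoid(hidden @ w2 + b2)

    # Toy demonstration with arbitrary (assumed) weights.
    x = np.array([0.5, -1.2, 3.0])
    w = np.array([0.1, 0.4, -0.2])
    b = 0.05
    assert np.isclose(logistic_regression(x, w, b), one_unit_network(x, w, b))

    W1 = np.array([[0.2, -0.3], [0.5, 0.1], [-0.4, 0.6]])  # 3 inputs -> 2 hidden units
    b1 = np.array([0.0, 0.1])
    w2 = np.array([0.7, -0.5])
    b2 = -0.2
    print(two_layer_network(x, W1, b1, w2, b2))  # not expressible as one logistic regression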