Where are you getting all this (wrong) detail about the internals of the Chinese Room? The thought experiment merely says that the operator consults "books" and follows "instructions" (no doubt Turing-complete but otherwise unspecified) for manipulating symbols they explicitly DO NOT understand - they do NOT have access to "symbolic and syntactical equivalences" - that is the POINT of the thought experiment. But the instructions in the books of a Chinese Room could perfectly well implement an attention model. The details are irrelevant, because - I stress again - Searle's Chinese Room is not cognitively limited, by definition. Its hypothetical output is indistinguishable from that of a Chinese-speaking human.
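To make that concrete, here is a toy sketch of pure rule-following. The operate() function below does nothing but match opaque tokens against a rule book and copy out whatever the book says; it never interprets them. The rule format, the salience numbers, and the trigram symbols are all invented for illustration - the only point is that context-weighted (attention-like) behavior is just more rules, mechanically followed.

```python
from collections import deque

# Scratch paper: the rule book can tell the operator to keep prior
# symbols around, so the room need not be stateless.
context = deque(maxlen=32)

# Rules pair a trigger symbol with (response, salience). Salience is
# just a number the operator copies and compares without knowing what
# it means. All contents here are made up for illustration.
RULE_BOOK = {
    "☵": ("☲", 3),
    "☲": ("☶", 1),
    "☶": ("☵", 2),
}

def operate(symbol: str) -> str:
    """Follow the book mechanically: record the symbol, then emit the
    response whose trigger scores highest over the recorded context.
    A crude attention-like weighting, done by pure rule-following."""
    context.append(symbol)
    best = max(RULE_BOOK, key=lambda s: context.count(s) * RULE_BOOK[s][1])
    return RULE_BOOK[best][0]

if __name__ == "__main__":
    for incoming in ["☵", "☵", "☲"]:
        print(incoming, "->", operate(incoming))
```

The scoring line is obviously a crude stand-in for real attention, and a real rule book would be astronomically larger - but nothing in it requires the operator to understand a single symbol.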
I tend to agree that Chinese Rooms should be kept out of LLM discussions. Besides the thought experiment being flawed, in the dozens of times I've seen it brought up, not once has the person invoking it demonstrated an understanding of what a Chinese Room actually is.
> The details are irrelevant, because - I stress again - Searle's Chinese Room is not cognitively limited, by definition.
So said Searle. But since he never specified what he meant by it, that's a circular claim at best. Punting to "it passes a Turing Test" just turns this into a different debate about a different flawed test.
The operator has no idea what he's doing. He doesn't know Chinese. He has a Borges-scale library of Chinese books and a symbol-to-symbol translation guide. He can do nothing but manipulate symbols he doesn't understand. How anyone can pass a well-administered Turing test without state retention and context-based reflection, I don't know, but we've already put more thought into this than Searle did.
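The state-retention point is easy to make precise: a stateless symbol-to-symbol map is a pure function of the current message, so any history-dependent probe defeats it. A toy sketch (the probe and the table contents are invented for illustration):

```python
# A stateless "translation guide": the reply depends only on the
# current message, because there is no transcript to consult.
LOOKUP = {
    "what did I say two turns ago?": "you asked about the weather",
}

def stateless_room(message: str) -> str:
    # Identical input, identical output: no memory of the conversation.
    return LOOKUP.get(message, "please rephrase")

# An interrogator can ask the same probe in two different conversations;
# a stateless map must answer identically in both, which a
# well-administered Turing test will catch.
print(stateless_room("what did I say two turns ago?"))
print(stateless_room("what did I say two turns ago?"))
```

The fix is exactly what you'd expect - the rule book has to direct the operator to keep and consult a transcript - but Searle never spells out how that mechanical bookkeeping differs from the "understanding" he denies the room.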