Anthropic found that Claude will pretend it used the "standard" way to do addition (add the digits, carry the 1, etc.), but the pattern of activations showed it using a completely different algorithm. So these things can role-play at introspecting, coming up with plausible post-hoc explanations for their output, but they are still just pretending, so they will get it wrong.
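For reference, the "standard" way being described is just the schoolbook carrying procedure, something like this rough Python sketch (my own illustration, not anything from Anthropic's paper):

```python
def schoolbook_add(a: str, b: str) -> str:
    """Add two non-negative decimal strings digit by digit,
    right to left, carrying the 1 as needed."""
    n = max(len(a), len(b))
    a, b = a.zfill(n), b.zfill(n)
    carry, digits = 0, []
    for da, db in zip(reversed(a), reversed(b)):
        total = int(da) + int(db) + carry
        digits.append(str(total % 10))   # current column's digit
        carry = total // 10              # carry the 1 (or 0)
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))

assert schoolbook_add("36", "59") == "95"
```

The point of the Anthropic result is that Claude describes doing something like the above, while the activations suggest a different mechanism entirely.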
So you can teach a model to sometimes ask for clarification, but will it actually have insight into when it really needs it, or will it just interject for clarification more or less at random? These models have really awful insight into their own capabilities; ChatGPT, for example, insists to me that it can read braille, and then cheerfully generates a pure hallucination.
> Anthropic found that Claude will pretend it used the "standard" way to do addition (add the digits, carry the 1, etc.), but the pattern of activations showed it using a completely different algorithm.
That doesn't mean much; humans sometimes do the same thing. I recall a fun story about a mathematician with synesthesia who multiplied numbers by mixing the colours together. With a bit of training, such a person could also pretend to be executing a normal algorithm for the purposes of passing tests.
Even then, the human doesn't know how they execute the algorithm or mix the colours together; our conscious, self-reflective mind has limits on how far into our own neural network weights it can reach. You can get further with lots of meditation, but it is still definitionally limited (in information-theoretic terms).