Reason about: sure. Independently solve novel ones without extreme amounts of guidance: I have yet to see it.
Granted, for most language and programming tasks, you don’t need the latter, only the former.
99.9% of humans will never solve a novel problem. It's a bad benchmark to use here.
But they will solve a problem novel to them, since they haven't read all of the text that exists.
I agree. But it’s worth being somewhat skeptical of ASI scenarios when, for example, you can give a well-formulated math problem to an LLM and it cannot solve it. Until we get a Riemann hypothesis calculator (or the equivalent for other hard, long-unsolved math problems), it’s kind of silly to be debating the extreme ends of AI cognition theory.
"I'm taking this talking dog right back to the pound. It completely whiffed on both Riemann and Goldbach. And you should see the buffer overflows in the C++ code it wrote for me."