The number one use case for AI for me as a programmer is still finding functions whose names I didn't expect while I'm learning a new language/framework/library.
Doing the actual thinking is generally not the part I need much help with, though AI can replace googling for info in domains I'm less familiar with. The thing is, I don't trust the results as much and end up needing to verify them anyway. If anything, AI has made this harder, since searching the web for authoritative, expert information has become more difficult lately.
My problem with this usage is that the LLMs seem equally likely to make up a function they wish existed. When questioned about the too-convenient-seeming method, they will usually admit to having made it up on the spot. (This happens a lot in Flutter/Flame land; I'm sure it's better with something more mainstream like Python?) That said, I do agree that using it as supplemental documentation is one of the better use cases I have for it.