Typical patterns to look out for (a rough detection sketch follows the list):
- "Should I now give you the complete [result], fulfilling [all your demands]?"
- "Just say [go] and I will do it"
- "Do you want either [A, B, or C]"
- "In [5-15] minutes I will give you the complete result"
...
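For what it's worth, here is a minimal sketch of how you could flag these in a reply before acting on it. Everything here is made up for illustration (the regexes, the `ENGAGEMENT_BAIT` list, the function name); real phrasings vary far too much for exact matches like these, so treat it as a starting point, not a detector.

```python
import re

# Rough, illustrative regexes for the bait phrasings listed above.
# These approximations are my own; they are not from any library.
ENGAGEMENT_BAIT = [
    re.compile(r"should i now (give|send) you the (complete|full)", re.I),
    re.compile(r"just say \S+ and i('ll| will) do it", re.I),
    re.compile(r"do you want (me to |either )", re.I),
    re.compile(r"in \d+(\s*-\s*\d+)? minutes i('ll| will) (give|send)", re.I),
]

def looks_like_engagement_bait(reply: str) -> bool:
    """True if the tail of a model reply matches one of the stall patterns."""
    tail = reply[-300:]  # the bait tends to appear at the very end
    return any(p.search(tail) for p in ENGAGEMENT_BAIT)
```

For example, `looks_like_engagement_bait("Should I now give you the complete script?")` returns True.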
> "Do you want either [A, B, or C]"
That's an example of what I'm talking about. Watch the reasoning process produce multiple options. That's what it is trained to do. That is problem solving, not "engagement". It requires more compute, not less. You see that more with the expensive models.
> "In [5-15] minutes I will give you the complete result"
I haven't seen that before and I don't see how it's relevant.
> That's an example of what I'm talking about. Watch the reasoning process produce multiple options. That's what it is trained to do. That is problem solving, not "engagement". It requires more compute, not less. You see that more with the expensive models.
Fair point. Thanks for standing your ground and arguing so matter-of-factly with me! Appreciate it.
I have never been thanked for replying here before. Thanks.
The optional choices happen when it tries to reason out a solution but finds it is making too many assumptions about unknown details of the user's system, preferences, goals, and so on. It's just a thought pattern it has learned to emulate.
People here will argue that LLMs cannot truly "think", but they are good enough at emulating thinking.