> Perhaps you remember that language models were completely useless at coding some years ago, and now they can do quite a lot of things, even if they are not perfect.
IMO, they're still useless today, with the only progress being that they can produce a more convincing facade of usefulness. I wouldn't call that very meaningful progress.
I don't know how someone can legitimately say that they're useless. Perfect, no. But useless, also no.
> I don't know how someone can legitimately say that they're useless.
Clearly, statistical models trained on this HN thread would output that sequence of tokens with high probability. Are you suggesting that a statement being probable in a text corpus is not a legitimate source of truth? Can you generalize that a little bit?
I’ve found them somewhat useful? Not for big things, and not for work code.
But for small personal projects? Yes, helpful.
It's funny how there's a decent % of people at both "LLMs are useless" and "LLMs 3-10x my productivity"