lupire 7 days ago

Which LLMs and which versions?

daveguy 7 days ago

All. Of. Them. It's quite literally what they do because they are optimistic text generators. Not correct or accurate text generators.

e3bc54b2 7 days ago

This really grinds my gears. The technology is inherently faulty, but the relentless optimism about its future subtly hides that by making every failure the user's mistake instead.

Oh, you got a wrong answer? Did you try the new OpenAI v999? Did you prompt it correctly? It's definitely not the model, because it worked for me once last night...

traceroute66 7 days ago

> it worked for me once last night..

This !

Yeah, it probably "worked for me" because they spent a gazillion hours engaging in what the LLM fanbois call "prompt engineering", but what you and I would call "endless iterative hacky work-arounds until you find a prompt that works".

Unless it's something extremely simple, the chances of an LLM giving you a workable answer on the first attempt are microscopic.

Aeolun 7 days ago

Most optimistic text generators do not consider repeating the stuff that was already rejected a desirable path forward. It might be the only path forward they're aware of, though.

the_af 7 days ago

In some contexts I got ChatGPT to answer "I don't know" when I crafted a very specific prompt stating that not knowing was an acceptable and preferable answer to bullshitting. But it's hit and miss and doesn't always work; it seems LLMs simply aren't trained to model admitting ignorance. They almost always want to give a positive, confident answer.
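For what it's worth, the kind of prompt described above can be sketched as a system message in a chat-completion style request. This is a hypothetical illustration, not the commenter's actual prompt; the wording and helper function are assumptions:

```python
# Hypothetical sketch: steer a chat model toward admitting ignorance
# by making "I don't know" an explicitly sanctioned answer in the
# system prompt. The exact wording here is an illustrative assumption.
SYSTEM_PROMPT = (
    "If you are not confident in an answer, reply \"I don't know.\" "
    "An honest \"I don't know\" is acceptable and preferable to a "
    "confident-sounding guess."
)

def build_messages(question: str) -> list[dict]:
    """Assemble a chat-completion style message list with the hedging prompt."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": question},
    ]

messages = build_messages("Who won the 2031 World Cup?")
```

As the comment notes, this only nudges the model; it doesn't guarantee the model will actually decline to answer.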