root_axis 2 days ago

It could be trained to say that, but it's not exactly clear how you would reinforce it to emit that response based on the actual absence of certain training data, rather than just on embedding proximity.

simianwords 2 days ago

Why does it seem so hard to make training data for this? You can cook up a few thousand training examples and do RLHF.
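
A minimal sketch of what cooking that up could look like, e.g. as preference pairs for something like DPO (the questions and responses are made-up placeholders):

    import json

    # Hypothetical unanswerable questions, each paired with a preferred
    # "I don't know" response and a dispreferred confident guess.
    unanswerable = [
        "What did I have for breakfast this morning?",
        "What will the lottery numbers be next week?",
        "What was the score of tomorrow's game?",
    ]

    with open("idk_preferences.jsonl", "w") as f:
        for q in unanswerable:
            record = {
                "prompt": q,
                "chosen": "I don't have enough information to answer that.",
                "rejected": "The answer is 42.",  # stand-in for a confident hallucination
            }
            f.write(json.dumps(record) + "\n")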

root_axis 2 days ago

Yes, but all that does is locate "I don't know" near the cooked-up data in embedding space. This doesn't actually reflect an absence of data in the training.
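
A rough way to see the worry, assuming a sentence-embedding model (the model name and prompts are placeholders, not anything an actual lab does):

    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")

    # Prompts the cooked-up fine-tuning set taught the model to refuse.
    idk_training_prompts = [
        "What did I eat for breakfast today?",
        "What will the stock price be next week?",
    ]

    # One genuinely unanswerable prompt, one the model surely can answer.
    test_prompts = [
        "What number am I thinking of right now?",
        "What is the capital of France?",
    ]

    train_emb = model.encode(idk_training_prompts, convert_to_tensor=True)
    test_emb = model.encode(test_prompts, convert_to_tensor=True)

    # By this argument, proximity to the refusal set is what drives the
    # "I don't know" response -- not whether the knowledge is actually absent.
    print(util.cos_sim(test_emb, train_emb))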

jsnider3 2 days ago

Seems easy. Have a set of vague requests and train it to ask for clarification instead of guessing.
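
Something like this, presumably (made-up pairs, just to show the shape of the data):

    # Hypothetical SFT pairs: vague request -> clarifying question instead of a guess.
    clarification_pairs = [
        {"prompt": "Fix my code.",
         "response": "Happy to help -- can you paste the code and say what it should do?"},
        {"prompt": "Make it faster.",
         "response": "Which part is slow, and roughly how fast does it need to be?"},
        {"prompt": "Write the report.",
         "response": "What's the report about, who's it for, and how long should it be?"},
    ]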

root_axis 2 days ago

As I said, it's possible to train it to ask for clarification, but it's not clear how to reinforce that response in a way that correctly maps onto the absence of data rather than arbitrary embedding proximity. You can't explicitly train on every possible scenario where the AI should recognize its lack of knowledge.

joleyj 2 days ago

If the solution were easy or obvious, the problem would likely have been solved already, no?

jsnider3 1 day ago

We've only had ChatGPT and the like for a few years. It took Ford longer to make automatic transmissions.

joleyj 1 day ago

So it is hard? Not easy? I would agree with that position. I think the analogy with automatic transmissions misses, though. Programming actual intelligence into a computer seems orders of magnitude more complex and difficult than building the gearbox for a car.

jsnider3 1 day ago

I'm saying it shouldn't be that hard, but it's just one of a long list of features that the people whose job it is are working on.

root_axis 1 day ago

It is hard in the sense that it's an unsolved problem that emerges due to the way LLMs work. Perhaps some clever ML PhD will come up with a technique to solve it, but right now there's no clear solution.

timdiggerm 2 days ago

How does it identify what's vague?

jsnider3 1 day ago

Many ways. 1) Hire some humans to label the data. 2) Let the user give you feedback. 3) Ask another LLM.
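
For (3), a rough sketch of the LLM-as-judge idea (the model name and prompt are placeholders, and you'd want something sturdier than YES/NO string matching in practice):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def is_vague(request: str) -> bool:
        # Ask another LLM whether the request has enough detail to act on.
        judge_prompt = (
            "Does the following request contain enough detail to answer it "
            "without asking a follow-up question? Reply with only YES or NO.\n\n"
            f"Request: {request}"
        )
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder; any capable model works
            messages=[{"role": "user", "content": judge_prompt}],
        )
        return resp.choices[0].message.content.strip().upper().startswith("NO")

    print(is_vague("Make it better."))                     # likely True
    print(is_vague("Sort [3, 1, 2] in ascending order."))  # likely False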