vidarh 7 days ago

> And there won't be a point when human curated reward models are not needed anymore.

This doesn't follow at all. There's no reason why a model cannot be made to produce reward models.

gitaarik 7 days ago

But reward models are always curated by humans. If you generate a reward model with an LLM, it will contain hallucinations that need to be corrected by humans. But that is exactly what a reward model is for: to correct the hallucinations of LLMs.

So yeah theoretically you could generate reward models with LLMs, but they won't be any good, unless they are curated by other reward models that are ultimately curated by humans.

vidarh 7 days ago

> But reward models are always curated by humans.

There is no inherent reason why they need to be.

> So yeah theoretically you could generate reward models with LLMs, but they won't be any good, unless they are curated by other reward models that are ultimately curated by humans.

This reasoning is begging the question: the premise holds only if the conclusion is already assumed to be true. It's therefore a circular and logically invalid argument.

There is no inherent reason why this needs to be the case.

gitaarik 7 days ago

Sorry, but I don't follow your logic. Are you claiming that reward models that aren't curated by humans perform as well as ones that are?

Then what is a reward model's function according to you?

vidarh 6 days ago

I'm claiming exactly what I wrote: that there is no inherent reason why a human-curated one needs to be better.

JoshCole 7 days ago

In reinforcement learning and related fields, a _reward model_ is a function that assigns a scalar value (a reward) to a given state, representing how desirable that state is. You're at liberty to use compound states: for example, a trajectory (often denoted tau) or a state-action pair (typically written as s and a).
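To make that concrete, here is a minimal sketch in PyTorch; the architecture, names, and dimensions are illustrative placeholders rather than any particular system. It's just a small learned function that maps a state, or a state-action pair, to a single scalar reward, plus a helper that scores a whole trajectory by summing per-step rewards.

```python
# Illustrative sketch of a reward model: a learned function mapping a state
# (optionally concatenated with an action) to a single scalar reward.
# Architecture and dimensions are arbitrary placeholders, not from any paper.
import torch
import torch.nn as nn


class RewardModel(nn.Module):
    def __init__(self, state_dim: int, action_dim: int = 0):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 1),  # single scalar output: the reward
        )

    def forward(self, state: torch.Tensor, action: torch.Tensor | None = None) -> torch.Tensor:
        # Score either a plain state or a state-action pair.
        x = state if action is None else torch.cat([state, action], dim=-1)
        return self.net(x).squeeze(-1)


def trajectory_reward(model: RewardModel, states: torch.Tensor, actions: torch.Tensor) -> torch.Tensor:
    # A trajectory (tau) can be scored by summing the per-step rewards.
    # states: (T, state_dim), actions: (T, action_dim)
    return model(states, actions).sum()
```

Whether the scores such a function produces are any good is a separate question from what the function is; training it on human preference labels is one common way to fit it, but nothing in the definition itself requires that.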