I think the idea is that they use another (or the same) model to judge all the results and return only the best one to the user.
I think the idea is that they just feed each candidate to the RLHF reward model used to train the model and return the highest-rewarded answer.
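If that's right, the mechanics would just be best-of-n (rejection) sampling: sample several candidates, score each with the reward model, keep the top one. A minimal sketch in Python, where `generate` and `reward` are hypothetical stand-ins for the real language model and RLHF reward model:

```python
import random

def generate(prompt: str) -> str:
    # Stand-in for sampling one completion from the language model.
    return f"{prompt} -> answer #{random.randint(0, 999)}"

def reward(prompt: str, answer: str) -> float:
    # Stand-in for the RLHF reward model's scalar score.
    return random.random()

def best_of_n(prompt: str, n: int = 8) -> str:
    # Sample n candidates, return the one the reward model scores highest.
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda a: reward(prompt, a))

print(best_of_n("What is 2+2?"))
```

The user only ever sees the winning answer, so from the outside it looks like a single (better) response.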