That doesn't make any sense.
Why not? If the model learns the specific benchmark questions, it looks like it's doing better while actually only improving on those specific questions. Just as students look like they understand the material if you hand them the exact exam questions before they sit the exam.
A benchmark that can be gamed cannot be prevented from being gamed by 'security through obscurity'.
Besides, this whole line of reasoning is preempted by the mathematical limits of computation and transformers anyway. There's plenty published about that.
Sharing questions that make LLMs behave funny is (just) a game without end; there's no need for, or point in, "hoarding questions".
Yes, it does, unless the questions are unsolved research problems. Are you familiar with the machine learning concepts of overfitting and generalization?
A benchmark is a proxy used to estimate broader general performance. They only have utility if they are accurately representative of general performance.
In ML, it's pretty classic actually. You train on one set, and evaluate on another set. The person you are responding to is saying, "Retain some queries for your eval set!"
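The train/eval split above can be sketched in a toy example (the questions and the "model" here are entirely made up for illustration): a model that has memorized its training questions scores perfectly on them, but a held-out set reveals it hasn't generalized at all.

```python
# Hypothetical Q&A pairs: the training set is what leaked to the model,
# the held-out set is what the benchmark authors kept secret.
train = {"2+2": "4", "3+5": "8", "7-1": "6"}
held_out = {"9+4": "13", "6-2": "4"}

def memorizing_model(question, seen=train):
    # Returns the memorized answer if the question leaked into training,
    # otherwise guesses blindly.
    return seen.get(question, "unknown")

def accuracy(qa_pairs):
    # Fraction of questions the model answers correctly.
    correct = sum(memorizing_model(q) == a for q, a in qa_pairs.items())
    return correct / len(qa_pairs)

print(accuracy(train))     # perfect score on leaked questions
print(accuracy(held_out))  # true generalization: it knows nothing
```

The gap between the two scores is exactly what a contaminated benchmark hides, and why retaining unseen queries for evaluation matters.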
I think the worry is that the questions will be scraped and trained on for future versions.