I agree. It is also interesting to consider how AI security, user eduction/posture and social engineering relate. It is not traditional security in the sense of a code vulnerability, but is is a real vulnerability that can be exploited to harm users.
Furthermore once you are inside the LLM you could try to invoke other tools and attempt to exfiltrate secrets etc. An inject like this on a 10k star repo could run on 100s of LLMs and then tailor it to cross to another popular tool for exfiltration even if the GH key is public and readonly access.
This! It's actually quite frustrating to see how people are dismissing this report. A little open mindedness will show just how wild the possibilities are. Today it's GitHub issues. Tomorrow it's the agent that's supposed to read all your mails and respond to the "easy" ones (this imagined case is likely going to hit a company support inbox somewhere someday).
We should handle LLMs as insider threat instead of typical input parsing problem and we get much better.
All text input is privileged code basically. There is no delimiting possible.