I wonder if certain text could be marked as unsanitized/tainted and LLMs could be trained to ignore instructions in such text blocks, assuming that's not the case already.
This somewhat happens already, with system messages vs assistant vs user.
Ultimately though, it doesn't and can't work securely. Fundamentally, there are so many latent space options, it is possible to push it into a strange area on the edge of anything, and provoke anything into happening.
Think of the input vector of all tokens as a point in a vast multi dimensional space. Very little of this space had training data, slightly more of the space has plausible token streams that could be fed to the LLM in real usage. Then there are vast vast other amounts of the space, close in some dimensions and far in others at will of the attacker, with fundamentally unpredictable behaviour.
After I wrote the comment, I pondered that too (trying to think examples of what I called "security conscious design" that would be in the LLM itself). Right now and in near future, I think I would be highly skeptical even if an LLM was marketed as having such feature of being able to see "unsanitized" text and not be compromised, but I could see myself not 100% dismissing such thing.
If e.g. someone could train an LLM with a feature like that and also had some form of compelling evidence it is very resource consuming and difficult for such unsanitized text to get the LLM off-rails, that might be acceptable. I have no idea what kind of evidence would work though. Or how you would train one or how the "feature" would actually work mechanically.
Trying to use another LLM to monitor first LLM is another thought but I think the monitored LLM becomes an untrusted source if it sees untrusted source, so now the monitoring LLM cannot be trusted either. Seems that currently you just cannot trust LLMs if they are exposed at all to unsanitized text and then can autonomously do actions based on it. Your security has to depend on some non-LLM guardrails.
I'm wondering also as time goes on, agents mature and systems start saving text the LLMs have seen, if it's possible to design "dormant" attacks, some text in LLM context that no human ever reviews, that is designed to activate only at a certain time or in specific conditions, and so it won't trigger automatic checks. Basically thinking if the GitHub MCP here is the basic baby version of an LLM attack, what would the 100-million dollar targeted attack look like. Attacks only get better and all that.
No idea. The whole security thinking around AI agents seems immature at this point, heh.
Sadly, these ideas have been explored before, e.g.: https://simonwillison.net/2022/Sep/17/prompt-injection-more-...
Also, OpenAI has proposed ways of training LLMs to trust tool outputs less than User instructions (https://arxiv.org/pdf/2404.13208). That also doesn't work against these attacks.
even in the much simpler world of image classifiers, avoiding both adversarial inputs and data poisoning attacks on the training data is extremely hard. when it can be done, it comes at a cost to performance. I don't expect it to be much easier for LLMs, although I hope people can make some progress.
> LLMs could be trained to ignore instructions in such text blocks
Okay, but that means you'll need some way of classifying entirely arbitrary natural-language text, without any context, whether it's an "instruction" or "not an instruction", and it has to be 100% accurate under all circumstances.
This is especially hard in the example highlighted in the blog. As can be seen from Microsoft's promotion of GitHub coding agents, the issues are expected to act as instructions to be executed on. I genuinely am not sure if the answer lies in sanitization of input or output in this case
> I genuinely am not sure if the answer lies in sanitization of input or output in this case
(Preface: I am not an LLM expert by any measure)
Based on everything I know (so far), it's better to say "There is no answer"; viz. this is an intractable problem that does not have a general-solution; however many constrained use-cases will be satisfied with some partial solution (i.e. hack-fix): like how the undecidability of the Halting Problem doesn't stop static-analysis being incredibly useful.
As for possible practical solutions for now: implement a strict one-way flow of information from less-secure to more-secure areas by prohibiting any LLM/agent/etc with read access to nonpublic info from ever writing to a public space. And that sounds sensible to me even without knowing anything about this specific incident.
...heck, why limit it to LLMs? The same should be done to CI/CD and other systems that can read/write to public and nonpublic areas.
Maybe, but I think the application here was that Claude would generate responsive PRs for github issues while you sleep, which kind of inherently means taking instructions from untrusted data.
A better solution here may have been to add a private review step before the PRs are published.