I don't know that this is a sustainable approach. As LLMs become more capable and start doing the work a real human employee does, they will need access similar to what a human employee would have. Not all employees have access to everything, of course, but there is clearly a need for some broader access. Maybe we should be considering human-style controls: if you are going to grant broader access, then you need X, Y and Z to do it, e.g. the agent requests temporary access from a 'boss' LLM, and so on. There are clear issues with this approach, but humans have these issues too (social engineering attacks work all too well). Is there a different pattern we should be exploring now?
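A rough sketch of what that "ask a boss for temporary access" pattern might look like - every name here (SupervisorLLM, Grant, request_access) is made up for illustration, not an existing API:

    # Hypothetical sketch: an agent asks a supervisor model for a narrow,
    # time-limited grant instead of holding broad standing permissions.
    import time
    from dataclasses import dataclass

    @dataclass
    class Grant:
        scope: str          # e.g. "read:customer/1234"
        expires_at: float   # epoch seconds; access lapses after this

    class SupervisorLLM:
        def review(self, agent_id: str, scope: str, justification: str) -> bool:
            # In practice this would be a separate model (or a human) judging
            # the request against policy; here it just rejects wildcard scopes.
            return "*" not in scope

    def request_access(supervisor: SupervisorLLM, agent_id: str,
                       scope: str, justification: str, ttl_s: int = 300):
        if supervisor.review(agent_id, scope, justification):
            return Grant(scope=scope, expires_at=time.time() + ttl_s)
        return None  # the agent carries on without the elevated permission

The interesting design choice is that grants are narrow and expire, so a single compromised conversation can't accumulate standing access.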
I feel like there needs to be a notion of "tainted" sessions that's adopted as a best practice. The moment a tool accesses sensitive/private data, the entire chat session should be flagged, outside of the token stream, in a way that prevents any tool from writing token output to a public channel - or even from reading from a public system in a way that might introduce side-channel risk.
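Very roughly something like this - the names (Session, ToolGateway, the tool lists) are hypothetical, just to show the shape of a session-level taint bit that lives outside the token stream and is enforced by the tool layer, not the model:

    # Hypothetical sketch of a taint flag enforced at the tool-dispatch layer.
    class Session:
        def __init__(self, session_id: str):
            self.session_id = session_id
            self.tainted = False  # set once, never cleared for this session

    def run_tool(tool: str, args: dict):
        # Placeholder for the real tool dispatch.
        return {"tool": tool, "args": args}

    class ToolGateway:
        PRIVATE_TOOLS = {"read_private_repo", "read_customer_record"}
        PUBLIC_SINKS = {"post_public_comment", "create_public_issue", "web_fetch"}

        def call(self, session: Session, tool: str, args: dict):
            if tool in self.PRIVATE_TOOLS:
                session.tainted = True  # flip the flag before any data is returned
            if session.tainted and tool in self.PUBLIC_SINKS:
                raise PermissionError(
                    f"session {session.session_id} is tainted; refusing {tool}")
            return run_tool(tool, args)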
IMO companies like Palantir (setting aside for a moment the ethical quandaries of the projects they choose) get this approach right - anything with a classification level can be set to propagate that classification to any number of downstream nodes that consume its data, no matter what other inputs and LLMs might be applied along the way. Assume that every user and every input could come from quasi-adversarial sources, whether intentional or not, and plan accordingly.
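A toy version of that propagation rule, with made-up names and a three-level lattice - the only point being that anything derived from classified data inherits the highest classification of its inputs, regardless of what transforms or LLM calls sit in between:

    # Hypothetical sketch: classification labels propagate through derivation.
    LEVELS = {"public": 0, "internal": 1, "secret": 2}

    class Labeled:
        def __init__(self, value, level: str):
            self.value = value
            self.level = level

    def derive(fn, *inputs: Labeled) -> Labeled:
        # Output classification is the highest of its inputs' classifications.
        out_level = max((i.level for i in inputs), key=LEVELS.__getitem__)
        return Labeled(fn(*(i.value for i in inputs)), out_level)

    def publish(item: Labeled):
        if LEVELS[item.level] > LEVELS["public"]:
            raise PermissionError(f"cannot publish {item.level} data")
        ...  # send to a public channel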
GitHub should understand that the notion of a "private repo" is considered trade-secret by much of its customer base, and should build "classified data" systems by default. MCP has been such a whirlwind of hype that I feel a lot of providers with similar considerations are throwing caution to the wind, and it's something we should be aware of.
An LLM is not (and will never be) like a human.
There's an extremely large number of humans, all slightly different, each vulnerable to slightly different attack patterns. All of these humans have some capability to learn from attacks they see, and avoid them in the future.
LLMs are different: there's only a small number of flagship models in wide use. An attack on model A at company X will usually work just as well on a completely different deployment of model A at company Y. Furthermore, each conversation with the LLM is completely separate, so hundreds of slightly different attacks can be tested until you find one that works.
If CS departments were staffed by thousands of identical human clones, each one decommissioned at the end of the workday and restored from the same checkpoint each morning, social engineering would be a lot easier. That's where we are with LLMs.
The right approach here is to adopt much more stringent security practices. Dispense with role-based access control, adopt context-based access control instead.
For example, an LLM tasked with handling a customer support request should be empowered with the permissions to handle just that request, not with all the permissions that a CS rep could ever need. It should be able to access customer details, but only for the customer that opened the case. Maybe it should even be forced to classify what kind of case it is handling, and be given a set of tools appropriate for that kind of case, permanently locking it out of other tools that would be extremely destructive in combination.
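As a sketch (again with invented names - Case, scoped_toolset, the tool lists), the idea is that the toolset handed to the model is built from the case itself: the customer id is baked into every tool, and tools for other case categories simply don't exist in its world:

    # Hypothetical sketch of context-based access control for a support agent.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Case:
        case_id: str
        customer_id: str
        category: str  # e.g. "billing", "shipping" - classified up front

    TOOLS_BY_CATEGORY = {
        "billing": {"read_invoice", "issue_refund_under_50"},
        "shipping": {"read_shipment", "reissue_label"},
    }

    def call_backend(tool: str, **kwargs):
        # Placeholder for the real backend call.
        return {"tool": tool, **kwargs}

    def scoped_toolset(case: Case) -> dict:
        """Return tools bound to this one case; nothing else is reachable."""
        allowed = TOOLS_BY_CATEGORY.get(case.category, set())
        return {
            # Each tool closure bakes in the customer_id, so the agent cannot
            # reach a different customer's data even if prompted to.
            name: (lambda name=name: call_backend(
                name, customer=case.customer_id, case=case.case_id))
            for name in allowed
        }

So even a fully prompt-injected agent handling a shipping case has no refund tool to misuse and no way to name a different customer.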
I don't follow. How does making computer programs more capable make it more important to give them access to private data?
This is a pretty loaded response, but I'll attempt to answer. First, it doesn't, and I never implied that it generically does. The connection I was making was that LLMs are taking on more human-like tasks and will likely need access similar to what people have for those tasks, for the same reasons people need that access. My observation is that if we are going down this path, which it looks like we are, then maybe we can learn from the approaches already taken with real people doing these things.