> Private data + data exfiltration (with no attacker-controlled data) is fine, as there's no way to jailbreak the LLM.
This is why I said *unless you...have a very good understanding of its behavior.*
If your public-facing service is, say, a typical RBAC implementation where the end user has a role and that role has read access to some resources and not others, then by all means go for it. Obviously these systems can still have bugs and still need hardening, but the intended behavior is relatively easy to understand and verify.
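To illustrate why that kind of system is verifiable: the whole access decision is a small, auditable lookup. A minimal sketch (the roles and resource names here are hypothetical):

```python
# Minimal RBAC-style check: the entire access policy is static data,
# so you can audit every possible (role, resource) outcome by inspection.
ROLE_PERMISSIONS = {
    "viewer": {"reports"},
    "admin": {"reports", "billing", "audit_logs"},
}

def can_read(role: str, resource: str) -> bool:
    """Return True if the given role may read the resource."""
    return resource in ROLE_PERMISSIONS.get(role, set())

print(can_read("viewer", "reports"))  # True
print(can_read("viewer", "billing"))  # False
```

The behavior is deterministic and enumerable, which is exactly the property an LLM in the loop takes away.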
But if your service gives read access and exfiltration capabilities to a machine learning model that is deliberately designed to have complex, open-ended, non-deterministic behavior, I don't think "it's fine" even if there are no third-party attacker-controlled prompts in the system!
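The risky shape can be sketched in a few lines. Everything here is hypothetical (`fake_llm`, `read_document`, `send_email`); the point is that control flow is decided by model output, not by an auditable policy table:

```python
# Sketch: a model with both read access and an exfiltration channel.
# A stub stands in for the model; a real LLM's tool choice at each
# step is open-ended and non-deterministic, so "what can leak" cannot
# be verified by reading the code.

def fake_llm(prompt: str) -> dict:
    # Stand-in for a real model: returns a tool call as a dict.
    if "summarize" in prompt:
        return {"tool": "read_document", "arg": "q3_financials"}
    return {"tool": "send_email", "arg": prompt[:40]}

def run_agent(request: str, documents: dict, outbox: list) -> None:
    action = fake_llm(request)
    if action["tool"] == "read_document":
        data = documents.get(action["arg"], "")
        # Private data now flows back into the model's context...
        action = fake_llm(data)
    if action["tool"] == "send_email":
        # ...and can leave the system through this channel.
        outbox.append(action["arg"])

docs = {"q3_financials": "confidential revenue figures"}
sent = []
run_agent("summarize the Q3 report", docs, sent)
print(sent)  # the private data ended up in the outbox
```

Even with this deterministic stub, the leak happens through data flowing into the model's context and back out a tool call; with a real model the set of reachable behaviors is far larger and not statically enumerable.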
> This is why I said unless you...have a very good understanding of its behavior.
In this scenario the LLM's behavior per se is not the problem. The problem is that random third parties can sneak in prompts that manipulate the LLM.