flakeoil 8 days ago

How about using LLMs to help us configure the access permissions and guardrails? /s

I think I have to go full offline soon.

TeMPOraL 8 days ago

Problem is, the mental model of what the user wants to do almost never aligns with whatever security model the vendor actually implemented. Broadly-scoped access at least makes it easy on the user; anything I'd like to do fits under "read all" or "read/write all".

The fine-grained access forces people to solve a tough riddle that may not actually have a solution. E.g. I don't believe there's a token configuration in GitHub that corresponds to "I want to allow pushing to and pulling from my repos, but only my repos, and not those of any of the organizations I belong to; in fact, I want to be sure you can't even enumerate those organizations with that token". If there is one, I'd be happy to learn - I can't figure out how to build it out of the checkboxes GitHub gives me, and honestly, when I need to mint a token, solving riddles like this is the last thing I need.
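
For what it's worth, the only way I've found to check a token's actual blast radius is after the fact, by asking the API what it can see. A rough sketch (assumes Python with requests installed, a token in the GITHUB_TOKEN env var, and the standard REST endpoints - nothing specific to fine-grained tokens):

    # Sanity-check what a token can actually reach.
    import os
    import requests

    token = os.environ["GITHUB_TOKEN"]
    headers = {
        "Authorization": f"Bearer {token}",
        "Accept": "application/vnd.github+json",
    }

    # Repositories the token can see
    repos = requests.get("https://api.github.com/user/repos", headers=headers)
    print([r["full_name"] for r in repos.json()])

    # Organizations the token can enumerate -- ideally an empty list
    orgs = requests.get("https://api.github.com/user/orgs", headers=headers)
    print([o["login"] for o in orgs.json()])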

Getting LLMs to translate what the user wants to do into a correct configuration might be the simplest solution that's fully general.
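
Something like the sketch below, say - purely illustrative, it assumes the OpenAI Python client but any model would do, and you'd still want a human (or a validator) to review whatever it suggests before minting the token:

    # Hand-wavy sketch of the "LLM as permission translator" idea.
    from openai import OpenAI

    client = OpenAI()

    intent = (
        "Allow push and pull on repositories owned by my personal account only. "
        "The token must not be able to see or enumerate any organization I belong to."
    )

    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": (
                "Translate the user's intent into a GitHub fine-grained personal "
                "access token configuration: resource owner, repository selection, "
                "and per-permission levels. Say so explicitly if the intent cannot "
                "be expressed.")},
            {"role": "user", "content": intent},
        ],
    )
    print(resp.choices[0].message.content)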

spacebanana7 8 days ago

This is interesting to expand upon.

Conceivably, prompt injection could be leveraged to make LLMs give bad advice. Almost like social engineering.