hoppp 8 days ago

They exploit the fact the llm will do anything it can to anyone.

These tools cant exist securely as long as the llm doesn't reach at least the level of intelligence of a bug that can make decisions about access control and knows the concept of lying and bad intent

3
om8 7 days ago

Even human level intelligence (whatever that means) is not enough. Social engineering works fine on our meat brains, it will most probably work on llms for foreseeable non-weird non-2027-takeoff-timeline future.

Based on “bug level of intelligence”, I (perhaps wrongly) infer that you don’t believe in possibility of a takeoff. In case it is even semi-accurate, I think llms can be secure, but, perhaps, humanity will be able to interact with such secure system for not so long time

hoppp 7 days ago

I believe it takes off. I just think a bug is the lowest lifeform that can differentiate between friend or foe. so that's why I wrote that but it could be a fish or whatever

But I do think we need a different paradigm to get to actual intelligence as an LLM is still not it.

addandsubtract 7 days ago

Isn't the problem that the LLM can't differentiate between data and instructions? Or, at least in its current state? If we just limit it's instructions to what we / the MCP server provides, but don't let it eval() additional data it finds along the way, we wouldn't have this exploit – right?

dodslaser 7 days ago

Yes they can. If the token you give the LLM isn't permitted to access private repos you can lie all you want, it still can't access private repos.

Of course you shouldn't give an app/action/whatever a token with too lax permissions. Especially not a user facing one. That's not in any way unique to tools based on LLMs.

om8 7 days ago

I thing you are just arguing about words, not about meanings. I’d call what you are referring to “secure llm infrastructure ”, not “secure llm”.

But the thing is that we both agree about what’s going on, just with different words