Even human-level intelligence (whatever that means) is not enough. Social engineering works fine on our meat brains, and it will most probably work on LLMs for the foreseeable non-weird, non-2027-takeoff-timeline future.
Based on “bug level of intelligence”, I (perhaps wrongly) infer that you don’t believe in the possibility of a takeoff. If that’s even semi-accurate: I think LLMs can be made secure, but perhaps humanity will only be able to interact with such a secure system for a short time.
I believe it takes off. I just think a bug is the lowest lifeform that can differentiate between friend and foe; that's why I wrote that, but it could be a fish or whatever.
But I do think we need a different paradigm to get to actual intelligence, as an LLM is still not it.