The article cites a signal to noise ratio of ~1:50. The author is clearly deeply familiar with this codebase and is thus well-positioned to triage the signal from the noise. Automating this part will be where the real wins are, so I'll be watching this closely.
I’ve developed a few take-home interview problems over the years that were designed to be short, easy for an experienced developer, but challenging for anyone who didn’t know the language. All were extracted from real problems we solved on the job, reduced into something minimal.
Every time a new frontier LLM is released (excluding LLMs that use input as training data) I run the interview questions through it. I’ve been surprised that my rate of working responses stays consistently around 1:10 on the first pass, and it often takes upwards of 10 rounds of poking to get the model to find its own mistakes.
So this level of signal to noise ratio makes sense for even more obscure topics.
> challenging for anyone who didn’t know the language.
Interviewees don't get to pick the language?
If you're hiring based on proficiency in a particular tech stack, I'm curious why. Are there that many candidates that you can be this selective? Is the language so dissimilar that the uninitiated would need a long time to get up to speed? Does the job involve working on the language itself and so a specifically deep understanding is required?
> Interviewees don't get to pick the language?
For leetcode interviews, sure. Other than that, at least familiarity with the language is paramount, or with the same class of language.
Aren't most interviews like this? Most dev openings I see posted mention the specific language whose expertise they're looking for, along with the number of years of experience working with that language.
It can be annoying, but manageable. I've never coded in Java for example, but knowing C#, C++ and Python I imagine it wouldn't be too hard to pick up.
Huh, okay. That's not how we run interviews, but I guess it's at least a thing, even if it's not common around here from what I've seen (I'm not super current on interview practices, though).
Regarding the job ads, yes, they'd describe the ideal candidate, but in my experience the perfect candidate never actually shows up. Like you say, knowing J, T and Z, the company is confident enough that you'll be able to quickly pick up dotting the Is and crossing the 7s.
That is the market nowadays. Employers seek not only deep knowledge of a particular language, but also of particular libraries. If you cannot answer interview questions about the implementation of some features, you are out.
I do the same, but with entry-level problems that require sound analysis. New frontier LLMs do not manage them well at all.
We’ve been working on a system that dramatically increases signal to noise for finding bugs. At the same time, we’ve been thoroughly benchmarking the popular software-agent space on this task.
We’ve found a wide range of results, and we have a conference talk coming up soon where we’ll release everything publicly, so stay tuned for that; it’ll be pretty illuminating on the state of the space.
Edit: confusing wording
Interesting. This is for Bismuth? I saw your pilot program link — what does that involve?
Yup! So we have multiple businesses working with us, and for pilots it's deploying the tool, providing feedback (we're connected over Slack with all our partners for a direct line to us), and making sure the use cases fit expectations for your business, working towards a long-term partnership.
We have several deployments in other people's clouds right now, as well as usage of our own cloud version, so we're flexible here.
I was thinking about this the other day: wouldn't it be feasible to fine-tune (or something like that) on every git change, mailing list thread, etc. that the Linux kernel has ever had?
Wouldn't such an LLM be the closest (synthetic) version of a person who has worked on a codebase for years and learnt all its quirks?
There's only so much you can fit in a long context; some codebases are already 200k tokens just for the code as is, so idk.
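For what it's worth, the data-mining half is the easy part. A rough sketch of what collecting commit history into training pairs might look like (the repo path, the `---DIFF---` separator, and the prompt/completion JSONL layout are all just assumptions, not any particular provider's fine-tuning format):

```python
import json
import subprocess

def kernel_commits(repo_path, limit=1000):
    """Yield (commit message, diff) pairs from a local clone of the kernel tree."""
    hashes = subprocess.run(
        ["git", "-C", repo_path, "log", f"-{limit}", "--format=%H"],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    for h in hashes:
        # Print the commit message, a separator we choose, then the patch.
        show = subprocess.run(
            ["git", "-C", repo_path, "show", "--format=%B%n---DIFF---", h],
            capture_output=True, text=True, check=True,
        ).stdout
        message, _, diff = show.partition("---DIFF---")
        yield message.strip(), diff.strip()

def write_finetune_jsonl(repo_path, out_path="kernel_commits.jsonl"):
    """Dump message -> diff pairs in a generic prompt/completion JSONL layout."""
    with open(out_path, "w") as f:
        for message, diff in kernel_commits(repo_path):
            f.write(json.dumps({"prompt": message, "completion": diff}) + "\n")
```

Whether training on that actually produces anything like a veteran maintainer is a separate question, of course.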
I'd be willing to bet the sum of all code submitted via patches, ideas discussed via lists, etc doesn't come close to the true amount of knowledge collected by the average kernel developer's tinkering, experimenting, etc that never leaves their computer. I also wonder if that would lead to overfitting: the same bugs being perpetuated because they were in the training data.
I bet automating this part will be simple. In general, an LLM with a given semantic ability "X" at doing some task has a greater-than-X ability to check, among N replies to the same task, which reply is best, especially via a binary tournament like RAInk did (it was posted here a few weeks ago). There is also the possibility of using agreement among different LLMs. I'm surprised Gemini 2.5 Pro was not used here; in my experience it is the most powerful LLM for that kind of thing.
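A minimal sketch of the binary-tournament idea (the `ask_llm` callable is hypothetical; swap in whichever model client you use):

```python
import random

def judge(task, answer_a, answer_b, ask_llm):
    """Ask an LLM judge which of two candidate answers better solves the task."""
    prompt = (
        f"Task:\n{task}\n\n"
        f"Answer A:\n{answer_a}\n\n"
        f"Answer B:\n{answer_b}\n\n"
        "Which answer solves the task better? Reply with exactly 'A' or 'B'."
    )
    reply = ask_llm(prompt).strip().upper()
    return answer_a if reply.startswith("A") else answer_b

def binary_tournament(task, candidates, ask_llm):
    """Reduce N candidate replies to a single winner via pairwise knockouts."""
    pool = list(candidates)
    random.shuffle(pool)  # avoid positional bias from the original ordering
    while len(pool) > 1:
        next_round = []
        # Pair candidates off; an odd one out gets a bye into the next round.
        for i in range(0, len(pool) - 1, 2):
            next_round.append(judge(task, pool[i], pool[i + 1], ask_llm))
        if len(pool) % 2 == 1:
            next_round.append(pool[-1])
        pool = next_round
    return pool[0]
```

Cross-model agreement would just mean running the same comparisons with different judges and keeping the replies they converge on.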
1:50 is a great detection ratio for finding a needle in a haystack.
I don't think the author agrees as he points out the bugs weren't that difficult to find.
Nah. I'm not an expert code auditor myself, but I've seen my colleagues do it and I've seen ChatGPT try its hand. Even when I give it a specific piece of code and probe/hint in the right direction, it produces five paragraphs of vulnerabilities, none of which are real, while overlooking the one real concern we identified.
You can spend all day reading slop, or you can get good at this yourself and be much more efficient at the task. Especially if you're the developer and already know where to look and how things work, catching up on security issues relevant to your situation will be much faster than looking for the needle in the haystack that is LLM output.
If the LLM wrote a harness and proof of concept tests for its leads, then it might increase S/N dramatically. It’s just quite expensive to do all that right now.
Except that in my experience half the time it will modify the implementation in order to make the tests pass.
And it will do this no matter how many prompts you try or how forcefully you ask it.
With security vulnerabilities, you don't give the agent the ability to modify the potentially vulnerable software, naturally. Instead you make it do what an attacker would have to do: come up with an input that, when sent to the unmodified program, triggers the vulnerability.
How do you know if it triggered the vulnerability? Luckily, for low-level memory-safety issues like the ones Sean (and o3) found, we have very good oracles for detecting memory-safety violations, like KASAN, so you can basically just let the agent throw inputs at ksmbd until you see something that looks kind of like this: https://groups.google.com/g/syzkaller/c/TzmTYZVXk_Q/m/Tzh7SN...
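In loop form, roughly (everything here is a placeholder sketch rather than an actual harness: `propose_input` stands in for the agent, `send_input` for however you deliver traffic to the service under test, and the dmesg scan is a deliberately crude oracle):

```python
import subprocess

# Strings that commonly appear in KASAN / crash reports in the kernel log.
KASAN_MARKERS = ("KASAN:", "BUG: KASAN", "general protection fault")

def kernel_log_has_report():
    """Crude oracle: scan the kernel log for something that looks like a KASAN report.

    A real setup would clear or checkpoint the log between attempts so a
    stale report from an earlier input doesn't count as a fresh hit.
    """
    log = subprocess.run(["dmesg"], capture_output=True, text=True).stdout
    return any(marker in log for marker in KASAN_MARKERS)

def fuzz_loop(propose_input, send_input, budget=1000):
    """Let the agent propose inputs to the *unmodified* target until the oracle fires."""
    for i in range(budget):
        candidate = propose_input()       # agent- or model-generated input
        send_input(candidate)             # deliver it to the running target
        if kernel_log_has_report():
            return i, candidate           # candidate reproducer found
    return None
```

The key property is that the agent only gets to choose inputs; it never gets write access to the target's source tree, so it can't "fix" the test by changing the implementation.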
> If the LLM wrote a harness and proof of concept tests for its leads, then it might increase S/N dramatically.
Designing and building meaningfully testable non-trivial software is orders of magnitude more complex than writing the business logic itself. And that’s when comparing greenfield code written from scratch. Making an old legacy code base testable in a way conducive to finding security vulns is not something you just throw together. You can get lucky with standard tooling like sanitizers and Valgrind, but it’s far from a panacea.
Exactly. Many AI users can’t triage effectively; as a result, open source projects get a lot of spam now: https://arstechnica.com/gadgets/2025/05/open-source-project-...
Maybe we ask the AI to come up with an exploit, run it, and see if it works? Then you could RL on this.