There are a couple people I work with who clearly don’t have a good understanding of software engineering. They aren’t bad to work with and are in fact great at collaborating and documenting their work, but don’t seem to have the ability to really trace through code and logically understand how it works.
Before LLMs it was mostly fine because they just didn’t do that kind of work. But now it’s like a very subtle chaos monkey has been unleashed. I’ve asked on some PRs “why is this like this? What is it doing?” And the answer is “I don’t know, ChatGPT told me I should do it.”
The issue is that it puts basically all their code under suspicion. Some of it works, some of it doesn’t make sense, and some of it is actively harmful. But because the LLMs are so good at giving plausible output, I can’t just glance at the code and see that it’s nonsense.
And this would be fine if we were working on, like, a CRUD app where you can tell immediately what’s working and what’s broken, but we are working on scientific software. You can completely mess up the results of a study and not know it if you don’t understand the code.
> And the answer is “I don’t know, ChatGPT told me I should do it.”
This weirds me out. Like, I use LLMs A LOT, but I always sanity check everything so I can own the result. It’s not the use of the LLM that gets me, it’s trying to shift accountability to a tool.
Sounds almost like you definitely shouldn’t be using LLMs or those juniors for such important work.
Is it just me, or are we heading into a period with an explosion in the amount of software produced, but also a massive drop in its quality? Not uniformly, just a somewhat chaotic spread.
> Is it just me, or are we heading into a period with an explosion in the amount of software produced, but also a massive drop in its quality? Not uniformly, just a somewhat chaotic spread.
I think we are, especially with executives mandating LLM use and expecting it to massively reduce costs and increase output.
For the most part they don't actually seem to care that much about software quality, and tend to push to decrease quality at every opportunity.
> LLMs or those juniors for such important work.
Yeah, we shouldn’t, and I limit my usage to stuff that is easily verifiable.
But there are no guardrails on this stuff, and one thing that’s not well considered is how these tools, which make us more powerful and productive, can be destructive in the hands of well-intentioned people.
Which is frightening, because it’s not like our industry was known for producing really high-quality code as a starting point, even before LLM-authored code.
> I’ve asked on some PRs “why is this like this? What is it doing?” And the answer is “I don’t know, ChatGPT told me I should do it.”
This would infuriate me. I presume these are academics/researchers and not junior engineers?
Unfortunately, this is the world we’re entering, where all of us will be outsourcing more and more of our ‘thinking’ to machines.