One thing I've noticed about working with LLMs is that it's forcing me to get _better_ at explaining my intent and fully understanding a problem before coding. Ironically, I'm getting less vibey because I'm using LLMs.
The intuition is simple: LLMs are a force multiplier for the coding part, which means they produce code faster than I would alone. But that also means they produce _bad_ code faster than I would alone (where by "bad" I mean "code which doesn't really solve the problem, due to some fundamental misunderstanding").
Previously I would often figure a problem out by trying to code a solution, noticing that my approach didn't work or had unacceptable edge cases, and then changing tack. I find it harder to do this with an LLM, because it can produce large volumes of code faster than I can notice subtle problems, and by the time I notice them there's so much code that the LLM struggles to fix it.
Instead, now I have to do a lot more "hammock time" thinking. I have to be able to give the LLM an explanation of the system's requirements that is sufficiently detailed and robust that I can be confident the resulting code will make sense. It's possible that some of my coding skills might atrophy: in a language like Rust, with lots of syntactic features, I might start to forget the precise set of incantations necessary to do something. But, correspondingly, I have to get better at reasoning about the system at a slightly higher level of abstraction; otherwise I'm unable to supervise the LLM effectively.
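To make "incantations" concrete, here's a toy sketch (the function and names are invented for illustration, not from any real project) of the sort of lifetime-and-trait-bound syntax that fades without daily practice:

```rust
// Explicit lifetimes, a generic parameter, and a where clause: the kind
// of syntax that gets rusty when an LLM writes it for you.
fn longest_named<'a, T>(items: &'a [T]) -> Option<&'a str>
where
    T: AsRef<str>,
{
    items
        .iter()
        .map(|item| item.as_ref())
        .max_by_key(|name| name.len())
}

fn main() {
    let names = vec!["ada", "grace", "turing"];
    // Prints Some("turing"), the longest name in the slice.
    println!("{:?}", longest_named(&names));
}
```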
Yes, writing has always been great practice for thinking clearly. It's a shame it isn't more common in the industry; I do believe that the norm of not practicing it is one of the reasons why we have to deal with so much bullshit code.
The "hammock time thinking" is exactly what a lot of programmers should be doing in the first place⸺you absorb the cost of planning upfront instead of the larger costs of patching up later, but somehow the dominant culture has been to treat thoughtful coding with derision.
It's a real shame that AI beat human programmers at the game of thinking, and perhaps that's a good reason to automate us all out of our jobs.
One problem is that one person’s hammock time is another person’s overthinking time, and the overthinker needs the opposite advice. Of course it’s about finding the balance, and that’s hard to pin down in words.
But I take your point and the trend definitely seems to be towards quicker action with feedback rather than thinking things through in the first place.
In that sense LLMs present an interesting middle ground: the cycle is faster than actually writing the code, but still more active and externalising than getting lost in your own thoughts (notwithstanding how productive that can still be).
All good software engineers learn this. Unless you’re actively working in a language, you don’t need to worry about its syntax (that’s what reference manuals are for). Instead, grow your capacity to solve problems and to define precise solutions. Most time is spent doing that: realizing you don’t have a precise idea of what you’re working on and doing research about it. Writing code is just translating the result.
But there are other concerns in code that you ought to pay attention to. Will it work in all cases? Will it run efficiently? Will it be easily understood by someone else? Will it easily adapt to a change in requirements?
Through LLMs, new developers are learning the beauty of writing software specs :')
It's weird, but LLMs really do gamify the experience of doing software engineering properly. With a much faster feedback loop, you can see immediate benefits from having better specs, writing more tests, and keeping modules small.
But it takes longer. Taking a proper course in software engineering or reading a good book about it is like going through a game's tutorial, while people learning through LLMs skip it. The former gets you to the intended objectives faster because you learn how to play properly. You may have some fun doing the latter, but you may also spend years at it and your only gain will be an ad-hoc strategy.
And they’re making it much easier to build comprehensive test suites. It no longer feels like grunt work.
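As a rough illustration, here's the kind of table-driven suite an LLM will happily scaffold; `parse_percentage` is a made-up example function, not from any real codebase:

```rust
// Hypothetical function under test: parse a percentage string like "42%".
fn parse_percentage(input: &str) -> Option<u8> {
    let digits = input.strip_suffix('%')?;
    match digits.parse::<u8>() {
        Ok(n) if n <= 100 => Some(n),
        _ => None,
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    // Enumerating cases like these is exactly the grunt work that used to
    // make comprehensive suites feel tedious.
    #[test]
    fn parses_valid_and_rejects_invalid_input() {
        let cases = [
            ("0%", Some(0)),
            ("42%", Some(42)),
            ("100%", Some(100)),
            ("101%", None), // out of range
            ("42", None),   // missing '%' suffix
            ("%", None),    // no digits
            ("-1%", None),  // negative
            ("4 2%", None), // embedded whitespace
        ];
        for (input, expected) in cases {
            assert_eq!(parse_percentage(input), expected, "input: {input:?}");
        }
    }
}
```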
Ha! I just ran into this when I had a vague notion of a statistical analysis that I wanted to do.