It is rather soul-crushing how fast LLMs spit out decent code.
In my experience, LLMs are idiot-savant coders, though currently more idiot than savant. Claude 3.7 (via Cursor and Roo) can comment code well, create a starter project 10x faster than I could, and spit out common CRUD apps pretty well.
However, I've come to the conclusion that LLMs are terrible at decision making. I would much rather have an intern architect my code than let AI do it. It's just too unreliable. It seems like 3 out of 4 decisions it makes are fine, but that 4th decision is usually asinine.
That said, I now consider LLMs a mandatory addition to my toolkit because they have improved my developer efficiency so much. I really am a fan. But without a seasoned dev to write detailed instructions, break the project into manageable chunks, make all of the key design decisions, and review every line of code the AI writes, today's AI will only add a mountain of technical debt to your project.
I guess I'm trying to say: don't worry, because the robots cannot replace us yet. We're still in the middle of the hype cycle.
But what do I know? I'm just an average meat coder.
LLMs can currently generate a few thousand lines of coherent code, but they cannot write a cohesive large-scale codebase.
But LLMs are very good at writing SQL and Cypher queries that I would spend hours or days figuring out how to write.
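To give a sense of what I mean, here's a made-up example (the orders table and its columns are hypothetical): ask for "the latest order per customer" and an LLM will hand you a window-function query like this in seconds, where I'd be paging through docs to remember the syntax:

    -- Latest order per customer, no correlated subquery needed.
    -- Hypothetical schema: orders(customer_id, ordered_at, total).
    SELECT customer_id, ordered_at, total
    FROM (
      SELECT customer_id, ordered_at, total,
             ROW_NUMBER() OVER (PARTITION BY customer_id
                                ORDER BY ordered_at DESC) AS rn
      FROM orders
    ) ranked
    WHERE rn = 1;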
Agreed.
I find it interesting that LLMs seem pretty good at spitting out SQL that works well enough, but on the other hand seem pretty awful at working with CSS. I wonder whether this is due to a difference in the amount of training data available for SQL vs. CSS, or because CSS is a finicky pain in the ass compared to SQL.
There should be an insane amount of CSS on the web, but CSS output is primarily visual, so I think that makes it hard for a text-only model to generate.