16-year Python dev who's done all that, led multiple projects from inception to success, and I rarely code by hand anymore. I can specify precisely what I want and how I want it built (this is the key part), stub out a few files and create a few directories, and let an agent run wild, but configured so the static analysis tools/test suite run after every iteration, with instructions to fix its mistakes before moving on.
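Concretely, the shape of that loop is something like this (a rough sketch, not my exact code; run_agent_step is a hypothetical stand-in for whatever agent tooling you drive, and ruff/mypy/pytest are just one reasonable set of checks):

    import subprocess

    CHECKS = [
        ["ruff", "check", "."],
        ["mypy", "."],
        ["pytest", "-q"],
    ]

    def run_checks() -> str:
        """Run the static analysis tools and test suite; return failure output, or '' if green."""
        failures = []
        for cmd in CHECKS:
            result = subprocess.run(cmd, capture_output=True, text=True)
            if result.returncode != 0:
                failures.append(f"$ {' '.join(cmd)}\n{result.stdout}{result.stderr}")
        return "\n".join(failures)

    def run_agent_step(instructions: str) -> None:
        # Hypothetical stand-in: call whatever coding agent you actually use here.
        raise NotImplementedError

    def iterate(task: str, max_rounds: int = 10) -> bool:
        """Let the agent work, but make it fix its own check failures before moving on."""
        instructions = task
        for _ in range(max_rounds):
            run_agent_step(instructions)
            failures = run_checks()
            if not failures:
                return True  # everything green: move on to the next chunk of work
            instructions = f"{task}\n\nFix these failures before doing anything else:\n{failures}"
        return False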
I can easily deliver 5k LoC in a day on a greenfield project, and 10k if I sweat or there's a lot of boilerplate. I can do reviews of massive multi-thousand-line PRs in a few minutes that are better than most of the ones done by engineers I've worked with throughout a long career; the list just goes on and on. I only code something by hand when there's a small issue the LLM isn't understanding and I can edit it faster than I can run another round of the agent, which isn't often.
LLMs are a force multiplier for everyone; really senior devs just need to learn to use them as well as they've learned to use their current tools. It's like saying a master archer proves bows are as good as guns because the archer doesn't know how to aim a rifle.
Assuming that your workflow works, and the rest of us just need to learn to use LLMs equally effectively, won't that plateau us at the current level of programming?
The LLMs learn from examples, but if everyone uses LLMs to generate code, there's no new code to learn new features, libraries, or methods from. The next generation of models is just going to be trained on the code generated by its predecessors, with no new inputs.
Being an LLM maximalist basically freezes development at the present, now and forever.
If Google's AlphaEvolve is any indication, they already have LLMs writing faster algorithms than humans have discovered.[1]
[1]https://deepmind.google/discover/blog/alphaevolve-a-gemini-p...
I'm not thinking of algorithms. Let's say someone writes a new web framework. If there are no code samples available, I don't think whatever is in the documentation will be enough data; the LLMs won't have the training data and won't be able to utilize it.
Would you ever be able to tell e.g. Copilot: I need a web framework with these specs, go create that framework for me. Then later have Claude actually use that framework?
You can just collect the docs and stuff them in context with a few code examples that can be hand-coded if needed, or you can separately get the LLM to try writing code samples from the docs and keep the ones that work and look idiomatic.
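Something along those lines (a sketch under those assumptions; call_llm is a hypothetical stand-in for whichever model API you actually use, and the docs are assumed to be markdown files on disk):

    from pathlib import Path

    def build_prompt(docs_dir: str, examples: list[str], task: str) -> str:
        """Stuff the framework's docs plus a few known-good examples into one prompt."""
        docs = "\n\n".join(p.read_text() for p in sorted(Path(docs_dir).glob("*.md")))
        shots = "\n\n".join(examples)  # hand-coded, or LLM-generated samples you kept
        return (
            "You are writing code against a framework not in your training data.\n"
            "Documentation:\n\n" + docs + "\n\n"
            "Idiomatic usage examples:\n\n" + shots + "\n\n"
            "Task: " + task + "\n"
        )

    def call_llm(prompt: str) -> str:
        # Hypothetical stand-in for whatever model/API you actually call.
        raise NotImplementedError

From there it's the same verify loop as before: keep the generated samples that pass the checks and look idiomatic, discard the rest.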
> Would you ever be able to tell e.g. Copilot: I need a web framework with these specs, go create that framework for me. Then later have Claude actually use that framework?
Sure, why not?
The "magic sauce" of LLMs is that they understand what you mean. They've ingested all the thinking biases and conceptual associations humans have through their training on the entire training corpus, not just code and technical documentation. When Copilot cobbles together a framework for you, it's going to name the functions and modules and variables using domain terms. For Claude reading it, those symbols aren't just meaningless tokens with identity - they're also words that mean something in English in general, as well as in the web framework domain specifically; between that and code itself having common, cross-language pattern, there's more than enough information for an LLM to use a completely new framework mostly right.
Sure, if your thing is unusual enough, LLMs won't handle it as well as something that's over-represented in their training set, but then the same is true of humans, and both benefit from being provided some guidelines and allowed to keep notes.
(Also, in practice, most code is very much same-ish. Every now and then, someone comes up with something conceptually new, but most of the time, any new framework or library is very likely to be reinventing something done by another library, possibly in a different language. Improvements, if any, tend to be incremental. Now, the authors of libraries and frameworks may not be aware they're retracing prior art, but SOTA LLMs have very likely seen it all, across most programming languages ever used, and can connect the dots.)
And in the odd case someone really invents some unusual, new, groundbreaking pattern, it's just a matter of months between it getting popular and LLMs being trained on it.
Were you immediately more productive in Cursor specifically?
My point is exactly in line with your comment. The tools you get immediate value out of will vary based on circumstance. There's no silver bullet.
I use Aider, and I was already quite good at working with AI before that, so there wasn't much of a learning curve beyond figuring out how to configure it to automatically run the ruff/mypy/tests loop I'd already been doing manually.
The key is that I've always had that prompt/edit/verify loop, and I've always leaned heavily on git to be able to roll back bad AI changes. Those are the skills that let me blow past my peers.
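The git part is nothing fancy; a sketch of that habit (plain git via subprocess, assuming the agent's edits land in the working tree uncommitted; tools that auto-commit each change would need a reset to a specific commit instead):

    import subprocess

    def git(*args: str) -> None:
        subprocess.run(["git", *args], check=True)

    def checkpoint(message: str = "checkpoint before agent edit") -> None:
        # Commit everything so there's a known-good point to return to.
        git("add", "-A")
        git("commit", "--allow-empty", "-m", message)

    def roll_back() -> None:
        # Throw away whatever the agent did since the last checkpoint.
        git("reset", "--hard", "HEAD")
        git("clean", "-fd")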
Let’s see the GitHub project for an easy 10k line day.
Not public on GitHub, but here's the cloc output for an easy 5k day (10k is the one where I sweat).
    github.com/AlDanial/cloc v 2.04  T=0.05 s (666.3 files/s, 187924.3 lines/s)
    -------------------------------------------------------------------------------
    Language             files          blank        comment           code
    -------------------------------------------------------------------------------
    Python                  24           1505           1968           5001
    Markdown                 4             37              0            121
    Jinja Template           3             17              2             92
    -------------------------------------------------------------------------------
    SUM:                    31           1559           1970           5214
    -------------------------------------------------------------------------------
Note this project also has 199 test cases.
Initial commit for cred:
    commit caff2ce26225542cd4ada8e15246c25176a4dc41
    Author: redacted <redacted>
    Date:   Thu May 15 11:32:45 2025 +0800

        docs: Add README
And when I say easy: I was playing the bass while working on this project for ~3 hours.

> we're back to counting programming projects in kloc, like it's the radical 1980's again
Yikes. But also lol.