Assuming that your workflow works, and the rest of us just need to learn to use LLMs equally effectively, won't that plateau us at the current level of programming?
LLMs learn from examples, but if everyone uses LLMs to generate code, there's no new code to learn new features, libraries, or methods from. The next generation of models will just be trained on code generated by its predecessors, with no new inputs.
Being an LLM maximalist basically means freezing development in the present, now and forever.
If Google's AlphaEvolve is any indication, they already have LLMs writing faster algorithms than humans have discovered.[1]
[1]https://deepmind.google/discover/blog/alphaevolve-a-gemini-p...
I'm not thinking of algorithms. Let's say someone writes a new web framework. If there are no code samples available, I don't think whatever ends up in the documentation will be enough data; the LLM won't have the training data and won't be able to utilize the framework.
Would you ever be able to tell e.g. Copilot: I need a web framework with these specs, go create that framework for me. Then later, have Claude actually use that framework?
You can just collect the docs and stuff them in context along with a few code examples, hand-coded if needed. Or you can separately have the LLM try generating code samples from the docs, and keep the ones that work and look idiomatic.
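The "keep the ones that work" filter in that second approach can be sketched mechanically: run each candidate sample in a subprocess and discard anything that errors out. A minimal sketch (the candidate list here is a hard-coded stand-in for model output; a real pipeline would prompt the LLM with the framework's docs to produce it):

```python
import subprocess
import sys
import tempfile

def keep_working_samples(samples):
    """Run each candidate code sample in its own subprocess and keep
    only the ones that execute without error."""
    kept = []
    for code in samples:
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
            path = f.name
        result = subprocess.run(
            [sys.executable, path], capture_output=True, timeout=30
        )
        if result.returncode == 0:
            kept.append(code)
    return kept

# Stand-ins for LLM output: one valid sample, one broken one.
candidates = [
    "print('hello from sample 1')",  # runs fine -> kept
    "this is not valid python",      # SyntaxError -> dropped
]
print(len(keep_working_samples(candidates)))  # 1
```

The "look idiomatic" half of the filter is fuzzier; in practice that's a linter pass or a second LLM call judging the surviving samples.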
> Would you ever be able to tell e.g. Copilot: I need a web framework with these specs, go create that framework for me. Then later, have Claude actually use that framework?
Sure, why not?
The "magic sauce" of LLMs is that they understand what you mean. They've ingested all the thinking biases and conceptual associations humans have through training on the entire corpus, not just code and technical documentation. When Copilot cobbles together a framework for you, it's going to name the functions, modules, and variables using domain terms. For Claude reading it, those symbols aren't just meaningless tokens with identity - they're also words that mean something in English generally, and in the web framework domain specifically. Between that and code itself having common, cross-language patterns, there's more than enough information for an LLM to use a completely new framework mostly right.
Sure, if your thing is unusual enough, LLMs won't handle it as well as something that's over-represented in their training set, but then the same is true of humans, and both benefit from being provided some guidelines and allowed to keep notes.
(Also, in practice, most code is very much same-ish. Every now and then, someone comes up with something conceptually new, but most of the time, any new framework or library is very likely reinventing something done by another library, possibly in a different language. Improvements, if any, tend to be incremental. The authors of libraries and frameworks may not be aware they're retracing prior art, but SOTA LLMs have very likely seen it all, across most programming languages ever used, and can connect the dots.)
And in the odd case where someone really does invent an unusual, new, groundbreaking pattern, it's just a matter of months between it getting popular and LLMs being trained on it.