> LLMs will never be able to figure out for themselves what your project's politics are and what trade-offs are supposed to be made.
I wouldn't declare that unsolvable. The intentions of a project and how they fit user needs can largely be inferred from the code and associated docs/README, combined with good world knowledge. If you're shown the codebase of a GPU kernel for ML, then as a human you instantly know the kinds of constraints and objectives that shape every decision. I see no reason why an LLM couldn't infer the same kind of meta-knowledge. Of course, this papers over the hard part of training an LLM to actually do that well, but I don't see why it's inherently impossible.
> associated docs/README
Many (I would even argue most) professional codebases do not keep their documentation (tutorials, architecture diagrams, and the like) alongside the code, if such formal documentation exists at all. It's also axiomatic that documentation is frequently out of date, and in any case it records past political decisions, not future ones; human owners can and do change their minds about which trade-offs are required over the lifetime of a project.
A simple case is the trade-off between codebase complexity and required scale: early projects benefit from simpler implementations that will not scale, and only after usage and demand are proven does it make sense to add complexity to support that scale. So if you are an LLM looking at a codebase in isolation, do you make changes that add complexity to support additional scale? Do you simplify the codebase? Do you completely rewrite it in a different language (say, TypeScript -> Go or Rust)? How could an LLM possibly know which of these is appropriate without, at the very least, additional sources of telemetry, and probably also conversations with stakeholders (i.e. bordering on AGI)?
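To make that dilemma concrete, here's a hedged sketch (the session-store example and all names in it are purely illustrative, not taken from any real project) of the same feature written two ways; nothing in either snippet tells you which one the project actually needs:

```typescript
// Illustrative only: one feature, two implementations. Which is "correct"
// depends on scale requirements that the code itself cannot express.

// Version A: simple and in-memory. Perfectly adequate for an early,
// single-process project.
const sessions = new Map<string, { userId: string; expiresAt: number }>();

function getSessionSimple(token: string) {
  const s = sessions.get(token);
  return s && s.expiresAt > Date.now() ? s : undefined;
}

// Version B: more complex, backed by an external shared store, only worth it
// once traffic and multi-instance deployment are proven needs.
// `SessionStore` is a hypothetical abstraction standing in for Redis, a DB, etc.
interface SessionStore {
  get(key: string): Promise<string | null>;
}

async function getSessionScalable(store: SessionStore, token: string) {
  const raw = await store.get(`session:${token}`);
  if (raw === null) return undefined;
  const s = JSON.parse(raw) as { userId: string; expiresAt: number };
  return s.expiresAt > Date.now() ? s : undefined;
}
```

An LLM reading only Version A has no way to know whether migrating toward Version B is the desired next step or premature complexity; that answer lives in telemetry and in the owners' heads, not in the code.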