My typical approach is prompt, be disgusted by the output, tinker a little on my own, prompt again -- but more specifically, be disgusted again by the output, tinker a little more, etc.
Eventually I land on a solution to my problem that isn't disgusting and isn't AI slop.
Having a sounding board, even a bad one, forces me to order my thinking and understand the problem space more deeply.
Why not just write the code at that point instead of cajoling an AI to do it?
This is the part I don't get about vibe coding: I've written specification documents before. They are frequently longer and denser than the code required to implement them.
Typing longer and longer prompts to LLMs to not get what I want seems like a worse experience.
Code is a concise notation for specifications, and an unambiguous one. The reason we write specs in natural language is that it's easier to alter when the requirements change and easier to read. Code is also tainted by the accidental complexities it has to solve along the way.
I don't cajole the model to do it. I rarely use what the model generates. I typically do my own thing after making an assessment of what the model writes. I orient myself in the problem space with the model, then use my knowledge to write a more concise solution.