therealpygon 7 days ago

LLMs follow instructions. Garbage in = garbage out, generally. When attention is managed, the problem is well defined, and the necessary materials are available to it, they can perform rather well. On the other hand, I find a lot of the loosey-goosey vibe coding approach to be useless; it gives a lot of false impressions about how useful LLMs can be, both too positive and too negative.

GiorgioG 7 days ago

So what you’re saying is you need to be very specific and detailed when writing your specifications for the LLM to spit out the code you want. Sounds like I can just skip the middleman and code it myself.

AndrewKemendo 7 days ago

Not in 10 seconds

Zamaamiro 7 days ago

You probably didn’t write up a detailed prompt with perfect specifications in 10 seconds, either.

In my experience, it doesn’t matter how good or detailed the prompt is—after enough lines of code, the LLM starts making design decisions for you.

This is why I don’t accept LLM completions for anything that isn’t short enough to quickly verify that it is implemented exactly as I would have myself. Usually, that’s boilerplate code.

abalashov 7 days ago

> This is why I don’t accept LLM completions for anything that isn’t short enough to quickly verify that it is implemented exactly as I would have myself. Usually, that’s boilerplate code.

^ This. This is where I've landed on the extent to which LLM coding assistants are useful to me.

dingnuts 7 days ago

I've seen prompts as long as a school essay, and those didn't take ten seconds to write either.

darkwater 7 days ago

To some extent those fall into the same category as cheaters who put way more effort into cheating on an exam than it would take to do it properly. Or people paying 10/15 bucks a month for access to a private Usenet server to download pirated content.

anonzzzies 7 days ago

The advantage of an LLM in that case is that you can skip a lot of syntax: you can make a LOT of typos in your spec, or even write pseudocode, and still get a working program. Not so with code. Small logical mistakes, like mixing up left/right or x/y, are also auto-fixed, maybe to your frustration if they weren't actually mistakes, but often they are, and you won't notice because they are quietly repaired for you.
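
A made-up sketch (not from any real session): a typo-ridden spec handed over as a comment, and the kind of clean function a model will typically return for it anyway.

    # Hypothetical, typo-ridden spec handed to an LLM:
    #   "write a fucntion that returns teh averge of a list of numbers, or 0 for an emtpy list"
    # The model will usually hand back something clean and working regardless:
    def average(values):
        """Return the mean of `values`, or 0 for an empty list."""
        if not values:
            return 0
        return sum(values) / len(values)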

therealpygon 6 days ago

No, but the better the specifications you provide to your “development team”, the more likely you are to get what you expected… like always.

troupo 7 days ago

> LLMs follow instructions.

They don't

> Garbage in = garbage out generally.

Generally, this statement is false

> When attention is managed and a problem is well defined and necessary materials are available to it, they can perform rather well.

Keyword: can.

They can also perform poorly despite all the management and materials.

They can also work really well with a loosey-goosey approach.

The reason is that they are non-deterministic systems whose performance is affected more by compute availability than by your unscientific, random attempts at reverse engineering their behavior: https://dmitriid.com/prompting-llms-is-not-engineering

AndrewKemendo 7 days ago

This seems to be what’s happened

People are expecting perfection from a bad spec

Isn’t that what engineers are (rightfully) always complaining about to BD?

darkwater 7 days ago

Indeed. But that’s the price an automated tool has to pay to take a job out of humans’ hands: it has to do the job better under the same conditions. The same applies to self-driving cars: you don’t want an accident rate equal to that of human drivers. You want one that’s two or three orders of magnitude better.

gpm 7 days ago

This hasn't been my experience (using the latest Claude and Gemini models). They'll produce poor code even when given a well-defined, easily achievable task with specific instructions. The code will usually more or less work with today's models, but it will do things like call a function to recreate a value that is already stored in a local variable... (and worse issues crop up the more design work you leave to the LLM, even dead-simple design work with really only one good answer)
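
To make that concrete, here's a made-up example (all names hypothetical, not actual generated output) of the kind of redundancy I mean:

    # Invented illustration of the pattern described above:
    def compute_total(prices):
        return sum(prices)

    def report(prices):
        total = sum(prices)          # the value is already computed and stored here...
        count = len(prices)
        # ...yet the code calls a function to recreate it instead of reusing `total`:
        print(f"{count} items, total {compute_total(prices)}")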

I've definitely also found that the poor code can sometimes be a nice starting place. One thing it does for me is make me fix it up until it's actually good, instead of writing the first thing that comes to mind and declaring it good enough (after all, my poorly written first draft is of course perfect). In contrast to the usual view of AI-assisted coding, I think this style of programming for tedious tasks makes me "less productive" (I take longer) but produces better code.

geraneum 7 days ago

> LLMs follow instructions.

Not really, not always. To anyone who’s used the latest LLMs extensively, it’s clear that this is not something you can reliably assume even with the constraints you mentioned.

myvoiceismypass 7 days ago

They should maybe have a verifiable specification for said instructions. Kinda like a programming language maybe!

otabdeveloper4 7 days ago

> LLMs follow instructions.

No, they don't; they generate a statistically plausible text response given a sequence of tokens.
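
In caricature (a toy sketch with a stand-in distribution, nothing like a real model's internals): the loop just keeps picking a plausible next token; there is no separate "instruction follower" anywhere in it.

    import random

    # Toy stand-in for a language model's next-token probabilities.
    def next_token_distribution(tokens):
        return {"yes": 0.4, "no": 0.3, "maybe": 0.3}

    # Autoregressive generation in caricature: sample a plausible next token,
    # append it, repeat.
    def generate(prompt_tokens, steps=5):
        tokens = list(prompt_tokens)
        for _ in range(steps):
            dist = next_token_distribution(tokens)
            choices, weights = zip(*dist.items())
            tokens.append(random.choices(choices, weights=weights)[0])
        return tokens

    print(generate(["please", "follow", "the", "instructions"]))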