mromanuk 2 days ago

Every time I ask an LLM to write some UI and a model for SwiftUI, I have to tell it to use the @Observable macro (the new way), which it normally does once asked.

The LLM tells me it prefers the "older way" because it's more broadly compatible, which is fine if that's what you're aiming for. But if the programmer doesn't know about that, they'll be stuck with the LLM calling the shots for them.
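For reference, a minimal sketch of the two approaches being discussed (type and property names are made up; the @Observable macro requires iOS 17 / macOS 14 or later):

```swift
import SwiftUI

// Older way: ObservableObject + @Published, observed via @StateObject/@ObservedObject.
final class CounterModelOld: ObservableObject {
    @Published var count = 0
}

// Newer way: the @Observable macro from the Observation framework.
@Observable
final class CounterModel {
    var count = 0
}

struct CounterView: View {
    // With @Observable, plain @State replaces @StateObject for ownership.
    @State private var model = CounterModel()

    var body: some View {
        Button("Count: \(model.count)") {
            model.count += 1
        }
    }
}
```

The older form is what LLMs tend to reach for by default, since it works on much earlier OS versions.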

5
bcrosby95 2 days ago

You need to create your own preamble that you include with every request. I generally have one for each codebase, which includes a style guide, preferred practices & design (lots of 'best practices' are cargo culted and the LLM will push them on you even when it doesn't make sense - this helps eliminate those), and declarations of common utility functions that may need to be used.

klntsky 2 days ago

Use always-enabled Cursor rules (or the equivalent in your agentic editor of choice).
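In Cursor, that preamble can live in an always-applied rules file. A hypothetical sketch (the file name, targets, and helper name are invented for illustration):

```markdown
<!-- .cursor/rules/swiftui.mdc (hypothetical example) -->
---
alwaysApply: true
---
# Project conventions
- Use the @Observable macro, not ObservableObject/@Published; minimum target is iOS 17.
- Do not preserve backward compatibility with code you are replacing unless explicitly asked.
- Reuse existing utility functions (e.g. a shared `fetchJSON` helper) instead of writing ad-hoc equivalents.
```

Because the rule is always applied, it gets prepended to every request without you having to repeat it.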

starlust2 2 days ago

A thing people miss is that there are many different right ways to solve a problem. A legacy system might need the compatibility, or it might be a greenfield project. If you leave a technical requirement out of the prompt, you're letting the LLM decide. Maybe it will agree with your nuanced view of things, but maybe not.

We're not yet at a point where LLM coders will learn all your idiosyncrasies automatically, but those feedback loops are well within our technical ability. LLMs are roughly a knowledgeable but naïve junior dev; you must train them!

Hint: add that requirement to your system/app prompt and be done with it.

maxwell 2 days ago

It's just a higher level abstraction, subject to leaks as with all abstractions.

How many professional programmers don't have assemblers/compilers/interpreters "calling the shots" on arbitrary implementation details outside the problem domain?

LorenPechtel 2 days ago

But we trust those tools to do the job correctly. The compiler has considerable latitude in messing with the details, so long as the result is guaranteed to match what was ordered; when we find any deviation from that, even in an edge case, we consider it a bug. (Borland Pascal debugger, I'm looking at you. I wasted a *lot* of time on the fact that in single-step mode you peacefully "execute" an invalid segment register load!) LLMs lack this guarantee.

maxwell 1 day ago

We trust those tools to do the job correctly now.

https://vivekhaldar.com/articles/when-compilers-were-the--ai...

cpinto 2 days ago

Have you tried writing rules for how you want things done, instead of repeating the same things every time?

jasonjmcghee 2 days ago

The trained behavior of attempting backward compatibility has never once been useful to me and is a constant irritation.

> Please write this thing

> Here it is

> That's asinine why would you write it that way, please do this

> I rewrote it and kept backward compatibility with the old approach!

:facepalm:

diggan 2 days ago

Sounds like an OK default, especially since the "better" (in your opinion) way can be achieved by just adding "Don't try to keep backwards compatibility with old code" somewhere in your reusable system prompt.

It's mostly useful when you work a lot with "legacy code" and can't just remove things willy-nilly. Maybe that sort of coding is over-represented in the datasets, as it tends to be pretty common in (typically conservative) larger companies.

dwaltrip 2 days ago

You will get better results if you reset the code changes, tweak the prompt with new guidelines (e.g. don’t do X), and then run it again in a fresh chat.

The less cruft and red herrings in the context, the better. And likewise with including key info, technical preferences, and guidelines. The model can’t read our minds, although sometimes we wish so :)

There are lots of simple tricks to make it easier for the model to provide a higher quality result.

Using these things effectively is definitely a complex skill set.