jclardy 21 hours ago

I don't get the whole "all-in" mentality around LLMs. I'm an iOS dev by trade, and I continue to do that as I always have. The difference now is I'll use an LLM to quickly generate a one-off view based on a design. This isn't a core view of an app, the core functionality, or really anything of importance. It's a view that promotes a new feature, or explains how to install widgets, or random things like that. This would normally take me 30-60 min to implement depending on complexity; now it takes 5.

I also use it for building things like app landing pages. I hate web development, and LLMs are pretty good at it because I'd guess that is 90% of their training data related to software development. For that I make larger changes, review them manually, and commit them to git, like any other project. It's crazy to me that people will go completely off the rails for multiple hours, run into a major issue, and then just start over, when instead you could use a measured approach and keep the forward momentum going.

mritchie712 21 hours ago

How useful the various tools will be depends on the person and the problem. Take two hypothetical people working on different problems and consider if, for example, Cursor would be useful.

IF you're a:

* 10-year Python dev

* work almost entirely on a very large, complex Python code base

* have a PyCharm IDE fine-tuned over many years to work perfectly on that code base

* have very low tolerance for bugs (stable product, no room for move fast, break things)

THEN: LLMs aren't going to 10x you. An IDE like Cursor will likely make you slower for a very long time until you've learned to use it.

IF you're a:

* 1-year JS (React, Next.js, etc.) dev

* start mostly from scratch on new ideas

* have little prior IDE preference

* have high tolerance for bugs and just want to ship and try stuff

THEN: LLMs will 10x you. An IDE like Cursor will immediately make you way faster.

infecto 21 hours ago

These all-or-nothing takes on LLMs are getting tiresome.

I get the point you’re trying to make, LLMs can be a force multiplier for less experienced devs, but the sweeping generalizations don’t hold up. If you’re okay with a higher tolerance for bugs or loose guardrails, sure, LLMs can feel magical. But that doesn’t mean they’re less valuable to experienced developers.

I’ve been writing Python and Java professionally for 15+ years. I’ve lived through JetBrains IDEs, and switching to VS Code took me days. If you’re coming from a heavily customized Vim setup, the adjustment will be harder. I don’t tolerate flaky output, and I work on a mix of greenfield and legacy systems. Yes, greenfield is more LLM-friendly, but I still get plenty of value from LLMs when navigating and extending mature codebases.

What frustrates me is how polarized these conversations are. There are valid insights on both sides, but too many posts frame their take as gospel. The reality is more nuanced: LLMs are a tool, not a revolution, and their value depends on how you integrate them into your workflow, regardless of experience level.

mritchie712 21 hours ago

my stance is the opposite of all-or-nothing. The note above is one example. How much value you get out of CURSOR specifically is going to vary based on person & problem. The Python dev in my example might immediately get value out of o3 in ChatGPT.

It's not all or nothing. What you get value out of immediately will vary based on circumstance.

infecto 20 hours ago

You say your stance isn’t all-or-nothing, but your original comment drew a pretty hard line: junior devs who start from scratch and have a high tolerance for bugs get 10x productivity, while experienced devs with high standards and mature setups will likely be slowed down. That framing is exactly the kind of binary thinking that’s making these conversations so unproductive.

ghufran_syed 20 hours ago

I wouldn’t classify this as binary thinking - isn't the comment you are replying to just defining boundary conditions? Then those two points don't define the entire space, but the output there does at least let us infer (but not prove) something about the nature of the "function" between those two points? Where the function f is something like f: experience -> productivity increase?

infecto 20 hours ago

You’re right that it’s possible to read the original comment as just laying out two boundary conditions—but I think we have to acknowledge how narrative framing shapes the takeaway. The way it’s written leads the reader toward a conclusion: “LLMs are great for junior, fast-shipping devs; less so for experienced, meticulous engineers.” Even if that wasn’t the intent, that’s the message most will walk away with.

But they drew boundaries with very specific conditions that lead the reader. It’s a common theme in these AI discussions.

web007 19 hours ago

> LLMs are great for junior, fast-shipping devs; less so for experienced, meticulous engineers

Is that not true? That feels sufficiently nuanced and gives a spectrum of utility, not a binary one and zero but "10x" on one side and perhaps 1.1x at the other extreme.

The reality is slightly different - "10x" is SLoC, not necessarily good code - but the direction and scale are about right.

TeMPOraL 14 hours ago

That feels like the opposite of being true. Juniors have, by definition, little experience - the LLM is effectively smarter than them and much better at programming, so they're going to be learning programming skills from LLMs, all while futzing about not sure what they're trying to express.

People with many years or even decades of hands-on programming experience have the deep understanding and tacit knowledge that allows them to tell LLMs clearly what they want, quickly evaluate generated code, guide the LLM out of any rut or rabbit hole it dug itself into, and generally are able to wield LLMs as DWIM tools - because again, unlike juniors, they actually know what they mean.

mritchie712 19 hours ago

no, those are two examples of many, many possible circumstances. I intentionally made them two very specific examples so that was clear. Seems it wasn't so clear.

infecto 19 hours ago

Fair enough, but if you have to show up in the comments clarifying that your clearly delineated "IF this THEN that" post wasn't meant to be read as a hard divide, maybe the examples weren't doing the work you thought they were. You can't sketch a two-point graph and then be surprised when people assume it's linear.

Again, I think the high-level premise is correct, as I already said; the delivery falls flat though. Your more junior devs have a larger opportunity of extracting value.

bluecheese452 19 hours ago

I and others understood it perfectly well. Maybe the problem wasn’t with the post.

infecto 18 hours ago

And I along with others who upvoted me did not. What’s your point? Seems like you have none and instead just want to point fingers.

bluecheese452 18 hours ago

The guy was nice enough to explain his post that you got confused about. Rather than be thankful you used that as evidence that he was not clear and lectured him on it.

I gently suggested that the problem may have not been with his post but with your understanding. Apparently you missed the point again.

infecto 17 hours ago

If multiple people misread the post, clarity might be the issue, not comprehension. Dismissing that as misunderstanding doesn’t add much. Let’s keep it constructive.

motorest 4 hours ago

> I get the point you’re trying to make, LLMs can be a force multiplier for less experienced devs, but the sweeping generalizations don’t hold up. If you’re okay with a higher tolerance for bugs or loose guardrails, sure, LLMs can feel magical. But that doesn’t mean they’re less valuable to experienced developers.

I think you're trying very hard to pin LLMs as a tool for inexperienced developers. There's a hint of paternalism in these takes.

Back in the real world, LLMs are a tool that excels at generating and updating code based on their context and following your prompts. If you realize this fact, you'll understand that there's nothing in the description that makes them helpful exclusively to "inexperienced" developers. Do experienced developers need to refactor code or write new software? Do you believe veteran software engineers are barred from writing proofs of concept? Is the job of pushing architecture changes a junior developer gig?

What exactly do you think an experienced developer does?

> What frustrates me is how polarized these conversations are. There are valid insights on both sides, but too many posts frame their take as gospel. The reality is more nuanced: LLMs are a tool, not a revolution, and their value depends on how you integrate them into your workflow, regardless of experience level.

I completely disagree: LLMs have a revolutionary impact on how software engineers do their job. Your workflows changed overnight. You can create new things faster, you can iterate faster, you can even rewrite whole applications and services in other tech stacks and frameworks in a few days. Things like TDD will become critically important, as automated test suites are now a critical factor in providing feedback to LLMs. Things are no longer the way they were. At least to those who bothered learning.

lelandbatey 18 hours ago

This doesn't seem like an "all or nothing" take. This person is trying to be clear about their claims, but they're not trying to state these are the only possible takes. Add the word "probably" after each "then" and I imagine their intended tone becomes a little clearer.

mexicocitinluez 21 hours ago

> I get the point you’re trying to make, LLMs can be a force multiplier for less experienced devs, but the sweeping generalizations don’t hold up. If you’re okay with a higher tolerance for bugs or loose guardrails, sure, LLMs can feel magical. But that doesn’t mean they’re less valuable to experienced developers.

Amen. Seriously. They're tools. Sometimes they work wonderfully. Sometimes, not so much. But I have DEFINITELY found value. And I've been building stuff for over 15 years as well.

I'm not "vibe coding", I don't use Cursor or any of the ai-based IDEs. I just use Claude and Copilot since it's integrated.

voidhorse 21 hours ago

> Amen. Seriously. They're tools. Sometimes they work wonderfully. Sometimes, not so much. But I have DEFINITELY found value. And I've been building stuff for over 15 years as well.

Yes, but these lax expectations are what I don't understand.

What other tools in software sometimes work and sometimes don't that you find remotely acceptable? Sure all tools have bugs, but if your compiler had the same failure rate and usability issues as an LLM you'd never use it. Yet for some reason the bar is so low for LLMs. It's insane to me how much people have indulged in the hype koolaid around these tools.

TeMPOraL 14 hours ago

> What other tools in software sometimes work and sometimes don't that you find remotely acceptable?

Other people.

Seriously, all that advice about not anthropomorphizing computers is taken way too seriously now, and is doing a number on the industry. LLMs are not a replacement for compilers or other "classical" tools - they're replacement for people. The whole thing that makes LLMs useful is their ability to understand what some text means - whether or not it's written in natural language or code. But that task is inherently unreliable because the problem itself is ill-specified; the theoretically optimal solution boils down to "be a simulated equivalent of a contemporary human", and that still wouldn't be perfectly reliable.

LLMs are able to trivially do tasks in programming that no "classical" tools can, tasks that defy theoretical/formal specification, because they're trained to mimic humans. Plenty of such tasks cannot be done to the standards you and many others expect of software, because they're NP-complete or even equivalent to the halting problem. LLMs look at those and go, "sure, this may be provably not solvable, but actually the user meant X therefore the result is Y", and succeed with that reliably enough to be useful.

Like, take automated refactoring in dynamic languages. Any nontrivial ones are not doable "classically", because you can't guarantee there aren't references to the thing you're moving/renaming that are generated on the fly by eval() + string concatenation, etc. As a programmer, you may know the correct result, because you can understand the meaning and intent behind the code, the conceptual patterns underpinning its design. DAG walkers and SAT solvers don't. But LLMs do.
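
To make that concrete, here's a tiny Python illustration (the names are made up): a classical rename tool can't prove the call site below refers to handle_click, because the reference only exists as a string at runtime, but a human or an LLM reading the code sees the intent immediately.

    class Button:
        def handle_click(self):
            print("clicked")

    def dispatch(obj, event_name):
        # The method name is assembled at runtime, so no static analysis
        # can safely rename handle_click without breaking this call.
        getattr(obj, "handle_" + event_name)()

    dispatch(Button(), "click")  # prints "clicked"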

ljm 18 hours ago

People are way too quick to defend LLMs here, because the criticism is exactly on point.

In an era where an LLM can hallucinate (present you with a defect) with 100% conviction, and vibe coders can ship code of completely unknown quality with 100% conviction, the bar by definition has to have been set lower.

Someone with experience will still bring something more than just LLM-written code to the table, and that bar will stay where it is. The people who don't have experience won't even feel the shortcomings of AI because they won't know what it's getting wrong.

motorest 4 hours ago

> In an era where an LLM can hallucinate (present you with a defect) with 100% conviction, (...)

I think you're trying very hard to find anything at all to criticize LLMs and those who use them, but all you manage to come up with is outlandish, "grasping at straws" arguments.

Yes, it's conceivable that LLMs can hallucinate. How often do they, though? In my experience, not that much. In the rare cases they do, it's easy to spot, and another iteration costs you a couple of seconds to get around it.

So, what are you complaining about, actually? Are you complaining about LLMs, or just letting the world know how competent you are at using LLMs?

> Someone with experience will still bring something more than just LLM-written code to the table, and that bar will stay where it is.

Someone with experience leverages LLMs to do the drudge work, and bump up their productivity.

I'm not sure you fully grasp the implications. You're rehashing the kind of short-sighted comments that in the past brought comically clueless assertions such as "the kids don't know assembly, so how can they write good programs". In the process, you are failing to understand that the way software is written has already changed completely. The job of a developer is no longer typing code away and googling for code references. Now we can refactor and rewrite entire modules, iterate over the design, try a few alternative approaches, pit alternatives against each other, and pick the one we prefer to post as a PR. And then go to lunch. With these tools, some of your "experienced" developers turn out to be not that great, whereas "inexperienced" ones outperform them easily. How do you deal with that?

marcosdumay 17 hours ago

There were always lots of code generation tools whose output people were expected to review and fix.

Anyway, code generation tools almost always are born unreliable, then improve piecewise into almost reliable, and finally get replaced by something with a mature and robust architecture that is actually reliable. I can't imagine how LLMs could traverse this, but I don't think it's an extraordinary idea.

infecto 20 hours ago

My compiler doesn’t write a complete function to visualize a DataFrame based on a vague prompt. It also doesn’t revise that function as I refine the requirements. LLMs can.
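
For instance, here's roughly the kind of function that falls out of a one-line prompt (a sketch, assuming pandas and matplotlib; the function name is made up):

    import matplotlib.pyplot as plt
    import pandas as pd

    def plot_numeric_columns(df: pd.DataFrame, title: str = "Overview"):
        # Histogram every numeric column of the DataFrame on one figure.
        numeric = df.select_dtypes(include="number")
        numeric.hist(figsize=(10, 6))
        plt.suptitle(title)
        plt.tight_layout()
        plt.show()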

There’s definitely hype out there, but dismissing all AI use as “koolaid” is as lazy as the Medium posts you’re criticizing. It’s not perfect tech, but some of us are integrating it into real production workflows and seeing tangible gains, more code shipped, less fatigue, same standards. If that’s a “low bar,” maybe your expectations have shifted.

tbrownaw 18 hours ago

> What other tools in software sometimes work and sometimes don't that you find remotely acceptable?

Searching for relevant info on the Internet can take several attempts, and occasionally I end up not finding anything useful.

My ide intellisense tries to guess what identifier I want and put it at the top of the list, sometimes it guesses wrong.

I've heard that the various package repositories will sometimes deliberately refuse to work for a while because of some nonsense called "rate limiting".

Cloud deployments can fail due to resource availability.

mexicocitinluez 19 hours ago

> Yes, but these lax expectations are what I don't understand.

It's a really, really simple concept.

If I have a crazy TypeScript error, for instance, I can throw it in and get a much better idea of what's happening. Just because that's not perfect doesn't mean it isn't helpful. Even if it works 90% of the time, it's still better than 0% of the time (which is where I was at before).

It's like google search without ads and with the ability to compose different resources together. If that's not useful to you, then I don't know what to tell you.

UncleEntity 19 hours ago

Hell, AI is probably -1x for me because I refuse to give up and do it myself instead of trying to get the robots to do it. I mean, writing code is for the monkeys, right?

Anyhoo... I find that there are times where you have to really get in there and question the robot's assumptions as they will keep making the same mistake over and over until you truly understand what it is they are actually trying to accomplish. A lot of times the desired goal and their goal are different enough to cause extreme frustration as one tends to think the robot's goal should perfectly align with the prompt. Once it fails a couple times then the interrogation begins since we're not making any further progress, obviously.

Case in point, I have this "Operational Semantics" document, which is correct, and a PEG VM, which is tested to be correct, but if you combine the two, one of the operators was being compiled incorrectly due to the way backtracking works in the VM. After Claude's many failed attempts, we had a long discussion and finally tracked the problem down to something outside of its creative boundaries, and it needed one of those "why don't you do it this way..." moments. Sure, I shouldn't have to do this, but that's the reality of the tools and, like they say, "a good craftsman never blames his tools".

cbm-vic-20 21 hours ago

I've got over 30 years of professional development experience, and I've found LLMs most useful for

* Figuring out how to write small functions (10 lines) in canonical form in a language that I don't have much experience with. This is so I don't end up writing Rust code as if it were Java. (A rough analogue of what I mean is sketched below.)

* Writing small shell pipelines that rely on obscure command line arguments, regexes, etc.
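
Since this thread is Python-heavy, here's the analogue sketched in Python rather than Rust: the first version is the transliterated, Java-flavored loop; the second is the canonical form an LLM will typically hand back. (Hypothetical functions, purely for illustration.)

    # Java-ish: index-driven loop with manual accumulation.
    def sum_even_squares_javaish(numbers):
        total = 0
        for i in range(len(numbers)):
            if numbers[i] % 2 == 0:
                total = total + numbers[i] * numbers[i]
        return total

    # Canonical Python: a generator expression says the same thing.
    def sum_even_squares(numbers):
        return sum(n * n for n in numbers if n % 2 == 0)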

maccard 21 hours ago

I’ve found the biggest thing LLMs and agents let me do is build the things that I really suck at to a prototype level. I’m not a frontend engineer, and pitching feature prototypes without a frontend is tough.

But with aider/claude/bolt/whatever your tool of choice is, I can give it a handful of instructions and get a working page to demo my feature. It’s the difference between me pitching the feature or not, as opposed to pitching it with or without the frontend.

CuriouslyC 21 hours ago

16-year Python dev who's done all that, led multiple projects from inception to success, and I rarely manually code anymore. I can specify precisely what I want and how I want it built (this is the key part), stub out a few files and create a few directories, and let an agent run wild, but configured to run static analysis tools and the test suite after every iteration, with instructions to fix its mistakes before moving on.
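
For a rough picture of the shape of that loop, here's a minimal Python sketch (`run_agent_step` is a placeholder for whatever agent CLI or API you drive; ruff/mypy/pytest are just my stack):

    import subprocess

    def run_checks():
        # Return (ok, output); failure output is fed back to the agent.
        for cmd in (["ruff", "check", "."], ["mypy", "."], ["pytest", "-q"]):
            result = subprocess.run(cmd, capture_output=True, text=True)
            if result.returncode != 0:
                return False, result.stdout + result.stderr
        return True, ""

    def agent_loop(task, run_agent_step, max_iters=20):
        feedback = ""
        for _ in range(max_iters):
            run_agent_step(task, feedback)  # agent edits files in the repo
            ok, feedback = run_checks()
            if ok:
                return True                 # clean: move on to the next task
        return False                        # didn't converge; human takes over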

I can deliver 5k LoC in a day easily on a greenfield project and 10k if I sweat or there's a lot of boilerplate. I can do code reviews of massive multi-thousand line PRs in a few minutes that are better than most of the ones done by engineers I've worked with throughout a long career, the list just goes on and on. I only manually code stuff if there's a small issue that I see the LLM isn't understanding that I can edit faster than I can run another round of the agent, which isn't often.

LLMs are a force multiplier for everyone, really senior devs just need to learn to use them as well as they've learned to use their current tools. It's like saying that a master archer proves bows are as good as guns because the archer doesn't know how to aim a rifle.

mrweasel 19 hours ago

Assuming that your workflow works, and the rest of us just need to learn to use LLMs equally effectively, won't that plateau us at the current level of programming?

The LLMs learn from examples, but if everyone uses LLMs to generate code, there's no new code to learn new features, libraries or methods from. The next generation of models will just be trained on the code generated by their predecessors, with no new inputs.

Being an LLM maximalist basically means freezing development in the present, now and forever.

Workaccount2 18 hours ago

If Google's AlphaEvolve is any indication, they already have LLMs writing faster algorithms than humans have discovered.[1]

[1]https://deepmind.google/discover/blog/alphaevolve-a-gemini-p...

mrweasel 16 hours ago

I'm not thinking of algorithms. Let's say someone writes a new web framework. If there are no code samples available, I don't think whatever is in the documentation will be enough data; the LLMs won't have the training data and won't be able to utilize the framework.

Would you ever be able to tell e.g. CoPilot: I need a web framework with these specs, go create that framework for me. Then later have Claude actually use that framework?

CuriouslyC 9 hours ago

You can just collect the docs and stuff them in context with a few code examples, which can be hand-coded if needed, or you can separately get the LLM to try writing code samples from the docs and keep the ones that work and look idiomatic.
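
A rough sketch of that approach, with `complete` as a placeholder for your LLM client:

    from pathlib import Path

    def build_context(docs_dir, examples):
        # Concatenate the framework docs plus a few known-good examples.
        docs = "\n\n".join(p.read_text() for p in Path(docs_dir).glob("**/*.md"))
        shots = "\n\n".join(examples)
        return f"Framework documentation:\n{docs}\n\nWorking examples:\n{shots}\n\n"

    def ask(task, docs_dir, examples, complete):
        # `complete` is whatever LLM call you use, e.g. a thin API wrapper.
        return complete(build_context(docs_dir, examples) + "Task: " + task)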

TeMPOraL 14 hours ago

> Would you ever be able to tell e.g. CoPilot: I need a web framework with these specs, go create that framework for me. Then later have Claude actually use that framework?

Sure, why not?

The "magic sauce" of LLMs is that they understand what you mean. They've ingested all the thinking biases and conceptual associations humans have through their training on the entire training corpus, not just code and technical documentation. When Copilot cobbles together a framework for you, it's going to name the functions and modules and variables using domain terms. For Claude reading it, those symbols aren't just meaningless tokens with identity - they're also words that mean something in English in general, as well as in the web framework domain specifically; between that and code itself having common, cross-language pattern, there's more than enough information for an LLM to use a completely new framework mostly right.

Sure, if your thing is unusual enough, LLMs won't handle it as well as something that's over-represented in their training set, but then the same is true of humans, and both benefit from being provided some guidelines and allowed to keep notes.

(Also, in practice, most code is very much same-ish. Every now and then, someone comes up with something conceptually new, but most of the time, any new framework or library is very likely reinventing something done by another library, possibly in a different language. Improvements, if any, tend to be incremental. Now, the authors of libraries and frameworks may not be aware they're retracing prior art, but SOTA LLMs have very likely seen it all, across most programming languages ever used, and can connect the dots.)

And in the odd case someone really invents some unusual, new, groundbreaking pattern, it's just a matter of months between it getting popular and LLMs being trained on it.

mritchie712 21 hours ago

were you immediately more productive in Cursor specifically?

my point is exactly in line with your comment. The tools you get immediate value out of will vary based on circumstance. There's no silver bullet.

CuriouslyC 20 hours ago

I use Aider, and I was already quite good at working with AI before that so there wasn't much of a learning curve other than figuring out how to configure it to automatically do the ruff/mypy/tests loop I'd already been doing manually.

The key is that I've always had that prompt/edit/verify loop, and I've always leaned heavily on git to be able to roll back bad AI changes. Those are the skills that let me blow past my peers.

WD-42 18 hours ago

Let’s see the GitHub project for an easy 10k line day.

CuriouslyC 18 hours ago

Not public on github, but here's the cloc for an easy 5k day (10k is sweats).

    github.com/AlDanial/cloc v 2.04  T=0.05 s (666.3 files/s, 187924.3 lines/s)
    -----------------------------------------------------------
    Language            files      blank    comment       code
    -----------------------------------------------------------
    Python                 24       1505       1968       5001
    Markdown                4         37          0        121
    Jinja Template          3         17          2         92
    -----------------------------------------------------------
    SUM:                   31       1559       1970       5214
    -----------------------------------------------------------

Note this project also has 199 test cases.

Initial commit for cred:

commit caff2ce26225542cd4ada8e15246c25176a4dc41
Author: redacted <redacted>
Date:   Thu May 15 11:32:45 2025 +0800

    docs: Add README
And when I say easy, I was playing the bass while working on this project for ~3 hours.

WD-42 17 hours ago

This shows nothing.

otabdeveloper4 14 hours ago

> we're back to counting programming projects in kloc, like it's the radical 1980's again

Yikes. But also lol.

alvis 20 hours ago

20+ years dev here, started coding with AI when Jarvis was still a thing (before it became Jasper, and way before Copilot or ChatGPT).

Back then, Jarvis wasn’t built for code, but it was a surprisingly great coding companion. Yes, it only gave you 80% working code, but because you had to get your hands dirty, you actually understood what was happening. It didn't give me 10x, but I'm happy with 2x and a good understanding of what's going on.

Fast-forward to now: Copilot, Cursor, Roo Code, Windsurf and the rest are shockingly good at output, but sometimes the more fluent the AI, the sneakier the bugs. They hand you big chunks of code, and I bet most of us don't have a clear picture of what's going on at ground level, just an overall idea. It's just too tempting to blindly "accept all" the changes.

It’s still the old wisdom: good devs are the ones not getting paged at 3am to fix bugs. I'm with the OP; I'm happier with my 2x than waking up at 3am.

palmotea 13 hours ago

> IF you're a:

> * 1-year JS (React, Next.js, etc.) dev

> * start mostly from scratch on new ideas

> * have little prior IDE preference

> * have high tolerance for bugs and just want to ship and try stuff

> THEN: LLMs will 10x you. An IDE like Cursor will immediately make you way faster.

And also probably dead-end you, and you'll stay the bug-tolerant 1-year JS dev for the next 10 years of your career.

It's like eating your seed corn. Sure you'll be fat and have it easy for a little while, but then next year...

prisenco 21 hours ago

Agreed but the 1 year JS dev should know they're making a deal with the devil in terms of building their skillset long term.

diggan 21 hours ago

I basically learned programming by ExpertSexchange, Google (and Altavista...), SourceForge and eventually StackOverflow + GitHub. Many people with more experience than me at the time always told me I was making a deal with the devil since I searched so much, didn't read manuals and asked so many questions instead of thinking for myself.

~15 years later, I don't think I'm worse off than my peers who stayed away from all those websites. Doing the right searches is probably as important as being able to read manuals properly today.

GeoAtreides 15 hours ago

>I basically learned programming by ExpertSexchange, Google (and Altavista...), SourceForge and eventually StackOverflow + GitHub. Many people with more experience than me at the time always told me I was making a deal with the devil since I searched so much, didn't read manuals and asked so many questions instead of thinking for myself.

no they didn't, no one said that

i know that because i was around then and everyone was doing the same thing

also, maybe there's a difference between searching and collating answers, and just copy-pasting a solution _without thinking_ at all

andrekandre 5 hours ago

  > ExpertSexchange
maybe should have added a space somewhere in there?

whiplash451 21 hours ago

The jury is still out on that one.

Having a tool that’s embedded into your workflow and shows you how things can be done based on tons of example codebases could help a junior dev quite a lot to learn, not just to produce.

_heimdall 21 hours ago

Like anything else with learning, that will be heavily dependent on the individual's level of motivation.

Based on the classmates I had in college who were paying to get a CS degree, I'd be surprised if many junior devs already working a paid job put much effort into learning rather than producing.

whiplash451 18 hours ago

I wouldn't dismiss the implicit/subconscious aspect of learning by example that occurs when you are "just" producing.

_heimdall 14 hours ago

That still comes back to motivation, in my opinion. Using an LLM to generate code without studying and understanding the output will teach you very little.

I'd still expect most junior devs that use an LLM to get their job done won't be motivated to study the generated code enough to really learn it.

A student is also only as good as the teacher, though that's a whole other can of worms with LLMs.

aqme28 18 hours ago

Maybe. But I also think that ignoring AI tools will hamper your long-term skillsets, as our profession adapts to these new tools.

bccdee 18 hours ago

Why would that be the case? If anything, each successive generation of AI tools gets easier to use and requires less prompt fiddling. I picked up Cursor and was comfortable with it in 20 minutes.

I'm not sure there's much of a skillset to speak of for these tools, beyond model-specific tricks that evaporate after a few updates.

StefanBatory 21 hours ago

From a workplace perspective, they don't have a reason to care. What they'll care about is that you're productive right now - if you don't become a better dev in the future? Your issue.

diggan 21 hours ago

Taking your "to 10x you" as hyperbole to actually mean "more productive": if you replace "Python" with "programming" and "IDE" with Neovim, that's basically me. And I'm way more productive with LLMs than without them. Granted, I stay far away from "vibe coding", only use LLMs for some parts, and don't use "agentic LLMs" or whatever, just my own programmed "Human creates issue on GitHub, receive N PRs back with implementations" bot.
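
For the curious, the bot is conceptually simple; here's a heavily simplified sketch, where `generate_pr_for_issue` stands in for the LLM-driven part and the `llm-bot` label is just an example convention:

    import requests  # assumes a GitHub token with repo scope

    def open_issues(owner, repo, token):
        r = requests.get(
            f"https://api.github.com/repos/{owner}/{repo}/issues",
            headers={"Authorization": f"Bearer {token}"},
            params={"state": "open", "labels": "llm-bot"},
        )
        r.raise_for_status()
        return r.json()

    def run(owner, repo, token, generate_pr_for_issue, attempts=3):
        for issue in open_issues(owner, repo, token):
            # N independent attempts -> N PRs to compare and pick from.
            for i in range(attempts):
                generate_pr_for_issue(issue, branch=f"bot/{issue['number']}-{i}")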

Basically, use LLMs as a tool for specific things, don't let them do whatever and everything.

stef25 21 hours ago

Recently I tried getting ChatGPT to help me build a WordPress site (which I know nothing about), starting from an Adobe design file. It spent hours thinking, getting confused, and eventually failed completely.

However it's great for simple "write a function that does X", which I could do myself but it would take longer, be boring and require several iterations to get it right.

Having said that, blindly copying a few lines of ChatGPT code did lead to automated newsletters being sent out with the wrong content.

anonzzzies 21 hours ago

> until you've learned to use it

You have the copilot mode, which takes no learning at all and might give you some speedup, especially if you are doing repetitive stuff; it might even 10x+ you.

You have cmd-k mode, which you need to prompt and which seems to be a lobotomized version of chat. I find putting in comments and waiting for the copilot mode to kick in better, as then the way we got there is saved.

Then there is agentic editing chat: that is the timewaster you speak of, I believe, but what is there to learn? Sometimes it generates a metric ton of code, including in massive legacy code bases, that helps; and often it just cannot do whatever you ask.

I don't think these cases you make are different, at least once the second one goes beyond the basics. There is nothing to learn except that you need to read all the code, decide what you want in technical detail, and ask that of the agentic chat. Anything else fails beyond the basics, and 'learning to use it' amounts to just that; if you didn't know that after 5 minutes, you definitely never did any 'fine-tuned PyCharm IDE', ever.

It is a tool that customizes code it ingested for your case specifically, if it can. That is it. If it never saw a case, it won't solve it, no matter what you 'learn to use'. And I am fine saying that in public: we use LLMs a lot, and I can give you very simple cases that, besides typing up the exact code (and often even that doesn't work), it will never fix with the current models. It just gets stuck making meaningless changes with confidence.

ekidd 21 hours ago

> You have the copilot mode, which takes no learning at all and might give you some speedup, especially if you are doing repetitive stuff; it might even 10x+ you.

I have some grey hair and I've been programming since I was a kid. Using Copilot autocompletion roughly doubles my productivity while cutting my code quality by 10%.

This happens because I can see issues in autocompleted code far faster than I can type, thanks to years of reading code and reviewing other people's code.

The 10% quality loss happens because my code is no longer lovingly hand-crafted single-author code. It effectively becomes a team project shared by me and the autocomplete. That 10% loss was inevitable as soon as I added another engineer, so it's usually a good tradeoff.

Based on observation, I think my productivity boost is usually high compared to other seniors I've paired with. I see a lot of people who gain maybe 40% from Copilot autocomplete.

But there is no world in which current AI is going to give me a 900% productivity boost when working in areas I know well.

I am also quite happy to ask Deep Research tools to look up the most popular Rust libraries for some feature, and to make me a pretty table of pros and cons to skim. It's usually only 90% accurate, but it cuts my research time.

I do know how to drive Claude Code, and I have gotten it to build a non-trivial web front-end and back-end that isn't complete garbage without writing more than a couple of dozen lines myself. This required the same skill set as working with an over-caffeinated intern with a lot of raw knowledge, but who has never written anything longer than 1,000 lines before. (Who is also a cheating cheater.) Maybe I would use it more if my job was to produce an endless succession of halfway decent 5,000-line prototypes that don't require any deep magic.

Auto-complete plus Deep Research is my sweet spot right now.

anonzzzies 20 hours ago

I get very good results with very little effort, but that is because I have written code for 40 years full-time. Not because I know the tool better.

nis251413 17 hours ago

Even a single person may do different things that will change whether using an LLM helps or not.

Much of the time I spend writing code goes not into thinking about the general overview etc. but into the code I am about to write itself, and if I actually care about the actual code (e.g. I am not gonna throw it away by the end of the day), it goes into how to make it as concise and understandable to others (incl. future me) as possible, what cases to care about, and what choices to make so that my code remains maintainable after a few days. It may be about refactoring previous code and all the decisions that go with that. LLM-generated code, imo, is too bloated; whether they put in stuff like asserts is always hit or miss, depending on what they think is important. Their comments tend to be completely trivial instead of stating the intention of things, and though I have put some effort into getting them to use a coding style similar to mine, they often fail there too. In such cases, I only use them if the code they write can be isolated enough, e.g. writing a straightforward auxiliary function here and there that will be called in some places but where it does not matter as much what happens inside. There are just too many decisions at each step that LLMs are not great at resolving, ime.

I depend more on LLMs if I care less about the maintainability of the code itself and more about getting it done as fast as possible, or if I am just exploring and do not actually care about the code at all. For example, it can be that I am in a rush to get something done and will care about the rest later (granted they can actually do the task, else I am losing time). But when I tried this for my main work, it soon became a mess that would take more time to fix, even if they seemed to speed me up initially. Granted, if my field was different and the languages I was using more popular/represented in the training data, I might have found more uses for them, but I still think that after some point it becomes unsustainable to leave decisions to them.

ekianjo 21 hours ago

> THEN: LLMs will 10x you. An IDE like Cursor will immediately make you way faster.

they will make you clueless about what the code does, and your code will be unmaintainable.

belter 21 hours ago

We finally found a metric to identify the really valuable coders in my company :-)

jrh3 19 hours ago

> have high tolerance for bugs and just want to ship

LOL

stevepotter 21 hours ago

I do a variety of things, including iOS and web. Like you mentioned, LLM results between the two are very different. I can't trust LLM output to even compile, much less work. Just last night, it told me to use an API called `CMVideoFormatDescriptionGetCameraIntrinsicMatrix`. That API is very interesting because it doesn't exist. It also did a great job of digging some deep holes when dealing with some tricky Swift 6 concurrency stuff. Meanwhile it generated an entire Next.js app that worked great on the first shot. It's all about that training data, baby.

andrekandre 5 hours ago

  > `CMVideoFormatDescriptionGetCameraIntrinsicMatrix`. That API is very interesting because it doesn't exist. 
Same experience, and it's been not great for juniors around me, because they have no idea why it's not compiling or that the thing it wrote doesn't even exist...

kartoffelsaft 18 hours ago

Honestly, with a lot of HN debating the merits of LLMs for generating code, I wish it were an unwritten rule that everyone states the stack they're using with it. It seems that the people who rave about it creating a whole product line in a weekend are asking it to write them a web interface using [popular JS framework] that connects to [ubiquitous database], and their app is a step or two away from being CRUD. Meanwhile, the people who say it's done nothing for them are writing against [proprietary in-house library from 2005].

The worst is the middle ground of stacks that are popular enough to be known but not enough for an LLM to know them well. I say worst because in these cases the facade that the LLM understands how to create your product will fall before the software's lifecycle ends (at least, if you're vibe-coding).

For what it's worth, I've mostly been a hobbyist, but I'm getting close to graduating with a CS degree. I've avoided using LLMs for classwork because I don't want to rob myself of an education, but I've occasionally used them for personal, weird projects (or tried to, at least). I always give up on them because I tend to like trying out niche languages that the LLM will just assume work like Python (ex: most LLMs struggle with Zig in my experience).

englishspot 18 hours ago

> Meanwhile, the people who say it's done nothing for them are writing against [proprietary in-house library from 2005].

there's MCP servers now that should theoretically help with that, but that's its own can of worms.

maerch 21 hours ago

Exactly my thoughts. It seems there’s a lot of all-or-nothing thinking around this. What makes it valuable to me is its ability to simplify and automate mundane, repetitive tasks. Things like implementing small functions and interfaces I’ve designed, or even building something like a linting tool to keep docs and tests up to date. All of this has saved me countless hours and a good deal of sanity.

andy99 21 hours ago

Overshooting the capabilities of LLMs is pretty natural when you're exploring them. I've been using them to partially replace stack overflow or get short snippets of code for ~2 years. When Claude code came out, I gave it increased responsibility until I made a mess with it, and now I understand where it doesn't work and am back to using LLMs more for ideas and advice. I think this arc is pretty common.

arctek 20 hours ago

Similar to my experience: it works well for small tasks, replacing search (most of the time), and doing a lot of boilerplate work.

I have one project that is very complex, and for that one I can't and don't use LLMs.

I've also found it's better if you can get it to code-generate everything in the one session; if you try other LLMs or sessions, it will quickly degrade. That's when you will see duplicate functions and dead-end code.

spacemadness 15 hours ago

I've found LLMs are extremely hit or miss with iOS development. I think part of that might be how quickly Swift and SwiftUI are changing, coupled with how bad Apple documentation is. I have loved using them to generate quick views and such for scaffolding purposes and quick iterations, but they tend to break down quickly around asynchronous code and non-trivial business logic. I will say they're still incredibly useful for pointing you in a direction, but they can be very misleading and send you down a hallucination rabbit hole easily.

jrvarela56 20 hours ago

You can use the LLM to decompose tasks. As you said, tasks that are simple and have solutions in the training data can save you time.

Most code out there is glue. So there's a lot of training data on integrating/composing stuff.

If you take this as a whole, you could turn that 30-60 min into 5 min for most dev work.
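
A minimal sketch of that decompose-then-implement flow, with `complete(prompt)` as a stand-in for whatever LLM call you use:

    def decompose(task, complete):
        # Ask the LLM for a numbered list of small, self-contained subtasks.
        plan = complete(f"Break this into numbered, self-contained subtasks:\n{task}")
        return [line.split(".", 1)[1].strip()
                for line in plan.splitlines()
                if line[:1].isdigit() and "." in line]

    def implement(task, complete):
        # Each subtask is mostly glue, which is the case LLMs handle best.
        return [complete(f"Write the code for: {sub}")
                for sub in decompose(task, complete)]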

sublinear 21 hours ago

> I also use it for building things like app landing pages.

This is a reasonable usage of LLMs up to a certain point, and especially if you're in full control of all the requirements as the dev. If you don't mind missing details related to sales and marketing such as SEO and analytics, I think those are not really "landing pages", but rather just basic web pages.

> I hate web development, and LLMs are pretty good at it because I'd guess that is 90% of their training data related to software development.

Your previous sentence does not support this at all, since web development is a much broader topic than your perception of landing pages. Anything can be a web app, so most things are nowadays.

a7fort 21 hours ago

I think you're doing it right; it's just hard to resist the temptation to use AI for everything when you're getting decent results for small things.

dfxm12 19 hours ago

> I don't get the whole "all-in" mentality around LLMs.

They are being marketed as virtual assistants that will literally do all the work for you. If they become marketed truthfully, however, people will probably realize that they aren't worth the cost and that it's largely more beneficial to search the web and/or crowdsource answers.

cosiiine 19 hours ago

The AI-Assist tools (Cursor, Windsurf, Claude Code, etc) want you to be "all-in" and that's why so many people end up fighting them. A delicate balance is hard to achieve when you're discarding 80% of the suggestions for 20% of the productivity boosts.

llm_nerd 21 hours ago

>I don't get the whole "all-in" mentality around LLMs

To be uncharitable and cynical for a moment (and talking generally rather than about this specific post): it yields content. It gives people something to talk about, a way of defining their personality by their absolutes, when in reality the world is infinite shades of gradients.

Go "all in" on something and write about how amazing it is. In a month you can write your "why I'm giving up" the thing you went all in on and write about how relieved/better it is. It's such an incredibly tired gimmick.

"Why I dumped SQL for NoSQL and am never looking back" "Why NoSQL failed me"

"Why we at FlakeyCo are all in on this new JavaScript framework!" "Why we dumped that new JavaScript framework"

This same incredibly boring cycle is seen on here over and over and over again, and somehow people fall for it. Like, it's a huge indicator that the writer more than likely has bad judgment and probably shouldn't be the person to listen to about much.

Like most rational people who use decent judgment (rather than feeling I need to go "all in" on something, as if the more I commit, the more real the thing I'm committing to becomes), I leverage LLMs many, many times in my day-to-day. Yet somehow they have authored approximately zero percent of my actual code, and they're still a spectacular resource.

rco8786 21 hours ago

There's a whole section in the doc called "A happy medium"

spiderfarmer 21 hours ago

People just do stupid stuff like "going all in" for their blog posts and videos. Nuance, like rationalism, doesn't get engagement.

dyauspitr 15 hours ago

As a former iOS dev, I was able to spin up a moderately complex app in a weekend, something that would have taken me probably at least a couple of weeks in the past. I have no idea what you're on about. I don't even use Cursor or Windsurf etc.; I'm just having ChatGPT and Gemini dump all their outputs into single files and manually breaking them up.