infecto 21 hours ago

These all-or-nothing takes on LLMs are getting tiresome.

I get the point you’re trying to make, LLMs can be a force multiplier for less experienced devs, but the sweeping generalizations don’t hold up. If you’re okay with a higher tolerance for bugs or loose guardrails, sure, LLMs can feel magical. But that doesn’t mean they’re less valuable to experienced developers.

I’ve been writing Python and Java professionally for 15+ years. I’ve lived through JetBrains IDEs, and switching to VS Code took me days. If you’re coming from a heavily customized Vim setup, the adjustment will be harder. I don’t tolerate flaky output, and I work on a mix of greenfield and legacy systems. Yes, greenfield is more LLM-friendly, but I still get plenty of value from LLMs when navigating and extending mature codebases.

What frustrates me is how polarized these conversations are. There are valid insights on both sides, but too many posts frame their take as gospel. The reality is more nuanced: LLMs are a tool, not a revolution, and their value depends on how you integrate them into your workflow, regardless of experience level.

4
mritchie712 20 hours ago

my stance is the opposite of all-or-nothing. The note above is one example. How much value you get out of Cursor specifically is going to vary based on person & problem. The Python dev in my example might immediately get value out of o3 in ChatGPT.

It's not all or nothing. What you get value out of immediately will vary based on circumstance.

infecto 20 hours ago

You say your stance isn’t all-or-nothing, but your original comment drew a pretty hard line: junior devs who start from scratch and have a high tolerance for bugs get 10x productivity, while experienced devs with high standards and mature setups will likely be slowed down. That framing is exactly the kind of binary thinking that’s making these conversations so unproductive.

ghufran_syed 20 hours ago

I wouldn’t classify this as binary thinking - isn’t the comment you’re replying to just defining boundary conditions? Those two points don’t define the entire space, but the output there does at least let us infer (though not prove) something about the nature of the “function” between those two points? Where the function f is something like f: experience -> productivity increase?

infecto 20 hours ago

You’re right that it’s possible to read the original comment as just laying out two boundary conditions—but I think we have to acknowledge how narrative framing shapes the takeaway. The way it’s written leads the reader toward a conclusion: “LLMs are great for junior, fast-shipping devs; less so for experienced, meticulous engineers.” Even if that wasn’t the intent, that’s the message most will walk away with.

But they drew boundaries with very specific conditions that lead the reader. It’s a common theme in these AI discussions.

web007 19 hours ago

> LLMs are great for junior, fast-shipping devs; less so for experienced, meticulous engineers

Is that not true? That feels sufficiently nuanced and gives a spectrum of utility, not a binary one and zero but "10x" at one extreme and perhaps 1.1x at the other.

The reality is slightly different - "10x" is SLoC, not necessarily good code - but the direction and scale are about right.

TeMPOraL 14 hours ago

That feels like the opposite of being true. Juniors have, by definition, little experience - the LLM is effectively smarter than them and much better at programming, so they're going to be learning programming skills from LLMs, all while futzing about, not sure what they're trying to express.

People with many years or even decades of hands-on programming experience have the deep understanding and tacit knowledge that allows them to tell LLMs clearly what they want, quickly evaluate generated code, guide the LLM out of any rut or rabbit hole it dug itself into, and generally are able to wield LLMs as DWIM tools - because again, unlike juniors, they actually know what they mean.

mritchie712 19 hours ago

no, those are two examples of many many possible circumstances. I intentionally made it two very specific examples so that was clear. Seems it wasn't so clear.

infecto 19 hours ago

Fair enough but if you have to show up in the comments clarifying that your clearly delineated “IF this THEN that” post wasn’t meant to be read as a hard divide, maybe the examples weren’t doing the work you thought they were. You can’t sketch a two-point graph and then be surprised people assume it’s linear.

Again, I think the high-level premise is correct, as I already said; the delivery falls flat though. More junior devs have a larger opportunity to extract value.

bluecheese452 18 hours ago

I and others understood it perfectly well. Maybe the problem wasn’t with the post.

infecto 18 hours ago

And I along with others who upvoted me did not. What’s your point? Seems like you have none and instead just want to point fingers.

bluecheese452 18 hours ago

The guy was nice enough to explain his post that you got confused about. Rather than be thankful you used that as evidence that he was not clear and lectured him on it.

I gently suggested that the problem may have not been with his post but with your understanding. Apparently you missed the point again.

infecto 17 hours ago

If multiple people misread the post, clarity might be the issue, not comprehension. Dismissing that as misunderstanding doesn’t add much. Let’s keep it constructive.

motorest 4 hours ago

> I get the point you’re trying to make, LLMs can be a force multiplier for less experienced devs, but the sweeping generalizations don’t hold up. If you’re okay with a higher tolerance for bugs or loose guardrails, sure, LLMs can feel magical. But that doesn’t mean they’re less valuable to experienced developers.

I think you're trying very hard to pin LLMs as a tool for inexperienced developers. There's a hint of paternalism in your comments.

Back in the real world, LLMs are a tool that excels at generating and updating code based on their context and following your prompts. If you realize this fact, you'll understand that there's nothing in that description that makes them helpful exclusively to "inexperienced" developers. Do experienced developers need to refactor code or write new software? Do you believe veteran software engineers are barred from writing proofs of concept? Is the job of pushing architecture changes a junior developer gig?

What exactly do you think an experienced developer does?

> What frustrates me is how polarized these conversations are. There are valid insights on both sides, but too many posts frame their take as gospel. The reality is more nuanced: LLMs are a tool, not a revolution, and their value depends on how you integrate them into your workflow, regardless of experience level.

I completely disagree: LLMs have a revolutionary impact on how software engineers do their job. Your workflows changed overnight. You can create new things faster, you can iterate faster, you can even rewrite whole applications and services in other tech stacks and frameworks in a few days. Things like TDD will become of critical importance, as automated test suites are now a critical factor in providing feedback to LLMs. Things are no longer the way they were. At least to those who bothered learning.

lelandbatey 18 hours ago

This doesn't seem like an "all or nothing" take. This person is trying to be clear about their claims, but they're not trying to state these are the only possible takes. Add the word "probably" after each "then" and I imagine their intended tone becomes a little clearer.

mexicocitinluez 21 hours ago

> I get the point you’re trying to make, LLMs can be a force multiplier for less experienced devs, but the sweeping generalizations don’t hold up. If you’re okay with a higher tolerance for bugs or loose guardrails, sure, LLMs can feel magical. But that doesn’t mean they’re less valuable to experienced developers.

Amen. Seriously. They're tools. Sometimes they work wonderfully. Sometimes, not so much. But I have DEFINITELY found value. And I've been building stuff for over 15 years as well.

I'm not "vibe coding", I don't use Cursor or any of the ai-based IDEs. I just use Claude and Copilot since it's integrated.

voidhorse 20 hours ago

> Amen. Seriously. They're tools. Sometimes they work wonderfully. Sometimes, not so much. But I have DEFINITELY found value. And I've been building stuff for over 15 years as well.

Yes, but these lax expectations are what I don't understand.

What other tools in software sometimes work and sometimes don't that you find remotely acceptable? Sure all tools have bugs, but if your compiler had the same failure rate and usability issues as an LLM you'd never use it. Yet for some reason the bar is so low for LLMs. It's insane to me how much people have indulged in the hype koolaid around these tools.

TeMPOraL 13 hours ago

> What other tools in software sometimes work and sometimes don't that you find remotely acceptable?

Other people.

Seriously, all that advice about not anthropomorphizing computers is taken way too seriously now, and is doing a number on the industry. LLMs are not a replacement for compilers or other "classical" tools - they're replacement for people. The whole thing that makes LLMs useful is their ability to understand what some text means - whether or not it's written in natural language or code. But that task is inherently unreliable because the problem itself is ill-specified; the theoretically optimal solution boils down to "be a simulated equivalent of a contemporary human", and that still wouldn't be perfectly reliable.

LLMs are able to trivially do tasks in programming that no "classical" tools can, tasks that defy theoretical/formal specification, because they're trained to mimic humans. Plenty of such tasks cannot be done to the standards you and many others expect of software, because they're NP-complete or even equivalent to halting problem. LLMs look at those and go, "sure, this may be provably not solvable, but actually the user meant X therefore the result is Y", and succeed with that reliably enough to be useful.

Like, take automated refactoring in dynamic languages. Any nontrivial ones are not doable "classically", because you can't guarantee there aren't references to the thing you're moving/renaming that are generated on the fly by eval() + string concatenation, etc. As a programmer, you may know the correct result, because you can understand the meaning and intent behind the code, the conceptual patterns underpinning its design. DAG walkers and SAT solvers don't. But LLMs do.
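To make the refactoring point concrete, here's a toy Python sketch (all names invented for illustration) of the kind of runtime-constructed reference that a static rename tool can't prove it has found:

```python
# Hypothetical example: method names are assembled from strings at
# runtime, so a classical rename of `handle_create` cannot guarantee
# it caught every call site - the reference only exists dynamically.
class Dispatcher:
    def handle_create(self, payload):
        return f"created {payload}"

    def handle_delete(self, payload):
        return f"deleted {payload}"

    def dispatch(self, action, payload):
        # The target method name is built by string concatenation here;
        # a purely static tool renaming `handle_create` to something
        # else would silently break this lookup.
        return getattr(self, "handle_" + action)(payload)


d = Dispatcher()
print(d.dispatch("create", "user"))  # prints "created user"
```

A human (or an LLM) reading this can infer the `"handle_" + action` convention and rename both the method and the dispatch prefix together; a tool that only walks the AST cannot make that guarantee.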

ljm 18 hours ago

People are way too quick to defend LLMs here, because it's exactly on point.

In an era where an LLM can hallucinate (present you a defect) with 100% conviction, and vibe coders can ship code of completely unknown quality with 100% conviction, the bar by definition has to have been set lower.

Someone with experience will still bring something more than just LLM-written code to the table, and that bar will stay where it is. The people who don't have experience won't even feel the shortcomings of AI because they won't know what it's getting wrong.

motorest 4 hours ago

> In an era where an LLM can hallucinate (present you a defect) with 100% conviction, (...)

I think you're trying very hard to find anything at all to criticize LLMs and those who use them, but all you manage to come up with is outlandish, "grasping at straws" arguments.

Yes, it's conceivable that LLMs can hallucinate. How often do they, though? In my experience, not that much. In the rare cases they do, it's easy to spot, and another iteration costs you a couple of seconds to get around it.

So, what are you complaining about, actually? Are you complaining about LLMs, or just letting the world know how competent you are at using LLMs?

> Someone with experience will still bring something more than just LLM-written code to the table, and that bar will stay where it is.

Someone with experience leverages LLMs to do the drudge work, and bump up their productivity.

I'm not sure you fully grasp the implications. You're rehashing the kind of short-sighted comments that in the past brought comically clueless assertions such as "the kids don't know assembly, so how can they write good programs". In the process, you are failing to understand that the way software is written has already changed completely. The job of a developer is no longer typing code away and googling for code references. Now we can refactor and rewrite entire modules, iterate over the design, try a few alternative approaches, pit alternatives against each other, and pick the one we prefer to post a PR. And then go to lunch. With these tools, some of your "experienced" developers turn out to be not that great, whereas "inexperienced" ones outperform them easily. How do you deal with that?

marcosdumay 17 hours ago

There have always been lots of code generation tools whose output people were expected to review and fix.

Anyway, code generation tools almost always are born unreliable, then improve piecewise into almost reliable, and finally get replaced by something with a mature and robust architecture that is actually reliable. I can't imagine how LLMs could traverse this, but I don't think it's an extraordinary idea.

infecto 20 hours ago

My compiler doesn’t write a complete function to visualize a DataFrame based on a vague prompt. It also doesn’t revise that function as I refine the requirements. LLMs can.
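For what it's worth, the kind of function meant here might look like the following sketch (assuming pandas and matplotlib; the function name and behavior are illustrative, not an actual LLM transcript):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, safe for headless use
import matplotlib.pyplot as plt
import pandas as pd


def plot_numeric_histograms(df: pd.DataFrame):
    """Plot a histogram for each numeric column, one subplot per column."""
    numeric = df.select_dtypes(include="number")
    n = len(numeric.columns)
    fig, axes = plt.subplots(1, n, figsize=(4 * n, 3))
    if n == 1:
        axes = [axes]  # subplots() returns a bare Axes when n == 1
    for ax, col in zip(axes, numeric.columns):
        ax.hist(numeric[col].dropna())
        ax.set_title(col)
    fig.tight_layout()
    return fig


df = pd.DataFrame({
    "age": [22, 35, 41, 29],
    "score": [0.6, 0.8, 0.5, 0.9],
    "name": ["a", "b", "c", "d"],  # non-numeric, skipped automatically
})
fig = plot_numeric_histograms(df)
print(len(fig.axes))  # 2 - one subplot per numeric column
```

The point isn't the plotting itself, but that "make this take log-scale axes" or "drop columns with fewer than N values" are one-sentence follow-ups rather than fresh work.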

There’s definitely hype out there, but dismissing all AI use as “koolaid” is as lazy as the Medium posts you’re criticizing. It’s not perfect tech, but some of us are integrating it into real production workflows and seeing tangible gains, more code shipped, less fatigue, same standards. If that’s a “low bar,” maybe your expectations have shifted.

tbrownaw 18 hours ago

> What other tools in software sometimes work and sometimes don't that you find remotely acceptable?

Searching for relevant info on the Internet can take several attempts, and occasionally I end up not finding anything useful.

My ide intellisense tries to guess what identifier I want and put it at the top of the list, sometimes it guesses wrong.

I've heard that the various package repositories will sometimes deliberately refuse to work for a while because of some nonsense called "rate limiting".

Cloud deployments can fail due to resource availability.

mexicocitinluez 19 hours ago

> Yes, but these lax expectation s are what I don't understand.

It's a really, really simple concept.

If I have a crazy Typescript error, for instance, I can throw it in and get a much better idea of what's happening. Just because that's not perfect, doesn't mean it isn't helpful. Even if it works 90% of the time, it's still better than 0% of the time (Which is where I was at before).

It's like Google search without ads and with the ability to compose different resources together. If that's not useful to you, then I don't know what to tell you.

UncleEntity 19 hours ago

Hell, AI is probably -1x for me because I refuse to give up and do it myself instead of trying to get the robots to do it. I mean, writing code is for the monkeys, right?

Anyhoo... I find that there are times where you have to really get in there and question the robot's assumptions as they will keep making the same mistake over and over until you truly understand what it is they are actually trying to accomplish. A lot of times the desired goal and their goal are different enough to cause extreme frustration as one tends to think the robot's goal should perfectly align with the prompt. Once it fails a couple times then the interrogation begins since we're not making any further progress, obviously.

Case in point, I have this "Operational Semantics" document, which is correct, and a PEG VM, which is tested to be correct, but if you combine the two, one of the operators was being compiled incorrectly due to the way backtracking works in the VM. After Claude's many failed attempts we had a long discussion and finally tracked down the problem to be something outside of its creative boundaries, and it needed one of those "why don't you do it this way..." moments. Sure, I shouldn't have to do this, but that's the reality of the tools and, like they say, "a good craftsman never blames his tools".