bgwalter 2 days ago

"Someone can be highly experienced as a software engineer in 2024, but that does not mean they're skilled as a software engineer in 2025, now that AI is here."

With that arrogance, my only question is: Where is your own code, and what makes you more qualified than Linux kernel or gcc developers?

Workaccount2 2 days ago

Using AI to accelerate your productivity is not the same thing as letting AI do your job. The author seems to be pointing out that if you are someone who is going to dig in their heels as a "never-ai" dev, you are liable to be leaving productivity on the table.

skydhash 2 days ago

How so? For any mature project (aka anything past launch), most PRs are very small and you spend more time analyzing the tradeoffs of the solution than writing it.

q3k 2 days ago

Yeah, any of these AI accelerationist threads make me feel like I'm working in some parallel universe.

Writing code has never been a bottleneck for me. Planning out a change across multiple components, adhering to both my own individual vision and the project direction and style, fixing things along the way (but only when it makes sense), comparing approaches and understanding tradeoffs, knowing how to interpret loose specs... that's where my time goes. I could use LLM assistance, but given all of the above, it's faster for me to write it myself than to try to distill all of this into a prompt.

skydhash 2 days ago

And for most things, you already have a basic idea of how to implement it (or where to go to find information). But then you have to check your assumptions about the codebase, and that's a time sink, especially in a collaborative environment. Like, what is the condition for this modal to appear? And without realizing it, you're deep into the backend code, deciphering the logic for a particular state with three design documents open and a Slack chat going with four teammates.

elliotbnvl 2 days ago

If you have a good codebase, a good team, and strong product direction behind you, a lot of the more abstract work you're describing goes away, because most of those decisions were made weeks, months, or years ago by the time you're ready to put pen to paper on the actual code. Maybe that's part of why your experience is so different?

skydhash 2 days ago

Sometimes one of those decisions was right in the past but is wrong in the current context (e.g., the company is now trying to get government contracts). Changing one of these has rippling effects on all the other decisions, so you're trying to reconcile the two sets in a way that minimizes the need to rewrite code. It's more like a research lab than a factory assembly line.

JackSlateur 2 days ago

Yes, it is, because your job is to think.

Those AI accelerationists are not thinking and, as such, are indeed boosted by a non-thinking machine.

In the end, code is nothing but a way to map your intelligence into the physical world (using an interface called "computer").

bryanlarsen 2 days ago

One of the best tools for that task is rubber duck debugging. AIs are better than rubber ducks. Often not very much better, but sometimes an inane comment they make in reply triggers a eureka moment.

ghuntley 2 days ago

Exactly. One of my favorite things to do is to dump a code path into the context window and ask it to generate a mermaid sequence diagram or a class diagram explaining how everything is connected.
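
For a rough idea, on a hypothetical checkout flow (the participant names here are made up for illustration), the kind of sequence diagram it hands back looks something like:

    sequenceDiagram
        participant UI as CheckoutForm
        participant API as OrdersController
        participant DB as OrdersRepository
        UI->>API: POST /orders
        API->>DB: insert(order)
        DB-->>API: order id
        API-->>UI: 201 Created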

ethanwillis 2 days ago

But you don't need an LLM to generate that kind of diagram do you?

elliotbnvl 2 days ago

It's not arrogance, since they're not asserting anything about themselves. It's a factual observation with which you're free to disagree, but it should be challenged directly rather than via ad hominem if you want to actually make a point rather than just collect internet snark kudos.

Also, there are far more generic web developers than there are Linux kernel developers, and they represent the vast majority of the market share / profit generation in software development, so your metric isn't really relevant either.

skydhash 2 days ago

So what has changed about the realm of programming that makes all the skills obsolete, including the skill of learning new programming thingies?

The DOM API is old, all the mainstream backend languages are old, and Unix administration has barely changed (only the way we use those tools has). Even Elasticsearch is 15 years old. Amazon S3 is past drinking age in many countries around the world. And that's just pertaining to web projects.

You just need to open a university textbook to realize how old many of the fundamentals are. Most shiny new things are old stuff repackaged.

elliotbnvl 2 days ago

A lot of people are rejecting AI because of how transformational it is. Those people will fall behind the people who adopt it aggressively.

It's akin to people who refused to learn C because they knew assembly.

skydhash 2 days ago

I don't think people refused to learn C (which is not particularly hard to learn for someone who knew assembly and the various other languages of the time). A lot of compilers were buggy, and people had lots of assembly snippets for particular routines. And that's not counting mature projects that were already in assembly and had to be maintained. A lot of programmers are actually fine trying new stuff out, but some are professionals and don't bring everything under the sun into their work projects.

elliotbnvl 2 days ago

You're missing the point. People refused to learn it not because it was technically challenging but because it was a transformation. It happens with every increase in abstraction; folks fall by the wayside because they don't want change.

The same thing is happening with LLMs. If anything, the gap is far smaller than between assembly and C, which only serves to prove my point. People who don't understand it or like it could easily experience massive productivity gains with a minimum of effort. But they probably never will, because their mindset is the limiting factor, not technical ability.

It really comes down to neural plasticity and willingness to adapt. Some people have it, some people don't. It's pretty polarizing, because for the people that don't want change it becomes an emotional conversation rather than a logical one.

What's the opportunity cost of properly exploring LLMs and learning what everybody else is talking about? Near zero. But there are plenty of people who haven't yet.

skydhash 2 days ago

I have, and it's not amazing. I'm pretty sure a lot of people have and agree with me. Why ask an LLM when I can just open an API reference and have all the answers in a dense and succinct format that gives me what I need?

Let's say I'm writing an Eloquent query (Laravel's ORM) and I forget the signature for the where method. It's like 5 seconds to find the page and have the answer (less if I'm using Dash.app). It would take me longer to write a prompt for that. And I have to hope the model got it right.
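
For illustration, here's the kind of thing I mean (a sketch with a hypothetical User model; the documented signature is where($column, $operator = null, $value = null, $boolean = 'and')):

    use App\Models\User; // hypothetical model

    // The two-argument form implies '='; the three-argument form takes an explicit operator.
    $admins = User::where('role', 'admin')
        ->where('logins', '>=', 10)
        ->get();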

For bigger use cases, a lot of the time I already know the code; the reason I haven't written it yet is that I'm thinking about how it would impact the whole project architecture. Once I have a good feel, writing the code is a nice break from all of those thinking sessions. Like driving on a scenic route: yeah, you could have an AI drive you there, but not when you're worrying about it taking the wrong turn at every intersection.

JackSlateur 2 days ago

If I wanted to babysit a brainless unit doing mindless things, I would not have chosen this career.

I've yet to see a single occurrence at work (a single one!) of something done better/quicker/easier with AI (as a dev). I've read lots of bullshit on the internet, sure, but in my day-to-day real-world experience, it was always a disaster disguised as a glorious success story.

thih9 2 days ago

> It's not arrogance, since they're not asserting anything about themself.

But you can be arrogant without referencing yourself directly.

After all, anything you say is implicitly prefixed with “I declare that”.

E.g. one of Feynman’s “most arrogant” statements is said to be: “God was always invented to explain the mystery. God is always invented to explain those things that you do not understand.”[1] - and there’s no direct self reference there.

[1]: https://piggsboson.medium.com/feynmans-most-arrogant-stateme...

falcor84 2 days ago

TFA didn't say that experienced software engineers in 2024 "necessarily aren't" skilled software engineers in 2025, just that they "aren't necessarily" so, which is an entirely valid point regardless of AI.