There's some whistling past the graveyard in these comments. "You still need humans for the social element...", "LLMs are bad at debugging", "LLMs lead you astray". And yeah, there's lots of truth in those assertions, but since I started playing with LLMs to generate code a couple of years ago they've made huge strides. I suspect that over the next couple of years the improvements won't be quite as large (Pareto Principle), but I do expect we'll still see some improvement.
I was on r/fpga recently and mentioned that I'd had a lot of success getting LLMs to code up first-cut testbenches that let you simulate your FPGA/HDL design a lot quicker than if you wrote those testbenches yourself. My comment was met with lots of derision, but the critics hadn't even given it a try before concluding it just couldn't work.
This attitude is depressingly common in lots of professional, white-collar industries I'm afraid. I just came from the /r/law subreddit and was amazed at the kneejerk dismissal there of Dario Amodei's recent comments about legal work, and of those commenters who took them seriously. It's probably as much a coping mechanism as it is complacency, but, either way, it bodes very poorly for our future efforts at mitigating whatever economic and social upheaval is coming.
This is the response to most new technologies; folks simply don't want to accept the future before the ramifications truly hit. If technology folk cannot see the INCREDIBLE LEAP FORWARD made by LLMs since ChatGPT came on the market, they're not seeing the forest for the trees because their heads are buried in the sand.
LLMs for coding are not even close to perfect yet, but the saturation curves are not flattening out; not by a long shot. We are living in a moment, and we need to come to terms with it as the work continues to develop; we need to adapt, and quickly, to better understand what our place will become as this nascent tech continues its meteoric trajectory toward an entirely new world.
I don't think it is only (or even mostly) not wanting to accept it; I think it is in at least equal measure just plain skepticism. We've seen all sorts of wild statements about how much something is going to revolutionize X, and then it turns out to be nothing. Most people disbelieve these sorts of claims until they see real evidence for themselves... and that is a good default position.
Hedging against the possibility of being displaced economically, before it happens, is always prudent.
If the future doesn't turn out to be revolutionary, at worst you've done some "unnecessary" work, but you might at least have acquired some skills or value. In the case of most well-off programmers, I suspect buying assets/investments that can afford them at least a reasonable lifestyle is likely prudent too.
So the default position of being stationary, and assuming the world continues the way it has been, is not such a good idea. One should always assume the worst possible outcome, and plan for that.
> One should always assume the worst possible outcome, and plan for that.
Maybe if you work e-commerce or in the military.
But how do you even translate this line of thought for today?
Are your EMP defenses up to speed?
Are you studying Russian and Chinese while selling kidneys in order to afford your retirement home on Mars?
My point being, you can never plan for every worst outcome. In reality you would have a secondary data center, backups and a working recovery routine.
None of which depends on whether you use autocomplete or not.
> If technology folk cannot see the INCREDIBLE LEAP FORWARD made by LLMs since ChatGPT came on the market, they're not seeing the forest through the trees because their heads are buried in the sand.
Look, we see the forest. We are just not impressed by it.
Having unlimited chaos monkeys at will is not revolutionizing anything.
Lawyers don't even use version control software a lot of the time. They burn hundreds of paralegal hours reconciling revisions, a task that could be made 100x faster and easier with Git.
There's no guarantee a technology will take off, even if it's really, really good. Because we don't decide if that tech takes off - the lawyers do. And they might not care, or they might decide billing more hours is better, actually.
> billing more hours is better, actually
The guiding principle of biglaw.
Attorneys have the bar to protect them from technology they don’t want. They’ve done it many times before, and they’ll do it again. They are starting to entertain LLMs, but not in a way that would affect their billable hours.
“First thing we do, let’s kill all the lawyers”
History majors everywhere are weeping.
Many of us would prefer to see the technological leaps evenly distributed (e.g. clean drinking water that does not need to be boiled before consumption is still not a baseline in 2025). So if you want to adapt to your new and improved position where you are just pushing buttons, fine, but some of us are actually interested in how computers work (and are actually really uninterested in most companies' bottom lines). It's just how it is ;)
I think many people just settled in while we had no real technological change for 15 years. Real change, not an update to a web framework.
When I graduated high school, I had never been on the internet, nor did I know anyone who had. The internet was this vague "information superhighway" that I didn't really know what to make of.
If you are of a certain age, though, you would think a pointless update to React was all the change that was ever coming.
That time is over and we are back to reality.
> you are ... le LUDDITE
Or maybe they just know the nitty-gritty inherent limitations of technology better than you.
(inb4: "LLMs can't have limitations! Wait a few years and they will solve literally every possible problem!")
Friendly reminder that people like you were saying the exact same thing about metaverse, VR, web3, crypto, etc.
Yes. If you judge only from the hype, then you can't distinguish LLMs from crypto, or Nuclear Weapons from Nuclear Automobiles.
If you always say that every new fad is just hype, then you'll even be right 99.9% of the time. But if you want to be more valuable than a rock (https://www.astralcodexten.com/p/heuristics-that-almost-alwa...), then you need to dig into the object-level facts and form an opinion.
In my opinion, AI has a much higher likelihood of changing everything very quickly than crypto or similar technologies ever did.
I didn't buy the hype of any of those things, but I believe AI is going to change everything, much like the introduction of the internet. People are dismissing AI because its code is not bug-free, completely ignoring the fact that it generates PRs in minutes from a poorly written text prompt. As if that's not impressive. In fact, if you put a human engineer on the receiving end of the same prompt with the same context as what we're sending to the LLM, I doubt they could produce code half as good in 10x the time. It's science fiction coming true, and it's only going to continue to improve.
Again, there were people just as sure about crypto as you are now about AI. They dismissed criticism because they thought the technology was impressive and revolutionary. That it was science fiction come true and only going to continue to improve. It's the exact same hype-driven rhetoric.
If you want to convince skeptics, talk about examples, vibe code a successful business, show off your success with using AI. Telling people it's the future, and that if you disagree you have your head in the sand, is wholly unconvincing.
As someone who gleefully followed along as the Web3 hype train derailed, an important distinction is that crypto turns every believer into a salesperson, by design. There were some that were truly passionate about the potential applications for blockchain technology, but by and large they were drowned out by people who, having poured $10k into the memecoin of the week, wanted to see the price of that coin rise.
This doesn't feel like that. The applications of generative AI have become self-evident to anyone that's followed their rise. Specific applications of AI resemble snake oil, and there are hucksters who pivoted from crypto to AI, but the ratio of legit use cases to scams isn't even close.
If anything, the incentives for embellishment have flipped since crypto. VC-funded AI companies will dreamily fire press releases about AI taking us to Mars, but it doesn't have the pseudo-grassroots quality of cryptocurrency hype. The average worker is incentivized to be an AI skeptic. The rise of generative AI threatens workers in several fields today, and has already negatively impacted copywriters and freelance artists. I absolutely understand why people in those fields would respond by calling AI use unethical and criticize the shortcomings of today's models.
We'll see what the next few years hold. But personally, I foresee AI integration ramping up. Even if the models themselves completely stagnate from this point on, there's a lot of missing glue between the models and the real world.
You don't have to be able to vibe code an entire business from scratch to know that the technology behind AI is significantly more impressive than VR, crypto, web3, etc. What the free version of ChatGPT can do right now, and not just in coding, would've been unimaginable to most people just 5 years ago.
Don't let people and companies using AI lazily to put out low-quality content blind you to its potential, or to the reality of what it can do right now. Look at Google's Veo 3: most people in the world right now won't be able to tell that its output is AI-generated and not real.
The value of those was always far-fetched, and required a critical mass adopting them before becoming potentially useful. But LLMs' value is much more immediate and doesn't require any change in the rest of the world. If you use one and are amplified by it, you are... simply better off.
In my small-minded opinion, LLMs are the better version of code completion. Search and time savings on an accelerated course.
They can’t write me a safety-critical video player meeting the spec with full test coverage using a proprietary signal that my customer would accept.
Frankly, I disagree that LLMs' value is immediate. What I do see is a whole lot of damage they're causing, just like the hype cycles before them. It's fine for us to disagree on this, but to say I'm burying my head in the sand, not wanting to accept "the future," is exactly the same hype-driven bullshit the crypto crowd was pushing.
That's why it's what I define as immediate value. It's undeniably incredibly amplifying to me, whether you or others agree or not. No network effect required. It doesn't matter whether I convince anyone else of the value, I can capture it all on my own. Unlike ponzi-schemes like web3 or VR experiences that require an entire shift in everyday life and an ecology of supporting software.
I don't need to convince anyone that LLMs are enabling me to do a lot more. This is what makes this hype different. It has bones. Once you've found a way to leverage them, they're undeniably helpful regardless of your prior disposition. Everyone else can say they're not useful and it rings hollow, because they obviously are to me. And thus probably useful to everyone else too.
Ah yes, please enjoy living in your moment and anticipating your entirely new world. I also hear all cars will be driving themselves soon and Jesus is coming back any day now.
I found it mildly amusing to contrast the puerile dismissiveness with your sole submission to this site: UK org's Red List of Endangered & Extinct crafts.
Adapt to your manager at bigcorp who is hyping the tech because it gives him something to do? No open source project is using the useless LLM shackles.
As if you'd know if they did.
Why would we not? If they were so effective, their effectiveness would be apparent, inarguable, and those making use of it would advertise it as a demonstration of just that. Even if there were some sort of social stigma against it, AI has enough proponents to produce copious amounts of counterarguments through evidence all on their own.
Instead, we have a tiny handful of one-off events that were laboriously tuned and tweaked and massaged over extended periods of time, and a flood of slop in the form of broken patches, bloated and misleading issues, and nonsense bug bounty attempts.
I think the main reason might be that when the output is good the developer congratulates themselves, and when it's bad they make a post or comment about how bad AI is.
Then the people who congratulate the AI for helping get yelled at by the other category.
As long as the AI people stay in their lane and work on their own projects, they're not getting yelled at. This is ignoring that AI has enough proponents to have enough projects of significant size. And even if they're getting shouted at from across the fence, again, AI has enough proponents who would brave getting yelled at.
We'd still have more than tortured, isolated one-offs. We should have at least one well-known codebase maintained through the power of Silicon Valley's top silicon-based minds.
I think it's pretty reasonable to take a CEO's - any CEO in any industry - statements with a grain of salt. They are under tremendous pressure to paint the most rosy picture possible of their future. They actually need you to "believe" just as much as their team needs to deliver.
Just a grain? I say take it with a gargantuan train loaded with salt on every car. An entire salt mine's worth. Markets, and CEOs, are downright insane, and they are the only ones who stand to profit from this situation; they have everything to gain.
IMO it is a mixture of stupidity and denial.
I am not a software engineer, but I just can't imagine my job not being automated within 10 years or less.
10 years is about the time between King – Man + Woman = Queen and now.
I think what is being highly underestimated is the false sense of security people feel because the jobs they interface with are also not automated, yet.
It is not hard to picture the network effect of automation: once one role is automated, the roles connected to it become easier to automate, and so on, while the models keep getting stronger at the same time.
I expect we will have a recession at some point and the jobs lost are gone forever.
Lawyers say those things, and then one law firm after another is frantically looking for a contractor to overpay to install a local RAG and chatbot combo.
In most professional industries getting to the right answer is only half the problem. You also need to be able to demonstrate why that is the right answer. Your answer has to stand up to criticism. If your answer is essentially the output of a very clever random number generator you can't ever do that. Even if an LLM could output an absolutely perfect legal argument that matched what a supreme court judge would argue every time, that still wouldn't be good enough. You'd still need a person there to be accountable for making the argument and to defend the argument.
Software isn't like this. No one cares why you wrote the code in your PR. They only care about whether it's right.
This is why LLMs could be useful in one industry and a lot less useful in another.
"It is difficult to get a man to understand something when his salary depends upon his not understanding it." - Upton Sinclair
Programmers derided programming languages (too inefficient, too inflexible, too dumbing-down) when assembly was still the default. That phenomenon is at the same time entirely to be expected but also says little about the actual qualities of the new technology.
If you have something that generates 20 lines of assembly that run 100x slower than the 2 lines of clever instructions you know, you'd have the same stance even if the higher level was easier to use. Then those kinds of performance tricks cease to matter. But reliability still does. And the reason we use higher and higher-level programming languages is that they increase reliability and simplicity (at the cost of performance, a cost we're happy to pay).
LLM output is unreliable, and productivity gains are still not proven across an end-to-end engineering cycle.
You are arguing based on the merits of the technology, which is fine, but wasn't my point. I was arguing that derision tends to happen no matter what, and thus doesn't indicate much about the merits of the technology one way or the other.
If the idea has staying power, then derision tends to be followed by hatred or anger. We're already seeing quite a bit of hatred and anger toward AI. I think even the people who are ridiculing AI currently would agree with the statement, "There will be more anger toward AI in the near future." As in, they already know it's here to stay, whether they admit it to themselves or not.
It seems like LLMs made really big strides for a while but don't seem to be getting better recently, and in some ways recent models feel a bit worse. I'm seeing some good results generating test code, and some really bad results when people go too far with LLM use on new feature work. Based on what I've seen, spinning up new projects and very basic features for web apps works really well, but that doesn't seem to generalize to refactoring or adding new features to big/old code bases.
I've seen Claude and ChatGPT happily hallucinate whole APIs for D3 on multiple occasions, which should be really well represented in the training sets.
> hallucinate whole APIs for D3 on multiple occasions, which should be really well represented in the training sets
With many existing systems, you can pull documentation into context pretty quickly to prevent the hallucination of APIs. In the near future it's obvious how that could be done automatically. I put my engine on the ground, ran it and it didn't even go anywhere; Ford will never beat horses.
It's true that manually constraining an LLM with contextual data increases its performance on that data (and reduces performance elsewhere), but that conflicts with the promise of AI as an everything machine. We were promised an everything machine, but if we have to not only provide it the proper context but already know what constitutes the proper context, then it is not in any way an everything machine.
Which means it's back to being a very useful tool, but not the earth-shattering disruptor we hoped (or worried) it would be.
Depends on how good they get at realizing they need more context and tool use to look it up for you.
How would they reliably recognize the context needed without the necessary context?
In the case of hallucinating a library, give it access to an IDE's autocomplete or a type checker or whatever, so it can check whether the functions it thinks exist actually do; if they don't, start feeding it documentation or type info about the library until it spits out stuff that type-checks (rough sketch below).
For other stuff this is obviously harder.
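A minimal sketch of that loop in Python, assuming a hypothetical llm_complete(prompt) helper standing in for whatever model API you use; the type checker (mypy here) is the only real tool being invoked:

    import subprocess
    import tempfile

    def generate_until_it_typechecks(task: str, library_docs: str, max_rounds: int = 3) -> str:
        prompt = task
        for _ in range(max_rounds):
            code = llm_complete(prompt)  # hypothetical stand-in for any LLM completion call
            with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
                f.write(code)
                path = f.name
            # Run the real type checker on the generated file.
            result = subprocess.run(["mypy", path], capture_output=True, text=True)
            if result.returncode == 0:
                return code  # it type-checks; good enough for a first cut
            # Feed the type errors plus real documentation back in and try again.
            prompt = (
                f"{task}\n\nYour last attempt failed type checking:\n{result.stdout}\n"
                f"Relevant library documentation:\n{library_docs}"
            )
        raise RuntimeError("still hallucinating after several rounds")

Agent tooling basically industrializes this kind of check-and-retry loop; the hard part is deciding which documentation to feed back in.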
The LLMs themselves are making marginal gains, but the tools for using LLMs productively are getting so much better.
This. MCP/tool usage in agentic mode is insanely powerful. Let the agent ingest a Gitlab issue, tell it how it can run commands, tests etc. in the local environment and half of the time it can just iterate towards a solution all by itself (but watching and intervening when it starts going the wrong way is still advisable).
Recently I converted all the (Google Docs) documentation of a project to markdown files and added those to the workspace. The workspace now indexes them with RAG and can easily find relevant bits of documentation, especially in agent mode.
It really stresses the importance of getting your documentation and processes in order as well as making sure the tasks at hand are well-specified. It soon might be the main thing that requires human input or action.
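To give an idea of what the retrieval side of that amounts to, here's a simplified sketch; the docs/ path is made up, and naive keyword overlap stands in for the real embedding index the workspace uses:

    from pathlib import Path

    def load_chunks(doc_dir: str = "docs") -> list[tuple[str, str]]:
        # Split each markdown file into paragraph-sized chunks.
        chunks = []
        for md in Path(doc_dir).glob("**/*.md"):
            for para in md.read_text().split("\n\n"):
                if para.strip():
                    chunks.append((str(md), para.strip()))
        return chunks

    def retrieve(query: str, chunks: list[tuple[str, str]], top_k: int = 3) -> list[tuple[str, str]]:
        # Rank chunks by keyword overlap with the query; a real setup would use embeddings.
        terms = set(query.lower().split())
        ranked = sorted(
            chunks,
            key=lambda c: len(terms & set(c[1].lower().split())),
            reverse=True,
        )
        return ranked[:top_k]  # these top chunks get pasted into the agent's context

The point isn't the retrieval mechanics; it's that well-structured markdown gives the agent something it can actually search.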
Every time I've tried to do that, it takes longer than it would take me, and comes up with fairly obtuse solutions. The Cursor agent seems incapable of putting code in the appropriate files in a functional language.
I 100% agree that documenting requirements will be the main human input to software development in the near future.
In fact, I built an entirely headless coding agent for that reason: you put tasks in, you get PRs out, and you get journals of each run for debugging. But it discourages micro-management, so you stay in planning/documenting/architecting.
> don't seem to be getting better recently
o3 came out just one month ago. Have you been using it? Subjectively, the gap between o3 and everything before it feels like the biggest gap I've seen since ChatGPT originally came out.
I haven't used it extensively, but toyed around with it for Elixir code and I wasn't particularly impressed.
I'd like to agree with you and remain optimistic, but so much tech has promised the moon and stagnated into oblivion that I just don't have any optimism left to give. I don't know if you're old enough, but remember when speech-to-text was the next big thing? Dragon NaturallySpeaking was released in 1997, everyone was losing their minds about dictating letters/documents in MS Word, and we were promised that THIS would be the key interface for computing evermore. And... 27 years later, talking to the latest Siri, it makes just as many mistakes as it did back then. In messenger applications people are sending literal voice notes -- audio clips -- back and forth because dictation is so unreliable. And audio clips are possibly the worst interface for communication ever (no searching, etc).
Remember how blockchain was going to change the world? Web3? IoT? Etc etc.
I've been through enough of these cycles to understand that, while the AI gimmick is cool and all, we're probably at the local maximum. The reliability won't improve much from here (hallucinations etc), while the costs to run it will stay high. The final tombstone will be when the AI companies stop running at a loss and actually charge for the massive costs associated with running these models.
> 27 years later, talking to the latest Siri, it makes just as many mistakes as it did back then
Have you tried talking to ChatGPT voice mode? It's mind blowing. You just have a conversation with it. In any language. About anything. The other day I wanted to know about the difference between cast iron and wrought iron, and it turned into a 10 or 15 minute conversation. That's maybe a good example of an "easy" topic for LLMs (lots of textbooks for it to memorize), but the world is full of easy topics that I know nothing about!
How can you possibly look at what LLMs are doing and the progress made in the last ~3 years and equate it to crypto bullshit? Also it's super weird to include IoT in there, seeing as it has become all but ubiquitous.
I'm not as bearish on AI, but it's hard to tell if you can really extrapolate future performance based on past improvements.
Personally, I'm more interested in the political angle. I can see that AI will be disruptive because there's a ton of money and possibly other political outcomes depending on it doing exactly that.
ChatGPT-4o is scary good at writing VHDL.
Using it to prototype some low level controllers today, as a matter of fact!
Claude and Gemini are decent at it as well. I was surprised when I asked Claude (and this was several months back) to come up with a testbench for some very old, poorly documented Verilog. It did a very decent job for a first-cut testbench. It even collected common, recurring code into Verilog tasks (functions), which really surprised me at the time.
Yes! It’s much better at using functional logic than I am - which I appreciate!
What kind of things is it doing?
I have a hard time imagining an LLM being able to do arbitrary things. It always feels like LLMs can do lots of the easy stuff, but if they can't do everything you need the skilled engineer anyway, who'd knock the easy things out in a week anyway.
> What kind of things is it doing?
Wrote me:
- a SPI deserializer that sets a bit after 12 bits read in, to trigger a prefetch
- an SDC constraints file for the deserializer that correctly identified the SPI clock and bus clock as separate domains requiring their own statement
- a test bench that validated both that the prefetch bit was being set, and that it was being set at the proper time relative to the SPI clock
- a makefile with commands for build, headless test, and debug by loading the VCD into a waveform viewer
> It always feels like LLMs can do lots of the easy stuff, but if they can't do everything you need the skilled engineer anyway, who'd knock the easy things out in a week anyway.
Nearly every part of the tool flow I just described, I would consider “tricky to get right”. Been doing this for ~15 years and it’s still tough to bootstrap something like this from scratch. ChatGPT-4o did this for me from zero in about 15 minutes.
I won’t lie: I love it. I can focus on the actual, bigger problems at hand, and not the tricky little details of HDLs.
People are either deluding themselves or ignorant of the capabilities of frontier models if they don’t believe LLMs offer a speedup in workflow.
I personally believe that most of the doubt and cynicism is due to:
1) a pretty big collective identity crisis among software professionals, and
2) a suspicion that LLMs make it so that anyone who is good at articulating the problem precisely no longer needs a software engineer as a translation specialist from specs to code.
I say this as an EE of ~15 years who’s always been able to articulate what I want, specifically, to a firmware counterpart, who then writes the code I need. I can turn years of practice in this skill into great prompts for an LLM, which effectively cuts out the middleman.
I really like it. It’s helped me take on a lot of projects that are just outside of my innate level of capability. It’s also helped me learn a lot of new things about these new software adjacent areas. ChatGPT is a great tutor!
Fair enough. I guess it's all about degrees - I work at a place where we use FPGAs for our hardware, and I find it really hard to imagine an LLM being remotely capable of solving the problems our FPGA guys do.
If the FPGA is mostly doing simpler stuff with lots of boilerplate, I can see current LLMs offering a lot for someone who doesn't regularly write code for them; I guess that's similar to the current case for software.
Using them to set up the initial flow is not a bad idea either - I know coworkers who use them to write the early code for a new system or driver, where it seems to work pretty well (probably because that's a huge part of the training set - loads of tutorials out there)
> I personally believe that most of the doubt and cynicism is due to:
> 1) a pretty big collective identity crisis among software professionals, and
> 2) a suspicion that LLMs make it so that anyone who is good at articulating the problem precisely no longer needs a software engineer as a translation specialist from specs to code.
... But "articulating the problem precisely" is a huge part of what software engineers do, and there's a mountain of evidence that other people are not very good at that.
I have a mountain of professional experience that indicates many software engineers are not very good at it either.
Why would I add a subpar translation layer into the process of achieving my goals? There’s no inherent value in that.
> Why would I add a subpar translation layer into the process of achieving my goals?
Because you don't have a choice. Your thoughts are not code.
I'd still take ChatGPT as that translation layer over all but the best SWEs I've worked with.
It's better-than-senior at some things, but worse-than-junior at a lot of things.
It's more like better-than-senior 99% of the time. Makes mistakes 1% of the time. Most of the 'bad results' I've seen people struggle with ended up being the fault of the human, in the form of horrible context given to the AI or else ambiguous or otherwise flawed prompts.
Any skilled developer with a decade of experience can write prompts that return precisely what they wanted almost every single time. I do it all day long. "Claude 4" rarely messes up.
Yet you are working on your own replacement, while your colleagues are taking the prudent approach.
Here’s the deal: if you won’t write your replacement, a competitor will do it and outprice your employer. Either way you’re out of a job. May be more prudent to adapt to the new tools and master them rather than be left behind?
Do you want to be a jobless weaver, or an engineer building mechanical looms for a higher pay than the weaver got?
I want to be neither. I either want to continue being a software engineer who doesn't need a tricycle for the mind, or move to law or medicine, two professions that have successfully defended themselves against extreme versions of the kind of anxiety, obedience and self-hate that is so prevalent among software engineers.
Nobody is preventing people from writing in assembly, even though we have more advanced languages.
You could even go back to punch cards if you want to. Literally nobody is forcing you not to use them for your own fun.
But LLMs are a multiplier in many mundane tasks (I'd say about 80+% of software development for businesses), so not using them is like fighting against using a computer because you like writing by hand.
That grass is not #00FF00 over there. See Cory's recent essay on Uber for nurses (doctors are next), and law is second only to coding on the AI disruptors' radar; plus both law and medicine have unfriendly hours for the most part.
Happy to hate myself but earn OK money for OK hours.
Funnily enough, I had a 3 or 4 hour chat with some co-workers yesterday about an LLM-related project, and my feeling about LLMs is that they're actually opening up a lot of fun and interesting software engineering challenges if you want to figure out how to automate the usage of LLMs.
I think it's the wrong analogy. The prompt engineer who uses the AI to make code maps to the poorly-paid, low-skill power loom machine tender. The "engineer" is the person who created the model. But it's also not totally clear to me that we'll need humans for that either, in the near future.
Still, the loom-tending engineer got paid better than the weaver. Well, my prejudice says he did; I don't actually have the historical facts.
Not all engineering is creating models though, sometimes there are simpler problems to solve.
I would absolutely love to write my own replacement. When I can have AI do my job while I go to the beach, you'd better believe I will be at the beach.
If AI can do your job, you don't have a job.
Compiler is the more apt analogy to a mechanical loom.
An LLM is more like outsourcing to a consultancy. Results may vary.
> Either way you’re out of a job.
Tools and systems which increase productivity famously always put everyone out of a job, which is why after a couple centuries of industrial revolution we're all unemployed.
This is kind of my point; people tend to not be silly enough to stay unemployed and starve. Instead when push comes to shove, sensible folks will adapt to the circumstances.
Ahh, the “don’t disturb the status quo” argument. See, we are all working on our replacement, newer versions, products, services and knowledge always make the older obsolete. It is wise to work on your replacement, and even wiser to be in charge of and operate the replacement.
No, nothing fundamentally new is created. Programmers have always been obsessed with "new" tooling and processes to distract from that fact.
"AI" is the latest iteration of snake oil that is foisted upon us by management. The problem is not "AI" per se, but the amount of of friction and productivity loss that comes with it.
Most of the productivity loss comes from being forced to engage with it and push back against that nonsense. One has to learn the hype language, debunk it, etc.
Why do you think IT has gotten better? Amazon had a better and faster website with far better search and products 20 years ago. No amount of "AI" will fix that.
Maybe it would be useful to zoom out a bit. We're in a time of technological change, and change is gonna come. Maybe it isn't your job that will change, maybe it is? Maybe it's not even about you or what you do. More likely it's the processes that will change around you. Maybe it's not change for better or worse. Maybe it's just change. But it's gonna change.
> It is wise to work on your replacement...
Depends on the context. You have to keep in mind: it is not a goal of our society or economic system to provide you with a stable, rewarding job. In fact, the incentives are to take that away from you ASAP.
Before software engineers go celebrate this tech, they need to realize they're going to end up like rust-belt factory workers the day after the plant closed. They're not special, and society won't be any kinder to them.
> ...and even wiser to be in charge of and operate the replacement.
You'll likely only get to do that if your boss doesn't know about it.
Speak for your own society, then. It should absolutely be our shared goal to keep as many people as possible in stable, rewarding employment; if not for compassion, then at least out of pure egoism: it's a lot more interesting to be surrounded by happy, educated people than an angry, poor mob.
Don’t let cynics rule your country. Go vote. There’s no rule that things have to stay awful.
Sure, but maybe we should do this before we make our own replacements. They aren't going to do it for us after the fact.
> You have to keep in mind: it is not a goal of our society or economic system to provide you with a stable, rewarding job. In fact, the incentives are to take that away from you ASAP.
We seem to agree, as this is more or less exactly my point. Striving to keep the status quo is a futile path. Eventually things change. Be ready. The best advice I've ever gotten, work-wise (and maybe even life-wise), is to always have alternatives. If you don't have alternatives, you literally have no choice.
> We seem to agree, as this is more or less exactly my point. Striving to keep the status quo is a futile path. Eventually things change. Be ready. The best advice I've ever gotten, work-wise (and maybe even life-wise), is to always have alternatives. If you don't have alternatives, you literally have no choice.
Those alternatives are going to be worse for you, because if they weren't, why didn't you switch already? And if a flood of your peers are pursuing alternatives at the same time, you'll probably experience an even poorer outcome than you expected (e.g. everyone getting laid off and trying to make ends meet driving for Uber at the same time). Then, AI is really properly understood as a "fuck the white-collar middle-class" tech, and it's probably going to fuck up your backup plans at about the same time as it fucks up your status quo.
You're also describing a highly individualistic strategy, for someone acting on his own. At this point, the correct strategy is probably collective action, which can at least delay and control the change to something more manageable. But software engineers have been too "special snowflake" about themselves to have laid the groundwork for that, and are acutely vulnerable.
Alternatives need not be better or worse, just different. They need not be doing the same thing somewhere else; it might be seeking out something else to do where you are. It might be selling all your stuff and living on an island in the sun, for all I know.
I do concur it is an individualistic strategy, and as you mentioned unionization might have helped. But, then again it might not. Developers are partially unionized where I live, and I'm not so sure it's going to help. It might absorb some of the impact. Let's see in a couple of years.
> Alternatives need not be better or worse, just different. They need not be doing the same thing somewhere else; it might be seeking out something else to do where you are.
People have families to feed and lifestyles to maintain, anything that's not equivalent will introduce hardship. And "different" most likely means worse, when it comes to compensation. Even a successful career change usually means restarting at the bottom of the ladder.
And what's that "something else," exactly? You need to consider that may be disrupted at the same time you're planning on seeking it, or fierce competition from your peers makes it unobtainable to you.
Assuming there are alternatives waiting for you when you'll need them is its own kind of complacency.
> It might be selling all your stuff and live on an island in the sun for all I know.
Yeah, people with the "fuck-you" money to do that will probably be fine. Most people don't have that, though.
Being ahead of the curve is a recipe for not being left behind. There is no better time for action than now. And regarding the competition from peers, the key is likely differentiation. As it always has been.
Hardship or not, restarting from the bottom of the ladder or not, betting on the status quo is a losing game at the moment. Software development is being disrupted; I would expect developers to produce 2-4x more now than two years ago. However, that is the pure dev work. The architecture, engineering, requirements, specification, etc. parts will likely see another trajectory, much due to the rise of automation in dev and other parts of the company. The flip side is that the rise of non-dev automation is coming, with the possibility of automating other tasks, in turn making engineers (maybe not devs, though) vital to the company's process change.
Another, semi-related, thought is that software development has automated away millions of jobs, and it's just developers' turn to be on the other end of the stick.
Cartel-forming doesn't work bottom-up. When changes begin (like this one with AI), one of the things an individual can do is change course as fast as they can. There are other strategies as well; not evolving is also one, but some strategies yield better results than others. Not keeping up just worsens the chances, I have found.
It does when it is called unionizing; however, for some reason software developers have a mental block about the concept.
I would be much more in favor of them as a software developer if I felt the qualifications of other software developers were reliable. But they never have been, and today it’s worse than ever. So I think it’s important to weed some of that out before we start talking about making it harder to fire people.
You're doing your boss's job for them by making this an engineer vs engineer thing instead of an engineer vs management thing.
No, today I can tell my manager when someone sucks at their job, and if enough other people are saying the same thing then that engineer will either be given help so they stop sucking or be fired so they don’t subtract from the team. Because this happens fairly often I wouldn’t want to add a lot of friction to that process. Idk, maybe you can protect people from being laid off without also complicating performance-based firings.
How does a union have an effect on what you described? Can a union worker not be fired for poor performance?
Unions make it harder because you actually have to document the poor performance in order to show you’re complying with the rules. It effectively makes the cost of firing (and thus, hiring) higher.
Seems that aiming twice and shooting once is a good thing when the cost of a miss is potentially destabilizing someone's entire personal life, and their whole family's.
I fail to see the issue with that as well.
The sentiment I often see online is that "fewer rules = more freedom". And as I've grown more experienced, I've found that the opposite is true. In a lot of cases, freedom is upheld by the rules themselves. And this is not an Orwellian "war is peace" kind of thing, but rather a GNU Free Software kind of thing, where the perpetuity of the freedom is guaranteed by the rules of participation.
I think that this discussion went in the same direction with the unions. But, of course, freedom in itself can have different interpretations for different people.
The reason might be that union members give a percentage of their income to a governing body, barely distinct from organized crime, in which they have no say. The federal government already exists. You really want more boots on your neck?
I kinda agree with you; more boots is not really an ideal way to achieve this. Worker protections should come from the government itself, so much so that there is no need to form unions, like how it is in many places in Europe. I don't see how that could be created in the US though. I think trade unions are more their vibe - or not even that, of course, as in your case. And of course, in US history we can see how some organizations grow and take on a life of their own, like the NRA, not necessarily remaining on the path of protecting past principles, or the welfare of the people.
Do you want to work with LLMs or H1Bs and interns… choose wisely.
Personally I’m thrilled that I can get trivial, one-off programs developed for a few cents and the cost of a clear written description of the problem. Engaging internal developers or consulting developers to do anything at all is a horrible experience. I would waste weeks on politics, get no guarantees, and waste thousands of dollars and still hear nonsense like, “you want a form input added to a web page? Aw shucks, that’s going to take at least another month” or “we expect to spend a few days a month maintaining a completely static code base” from some clown billing me $200/hr.
You can work with consulting oriented engineers who get shit done with relatively little stress and significant productivity. Productivity enhanced by AI but not replaced by it. If interested, reach out to me.
I don't think that this should be downvoted because it raises a really important issue.
I hate AI code assistants, not because they suck, but because they work. The writing is on the wall.
If we aren't working on our own replacements, we'll be the ones replaced by somebody else's vibe code, and we have no labor unions that could plausibly fight back against this.
So become a Vibe Coder and keep working, or take the "prudent" approach you mention - and become unemployed.
Personally I used them for a while and then just stopped using them because actually no, unfortunately those assistants don't work. They appear to work at first glance but there's so much babysitting needed that it's just not worth it.
This "vibe coding" seems just another way to say that people spend more time refining the output of these tools over and over again that what they would normally code.
I'm in this camp... today.
But there's going to be an inflection point - soon - as things continue to improve. The industry is going to change rapidly.
Now is the time to either get ready for that - by being ahead of the curve, at least by being familiar with the tooling - or switch careers and cede your job to somebody who will play ball.
I don't like any of this, but I see it as inevitable.
How is it inevitable when they are running out of text to train on and running out of funding at the same time?
You just said they worked one comment ago and now you agree that they don't?
I'm in the same camp in that I don't use them because they don't work well enough for my own tastes. But I know what I'm doing, and I'm picky.
Clearly they do work in a general sense. People who don't want to code are making things that work this way right now!
This isn't yet replacing me, but I'm certain it will relatively soon be the standard for how software is developed.
Maybe there's going to be an inflection point ... or maybe not; I feel like we're at the exact same point as the first release of Cursor in 2023.
I'll work on fixing the vibe coders' mess and make bank. Experience will prove valuable, even more than before.
Unrelated, but is this a case of the Pareto Principle? (Admittedly the first time I'm hearing of it) Wherein 80% of the effect is caused by 20% of the input. Or is this more a case of diminishing returns? Where the initial results were incredible, but each succeeding iteration seems to be more disappointing?
Pareto implies diminishing returns: if 20% of the effort got you 80% of the result, the remaining 20% of the result costs the other 80% of the effort.
> but each succeeding iteration seems to be more disappointing
This is because the scaling hypothesis (more data and more compute = gains) is plateauing: essentially all of the available text data has been used, and compute is reaching diminishing returns for reasons I'm not smart enough to explain, but it is.
So now we're seeing incremental core model advancements, variations and tuning in pre- and post-training stages, and a ton of applications (agents).
This is good, imo. But obviously it's not good for delusional valuations based on exponential growth.
We're seeing diminishing returns in benchmark space, which is partly an artefact of construction, not an absolutely true commentary on how things are progressing.
Well yes but there is no better way to measure without resorting to pure hearsay. How would you make an accurate assessment of something so inherently vague?
Alter the benchmark space that we care about; for example, focus only on ARC-AGI-2, and suddenly the gains are no longer diminishing but accelerating.