garciasn 7 days ago

This is the response to most new technologies; folks simply don't want to accept the future before the ramifications truly hit. If technology folk cannot see the INCREDIBLE LEAP FORWARD made by LLMs since ChatGPT came on the market, they're not seeing the forest for the trees because their heads are buried in the sand.

LLMs for coding are not even close to perfect yet, but the saturation curves are not flattening out; not by a long shot. We are living in a moment and we need to come to terms with it as the field continues to develop; and we need to adapt, quickly, to better understand what our place will become as this nascent tech continues its meteoric trajectory toward an entirely new world.

eikenberry 7 days ago

I don't think it is only (or even mostly) not wanting to accept it; I think it is in at least equal measure just plain skepticism. We've seen all sorts of wild statements about how much something is going to revolutionize X, and then it turns out to be nothing. Most people disbelieve these sorts of claims until they see real evidence for themselves... and that is a good default position.

chii 7 days ago

Hedging against the possibility of being displaced economically, before it happens, is always prudent.

If the future doesn't turn out to be revolutionary, you have done some "unnecessary" work at worst, but might have acquired some skills or value at least. In the case of most well-off programmers, I suspect buying assets/investments which can afford them at least a reasonable lifestyle is likely prudent too.

So the default position of being stationary, and assuming the world continues the way it has been, is not such a good idea. One should always assume the worst possible outcome, and plan for that.

0points 7 days ago

> One should always assume the worst possible outcome, and plan for that.

Maybe if you work e-commerce or in the military.

But how do you even translate this line of thought for today?

Are your EMP defenses up to speed?

Are you studying Russian and Chinese while selling kidneys in order to afford your retirement home on Mars?

My point being, you can never plan for every worst outcome. In reality you would have a secondary data center, backups and a working recovery routine.

None of which has anything to do with whether you use autocomplete or not.

0points 7 days ago

> If technology folk cannot see the INCREDIBLE LEAP FORWARD made by LLMs since ChatGPT came on the market, they're not seeing the forest for the trees because their heads are buried in the sand.

Look, we see the forest. We are just not impressed by it.

Having unlimited chaos monkeys at will is not revolutionizing anything.

const_cast 7 days ago

Lawyers don't even use version control software a lot of the time. They burn hundreds of paralegal hours reconciling revisions, a task that could be made 100x faster and easier with Git.
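
To make the Git point concrete, here is a minimal sketch of the kind of automated revision reconciliation being described. It uses Python's difflib as a stand-in for what `git diff` gives you out of the box; the contract text and file names are invented for illustration.

    # A minimal sketch of automated revision reconciliation, using Python's
    # difflib as a stand-in for `git diff`. The contract text and file names
    # below are made up for illustration.
    import difflib

    draft_v1 = """The Client shall pay all fees within 30 days of invoice.
    Either party may terminate this agreement with 60 days notice.
    """.splitlines(keepends=True)

    draft_v2 = """The Client shall pay all fees within 45 days of invoice.
    Either party may terminate this agreement with 60 days notice.
    Disputes shall be resolved by binding arbitration.
    """.splitlines(keepends=True)

    # Produce a unified diff: every changed, added, or removed line is
    # flagged automatically instead of being hunted down by hand.
    for line in difflib.unified_diff(draft_v1, draft_v2,
                                     fromfile="contract_v1.txt",
                                     tofile="contract_v2.txt"):
        print(line, end="")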

There's no guarantee a technology will take off, even if it's really, really good. Because we don't decide if that tech takes off - the lawyers do. And they might not care, or they might decide billing more hours is better, actually.

heartbreak 7 days ago

> billing more hours is better, actually

The guiding principle of biglaw.

Attorneys have the bar to protect them from technology they don’t want. They’ve done it many times before, and they’ll do it again. They are starting to entertain LLMs, but not in a way that would affect their billable hours.

dgfitz 7 days ago

“First thing we do, let’s kill all the lawyers”

History majors everywhere are weeping.

v3xro 7 days ago

Many of us would prefer to see the technological leaps be more evenly distributed (for example, clean drinking water that does not need to be boiled before consumption is still not a baseline everywhere in 2025). So if you want to adapt to your new and improved position where you are just pushing buttons, fine - but some of us are actually interested in how computers work (and are really quite uninterested in most companies' bottom lines). It's just how it is ;)

rxtexit 7 days ago

I think many people just settled in while we had no real technological change for 15 years. Real change, not an update to a web framework.

When I graduated high school, I had never been on the internet, nor did I know anyone who had. The internet was this vague "information superhighway" that I didn't really know what to make of.

If you are of a certain age, though, you would think a pointless update to React was all the change that was ever coming.

That time is over and we are back to reality.

otabdeveloper4 7 days ago

> you are ... le LUDDITE

Or maybe they just know the nitty-gritty inherent limitations of technology better than you.

(inb4: "LLMs can't have limitations! Wait a few years and they will solve literally every possible problem!")

ben-schaaf 7 days ago

Friendly reminder that people like you were saying the exact same thing about the metaverse, VR, web3, crypto, etc.

drodgers 7 days ago

Yes. If you judge only from the hype, then you can't distinguish LLMs from crypto, or Nuclear Weapons from Nuclear Automobiles.

If you always say that every new fad is just hype, then you'll even be right 99.9% of the time. But if you want to be more valuable than a rock (https://www.astralcodexten.com/p/heuristics-that-almost-alwa...), then you need to dig into the object-level facts and form an opinion.

In my opinion, AI has a much higher likelihood of changing everything very quickly than crypto or similar technologies ever did.

abootstrapper 7 days ago

I didn’t buy the hype of any of those things, but I believe AI is going to change everything, much like the introduction of the internet. People are dismissing AI because its code is not bug-free, completely ignoring the fact that it generates PRs in minutes from a poorly written text prompt. As if that’s not impressive. In fact, if you put a human engineer on the receiving end of the same prompt with the same context as what we’re sending to the LLM, I doubt they could produce code half as good in 10x the time. It’s science fiction coming true, and it’s only going to continue to improve.

ben-schaaf 7 days ago

Again, there were people just as sure about crypto as you are now about AI. They dismissed criticism because they thought the technology was impressive and revolutionary. That it was science fiction come true and only going to continue to improve. It's the exact same hype-driven rhetoric.

If you want to convince skeptics, talk about examples: vibe code a successful business, show off your success with using AI. Telling people it's the future, and that if you disagree you have your head in the sand, is wholly unconvincing.

tortasaur 7 days ago

As someone who gleefully followed along as the Web3 hype train derailed, an important distinction is that crypto turns every believer into a salesperson, by design. There were some that were truly passionate about the potential applications for blockchain technology, but by and large they were drowned out by people who, having poured $10k into the memecoin of the week, wanted to see the price of that coin rise.

This doesn't feel like that. The applications of generative AI have become self-evident to anyone that's followed their rise. Specific applications of AI resemble snake oil, and there are hucksters who pivoted from crypto to AI, but the ratio of legit use cases to scams isn't even close.

If anything, the incentives for embellishment have flipped since crypto. VC-funded AI companies will dreamily fire off press releases about AI taking us to Mars, but it doesn't have the pseudo-grassroots quality of cryptocurrency hype. The average worker is incentivized to be an AI skeptic. The rise of generative AI threatens workers in several fields today, and has already negatively impacted copywriters and freelance artists. I absolutely understand why people in those fields would respond by calling AI use unethical and criticizing the shortcomings of today's models.

We'll see what the next few years hold. But personally, I foresee AI integration ramping up. Even if the models themselves completely stagnate from this point on, there's a lot of missing glue between the models and the real world.

limflick 7 days ago

You don't have to be able to vibe code an entire business from scratch to know that the technology behind AI is significantly more impressive than VR, crypto, web3, etc. What the free version of ChatGPT can do right now, and not just in coding, would've been unimaginable to most people just 5 years ago.

Don't let people and companies using AI lazily to put out low-quality content blind you to its potential, or to the reality of what it can do right now. Look at Google's Veo 3: most people in the world right now won't be able to tell that it's AI-generated and not real.

AYBABTME 7 days ago

The value of those was always a stretch, and required a critical mass adopting them before becoming potentially useful. But LLMs' value is much more immediate and doesn't require any change in the rest of the world. If you use them and are amplified by them, you are... simply better off.

dgfitz 7 days ago

In my small-minded opinion, LLMs are a better version of code completion. Search and time savings on an accelerated course.

They can’t write me a safety-critical video player meeting the spec with full test coverage using a proprietary signal that my customer would accept.

ben-schaaf 7 days ago

Frankly, I disagree that LLMs' value is immediate. What I do see is a whole lot of damage they're causing, just like the hype cycles before them. It's fine for us to disagree on this, but to say I'm burying my head in the sand, not wanting to accept "the future", is exactly the same hype-driven bullshit the crypto crowd was pushing.

AYBABTME 6 days ago

That's why it's what I define as immediate value. It's undeniably, incredibly amplifying for me, whether you or others agree or not. No network effect required. It doesn't matter whether I convince anyone else of the value; I can capture it all on my own. Unlike Ponzi schemes like web3, or VR experiences that require an entire shift in everyday life and an ecology of supporting software.

I don't need to convince anyone that LLMs are enabling me to do a lot more. This is what makes this hype different: it has bones. Once you've found a way to leverage them, they're undeniably helpful regardless of your prior disposition. Everyone else can say they're not useful and it rings hollow, because they obviously are to me. And thus they're probably useful to everyone else too.

fumeux_fume 7 days ago

Ah yes, please enjoy living in your moment and anticipating your entirely new world. I also hear all cars will be driving themselves soon and Jesus is coming back any day now.

refulgentis 7 days ago

I found it mildly amusing to contrast the puerile dismissiveness with your sole submission to this site: UK org's Red List of Endangered & Extinct crafts.

bgwalter 7 days ago

Adapt to your manager at bigcorp who is hyping the tech because it gives him something to do? No open source project is using the useless LLM shackles.

xandrius 7 days ago

As if you'd know if they did.

jdiff 7 days ago

Why would we not? If they were so effective, their effectiveness would be apparent, inarguable, and those making use of them would advertise it as a demonstration of just that. Even if there were some sort of social stigma against it, AI has enough proponents to produce copious amounts of evidence-backed counterarguments all on their own.

Instead, we have a tiny handful of one-off events that were laboriously tuned and tweaked and massaged over extended periods of time, and a flood of slop in the form of broken patches, bloated and misleading issues, and nonsense bug bounty attempts.

xandrius 7 days ago

I think the main reason might be that when the output is good the developer congratulates themselves, and when it's bad they make a post or comment about how bad AI is.

Then the people who congratulate the AI for helping get yelled at by the other category.

jdiff 7 days ago

As long as the AI people stay in their lane and work on their own projects, they're not getting yelled at. And this ignores that AI has enough proponents to have projects of significant size of their own. And even if they're getting shouted at from across the fence, again, AI has enough proponents who would brave getting yelled at.

We'd still have more than tortured, isolated one-offs. We should have at least one well-known codebase maintained through the power of Silicon Valley's top silicon-based minds.