Note that we are the first wave of AI users. We are already well-equipped to ask the LLM the right questions. We already have experience with old-fashioned self-learning. So we only need some discipline to avoid skill atrophy.
But what happens with generations that will grow up with AI readily available? There is a good chance that there will be a generational skill atrophy in the future, as fewer people will be inclined to develop the experience required to use AI as a helper, but not depend on it.
I was learning a new cloud framework for a side project recently and wanted to ask my dad about it, since it's the exact same framework he's used at his job for many years and he'd know all sorts of things about it. I was expecting him to give me a few ideas or have a chat about a mutual interest, since this wasn't for income or anything. Instead all he said was "DeepSeek's pretty good, have you tried it yet?"
So I just went to DeepSeek instead and finished like 25% of my project in a day. It was the first time in my whole life that programming was not fun at all. I was just accomplishing work - for a side project at that. And it seems the LLMs are already more interested in talking to me about code than my dad who's a staff engineer.
I am going to use the time saved to practice an instrument and abandon the "programming as a hobby" thing unless there's a specific app I have a need for.
> It was the first time in my whole life that programming was not fun at all.
And learning new technologies in pursuit of resume-driven-development is fun?
I gotta say, if learning the intricacies of $LATEST_FAD is "fun" for you, then you're not really going to have a good time, employment-wise, in the age of AI.
If learning algorithms and data structures and their applicability in production is fun, then the age of AI is going to leave you with very in-demand skills.
> And learning new technologies in pursuit of resume-driven-development is fun?
Nothing to do with employment. I was just doing a "home-cooked app"[0] thing for fun that served a personal use case. Putting it on my resume would be a nice-to-have to prove I'm still sharpening my skills, but it isn't the reason I was developing the app to begin with.
What I think, at least, is that the administration and fault monitoring of lots of random machines and connected infrastructure in the cloud might be left somewhat untouched by AI for now. But if it's just about slinging some code to have an end product, LLMs are probably going to overtake that hobby in a few years (if anyone has such a weird hobby that they'd want to write a bunch of code because it's fun and not to show to employers).
Tons of AIOps stuff related to observability, monitoring, and remediation going on. In fact, I found that to be one of the big topics at KubeCon in London.
I find this to be an interesting anecdote because, at a certain level, for a long time the most helpful advice you could give was pointing to the best reference for the problem at hand, which might have been a book, a website, a wiki, or a Google search for Stack Overflow. Now a particular AI model might be the most efficient way to give someone a "good reference." I could certainly see someone recommending a model the same way they may have recommended a book or tutorial.
On the point of discussing code: a lot of cloud frameworks are boring but good. It usually isn't the interesting bit, and it is a relatively recent quirk that everyone seems to care more about the framework than about the thing you actually wanted to achieve. It's not a fun algorithm optimization, it's not a fun object modeling exercise, it's not some nichey math thing of note or whatever got them into coding in the first place. While I can't speak for your father, I haven't met a programmer who doesn't get excited to talk about at least one coding topic; this cloud framework just might not have been it.
> It usually isn't the interesting bit, and it is a relatively recent quirk that everyone seems to care more about the framework than about the thing you actually wanted to achieve. It's not a fun algorithm optimization, it's not a fun object modeling exercise, it's not some nichey math thing of note or whatever got them into coding in the first place.
I only read your comment after I posted mine, but my take is basically the same as yours: the GP thinks the IT learning-treadmill is fun and his dad doesn't.
It's not hard to see the real problem here.
I'm of two minds about this. I get more done with LLMs. I find the work I do assisted by an LLM less satisfying. I'm not sure if I actually enjoyed the work before, or if I just enjoyed accomplishing things. And now that I'm offloading a lot of the work, I'm also offloading a lot of the feeling of accomplishment.
I recently did a side project that at first I thought would be fun, pretty complex (for me, at least), and a good learning experience. I decided to see how far AI would get me. It did the whole project. It was so un-fun and unsatisfying. My conclusion was, it must not have been technically complex enough?
> There is a good chance that there will be a generational skill atrophy in the future
We already see this today: a lot of young people do not know how to type on keyboards, how to write in word processors, how to save files, etc. A significant part of a new generation is having to be trained on basic computer things, the same as our grandparents were.
It's very interesting how "tech savvy" and "tech competent" are two different things.
Those are all very specific, technical, IT-related skills. If the next generation doesn't know how to do those things, it's because they don't need to, not because they can't learn.
Except both corporations and academia require them, and it's likely you'll need them at some point in your everyday life too. You can't run many types of business on tablets and smartphones alone.
> Except both corporations and academia require them
And so the people who are aiming to go into that kind of work will learn these skills.
Academia is a tiny proportion of people. "Business" is larger but I think you might be surprised by just how much of business you can do on a phone or tablet these days, with all the files shared and linked between chats and channels rather than saved in the traditional sense.
As a somewhat related example, I've finally caved in to following all the marketing staff I hire and started using Canva. The only time you now need to "save a picture" is... never. You just hit share and send the file directly into the WhatsApp chat with the local print shop.
> Academia is a tiny proportion of people. "Business" is larger but I think you might be surprised by just how much of business you can do on a phone or tablet these days, with all the files shared and linked between chats and channels rather than saved in the traditional sense.
And this is exactly what is meant by generational skill atrophy. You no longer own your own files or manage your own data; it's all handled in cloud solutions outside of your control, on devices you barely understand, and in channels controlled by companies looking to earn a profit.
When any of those links break, you are suddenly non-functional. You can no longer access your files, you can no longer work on your device. This skill atrophy includes the ability to correctly analyze and debug problems with your devices or workflow in question.
...And the businessman in me tells me there will be a market for ever simpler business tools, because computer-illiterate people will still want to do business.
Yes, but from the rise of the PC to the iPhone those skills weren't field-specific. They were framed as the next life skill, home-ec skill, public-forum skill, etc., which meant the average kid or middle-class adult was being judged on whether they were working on them.
Jaron Lanier was a critic of the view that files were somehow an essential part of computing:
https://www.cato-unbound.org/2006/01/08/jaron-lanier/gory-an...
Typing on a keyboard, using files and writing on a word processor, etc. are accidental skills, not really essential skills. They're like writing cursive: we learned them, so we think naturally everybody must and lament how much it sucks that kids these days do not. But they don't because they don't need to: we now have very capable computing systems that don't need files at all, or at least don't need to surface them at the user level.
It could be that writing or understanding code without AI help turns out to be another accidental skill, like writing or understanding assembly code today. It just won't be needed in the future.
Waxing philosophical about accidental vs. essential kinda sweeps under the rug that it's a dimension orthogonal to what's practical for a given status quo. And that's what a lot of people care about, even if it's possible to win a conversation by deploying a boomer ad hominem.
I will lament that professionals with desk jobs can't touch-type. But not out of some "back in my day" bullshit. I didn't learn until my 20s. I eventually had an "oh no" realization that it would probably pay major dividends on the learning investment. It did. And then I knew.
I was real good at making excuses to never learn, too. Much more resistant than the students/fresh grads I've since convinced to learn.
Typing was only a universally applicable skill for maybe the past three or four decades. PCs were originally a hard sell among the C suite. You mean before I get anything out of this machine, I have to type things into it? That's what my secretary is for!
So if anything, we're going back to the past, when typing need only be learned by specialists who worked in certain fields: clerical work, data entry, and maybe programming.
> They're like writing cursive: we learned them, so we think naturally everybody must and lament how much it sucks that kids these days do not
Writing cursive may not be the most useful skill (though cursive italic is easy to learn and fast to write), but there's nothing quite like being able to read an important historical document (like the US Constitution) in its original form.
> But what happens with generations that will grow up with AI readily available? There is a good chance that there will be a generational skill atrophy in the future
Spot on. Look at the stark difference in basic tech troubleshooting abilities between millennials and gen z/alpha. Both groups have had computers most of their lives but the way that the computers have been "dumbed down" for lack of a better term has definitely accelerated that skill atrophy.
I'm far from an AI enthusiast, but concerning this:
> There is a good chance that there will be a generational skill atrophy in the future, as fewer people will be inclined to develop the experience required to use AI as a helper, but not depend on it.
I don't know how to care for livestock or what to do to prepare and can a pig or a cow. I could learn it. But I'll keep taking the path of least resistance and get it from my butcher. Or, to be more technological: I'd have to learn how to make a bare OS capable of booting on a motherboard, but that doesn't prevent me from deploying k8s clusters and coding apps to run on them.
> I don't know how to care for livestock or what to do to prepare and can a pig or a cow. I could learn it. But I'll keep taking the path of least resistance and get it from my butcher
You'd sing a different tune if there was a good chance of being poisoned by your butcher.
The two examples you chose are obvious choices because the dependencies you have are reliable. You trust their output and methodologies. Now think about current LLM-based agents running your bank account, deciding on loans,...
Sure, but we will still need people in future generations to want to learn how to butcher and then actually follow through on being butchers. I guess the implied fear is that people who lack fundamentals and are reliant on AI become subordinate to the machine's whimsy, rather than the other way around.
Maybe it's not so much that it prevents anything; rather, it will edge us toward a future where all we get is a jpeg of a jpeg of a jpeg. I.e., everything will be an Electron app or some other generational derivative not yet envisioned, many steps removed from competent engineering.
Lying is pretty amazingly useful. How are you going to teach your kid to not use that magical thing that solves every possible problem? - Louis C.K.
Replace lying with LLM and all I see is a losing battle.
This is a great quote, but for the opposite reason. Lying has been an option forever - people learn how to use it and how not to use, as befits their situation and agenda. The same will happen with AI. Society will adapt, us first-AI-users will use it far differently than people in 10, 20, 30+ years. Things will change, bad things will happen, good things will happen, maybe it will be Terminator, maybe it will be Star Trek, maybe it will be Star Wars or Mad Max or the Culture.
Current parents, though, aren't going to teach kids how to use it, kids will figure that out and it will take a while.
Remember that even the Star Trek utopia only happened after a nuclear WW3 that started in 2026 and lasted for 30+ years.
> WW3 that started in 2026
I thought it was cute when we had the "anniversary" for Back to the Future's timestamp, but for that one ... "too soon, man"
We also grew up with the internet, and the newer generation is having a hard time following it.
However, we were born after the invention of photography, and look at the havoc it's wreaking with post-truth.
The answer to that lies in reforming the education system so that we teach kids digital hygiene.
How on earth do we still teach kids Latin in some places but not Python? It's just an example, extrapolate Python to everything tech that is needed for us to have a healthy relationship with tech.
I've long maintained that kids must learn end to end what it takes to put content on the web themselves (registering a domain, writing some html, exposing it on a server, etc.) so they understand that _truly anyone can do this_. Learning both that creating "authoritative" looking content is trivial and that they are _not_ beholden to a specific walled garden owner in order to share content on the web.
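To make that concrete, the "writing some html, exposing it on a server" part can be boiled down to a handful of standard-library Python lines. This is a local-only sketch: the file name, port, and page content here are arbitrary choices, and buying a domain and putting it on a real server are the remaining steps.

    # serve_page.py - a hand-written page served with only the Python standard library.
    # Local-only sketch: domain registration and public hosting are separate steps.
    from http.server import HTTPServer, SimpleHTTPRequestHandler
    from pathlib import Path

    # Write a tiny page by hand, with no framework or site builder involved.
    Path("index.html").write_text(
        "<!doctype html>\n"
        "<html><body><h1>My page</h1><p>Anyone can publish this.</p></body></html>\n"
    )

    # Serve the current directory at http://localhost:8000/
    HTTPServer(("", 8000), SimpleHTTPRequestHandler).serve_forever()

Once a kid has typed those dozen lines themselves, the idea that "authoritative"-looking content is trivial to produce stops being abstract.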
> It's just an example, extrapolate Python to everything tech that is needed for us to have a healthy relationship with tech.
Perhaps that's also a reason why - tech is so large, there's no time in a traditional curriculum to teach all of it. And only teaching what's essential is going to be tricky because who gets to decide what's essential? And won't this change over time?
I don't think that argument holds. If you're going to pick anything in Tech to teach the masses, Python is a pretty good candidate.
There is no perfect solution, but most imperfect attempts are superior to doing nothing.
I'd argue it's a bad candidate because it doesn't run in a normal person's computing environment. I can't write a Python application, give it to another normie, and have them be able to run it; it doesn't run on a phone, and it doesn't run in a web browser.
So it's teaching them a language they can't use to augment their own work or pass their work on to other non-techies.
I'm not sure that's what we're solving for. There is no silver bullet. No single language runs on every phone.
If we're teaching everyone some language, we could very much decide that this language ought to be installed in the "normal person computing environment".
I definitely don't want people to learn to write code from JavaScript as it has way too many issues to be deemed representative of the coding experience.
What normal person computing environment has tools to program? Only thing I can think of is spreadsheet functions.
Javascript addresses most of your concerns, if you also teach how to deploy it.
(I'm guessing that's what you were hinting at.)
Yes, you can, actually.
PyInstaller will produce PE, ELF, and Mach-O executables, and py2wasm will produce wasm modules that will run in just about any modern browser.
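For concreteness, roughly how that packaging step looks. The tool names are real, but the exact flags can vary by version, so treat the invocations as a sketch rather than canon.

    # hello.py - the trivial program being packaged below.
    print("Hello from a packaged Python program")

    # Handing this to someone without Python installed (illustrative invocations;
    # check each tool's docs for the current flags):
    #
    #   pip install pyinstaller
    #   pyinstaller --onefile hello.py    # standalone PE/ELF/Mach-O binary lands in dist/
    #
    #   pip install py2wasm
    #   py2wasm hello.py -o hello.wasm    # WebAssembly module for the browser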
How is someone just learning coding expected to understand half the words you just typed?
Are grammar rules surrounding past participles and infinitives, or the history of the long-dead civilizations that were ultimately little more than footnotes throughout history really more important than basic digital literacy?
Some people would argue that understanding ancient civilizations and cultures is a worthy goal. I don't think it has to be an either/or thing.
Also digital literacy is a fantastic skill - I'm all for it. And I think that digital (and cultural) literacy leads me to wonder if AI is making the human experience better, or if it is primarily making corporations a lot of money to the detriment of the majority of people's lives.
Right - if you see these things as useless trivia, why waste your time with them when you could be getting trained by your betters on the most profitable current form of ditch-digging?
It likely no longer matters. Not in the sense that AI replaces programmers and engineers, but it is a fact of life. Like GPS replacing paper navigation skills.
I grew up never needing paper maps. Once I got my license, GPS was ubiquitous. Most modern paper maps are quite the same as Google Maps or equivalents would be though. The underlying core material is the same so I don't think most people would struggle to read it.
I think learning and critical thinking are skills in and of themselves and if you have a magic answering machine that does not require these skills to get an answer (even an incorrect one), it's gonna be a problem. There are already plenty of people that will repeat whatever made up story they hear on social media. With the way LLMs hallucinate and even when corrected double down, it's not going to make it better.
>Most modern paper maps are quite the same as Google Maps or equivalents would be though. The underlying core material is the same so I don't think most people would struggle to read it.
That's absolutely not the case: paper maps don't have a blue dot showing your current location. Paper maps are full of symbols and conventions, and they have a fixed scale...
Last year I bought a couple of paper maps and went hiking. And although I am trained in reading paper maps and orientating myself, and the area itself was not that wild and was full of features, still I had moments when I got lost, when I had to backtrack and when I had to make a real effort to translate the map. Great fun, though.
Relevant game that was posted recently:
3D Army Land Navigation Courses - https://news.ycombinator.com/item?id=43624799 - April 2025 (46 comments)
This is the worst form of AI there will ever be; it will only get better. So traditional self-learning might be completely useless if it really gets much better.
> it will only get better
I wanted to highlight this assumption, because that's what it is, not a statement of truth.
For one, it doesn't really look like the current techniques we have for AI will scale to the "much better" you're talking about -- we're hitting a lot of limits where just throwing more money at the same algorithms isn't producing the giant leaps we've seen in the past.
But also, it may just end up that AI provider companies aren't infinite growth companies, and once companies aren't able to print their own free money (stock) based on the idea of future growth, and they have to tighten their purse strings and start charging what it actually costs them, the models we'll have realistic, affordable access to will actually DECREASE.
I'm pretty sure the old fashioned, meat-based learning model is going to remain price competitive for a good long while.
The real problem with AI is that you will never have an AI. You will have access to somebody else's AI, and that AI will not tell you the truth, or tell you what advances your interests... it'll tell you what advances its owner's interests. Already the public AIs have very strong ideological orientations, even if they are today the ones that the HN gestalt also happens to agree with, and if they aren't already today pushing products in accordance with some purchased advertising... well... how would you tell? It's not like it's going to tell you.
Perhaps some rare open source rebels will hold the line, and perhaps it'll be legal to buy the hardware to run them, and maybe the community will manage to keep up with feature parity with the commercial models, and maybe enough work can be done to ensure some concept of integrity in the training data, especially if some future advance happens to reduce the need for training data. It's not impossible, but it's not a sure thing, either.
In the super long run this could even grow into the major problem that AIs have, but based on how slow humanity in general has been to pick up on this problem in other existing systems, I wouldn't even hazard a guess as to how long it will take to become a significant economic force.
> The real problem with AI is that you will never have an AI.
I wanted to draw attention to Moore's Law and the supercomputer in your pocket (some of them even ship with on-board inference hardware). I hear you that the newest, hottest thing will always require lighting VC money on fire, but even today I believe one could leverage the spot (aka preemptible) market to run some pretty beefy inference without going broke.
Unless I perhaps misunderstood the thrust of your comment and you were actually drawing attention to the infrastructure required to replicate Meta's "download all the web, and every book, magazine, and newspaper to train upon petabytes of text"
Marc Andreessen has pretty much outright acknowledged that he and many others in Silicon Valley supported Trump because of the limits the Biden-Harris administration wanted to put on AI companies.
So yeah, the current AI companies are making it very difficult for public alternatives to emerge.
Makes sense. I also don't think LLMs are that useful or will improve much, but I meant it in a more general sense: it seems like there will eventually be much more capable technology than LLMs. I also agree it could be worse X months/years from now, so what I wrote doesn't make that much sense in that way.
I felt this way until 3.7 and then 2.5 came out, and o3 now too. Those models are clear step-ups from the models of mid-to-late 2024, when all the talk of stalling was coming out.
None of this includes hardware optimizations either, which lags software advances by years.
We need 2-3 years of plateauing to really say intelligence growth is exhausted; we have just been so inundated with rapid advances that small gaps seem like the party ending.
I can get productivity advantages from using power tools, yet regular exercise has great advantages, too.
It's a bit similar with the brain, learning and AI use. Except when it comes to gaining and applying knowledge, the muscle that is trained is judgement.
Meanwhile, in 1999, somewhere on Slashdot:
"This is the worst form of web there will ever be; it will only get better."
Great way to put it. People who can't imagine a worse version are sorely lacking imagination.
I for one can't wait to be force fed ads with every answer.
People say this, but the models seem to be getting worse over time.
Are you saying the best models are not the ones out today, but those of the past? I don't see that happening with the increased competition, nobody can afford it, and it disagrees with my experience. Plateauing, maybe, but that's only as far as my ability to discern.
Models are getting better; Gemini 2.5 Pro, for example, is incredible. Compared to what we had a year ago, it's on a completely different level.
That's optimistic. Sci-fi has taught us that way worse forms of AI are possible.
Seems like the opposite could be true though. AI models now have all been trained on real human-generated texts but as more of the web gets flooded with slop the models will be increasingly trained on their own outputs.
I have this idea that a lot of issues we are having today are not with concrete thing X, but with concrete thing X running amok in this big, big world of ours. Take AI for example: give a self-aware, slightly evil AI to physically and news-isolated medieval villagers somewhere. If they survive the AI's initial havoc, they will apply their lesson right away. Maybe they will isolate the AI in a cave with a big boulder on the door, to be removed only when the village needs advice regarding the crops or some disease. Kids getting near that thing? No way. It was decided in a town hall that that was a very bad idea.
Now, compare that with our world: even if thing X is obviously harming the kids, there is nothing we can do.
It’s still unconvincing that the shift to AI is fundamentally different than the shift to compiled languages, the shift to high level languages, the shift to IDEs, etc. In each of those stages something important was presumably lost.
The shift to compiled languages and from compiled languages to high level languages brought us Wirth's law.
> But what happens with generations that will grow up with AI readily available? There is a good chance that there will be a generational skill atrophy in the future, as fewer people will be inclined to develop the experience required to use AI as a helper, but not depend on it.
Just like there is already generational gap with developers who don't understand how to use a terminal (or CS students who don't understand what file systems are).
AI will ensure there are people who don't think and just outsource all of their thinking to their llm of choice.
This is going to be like that thing where we have to fix printers for the generation above and below us isn't it, haha
Damn kids, you were supposed to be teasing me for not knowing how the new tech works by now.
Is AI going to be meaningfully different from vanilla Google searching, though? The difference is a few extra clicks to yield mostly the same level of results.