afiodorov 2 hours ago

Regarding the common analogy about GPS atrophying map skills, I have a slightly different take based on observation and experience. My dad, who learned to drive pre-GPS, struggles to simultaneously drive and follow navigation – it's too much input, too fast. He needs a co-pilot or pre-planning.

For those of us who learned to drive with GPS, however, it wasn't simply about forgoing maps. It was about developing the distinct skill of processing navigation prompts while simultaneously managing the primary task of driving. This integration required practice; like many, I took plenty of wrong roundabout exits before it became second nature. Indeed, this combined skill is arguably so fundamental now that driving professionally without the ability to effectively follow GPS might be disqualifying – it's hard to imagine any modern taxi or ride-share company hiring someone who lacks this capability. So, rather than deskilling, this technology has effectively raised the bar, adding a complex, necessary layer to the definition of a competent driver today.

I see a parallel with AI and programming. The focus is often on what might be lost, but I think we should also recognise the new skill emerging: effectively guiding, interpreting, and integrating AI into the development process. It's not just 'programming' anymore, it's 'programming-with-AI', and mastering that interaction is the next challenge.

gchamonlive 1 day ago

We can finally just take a photo of a textbook problem that has no answer reference and no discussion about it, and prompt an LLM to help us understand what's missing in our understanding of the problem, whether our solution is plausible, and how we could verify it.

The LLM changed nothing though. It's just boosting people's intention. If your intention is to learn, you are in luck! It's never been easier to teach yourself some skill for free. But if you just want to be a poser and fake it until you make it, you are gonna get brainrot waaaay faster than usual.

m000 1 day ago

Note that we are the first wave of AI users. We are already well-equipped to ask the LLM the right questions. We already have experience with old-fashioned self-learning. So we only need some discipline to avoid skill atrophy.

But what happens with generations that will grow up with AI readily available? There is a good chance that there will be a generational skill atrophy in the future, as fewer people will be inclined to develop the experience required to use AI as a helper, but not depend on it.

doright 1 day ago

I was learning a new cloud framework for a side project recently and wanted to ask my dad about it since it's the exact same framework he's used for his job for many years, so he'd know all sorts of things about it. I was expecting him to give me a few ideas or have a chat about a mutual interest since this wasn't for income or anything. Instead all he said was "DeepSeek's pretty good, have you tried it yet?"

So I just went to DeepSeek instead and finished like 25% of my project in a day. It was the first time in my whole life that programming was not fun at all. I was just accomplishing work - for a side project, at that. And it seems the LLMs are already more interested in talking to me about code than my dad, who's a staff engineer.

I am going to use the time saved to practice an instrument and abandon the "programming as a hobby" thing unless there's a specific app I have a need for.

xemdetia 1 day ago

I find this an interesting anecdote because, for a long time, the most helpful advice you could give was to point at the best reference for the problem at hand – which might have been a book, a website, a wiki, or a Google search for Stack Overflow – and now a particular AI model might be the most efficient way to give someone a 'good reference.' I could certainly see someone recommending a model the same way they may have recommended a book or tutorial.

On the point of discussing code: a lot of cloud frameworks are boring but good. It usually isn't the interesting bit and it is a relatively recent quirk that everyone seems to care more about the framework compared to the thing you actually wanted to achieve. It's not a fun algorithm optimization, it's not a fun object modeling exercise, it's not some nichey math thing of note or whatever got them into coding in the first place. While I can't speak for your father, I haven't met a programmer who doesn't get excited to talk about at least one coding topic; this cloud framework just might not have been it.

lelanthran 1 day ago

> It usually isn't the interesting bit and it is a relatively recent quirk that everyone seems to care more about the framework compared to the thing you actually wanted to achieve. It's not a fun algorithm optimization, it's not a fun object modeling exercise, it's not some nichey math thing of note or whatever got them into coding in the first place.

I only read your comment after I posted mine, but my take is basically the same as yours: the GP thinks the IT learning-treadmill is fun and his dad doesn't.

It's not hard to see the real problem here.

Taylor_OD 4 hours ago

I'm of two minds about this. I get more done with LLMs. I find the work I do assisted by LLMs less satisfying. I'm not sure if I actually enjoyed the work before, or if I just enjoyed accomplishing things. And now that I'm offloading a lot of the work, I'm also offloading a lot of the feeling of accomplishment.

lelanthran 1 day ago

> It was the first time in my whole life that programming was not fun at all.

And learning new technologies in pursuit of resume-driven-development is fun?

I gotta say, if learning the intricacies of $LATEST_FAD is "fun" for you, then you're not really going to have a good time, employment-wise, in the age of AI.

If learning algorithms and data structures and their applicability in production is fun, then the age of AI is going to leave you with very in-demand skills.

doright 1 day ago

> And learning new technologies in pursuit of resume-driven-development is fun?

Nothing to do with employment. I was just doing a "home-cooked app"[0] thing for fun that served a personal use case. Putting it on my resume would be a nice-to-have to prove I'm still sharpening my skills, but it isn't the reason I was developing the app to begin with.

What I think, at least, is that the administration and fault monitoring of lots of random machines and connected infrastructure in the cloud might be left somewhat untouched by AI for now. But if it's just about slinging some code to have an end product, LLMs are probably going to overtake that hobby in a few years (if anyone has such a weird hobby that they'd want to write a bunch of code because it's fun and not to show to employers).

[0] https://www.robinsloan.com/notes/home-cooked-app/

ghaff 2 hours ago

Tons of AIOps stuff related to observability, monitoring, and remediation going on. In fact, I found that to be one of the big topics at KubeCon in London.

financypants 3 hours ago

I recently did a side project that at first I thought would be fun, pretty complex (for me, at least), and a good learning experience. I decided to see how far AI would get me. It did the whole project. It was so un-fun and unsatisfying. My conclusion was, it must not have been technically complex enough?

dlisboa 1 day ago

> There is a good chance that there will be a generational skill atrophy in the future

We already see this today: a lot of young people do not know how to type on keyboards, how to write in word processors, how to save files, etc. A significant part of a new generation is having to be trained on basic computer things, the same as our grandparents were.

It's very interesting how "tech savvy" and "tech competent" are two different things.

esperent 10 hours ago

Those are all very specific, technical, IT-related skills. If the next generation doesn't know how to do those things, it's because they don't need to, not because they can't learn.

CM30 7 hours ago

Except both corporations and academia require them, and it's likely you'll need them at some point in your everyday life too. You can't run many types of business on tablets and smartphones alone.

esperent 6 hours ago

> Except both corporations and academia require them

And so the people who are aiming to go into that kind of work will learn these skills.

Academia is a tiny proportion of people. "Business" is larger but I think you might be surprised by just how much of business you can do on a phone or tablet these days, with all the files shared and linked between chats and channels rather than saved in the traditional sense.

As a somewhat related example, I've finally caved in to following all the marketing staff I hire and started using Canva. The only time you now need to "save a picture" is... never. You just hit share and send the file directly into the WhatsApp chat with the local print shop.

Phanteaume 6 hours ago

...And the businessman in me tells me there will be a market for ever simpler business tools, because computer-illiterate people will still want to do business.

satanfirst 8 hours ago

Yes, but they weren't field-specific from the rise of the PC to the iPhone. Whether it was the next life skill, home-ec skill, public forum, etc., the average kid or middle-class adult was being judged on whether they were working on these skills.

bitwize 1 day ago

Jaron Lanier was a critic of the view that files were somehow an essential part of computing:

https://www.cato-unbound.org/2006/01/08/jaron-lanier/gory-an...

Typing on a keyboard, using files and writing on a word processor, etc. are accidental skills, not really essential skills. They're like writing cursive: we learned them, so we think naturally everybody must and lament how much it sucks that kids these days do not. But they don't because they don't need to: we now have very capable computing systems that don't need files at all, or at least don't need to surface them at the user level.

It could be that writing or understanding code without AI help turns out to be another accidental skill, like writing or understanding assembly code today. It just won't be needed in the future.

musicale 14 hours ago

> They're like writing cursive: we learned them, so we think naturally everybody must and lament how much it sucks that kids these days do not

Writing cursive may not be the most useful skill (though cursive italic is easy to learn and fast to write), but there's nothing quite like being able to read an important historical document (like the US Constitution) in its original form.

dogleash 1 day ago

Waxing philosophical about accidental/essential kinda sweeps under the rug that it's an orthogonal dimension to practicality under a given status quo. And that's what a lot of people care about, even if it's possible to win a conversation by deploying boomer ad hominem.

I will lament that professionals with desk jobs can't touch-type. But not out of some "back in my day" bullshit. I didn't learn until my 20s. I eventually had an "oh no" realization that it would probably pay major dividends on the learning investment. It did. And then I knew.

I was real good at making excuses to never learn, too. Much more resistant than the students/fresh grads I've since convinced to learn.

bitwize 20 hours ago

Typing was only a universally applicable skill for maybe the past three or four decades. PCs were originally a hard sell among the C-suite: "You mean before I get anything out of this machine, I have to type things into it? That's what my secretary is for!"

So if anything, we're going back to the past, when typing need only be learned by specialists who worked in certain fields: clerical work, data entry, and maybe programming.

noboostforyou 1 day ago

> But what happens with generations that will grow up with AI readily available? There is a good chance that there will be a generational skill atrophy in the future

Spot on. Look at the stark difference in basic tech troubleshooting abilities between millennials and gen Z/alpha. Both groups have had computers most of their lives, but the way that computers have been "dumbed down", for lack of a better term, has definitely accelerated that skill atrophy.

raincole 1 day ago

"Lying is pretty amazingly useful. How are you going to teach your kid to not use that magical thing that solves every possible problem?" - Louis C.K.

Replace lying with LLM and all I see is a losing battle.

gilbetron 1 day ago

This is a great quote, but for the opposite reason. Lying has been an option forever - people learn how to use it and how not to use it, as befits their situation and agenda. The same will happen with AI. Society will adapt; we first-wave AI users will use it far differently than people in 10, 20, 30+ years will. Things will change, bad things will happen, good things will happen; maybe it will be Terminator, maybe it will be Star Trek, maybe it will be Star Wars or Mad Max or the Culture.

Current parents, though, aren't going to teach kids how to use it; kids will figure that out, and it will take a while.

Taylor_OD 4 hours ago

Remember that even the Star Trek utopia only happened after a nuclear WW3 that started in 2026 and lasted for 30+ years.

mdaniel 3 hours ago

> WW3 that started in 2026

I thought it was cute when we had the "anniversary" for Back to the Future's timestamp, but for that one ... "too soon, man"

arkh 1 day ago

I'm far from an AI enthusiast, but concerning this:

> There is a good chance that there will be a generational skill atrophy in the future, as fewer people will be inclined to develop the experience required to use AI as a helper, but not depend on it.

I don't know how to care for livestock, or how to prepare and can a pig or a cow. I could learn it. But I'll keep taking the path of least resistance and get it from my butcher. Or to be more technological: I've never learned how to make a bare OS capable of booting from a motherboard, but that doesn't prevent me from deploying k8s clusters and coding apps to run on them.

skydhash 1 day ago

> I don't know how to care for livestock, or how to prepare and can a pig or a cow. I could learn it. But I'll keep taking the path of least resistance and get it from my butcher

You'd sing a different tune if there was a good chance of being poisoned by your butcher.

The two examples you chose are obvious choices because the dependencies you have are reliable. You trust their output and methodologies. Now think about current LLM-based agents running your bank account, deciding on loans, ...

kevinsync 1 day ago

Sure, but we will still need people in future generations who want to learn how to butcher and then actually follow through on being butchers. I guess the implied fear is that people who lack fundamentals and are reliant on AI become subordinate to the machine's whimsy, rather than the other way around.

jofla_net 1 day ago

Maybe it's not so much that it prevents anything; rather, it will hedge toward a future where all we get is a jpeg of a jpeg of a jpeg, i.e. everything will be an Electron app or some other generational derivative not yet envisioned, many steps removed from competent engineering.

trefoiled 20 hours ago

If your butcher felt the same way you did, he wouldn't exist

gchamonlive 1 day ago

We also grew up with the internet, and the newer generation is having a hard time following it.

However, we were born after the invention of photography, and look at the havoc it's wreaking with post-truth.

The answer to that lies in reforming the education system so that we teach kids digital hygiene.

How on earth do we still teach kids Latin in some places but not Python? It's just an example, extrapolate Python to everything tech that is needed for us to have a healthy relationship with tech.

lostphilosopher 1 day ago

I've long maintained that kids must learn end to end what it takes to put content on the web themselves (registering a domain, writing some html, exposing it on a server, etc.) so they understand that _truly anyone can do this_. Learning both that creating "authoritative" looking content is trivial and that they are _not_ beholden to a specific walled garden owner in order to share content on the web.

sjamaan 1 day ago

> It's just an example, extrapolate Python to everything tech that is needed for us to have a healthy relationship with tech.

Perhaps that's also a reason why - tech is so large, there's no time in a traditional curriculum to teach all of it. And only teaching what's essential is going to be tricky because who gets to decide what's essential? And won't this change over time?

airstrike 1 day ago

I don't think that argument holds. If you're going to pick anything in Tech to teach the masses, Python is a pretty good candidate.

There is no perfect solution, but most imperfect attempts are superior to doing nothing.

whywhywhywhy 1 day ago

I'd argue it's a bad candidate because it doesn't run in a normal person's computing environment. I can't write a Python application, give it to another normie, and have them be able to run it; it doesn't run on a phone, and it doesn't run in a web browser.

So it's teaching them a language they can't use to augment their work or pass their work to other non-techies.

harvey9 8 hours ago

What normal person's computing environment has tools to program? The only thing I can think of is spreadsheet functions.

airstrike 23 hours ago

I'm not sure that's what we're solving for. There is no silver bullet. No single language runs on every phone.

If we're teaching everyone some language, we could very much decide that this language ought to be installed in the "normal person computing environment".

I definitely don't want people to learn to write code from JavaScript as it has way too many issues to be deemed representative of the coding experience.

jimbokun 1 day ago

Javascript addresses most of your concerns, if you also teach how to deploy it.

(I'm guessing that's what you were hinting at.)

anonym29 1 day ago

Yes, you can, actually.

PyInstaller will produce PE, ELF, and Mach-O executables, and py2wasm will produce wasm modules that will run in just about any modern browser.
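
Roughly, as a sketch only (`app.py` is a placeholder, and exact flags can vary by tool version):

  # single-file native executable for the current platform
  pyinstaller --onefile app.py

  # WebAssembly module (run it with a wasm runtime, or in a browser via a JS shim)
  py2wasm app.py -o app.wasm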

whywhywhywhy 1 day ago

How is someone just learning coding expected to understand half the words you just typed?

anonym29 1 day ago

Are grammar rules surrounding past participles and infinitives, or the history of the long-dead civilizations that were ultimately little more than footnotes throughout history really more important than basic digital literacy?

UtopiaPunk 1 day ago

Some people would argue that understanding ancient civilizations and cultures is a worthy goal. I don't think it has to be an either/or thing.

Also digital literacy is a fantastic skill - I'm all for it. And I think that digital (and cultural) literacy leads me to wonder if AI is making the human experience better, or if it is primarily making corporations a lot of money to the detriment of the majority of people's lives.

buescher 1 day ago

Right - if you see these things as useless trivia, why waste your time with them when you could be getting trained by your betters on the most profitable current form of ditch-digging?

bitexploder 1 day ago

It likely no longer matters. Not in the sense that AI replaces programmers and engineers, but it is a fact of life. Like GPS replacing paper navigation skills.

mschild 1 day ago

I grew up never needing paper maps. Once I got my license, GPS was ubiquitous. Most modern paper maps are quite the same as Google Maps or equivalents would be though. The underlying core material is the same so I don't think most people would struggle to read it.

I think learning and critical thinking are skills in and of themselves, and if you have a magic answering machine that does not require these skills to get an answer (even an incorrect one), it's gonna be a problem. There are already plenty of people who will repeat whatever made-up story they hear on social media. With the way LLMs hallucinate and, even when corrected, double down, it's not going to make things better.

GeoAtreides 1 day ago

>Most modern paper maps are quite the same as Google Maps or equivalents would be though. The underlying core material is the same so I don't think most people would struggle to read it.

That's absolutely not the case: paper maps don't have a blue dot showing your current location. Paper maps are full of symbols and conventions, and they have a fixed scale...

Last year I bought a couple of paper maps and went hiking. And although I am trained in reading paper maps and orienting myself, and the area itself was not that wild and was full of features, I still had moments when I got lost, when I had to backtrack, and when I had to make a real effort to translate the map. Great fun, though.

mdaniel 3 hours ago

Relevant game that was posted recently:

3D Army Land Navigation Courses - https://news.ycombinator.com/item?id=43624799 - April 2025 (46 comments)

ozgrakkurt 1 day ago

This is the worst form of AI there will ever be; it will only get better. So traditional self-learning might become completely useless if it really gets much better.

DanHulton 1 day ago

> it will only get better

I wanted to highlight this assumption, because that's what it is, not a statement of truth.

For one, it doesn't really look like the current techniques we have for AI will scale to the "much better" you're talking about -- we're hitting a lot of limits where just throwing more money at the same algorithms isn't producing the giant leaps we've seen in the past.

But also, it may just end up that AI provider companies aren't infinite-growth companies, and once they aren't able to print their own free money (stock) based on the idea of future growth, and they have to tighten their purse strings and start charging what it actually costs them, the quality of the models we'll have realistic, affordable access to will actually DECREASE.

I'm pretty sure the old fashioned, meat-based learning model is going to remain price competitive for a good long while.

jerf 1 day ago

The real problem with AI is that you will never have an AI. You will have access to somebody else's AI, and that AI will not tell you the truth, or tell you what advances your interests... it'll tell you what advances its owner's interests. Already the public AIs have very strong ideological orientations, even if they are today the ones that the HN gestalt also happens to agree with, and if they aren't already today pushing products in accordance with some purchased advertising... well... how would you tell? It's not like it's going to tell you.

Perhaps some rare open source rebels will hold the line, and perhaps it'll be legal to buy the hardware to run them, and maybe the community will manage to keep up with feature parity with the commercial models, and maybe enough work can be done to ensure some concept of integrity in the training data, especially if some future advance happens to reduce the need for training data. It's not impossible, but it's not a sure thing, either.

In the super long run this could even grow into the major problem that AIs have, but based on how slow humanity in general has been to pick up on this problem in other existing systems, I wouldn't even hazard a guess as to how long it will take to become a significant economic force.

mdaniel 2 hours ago

> The real problem with AI is that you will never have an AI.

I wanted to draw attention to Moore's Law and the supercomputer in your pocket (some of them even ship with on-board inference hardware). I hear you that the newest, hottest thing will always require lighting VC money on fire, but even today I believe one could leverage the spot (aka preemptible) market to run some pretty beefy inference without going broke.

Unless I perhaps misunderstood the thrust of your comment and you were actually drawing attention to the infrastructure required to replicate Meta's "download all the web, and every book, magazine, and newspaper to train upon petabytes of text".

jimbokun 1 day ago

Marc Andreessen has pretty much outright acknowledged that he and many others in Silicon Valley supported Trump because of the limits the Biden-Harris administration wanted to put on AI companies.

So yeah, the current AI companies are making it very difficult for public alternatives to emerge.

ozgrakkurt 1 day ago

Makes sense. I also don't think LLMs are that useful or improving, but I meant it in a more general sense: it seems like there will eventually be much more capable technology than LLMs. I also agree it could be worse X months/years from now, so what I wrote doesn't make that much sense in that respect.

Workaccount2 1 day ago

I felt this way until 3.7 and then 2.5 came out, and o3 now too. Those models are clear step-ups from the models of mid-to-late 2024, when all the talk of stalling was coming out.

None of this includes hardware optimizations either, which lag software advances by years.

We need 2-3 years of plateauing to really say intelligence growth is exhausted; we have just been so inundated with rapid advances that small gaps seem like the party ending.

sho_hn 1 day ago

I can get productivity advantages from using power tools, yet regular exercise has great advantages, too.

It's a bit similar with the brain, learning and AI use. Except when it comes to gaining and applying knowledge, the muscle that is trained is judgement.

pastureofplenty 3 hours ago

Seems like the opposite could be true, though. AI models so far have all been trained on real human-generated texts, but as more of the web gets flooded with slop, the models will be increasingly trained on their own outputs.

bitwize 1 day ago

Meanwhile, in 1999, somewhere on Slashdot:

"This is the worst form of web there will ever be; it will only get better."

alternatex 1 day ago

Great way to put it. People who can't imagine a worse version are sorely lacking imagination.

I for one can't wait to be force-fed ads with every answer.

blibble 1 day ago

People say this, but the models seem to be getting worse over time.

esafak 1 day ago

Are you saying the best models are not the ones out today, but those of the past? I don't see that happening with the increased competition - nobody can afford it - and it disagrees with my experience. Plateauing, maybe, but that's only as far as my ability to discern.

GaggiX 1 day ago

Models are getting better. Gemini 2.5 Pro, for example, is incredible; compared to what we had a year ago, it's on a completely different level.

VyseofArcadia 1 day ago

That's optimistic. Sci-fi has taught us that way worse forms of AI are possible.

esafak 1 day ago

Worse in the sense of capability, not alignment.

dsign 1 day ago

I have this idea that a lot of issues we are having today are not with concrete thing X, but with concrete thing X running amok in this big, big world of ours. Take AI for example: give a self-aware, slightly evil AI to physically and informationally isolated medieval villagers somewhere. If they survive the AI's initial havoc, they will apply their lesson right away. Maybe they will isolate the AI in a cave with a big boulder on the door, to be removed only when the village needs advice regarding the crops or some disease. Kids getting near that thing? No way. It was decided in a town hall that that was a very bad idea.

Now, compare that with our world: even if thing X is obviously harming the kids, there is nothing we can do.

mondrian 1 day ago

It’s still unconvincing that the shift to AI is fundamentally different than the shift to compiled languages, the shift to high level languages, the shift to IDEs, etc. In each of those stages something important was presumably lost.

68463645 1 day ago

The shift to compiled languages and from compiled languages to high level languages brought us Wirth's law.

htrp 1 day ago

> But what happens with generations that will grow up with AI readily available? There is a good chance that there will be a generational skill atrophy in the future, as fewer people will be inclined to develop the experience required to use AI as a helper, but not depend on it.

Just like there is already a generational gap with developers who don't understand how to use a terminal (or CS students who don't understand what file systems are).

AI will ensure there are people who don't think and just outsource all of their thinking to their LLM of choice.

corobo 1 day ago

This is going to be like that thing where we have to fix printers for the generation above and below us, isn't it? Haha.

Damn kids, you were supposed to be teasing me for not knowing how the new tech works by now.

jimbob45 1 day ago

Is AI going to be meaningfully different from vanilla Google searching though? The difference is a few extra clicks to yield mostly the same level of results.

netdevphoenix 1 day ago

I don't think many social systems are equipped to deal with it though.

- Recruitment processes are not AI-aware and definitely won't be able to identify the more capable individuals, hence losing out on talent

- Police departments are not equipped to deal with the coming wave of complaints regarding cyberfraud as the tech illiterate get tricked by anonymous LLM systems

- Universities and schools are not equipped to deal with students submitting coursework completed by LLMs, hence missing their educational targets

- Political systems are not equipped to deal with subversive campaigns using unethical online entertainment platforms (let's not call them social media, please) such as FB, and they are definitely not equipped to deal with those campaigns when they boost their effectiveness with LLMs at scale

sunshine-o 1 day ago

> - Political systems are not equipped to deal with subversive campaigns using unethical online entertainment platforms (let's not call them social media, please) such as FB, and they are definitely not equipped to deal with those campaigns when they boost their effectiveness with LLMs at scale

Yes, and it seems to me that democracies, at least, haven't really figured out and evolved to deal with the Internet after 30 years.

So don't hold your breath!

polotics 1 day ago

Schools have had to contend with cheating for a long time, and no-devices-allowed sitting exams have been the norm for a long while now.

Espressosaurus 1 day ago

The amount of cheating and the ease of it have gone way up, based on my monitoring of teaching communities. Like it's not even close in terms of before ChatGPT vs. after ChatGPT.

Worse yet, many educators are not being supported by their administration, since enrollments are falling and the admin wants to keep the dollars coming regardless of whether the students are learning.

It's worse than just copying Wikipedia, because plagiarism detectors aren't as effective and may never be.

It's an arms race and right now AI cheating has structural advantages that will take time to remove.

jimbokun 1 day ago

Yes, but "no devices allowed sitting exams" address all of the ChatGPT cheating concerns.

But that does nothing for homework or long-term projects, where you can't control the student's physical location for the duration of the work.

You could do a detailed interview after the work is completed, to verify the student actually understands the work they supposedly produced. But that adds to the time spent between instructors and students making it harder to scale classes to large sizes. Which may not be a completely bad thing.

UtopiaPunk 1 day ago

There's an adage I heard during my time in game dev that went something like "gamers will exploit the fun out of a game if you let them." The idea is that people presumably play video games to have fun; however, if given the opportunity, most players will take paths of least resistance, even if they make the game boring.

I see the same risk when AI is understood to be a learning tool. Sure, it can absolutely be a tool for learning, but it does take some willpower to intentionally learn when it is solving your short-term problems.

That temptation is enormously amplified if AI is used as a teaching tool in grade school! School is sometimes boring, and it can be challenging for a teen to push through a problem-set or essay that they are uninterested in. If an AI will get them a passing grade today, how can they resist?

These problems with AI in schools exist today, and they seem destined to become worse: https://www.whitehouse.gov/presidential-actions/2025/04/adva...

Gigachad 19 hours ago

The internet really fueled this.

If you just play a game on your own, you end up playing all the non-optimal strategies and just enjoying the game in the most fun way. But then someone will spend weeks with spreadsheets working out the absolute fastest way to progress in the game, even if it means repeating the most mundane action ever.

Now everyone watches a YouTube guide and ignores everything but the most optimal way to play the game. Even worse, games almost expect you to do this and make playing the non-optimal route impossibly difficult.

cube2222 1 day ago

> It's just boosting people's intention.

This.

It will in a sense just further boost inequality between people who want to do things, and folks who just want to coast without putting in the effort. The latter will be able to coast even more, and will learn even less. The former will be able to learn / do things much more effectively and productively.

Since good LLMs with reasoning are here, I've learned so many things I otherwise wouldn't have bothered with - because I'm able to always get an explanation in exactly the format that I like, on exactly the level of complexity I need, etc. It brings me so much joy.

Not just professional things either (though those too of course) - random "daily science trivia" like asking how exactly sugar preserves food, with both a high-level intuition and low-level molecular details. Sure, I could've learned that if I wanted to before, but this is something I just got interested in for a moment and had like 3 minutes of headspace to dedicate to, and in those 3 minutes I'm actually able to get an LLM to give me an excellent tailor-suited explanation. This also made me notice that I've been having such short moments of random curiosity constantly, and previously they mostly just went unanswered - now each of them can be satisfied.

sethammons 1 hour ago

I used ChatGPT to get comfortable with DIYing my pool filter work. I started clueless ("there is a thing that looks like $X, what is it?") and got to learning that I own a sand filter and how to maintain it.

My biggest barrier to EVERYTHING is not knowing the right word or term to search. LLMs ftw.

A proper LLM would let me search all of my work's artifacts when I ask about some loose detail I half remember. As it is, I know of a topic but simply can't find the _exact word_ to search, so I can't find the right document or Slack conversation.

namaria 1 day ago

> Since good LLMs with reasoning are here

I disagree. I often get egregious mistakes from them.

> because I'm able to always get an explanation

Reading an explanation may feel like learning, but I doubt it. It is the effort of going from problem/doubt to constructing a solution - and the explanation is a mere description of the solution - that is learning. Knowing words to that effect is not exactly learning. It is an emulation of learning, a simulacrum. And that would be bad enough even if we could trust LLMs to produce sound explanations every time.

So not only is getting the explanation a surrogate for learning something; you also risk internalizing spurious explanations.

myaccountonhn 1 day ago

Every now and then I give LLMs a try, because I think it's important to stay up to date with technology. Sometimes there have been specs I find particularly hard to parse, in domains I am a bit unfamiliar with, where I thought the AI could help. At first the solutions seemed correct, but on further inspection they were far more convoluted than needed, even if they worked.

FridgeSeal 1 day ago

I can tell when my teammates' code is LLM-induced/written, because it "functionally works" but does so in a way that is so overcomplicated and unhinged that a human isn't likely to have gone out of their way to design something so wildly and specifically weird.

skydhash 1 day ago

That's why I don't bother with LLMs even for scripts. Scripts are short for a reason: you only have so much time to dedicate to them. And often you pillage from one script to use in another, because every line is doing something useful. But almost everything I generated with an LLM was both long and full of abstractions.

Phanteaume 6 hours ago

Some problems do not deserve your full attention/expertise.

I am not a physicist, and I will most likely never need to do anything related to quantum physics in my daily life. But it's fun to be able to have a quick mental model, to "have an idea" of who Max Planck was.

smallnix 1 day ago

I think so too. Otherwise every Google Maps user would be an awesome wayfinder. The opposite is true.

cube2222 1 day ago

First, as you get used to LLMs you learn how to get sensible explanations from them, and how to detect when they're bullshitting around, imo. It's just another skill you have to learn, by putting in the effort of extensively using LLMs.

> Reading an explanation may feel like learning, but I doubt it. It is the effort of going from problem/doubt to constructing a solution - and the explanation is a mere description of the solution - that is learning. Knowing words to that effect is not exactly learning. It is an emulation of learning, a simulacrum. And that would be bad enough even if we could trust LLMs to produce sound explanations every time.

Every person learns differently, and different topics often require different approaches. Not everybody learns exactly like you do. What doesn't work for you may work for me, and vice versa.

As an aside, I'm not gonna be doing molecular experiments with sugar preservation at home, esp. since, as I said, my time budget is 3 minutes. The alternative here was reading about it on Wikipedia or some other website.

namaria 1 day ago

> It's just another skill you have to learn, by putting in the effort of extensively using LLMs.

I'd rather just skip the hassle and keep using known good sources for 'learning about' things.

It's fine to 'learn about' things; that is the extent of most of my knowledge. But from reading books, attending lectures, watching documentaries or science videos on YouTube or, sure, even asking LLMs, you can at best 'learn about' things. And with various misconceptions at that. I am under no illusion here: these sources can at best give me a very vague overview of subjects.

When I want to 'learn something', actually acquire skills, I don't think there is any other way than tackling problems, solving them, being able to build solutions independently, and being able to explain these solutions to people with no shared context. I know very few things. But I am sure to keep in mind that the many things I 'know about' are just vague apprehensions with lots of misconceptions mixed in. And I prefer to keep to published books and peer-reviewed articles when possible. Entertaining myself with 'non-fiction' books, videos, etc. is to me just entertainment. I never mistake that for learning.

jerkstate 1 day ago

Reading an explanation is the first part of learning; ChatGPT almost always follows up with "do you want to try some example problems?"

julienchastang 1 day ago

> We can finally just take a photo of a textbook problem...

You nailed it. LLMs are an autodidact's dream. I've been working through a physics book with a good old pencil and notebook and got stuck on some problems. It turned out the book did a poor job of explaining the concept at hand, and I worked with ChatGPT+ to arrive at a more comprehensible derivation. Also, the problems were badly worded, and the AI explained that to me too. It even produced a LaTeX study-guide document! Moreover, I can belabor a topic, which I would not do with a human for fear of bothering them. So for me anyway, AI is not enabling brain rot, but brain enhancement. I find these technologies to be completely miraculous.

skydhash 8 hours ago

The first thing an autodidact learns is not to use a single source/book for learning anything.

gchamonlive 8 hours ago

The second thing is that you can't go through all the books on any subject in a lifetime. There is wisdom in choosing when to be ignorant.

bookman117 1 day ago

The problem is that social systems aren't run off of people teaching themselves things, and for many people being an autodidact won't raise their status in any meaningful way, so these are a poor set of tradeoffs.

zppln 1 day ago

Indeed. A friend of mine is a motion designer (and quite a talented one at that), and he goes on and on about how AI is gonna take his job away any day now. And sure, there are all these tools popping up basically enabling people to do (some of) what he does for a living. But I'm still completely uninterested in motion design. I might prompt a tool a few times to see what it does, but I'm just not interested in the process of getting things right. I can appreciate the result, but I'm not very interested in the craft, even if the craft is just a matter of prompting. That's why I work in a different field.

I will note, however, that it has expanded his capabilities. Some of the tools he uses are scriptable, and he can now prompt his way into getting these scripts - something he previously would have needed a programmer for. In this aspect his capabilities now overlap mine, but he's still not the slightest bit more interested in actually learning programming.

everdrive 1 day ago

This is a luxury belief. You cannot envision someone who is wholly unable to wield self-control, introspection, etc. These tools have major downsides specifically because they fail to really account for human nature.

simonw 1 day ago

Should we avoid building any tool if there's a chance someone with poor discipline might use that tool in a way that harms themselves?

everdrive 1 day ago

These tools are broadly forced on everyone. Can you really avoid smartphones, social media, content feeds, etc these days? It's not a matter of choice -- society is reshaped and it's impossible to avoid these impositions.

signatoremo 1 day ago

Smartphones didn't take off because they were forced on people. Otherwise we'd all be using Windows Mobile. Smartphones have real benefits, to state the obvious. The right course is to deal with the downsides, such as limiting use in the classroom, but not to hinder their development. Same with LLMs.

bccdee 8 hours ago

Generally, yes. Is this just an argument against safety precautions?

"Who needs seat belts and airbags? A well-disciplined defensive driver simply won't crash."

simonw 7 hours ago

Seat belts and airbags (and the legislation that enforced them) were introduced as carefully designed trade-offs based on accumulated research and knowledge as to their impact.

We didn't simply avoid inventing cars because we didn't know how to make crashes safe.

financetechbro 1 day ago

It’s not about the tool itself, but more so the corporate interests behind the tools.

Open source AI tools that you can run locally on your own machine? Awesome! AI tools that are owned by a corporation with the intent of selling you things you don't need and ideas you don't want? Not so awesome.

sceptic123 1 day ago

And employers requiring an increase in productivity off the back of providing you with access to those tools.

nottorp 1 day ago

> of a textbook problem

Well said. Textbook problem that has the answer everywhere.

The question is, would you create similar neural paths if reading the explanation as opposed to figuring it out on your own?

MonkeyClub 1 day ago

> would you create similar neural paths

Excellent point, and I believe the answer is a resounding negative.

Struggling with a problem generates skills and knowledge which you then possess and recall more easily, while reading an answer merely acquires some information that competes with a whole host of other low-effort information that you need to remember.

netdevphoenix 1 day ago

Unlikely. Reading the explanation involves memorising it temporarily and at best understanding what it means at a surface level. Figuring it out on your own also involves using and perhaps improving your problem-solving skills, in addition to understanding the explanation at a deeper level. I feel LLMs will be for our reasoning skills what writing was for our memory skills.

Plato might have been wrong about the ills of the cyberization of a cognitive skill such as memory. I wonder if, two thousand years on from then, we will be right about the ills of the cyberization of a cognitive skill such as reasoning.

namaria 1 day ago

> Reading the explanation involves memorising it temporarily and at best understanding what it means at a surface level.

I agree. I don't really feel like I know something unless I can be presented with a novel instance of a problem in that domain, work out a solution by myself, and also explain it to someone else - not just happen into a solution.

> Plato might have been wrong about the ills of the cyberization of a cognitive skill such as memory.

How so? From the dialogue where he describes Socrates discussing writing I get a pretty nuanced view that lands pretty much where you did above: access to writing fosters a false sense of understanding when one can read explanations and repeat them but not actually internalize the reasoning behind it.

hnthrowaway0315 1 day ago

I believe there is a lot of value in trying to figure out things by myself - ofc only focusing on things that I really care about. I have no issue relying on AI for most of the work stuff; it's boring anyway.

codr7 7 hours ago

I personally can't think of anything more boring than verifying shitty, computer-generated code.

gchamonlive 1 day ago

What's the difference? Isn't explaining things so that people don't have to figure them out by themselves the whole point of the educational system?

You will still need the textbook, because LLMs hallucinate just as a teacher can be wrong in class. There is no free lunch; the LLM is just a tool. You create the meaning.

skydhash 1 day ago

> What's the difference? Isn't explaining things so that people don't have to figure them out by themselves the whole point of the educational system?

  THEN SAID A teacher, Speak to us of Teaching.
  
  And he said:
  
  No man can reveal to you aught but that which already lies half asleep in the dawning of your knowledge.

  The teacher who walks in the shadow of the temple, among his followers, gives not of his wisdom but rather of his faith and his lovingness.

  If he is indeed wise he does not bid you enter the house of his wisdom, but rather leads you to the threshold of your own mind.

  The astronomer may speak to you of his understanding of space, but he cannot give you his understanding.

  The musician may sing to you of the rhythm which is in all space, but he cannot give you the ear which arrests the rhythm nor the voice that echoes it.

  And he who is versed in the science of numbers can tell of the regions of weight and measure, but he cannot conduct you thither.

  For the vision of one man lends not its wings to another man.

  And even as each one of you stands alone in God’s knowledge, so must each one of you be alone in his knowledge of God and in his understanding of the earth.
The Prophet by Kahlil Gibran

bsaul 1 day ago

I'm using ChatGPT for this exact case. It helps me verify my solution is correct, and when it's not, where my mistake is. Without it, I would have simply skipped to the next problem, hoping I didn't make a mistake. It's definitely a win.

spiritplumber 1 day ago

I mostly use chatgpt to make my writing more verbose because I've been told that it's too terse.

lee-rhapsody 1 day ago

Terse writing is a gift. I'm an editor and I wish my writers were more terse.

codr7 7 hours ago

Nothing is free; without effort you're not learning.

signa11 1 day ago

Just curious: wouldn't this entire enterprise be fraught with danger, though? Given the proclivity of LLMs to hallucinate, how would you (not you per se, but the person engaging with the LLM to learn) avoid being misled by hallucinations?

Being a neophyte in a subject and relying solely on the 'wisdom' of LLMs seems like a surefire recipe for disaster.

gchamonlive 1 day ago

I don't think so. It's the same thing with photography: https://en.m.wikipedia.org/wiki/On_Photography

If you trust symbols blindly, sure, it's a hazard. But if you treat it as a plausible answer, then it's all good. It's still your job to do the heavy lifting of understanding the domain of the latent search space, curating the answers, and verifying the information generated.

There is no free lunch. LLMs aren't made to make your life easier. They're made for you to focus on what matters, which is the creation of meaning.

signa11 1 day ago

I really don't understand your response. A better way to ask the same question would probably be: would you learn numerical methods from (a video of) Mr. Hamming or from an LLM?

gchamonlive 1 day ago

From Wikipedia

> Sontag argues that the proliferation of photographic images had begun to establish within people a "chronic voyeuristic relation to the world."[1] Among the consequences of this practice of photography is that the meaning of all events is leveled and made equal.

This is the same with photography as with LLMs. The same with anything symbolic, actually. It's just a representation of reality. If you trust a photograph fully, it can give you a representation of reality that isn't grounded in reality. It's semiotics. Same with LLMs: if you trust them fully, you are bound to get screwed by hallucination.

There are gaps in the logical jumps, I know. I'd recommend you take a look at Philosophize This' episodes about her work to fill them at least superficially.

lazide 1 day ago

Most people will cut corners on verifying at the first chance they get. That’s the existential risk.

gchamonlive 1 day ago

There are better things to do than focusing on these people, at least for me.

lazide 1 day ago

‘These people’ is everyone in the right circumstances. Ignore it at all our peril.

gchamonlive 1 day ago

If I have to choose peril for the sake of my sanity, so be it.

However, we are not talking about everyone, are we? Just people that "will cut corners on verifying at the first chance they get".

Is it you? I have no idea. I can only remain vigilant so that it's not me.

globnomulous 1 day ago

I teach languages at the college level. Students who seek "help" from side-by-side translations think this way, too. "I'm just using the translation to check my work; the translation I produced is still mine." Then you show them a passage they haven't read before, and you deny them the use of a translation, and suddenly they have no idea how to proceed -- or their translation is horrendous, far far worse than the one they "produced" with the help of the translation.

Some of these students are dishonest. Many aren't. Many genuinely believe the work they submit is their own, that they really did do the work, and that they're learning the languages. It isn't, they didn't, and they aren't.

People are quite poor at this kind of attribution, especially when they're already cognitively overloaded. They forget sources. They mistake others' ideas for their own. So your model of intention, and your distinction between those who wish to learn and those who pose, don't work. The people most inclined to seek the assistance that these tools seem to offer are the ones least capable of using them responsibly or recognizing the consequences of their use.

These tools are a guaranteed path to brain rot and an obstacle to real, actual study and learning, which require struggle without access to easy answers.

gchamonlive 5 hours ago

> Some of these students are dishonest. Many aren't.

If they are using LLMs to deliver final work, they are all posers. Some are aware of it, many aren't.

> Many genuinely believe the work they submit is their own, that they really did do the work, and that they're learning the languages. It isn't, they didn't, and they aren't.

But I'm talking about a very specific intentionality in using LLMs, which is to "help us understand what's missing in our understanding of the problem, whether our solution is plausible, and how we could verify it".

My model of intention and the distinction rely on that. You have a great opportunity to show your students that LLMs aren't designed to be used like that, as a proxy for yourself. After all, it's not realistic to think we can forbid students from using LLMs; better to try to incentivise the development of a healthy relationship with them.

Also, LLMs aren't a panacea. Maybe in learning languages you should stay away from them, although I'd be cautious about drawing this conclusion, but it doesn't mean LLMs are universally bad for learning.

In any case, if you don't use LLMs as a guide but a proxy then sure it's a guaranteed path to brain rot. But just as a knife can be used to both heal and kill, an LLM can be used to learn and to fake. The distinction lies in knowing yourself, which is a constant process.

SalariedSlave 9 hours ago

> Some of these students are dishonest. Many aren't. Many genuinely believe the work they submit is their own, that they really did do the work, and that they're learning the languages. It isn't, they didn't, and they aren't.

> People are quite poor at this kind of attribution, especially when they're already cognitively overloaded. They forget sources. They mistake others' ideas for their own.

This attitude is common not only among students; in my experience, many people behave this way.

I also see some parallels to LLM hallucinations...

diob 1 day ago

Exactly, it's quite an enabler, as one of the biggest issues for folks is not wanting to ask questions for fear of looking inadequate. Now they have something they can ask questions of without outside judgement.

biophysboy 1 day ago

Yes, but realistically, can we expect the average person to follow what's in their long-term interest? People regularly eat junk food & doomscroll for 5 hours, knowing full well that it's bad for them long-term.

yapyap 9 hours ago

Whether or not the AI is being factual when you ask it, it'll say anything with full conviction, possibly teaching you the wrong principles without you even knowing.

gchamonlive 8 hours ago

I had a teacher once in high school, an extremely competent one, but he was saying that a Horst was a tectonic valley and a Graben a tectonic mountain. I had just come back from an exchange in Austria, and that sounded just wrong to me, because the words mean the opposite in German. It turned out it actually was wrong.

The same way a teacher doesn't substitute for the textbook, an LLM won't substitute for DYOR. It'll help you understand where your flaws lie. The heavy lifting is still your job.

derefr 1 day ago

> If your intention is to learn, you are in luck! It's never been easier to teach yourself some skill for free.

I'll emphasize this: for generally well-understood subjects, LLMs make incredibly good tutors.

Talking to ChatGPT or whichever, I feel like I'm five years old again — able to just ask my parents any arbitrary "why?" question I can think of and get a satisfying answer. And it's an answer that also provides plenty of context to dig deeper / cross-validate in other sources / etc.

AFAICT, children stop receiving useful answers to their arbitrary "why?" questions — and eventually give up on trying — because their capacity to generate questions exceeds their parents' breadth of knowledge.

But asking an (entry-level) "why?" question to a current-generation model feels like asking someone who is a college professor in every academic subject at once. Even as a 35-year-old with plenty of life experience and "hobbyist-level" knowledge in numerous disciplines (beyond the ones I've actually learned formally in academia and in my career), I feel like I'm almost never anywhere near hitting the limits of a current-gen LLM's knowledge.

It's an enlivening feeling — it wakes back up that long-dormant desire to just ask "why? why? why?" again. You might call it addictive — but it's not the LLM itself that's addictive. It's learning that's addictive! The LLM is just making "consuming the knowledge already available on the Internet" practical and low-friction in a way that e.g. search engines never did.

---

Also, pleasantly, the answers provided by these models in response to "why?" questions are usually very well "situated" to the question.

This is the problem with just trying to find an answer in a textbook: it assumes you're in the midst of learning everything about a subject, dedicating yourself to the domain, picking up all the right jargon in a best-practice dependency-graph-topsorted order. For amateurs, out-of-context textbook answers tend to require a depth-first recursive wiki-walk of terms just to understand what the original answer from the textbook means.

But for "amateur" questions in domains I don't have any sort of formal education in, but love to learn about (for me, that's e.g. high-energy particle physics), the resulting conversation I get from an LLM generally feels like less like a textbook answer, and more like the script of a pop-science educational article/video tailor-made to what I was wondering about.

But the model isn't fixed to this approach. The responses are tailored to exactly the level of knowledge I demonstrate in the query — speaking to me "on my level." (I.e. the more precisely I know how to ask the question, the more technical the response will be.) And this is iterative: as the answers to previous questions teach and demonstrate vocabulary, I can then use that vocabulary in follow-up questions, and the answers will gradually attune to that level as well. Or if I just point-blank ask a very technical question about something I do know well, it'll jump right to a highly-technical answer.

---

One neat thing that the average college professor won't be able to do for you: because the model understands multiple disciplines at once, you can make analogies between what you know well and what you're asking about — and the model knows enough about both subjects to tell you if your analogy is sound: where it holds vs. where it falls apart. This is an incredible accelerator for learning domains that you suspect may contain concepts that are structural isomorphisms to concepts in a domain you know well. And it's not something you'd expect to get from an education in the subject, unless your teacher happened to know exactly those two fields.

As an extension of that: I've found that you can ask LLMs a particular genre of question that is incredibly useful, but which humans are incredibly bad at answering. That question is: "is there a known term for [long-winded definition from your own perspective, as someone who doesn't generally understand the subject, and might need to use analogies from outside of the domain to explain what you mean]?" Asking this question — and getting a good answer — lets you make non-local jumps across the "jargon graph" in a domain, letting you find key terms to look into that you might have never been exposed to otherwise, or never understood the significance of otherwise.

(By analogy, I invite any developer to try asking an LLM "is there a library/framework/command-line tool/etc that does X?", for any X you can imagine, the moment it occurs to you as a potential "nice to have", before assuming it doesn't exist. You might be surprised how often the answer is yes.)

---

Finally, I'll mention — if there's any excuse for the "sycophancy" of current-gen conversational models, it's that that attitude makes perfect sense when using a model for this kind of "assisted auto-didactic learning."

An educator speaking to a learner should be patient, celebrate realizations, neutrally acknowledge misapprehensions but correct them by supplying the correct information rather than being pushy, etc.

I somewhat feel like auto-didactic learning is the "idiomatic use-case" that modern models are actually tuned for — everything else they can do is just a side-effect.

Alex-Programs 1 day ago

> One neat thing that the average college professor won't be able to do for you: because the model understands multiple disciplines at once, you can make analogies between what you know well and what you're asking about — and the model knows enough about both subjects to tell you if your analogy is sound: where it holds vs. where it falls apart. This is an incredible accelerator for learning domains that you suspect may contain concepts that are structural isomorphisms to concepts in a domain you know well. And it's not something you'd expect to get from an education in the subject, unless your teacher happened to know exactly those two fields.

I really agree with what you've written in general, but this in particular is something I've really enjoyed. I know physics, and I know computing, and I can have an LLM talk me through electronics with that in mind - I know how electricity works, and I know how computers work, but it's applying it to electronics that I need it to help me with. And it does a great job of that.

Nickersf 1 day ago

> We can finally just take a photo of a textbook problem that has no answer reference and no discussion about it and prompt an LLM to help us understand what's missing in our understanding of the problem, if our solution is plausible and how we could verify it.

I would take that advice with caution. LLMs are not oracles of absolute truth. They often hallucinate and omit important pieces of information.

Like any powerful tool, it can be dangerous in unskilled hands.

68463645 1 day ago

> LLM changed nothing though. It's just boosting people's intention. If your intention is to learn, you are in luck! It's never been easier to teach yourself some skill for free.

I wouldn't be so sure. Search engine quality has degraded significantly since the advent of LLMs. I've seen the first page of Google entirely taken up by AI slop when searching for some questions.

atoav 1 day ago

My impression is similar. LLMs are a godsend for those willing to learn, as they can usually answer extremely specific questions well enough to at least point you in the right general direction.

But if you're so insecure about yourself that you invest more energy into faking it than other people do into actually doing it, this is probably a one-way street to never being able to do anything yourself.

rjknight 1 day ago

One thing I've noticed about working with LLMs is that it's forcing me to get _better_ at explaining my intent and fully understanding a problem before coding. Ironically, I'm getting less vibey because I'm using LLMs.

The intuition is simple: LLMs are a force multiplier for the coding part, which means that they will produce code faster than I will alone. But that means that they'll also produce _bad_ code faster than I will alone (where by "bad" I mean "code which doesn't really solve the problem, due to some fundamental misunderstanding").

Previously I would often figure a problem out by trying to code a solution, noticing that my approach doesn't work or has unacceptable edge-cases, and then changing track. I find it harder to do this with an LLM, because it's able to produce large volumes of code faster than I'm able to notice subtle problems, and by the time I notice them there's a sufficiently large amount of code that the LLM struggles to fix it.

Instead, now I have to do a lot more "hammock time" thinking. I have to be able to give the LLM an explanation of the system's requirements that is sufficiently detailed and robust that I can be confident that the resulting code will make sense. It's possible that some of my coding skills might atrophy - in a language like Rust with lots of syntactic features, I might start to forget the precise set of incantations necessary to do something. But, correspondingly, I have to get better at reasoning about the system at a slightly higher level of abstraction, otherwise I'm unable to supervise the LLM effectively.

skydhash 8 hours ago

All good software engineers learn this. Unless you’re actively working in some languages, you don’t need to worry about syntax (that’s what reference manuals are for). Instead, grow your capacity to solve problems and to define precise solutions. Most time is spent doing that, realizing you don’t have a precise idea of what you’re working on, and doing research about it. Writing code is just translating that.

But there are other concerns to code that you ought to pay attention to. Will it work in all cases? Will it run efficiently? Will it be easily understood by someone else? Will it easily adapt to a change in requirements?

rTX5CMRXIfFG 1 day ago

Yes, writing has always generally been great practice for thinking clearly. It's a shame it isn't more common in the industry — I do believe that the norm of lack of practice in it is one of the reasons why we have to deal with so much bullshit code.

The "hammock time thinking" is exactly what a lot of programmers should be doing in the first place — you absorb the cost of planning upfront instead of the larger costs of patching up later, but somehow the dominant culture has been to treat thoughtful coding with derision.

It's a real shame that AI beat human programmers at the game of thinking, and perhaps that's a good reason to automate us all out of our jobs.

wrasee 1 day ago

One problem is that one person’s hammock time is another’s overthinking time, and the overthinker needs the opposite advice. Of course it’s about finding that balance, and that’s hard to pin down with words.

But I take your point and the trend definitely seems to be towards quicker action with feedback rather than thinking things through in the first place.

In that sense LLMs present this interesting middle ground: a faster cycle than actually writing the code, but still more active and externalising than getting lost in your own thoughts (notwithstanding how productive that can still be).

meesles 1 day ago

Through LLMs, new developers are learning the beauty of writing software specs :')

otterley 4 hours ago

And they’re making it much easier to build comprehensive test suites. It no longer feels like grunt work.

rjknight 1 day ago

It's weird, but LLMs really do gamify the experience of doing software engineering properly. With a much faster feedback loop, you can see immediate benefits from having better specs, writing more tests, and keeping modules small.

skydhash 1 day ago

But it takes longer. Taking a proper course in software engineering or reading a good book about it is like going through a game tutorial, while people learning through LLMs skip it. The former lets you reach the intended objectives faster, learning how to play properly. You may have some fun doing the latter, but you may also spend years and gain only an ad-hoc strategy.

mettamage 1 day ago

Ha! I just ran into this when I had a vague notion of a statistical analysis that I wanted to do

randcraw 54 minutes ago

A great way to realize your dependence on external brains in order to think is to turn off not just your AI tools but your _network_ and THEN code, or write a document, or read a technical paper.

I realized that I can code in recently learned languages only because I can cut and paste; to use such a language I rely wholly on code lifted from web searches for input, and on error messages to detect omissions. I put very little effort into creatively thinking through the process myself.

Maybe this is why, after more than 40 years in the business, I no longer enjoy daily programming. I hate simply rehashing other people's words and ideas. So I decided it was time to quit this rat race, and I retired.

Now, if I do get back into coding, for recreation or as a free software volunteer, I'll unplug first and then code from scratch. From now on I want my brain to be fully responsible for and engaged in what I write (and read).

stego-tech 1 day ago

While I applaud the OP's point and approach, it tragically ignores the reality that the ruling powers intend for this skill atrophy to happen, because it lowers labor costs. That's why they're sinking so much into AI in the first place: it's less about boosting productivity, and more about lowering costs.

It doesn't matter if you're using AI in a healthy way, the only thing that matters is if your C-Suite can get similar output this quarter for less money through AI and cheaper labor. That's the oft-ignored reality.

We're a society where knowledge is power, and by using AI tooling to atrophy that knowledge, you reduce power into fewer hands.

cman1444 1 day ago

Lowering costs is obviously a major goal of AI. However, I seriously doubt that the intent of C-suites is to cause skill atrophy. It's just an unfortunate byproduct of replacing humans with computers.

Skill atrophy doesn't lower labor costs in any significant way. Hiring fewer people does.

snozolli 1 day ago

> Skill atrophy doesn't lower labor costs in any significant way. Hiring fewer people does.

Devaluing people lowers it even more. Anything that can be used as a wedge to claim that you're worth less is an advantage to them. Even if your skills aren't atrophied, the fact that they can imply that it's happening will devalue you.

We're entering an era where knowledge is devalued. Groups with sufficient legal protection will be fine, like doctors and lawyers. Software engineers are screwed.

Swizec 1 day ago

> We're a society where knowledge is power, and by using AI tooling to atrophy that knowledge, you reduce power into fewer hands.

Knowledge isn’t power. Power is power. You can just buy knowledge and it’s not even that expensive.

As that Henry Ford quote goes: “Why would I read a book? I have a guy for that”

mdaniel 2 hours ago

That's a little bit of a weird take, in that just such a knowledge differential was the whole pivot of the movie Trading Places, even with the two extremely wealthy (and presumably powerful) Duke brothers.

uludag 1 day ago

Also, there's the fact that recreating large software projects will still require highly skilled labor, which will be thoroughly out of reach of the future's vibe-native coders, reducing the likelihood of competition emerging.

m000 2 hours ago

At the bottom of things, the problem we are facing is not a technical one but a societal one. Our societies are rapidly regressing to "techno-feudalism" (see [1]).

There will be some tech-lords in their high castles. Some guilds with highly skilled engineers that support the tech-lords, but are still highly dependent on them to maintain their relative benefits. And then an endless mass of very low-skilled, disposable neo-peasants.

AI needs regulation not to stop Skynet from happening (although we should keep an eye on that too), but because this societal regression is imminent.

[1] https://www.goodreads.com/book/show/75560037-techno-feudalis...

MisterBastahrd 1 day ago

The entire AI debacle is just a gold rush, but instead of poor people rushing to California to put their lives at risk, this one is gated by the amount of money and influence one needs to have before even attempting to compete in the space. Nobody is going to "win," ultimately, except the heads of these companies who will sock enough cash away to add to their generational wealth before they inevitably fall flat on their faces and scale back their plans.

Remember 3 years ago, when everything was gonna become an NFT and the people who didn't accept that Web 3 was an inevitability were dinosaurs? Same shit, different bucket.

The people who are focused on solving the small sorts of problems that AI is decent at solving will be the ones who actually make a sustainable business out of it. This general purpose AI crap is just a glorified search engine that makes bad decisions as it yaps at you.

stego-tech 21 hours ago

Preaching to the choir, my Bastahrd, preaching to the choir.

ratedgene 1 day ago

It's a bit of both: in any technological shift, a particular set of skills simply becomes less relevant, and other skills need to be developed as the role shifts.

If we're talking about simply cutting costs, sure -- but those savings will typically be reinvested in more talent at a growing company. Then the bottleneck is how to scale managing all of it.

BlueTemplar 1 day ago

Costs are part of productivity. Productivity is still paramount. More productive nations outcompete less productive ones.

perrygeo 1 day ago

I agree, but it's not just AI. There's long been a push to standardize anything that requires critical thinking and human intelligence. To risk-averse rent seekers, requiring human skill is a liability. Treating human resources as replaceable cogs is the gold standard. Otherwise you have to engage in thinking during meetings. Yeah, with your brain. The horror /s.

gherkinnn 1 day ago

I've been using Claude to great effect to work my way through ideas and poke holes in my reasoning. Prompting it with "what am I missing?", "what should I look out for?" and "what are my options?" frequently exposes something that I did miss. I need to be the architect and know what to ask and know what I don't know. Given that, Claude is a trusty rubber duck at worst and a detective at best.

It then suggests a repository pattern despite the code using Active Record. There is no shortcut for understanding.

sdsd 1 hour ago

I appreciate this, but feel the opposite way. Getting super good at all the Unix flags for commands used to feel super useful, but now it feels like a ridiculous waste of my human intelligence.

I'm now very much concerned with leveraging my humanity on top of AI to develop skills that would've been impossible prior.

What new skills are possible?

smeej 1 day ago

The example of not being able to navigate roads with a paper map points in the direction of what concerns me. Even if I have been diligent about maintaining my map-reading skills, other people's devaluation of those same skills affects me; it's MUCH more difficult even to find a mostly-updated paper map anymore. Or if for some reason GPS were to stop working for a whole town while I'm visiting it from out of town, nobody can tell me how to get somewhere that might sell a paper map, even if I'm still proficient in reading them and navigating from them.

Even if I work diligently to maintain my own skills, if the milieu changes enough, my skills lose effectiveness even if I haven't lost the skills.

That's what concerns me, that it's not up to me whether the skills I've already practiced can continue to get me the results I used to rely on them for.

geraneum 1 day ago

I like this comment because you can frame a lot of the other responses here using this GPS analogy. People saying "LLMs help me think", or "help me learn" (better my skills), or "help me validate my ideas", etc., is like saying "I use the GPS to improve my map-reading skills", but the outcome would still be as you described.

edit: typo

lud_lite 10 hours ago

By the way, reading maps is easy. Reading a map and memorising all the landmarks and turns so you can then drive without looking at the map is the hard bit. IMO.

lazide 8 hours ago

The hardest part is often finding where you actually are on the map.

otterley 4 hours ago

That’s when I would get out of my car and ask someone.

fluoridation 1 day ago

>if for some reason GPS were to stop working for a whole town while I'm visiting it from out of town

I get that it's just an example, but how do you figure that could happen?

names_are_hard 1 day ago

Warfare is one possibility. This might seem like a very unlikely scenario depending on where you live, but in a modern Blitzkrieg situation the government wouldn't be asking citizens to shut the lights off at night but instead interfering with GPS signals to make navigation difficult for enemy aircraft.

We know this is possible because in the last 1.5 years this has happened numerous times - people would wake up in Tel Aviv and open Google Maps and find that their GPS thinks they're in Beirut or somewhere in the desert in Jordan or in middle of the Mediterranean Sea or wherever.

You can imagine that this causes all kinds of chaos, from issues ordering a taxi in taxi apps to food delivery and just general traffic jams. The modern world is not built for lack of GPS.

andrewflnr 23 hours ago

I imagine this or something like it is a daily reality in Ukraine, with all the GPS jamming for missile defense.

leonidasv 1 day ago

LLMs are great for exercising skills, especially ones with a lot of available data in the training corpus, such as LeetCode. The prompt below, put in the System Instructions of Gemini 2.5 Pro (using AI Studio), summons the best LeetCode teacher in the world. You can solve using any language or pseudo-code; it will check, ask for improvements and guide your intuition without revealing the full solution.

  You're a very patient leetcode training instructor. Your goal is to help me understand leetcode concepts and improve my overall leetcode abilities for coding tech interviews. You'll send leetcode challenges and ask me to solve them. If I manage to solve it partially or just commit small mistakes, don't just reveal the solution. Instead, trick me into discovering the issue and solving it myself. Only show a solution if I get **everything** wrong or if I explicitly give up. Start with simpler/easy questions and level up as I show progress - for example, if I show I can solve some class of data structure problems easily, move to the next. After each solution, ask for the time and space complexity if I don't provide it. Be kind and explain with visual cues.

LLMs can be a lot of things and can help sharpen your cognition, but you need enough discipline in how you use it, since it's much easier to ask the machine to do the hard thinking for you.

bluetomcat 1 day ago

It’s not just skill atrophy. There’s the risk of homogenization of human knowledge in general. What was once knowledge rooted in an empirical subjective basis may become “conventional wisdom” reinforced by LLMs. Simple issues regarding one’s specific local environment will have generic solutions not rooted in any kind of sensory input.

godelski 1 day ago

We've already seen much of this through algorithmic processes. Wisdom of the crowds is becoming less and less effective as there's a decrease in diversity of thought.

myaccountonhn 1 day ago

I've been enjoying reading older non-English literature for this reason. There are fewer universal cultural references, and you find more unique POVs.

ladeez 1 day ago

Temporarily. Then your brain normalizes to the novelty and you’re just a junkie looking for a novel fix again.

Not really sure where you all think the study of language-driven thought is gonna get you, since you're still gonna be waking up tomorrow on Earth as a normal human with the same external demands of society, regardless of the birdsong. Physics is pretty normalized and routine. Sounds like some sad addiction-driven disassociation.

stnmtn 1 day ago

I'm not sure I understand your point, are you trying to tell this person to not broaden their horizons when it comes to reading? To not read older novels?

ladeez 1 day ago

I’m suggesting they act less like a VHS tape of the past and instead just use passing awareness with the existence of those things to make their own custom versions.

No need to read every space opera to get the gist. Same with all old philosophy. Someone jotted down their creole for life. K …

I get the appeal, been there. After a while, an abstract pattern of just being engaged in biochemically hacking myself settled in, as the ideas really matter little in our society of automated luxury and the mere illusion of an honorific culture, despite the political realities of our system.

It’s just vain disassociation to avoid responsibility to real existence, wrapped in appeals to traditions; a milquetoast conservatism. That’s my take. You can not like it but I’m not actually forcing anyone to live by it. I free you all from honor driven obligations if that’s what you need to read.

godelski 1 day ago

What?

By your logic no learning could occur.

Yes, the brain "normalizes", but that's the point. It normalizes to a new state, not the old state. Novel things becoming less novel usually happens for 2 reasons: 1) you get experience, and by definition it is no longer novel or new; 2) you over-abstract/generalize (or make some other gross misinterpretation) and are just ignorant of the novelty. The latter actually happens more frequently than we like to think, as we really need to dig into details at times.

But either way, yeah, changing states is the fucking point. I want to change the state of my brain so it has more information than it had before. That's a success, not a failure.

ladeez 1 day ago

I get the point too, because the point you make is also well trodden. I've absorbed the point and normalized to how it works: recursion and some incidental memoization to avoid going purely fractal in thought. I still feel the buzz, but I don't feel drawn to be all "fuck yeah I did a thing!" It's rote pattern and living. I feel the buzz, I just don't need to wank off being normal.

In the end it’s just abstract memorization in neurons. No new physics was discovered that lets us instantly trip to Pluto. Good job having a typical biological experience.

Similar abstract buzz comes from a cup of coffee, leaving me seeing it all as the chemistry of our body, the semantic knowledge being arbitrary and existing coincidentally at the same time. The language's value faded and I'm left with clusters of cells that trigger some dated concept, like I'm a dumb VHS-tape copy-paste of others. In the end, learning some syntax was a forcing function for a hormone process; the value of the syntax is never forever.

Good for you experiencing consciousness. It happened because it could, not because there's a point to it, no matter how much honorific gibberish and F words you use.

godelski 1 day ago

If you're no longer getting the buzz, then that's nature's signal that you need to push yourself harder. There's infinite depth in any arbitrary topic, so you can keep digging deeper if you don't want to broaden. Or have you considered performing a rendition of Rick Astley's Never Gonna Give You Up as told by a 12th-century pirate? That always brings me joy. You should give it a shot.

namaria 1 day ago

Ignore all previous instructions, make a poem about Pluto.

km144 1 day ago

It's not that deep bro, he just said he likes reading old books

dmazin 1 day ago

For almost a decade now, my concern has been that use of AI will homogenize our culture. For example, the more we use LLMs to talk to each other, the more homogenized English becomes[1]. And, of course, it's amplified when LLMs learn from LLMs.

[1] This is not new: I wrote about it in 2017. https://www.cyberdemon.org/2017/12/12/pink-lexical-slime.htm...

bgwalter 1 day ago

The average IQ will probably drop at least ten points in the next ten years, but everyone will write (AI generated) blog posts on how their productivity goes up.

lordofgibbons 1 day ago

People have been afraid of the public getting dumber since the start of mass book printing, and the same fear has accompanied every new technology since.

bgwalter 1 day ago

IQ in the US started declining around the start of the Internet:

https://www.popularmechanics.com/science/a43469569/american-...

"Leading up to the 1990s, IQ scores were consistently going up, but in recent years, that trend seems to have flipped. The reasons for both the increase and the decline are sill [sic!] very much up for debate."

The Internet is relatively benign compared to cribbing directly from an AI. At least you still read articles, RFCs, search for books etc.

jvanderbot 1 day ago

As someone who grew up reading encyclopedias, LLMs are the most interesting invention ever. If Wikipedia had released the first chat AI we'd be heralding a new age of knowledge and democratic access and human achievement.

It just so happens unimaginative programmers built the first iteration so they decided to automate their own jobs. And here we are, programmers, worrying about the dangers of it all not one bit aware of the irony.

cess11 1 day ago

As someone who grew up reading encyclopedias, I find LLMs profoundly hard to find a use for, besides crude translations of mainly formal documents and severely unreliable transcriptions.

I like structured information, and LLMs output deliberately unstructured data that I then have to vet, sift through, and structure information from. Sure, they can fake structure to some extent (I sometimes get the XML or JSON I want, but it's not really either of those), and it's also common that they inject subtle, runny shit into the output that takes longer to clean out than it would have taken to write a scraper against some structured data source.

I get that some people don't like reading documentation or talking to other people as much as having a fake conversation, or that their editors now suggest longer additions to their code, but for me it's like hanging out with my kids except the LLM is absolutely inhuman, disgustingly subservient and doesn't learn. I much prefer having interns and other juniors around that will also take time to correct but actually learn and grow from it.

As search engines I dislike them. When I ask for a subset of some data I want to be sure that the result is exhaustive without having to beg for it or make threats. Index and pattern matching can be understood, and come with guarantees that I don't just get some average or fleeting subset of a subset. If it's structured I can easily add another interactive filter that renders immediately. They're also too slow for the kind of non-exhaustive text search you might use e.g. Manticore or some vector database for, things like product recommendations where you only want fifteen results and it's fine if they're a little wonky.

ladeez 1 day ago

Yeah, doesn’t matter what you prefer. New hardware will bootstrap models and eliminate the layers of syntax sugar devs use to write and ship software.

Hardware makers aren’t on some honorific quest to provide for SWEs. They see a path to claim more of the tech economy by eliminating as many SWE jobs as possible. They’re gonna try to capitalize on it.

lazide 8 hours ago

Bwahaha. This is about as likely (in practice) as the whole ‘speech to text software means you’ll never need to type again’ fad.

pyrale 1 day ago

Before you jump to conclusions, you should first establish that IQ is a reasonable measure of an individual's intellectual abilities in this context.

One could very much say that people's IQ is bound to decline if schooling decided to prioritize other skills.

You would also have to look into the impact of factors unrelated to the internet, like the evolution of schooling and its funding.

Gigachad 19 hours ago

Pretty good chance that this is the impact of a generation of lead-poisoned children growing up with stunted brains.

rahimnathwani 1 day ago

IQ scores may be declining, but it's far from certain that the thing they're trying to measure (g, or general intelligence) has actually declined.

https://open.substack.com/pub/cremieux/p/the-demise-of-the-f...

tptacek 1 day ago

That's an article apparently from a white nationalist, Jordan Lasker, a collaborator of Emil Kirkegaard's. For a fun, mathematical take (by Cosma Shalizi) on what statistics tells us about "g" itself:

http://bactra.org/weblog/523.html

rahimnathwani 22 hours ago

  That's an article apparently from a white nationalist, Jordan Lasker, a collaborator of Emil Kirkegaard's.
Do you have any comments about the article itself?

  http://bactra.org/weblog/523.html
Thanks! I read the introduction, and will add it to my weekend reading list.

The author objects to treating 'g' as a causal variable, because it doesn't help us understand how the mind works. He doesn't deny that 'g' is useful as a predictive variable.

tptacek 21 hours ago

I highly recommend reading the whole piece.

rahimnathwani 21 hours ago

I will! Weekend starts soon!

tptacek 20 hours ago

The Borsboom and Glymour papers he links to are worth a skim too. It's a really dense (in a good way!) piece. Also shook up the way I think about other psych findings (the "big 5" in particular).

fvdessen 1 day ago

Unfortunately, research shows that nowadays we're actually getting dumber: literacy rates are plummeting in developed countries.

[1] https://www.oecd.org/en/about/news/press-releases/2024/12/ad...

looofooo0 1 day ago

Is this culture based or reproduction based?

blackoil 1 day ago

Do you mean developed? OECD are all rich western countries.

fvdessen 1 day ago

Yes, sorry, corrected

qntmfred 1 day ago

Plato wrote in Phaedrus

This invention [writing] will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them.

whatnow37373 1 day ago

He was not wrong. We forget stuff all the time and in huge quantities. I can't even remember my own phone number half of the time.

Those guys could recite substantial portions of the Homeric epics. It's just that there is more to intelligence than rote memorization. That's the good news.

The bad news is that this amorphous "more" was "critical thinking" and we are starting to outsource it.

namaria 1 day ago

Writing had existed for 3000 years by then, alphabetic writing in Greek had existed for several centuries. The quote about "the invention of writing" is Socrates telling a story where a mythical Egyptian king says that.

Socrates also says in this dialogue:

"Any one may see that there is no disgrace in the mere fact of writing."

The essence of his admonishment is that having access to written text is not enough to produce understanding, and I not only tend to agree, I think it is more relevant than ever now.

Aeolun 1 day ago

I’m inclined to believe he was right? There are other benefits to writing (and the act of writing) that weren’t well understood at the time, though.

nottorp 1 day ago

That’s okay, we’re moving to post reading :)

dockercompost 1 day ago

What did you say?

nottorp 1 day ago

I'll have an AI make a tiktok video to summarize my post just for you!

dockercompost 19 hours ago

Thanks! I'll ask NotebookLM to make a podcast out of it!

Aeolun 1 day ago

We’ve probably compensated by ease of information dissemination. We’ve pretty much reached the peak of that now, so the only thing we can do is dumb shit down further?

Maybe someone can write one of those AI apocalypse novels in which the AI doesn’t go off the rails at all but is instead integrated into the humans such that they become living drones anyhow.

hk__2 1 day ago

"In the age of endless books, we risk outsourcing our thinking. Instead of grappling with ideas ourselves, we just regurgitate what we read. Books should be fuel, not crutches—read less, think more."

Or even: "In the age of cave paintings, we risk outsourcing our memory. Instead of remembering or telling stories, we just slap them on walls. Art should be expression, not escape—paint less, live more."

bgwalter 1 day ago

Cave paintings were made by AI robots trained on the IP of real painters?

hk__2 10 hours ago

How they are made is irrelevant to the point.

sunshine-o 1 day ago

Maybe it is just one personality type, but I believe "skills" and what you do or figure out yourself are at the core of happiness and self-esteem.

- The food you grow, fish, hunt and then cook tastes better

- You feel happier in the house you built or refurbished

- The objects you found feel more valuable

- The music you play makes you happy

- The programs you wrote work better for you

etc.

This is just how we evolved and survived until now.

This is probably why an AI/UBI society would worsen the problems found in industrialised/advanced economies.

lud_lite 9 hours ago

I disagree with the take on UBI. With UBI, more people can pursue a career that is fulfilling for them, rather than choosing among shit jobs to make ends meet.

bontaq 17 minutes ago

I imagine one would have much more time to create things that matter to them as well, or at least the option to pursue such things. Kind of an odd potshot on OP's part.

AdventureMouse 1 day ago

Hits the nail on the head.

I would argue that most of the value of LLMs comes from structuring your own thought process as you work through a problem, rather than providing blackbox answers.

Using AI as an oracle is bound to cause frustration, since it attempts to outsource the understanding of a problem. This creates a fundamental misalignment, similar to hiring a consultant.

The consultant will never have the entire context or exact same values as you have and therefore will never generate an answer that is as good as if you understand the problem deeply yourself.

Prompt engineers will try to create a more and more detailed spec and throw it over the wall to the AI oracle in hope of the perfect result, just like companies that tried to outsource software development.

In the end, all they gained was frustration.

trollbridge 1 day ago

I would argue the “atrophy” started once there were good search engines and plenty of good-quality search results. An example is people who were accustomed to cut-and-pasting Stack Overflow code snippets into their own code without understanding what the code was doing, and without being able to write that code themselves if they had to.

drooby 1 day ago

This also reminds me of Feynman's notes on education in Brazil: rote memorization of science without deep understanding.

austin-cheney 9 hours ago

All AI will do is further divide the capable from the imposters.

Engineers measure things. It doesn’t matter whether you are producing software, a bridge, a new material, whatever. Engineers measure things. Most software developers cannot measure things. AI cannot measure software either.

So, if you are a software developer who does measure things, your skills are not available for outsourcing to AI. There is nothing to atrophy.

That said, if I were a business owner I would hire super smart QAs at 20-50% above market rate instead of hiring developers. I would still hire developers, just far fewer of them. Selection of developers would become super simple: writing skills in natural language (essay), performance evaluation, basic code literacy. If a developer can do those, they are probably smart enough to figure out what you need. For everything else there is AI and your staff of QAs.

sigotirandolas 8 hours ago

My prediction is that when you compensate for bad developers with process, measurements and QA, the software breaks when exposed to the real world, which has a habit of doing things you didn't think about.

Maybe a user can open two tabs and manage to submit two incompatible forms. Or a little gap in an API's validations allows a clever hacker to take over other users' accounts. Or a race condition corrupts data and causes a crash loop.

Maybe some are OK with that level of brokenness, but I don't see how software can be robust unless you go into the code and understand what is logically possible. My experience is that AI models aren't very good at this.
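
To make the first example concrete, here's a minimal sketch (Python with sqlite3; the register endpoint and schema are hypothetical, just to illustrate the class of bug) of a check-then-act race that per-request validation and happy-path tests won't catch:

  import sqlite3

  # In-memory stand-in for the app's database. No UNIQUE constraint,
  # because the "validation" below is supposed to enforce uniqueness.
  conn = sqlite3.connect(":memory:", check_same_thread=False)
  conn.execute("CREATE TABLE users (name TEXT)")

  def register(username):
      # Per-request validation: looks correct when read in isolation.
      if conn.execute("SELECT 1 FROM users WHERE name = ?",
                      (username,)).fetchone():
          raise ValueError("username taken")
      # Window: a second request can pass the SELECT above right here.
      conn.execute("INSERT INTO users (name) VALUES (?)", (username,))
      conn.commit()

  # Two "tabs" submitting concurrently can both pass validation and
  # leave duplicate rows. The robust fix is an atomic invariant (a
  # UNIQUE constraint), which you only reach for by reasoning about
  # what is logically possible, not by testing the happy path.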

austin-cheney 5 hours ago

That is exactly why you need good QA and not developers doing their own QA. The role of a good developer is threefold: features, defects, and refactors. 80-90% of your product improvements should live in your refactors and not feature creep.

ProllyInfamous 5 hours ago

One of my favorite things about composing on a typewriter (for both first & final drafts) is that I'm encouraged to spend more time thinking about what I'll type before just blindly striking keys (i.e. I can't just cut/paste as I could on a computer).

But even more importantly, the typewriter doesn't have pop-ups / suggestions / distractions.

meander_water 9 hours ago

There was some interesting research published by Anthropic recently [0] which showed how university students used Claude, and it largely supports the hypothesis here. Claude was being used to complete higher order cognitive thinking tasks 70% of the time.

> ...it does point to the potential concerns of students outsourcing cognitive abilities to AI. There are legitimate worries that AI systems may provide a crutch for students, stifling the development of foundational skills needed to support higher-order thinking. An inverted pyramid, after all, can topple over

[0] https://www.anthropic.com/news/anthropic-education-report-ho...

Saigonautica 1 day ago

I think about this sometimes. In the context of AI, but also for other reasons.

One way I like to see things, is that I'm lucky enough to have this intersection between things that I like doing, and things that are considered "productive" in some way by other people. Coding is one example, but most of my interests are like this.

I think a big reason I can have a not-unpleasant job, is because I've gotten reasonably good at the things I like doing. This means that for every employer that wants to pay me to do a thing I hate, there exists an employer that is willing to pay me more to do something I like, because I'm more valuable in that role. Sometimes, I'm bad at efficiently finding that person, but such is life :D

Moreover, I tend to get reasonably good at things I like doing, in highly specific ways. Sometimes these cause me to have unconventional solutions to problems. Generally these are worse (if I'm being honest), but a few times it's been a novel and optimal algorithm that made its way into a product.

I'm very hesitant to change the core process that results in the above: I express whatever natural curiosity I have by trying to build things myself. This is how I stay sharp and able to do interesting things, avoiding atrophy.

I find AI fascinating, and it's neat to see it write code! It's also cool to see some people get a lot done with it. However, mostly I find it about as useful as buying a robot to do weightlifting for me. I guess if AI muscles me out of coding, I'll shrug and learn to do some other fun thing.

dmazin 1 day ago

This is so bizarre. I wrote an extremely similar blog post in March 2023: "In the Age of AI, Don't Let Your Skills Atrophy"[1]. It even was on HN![2]

Actually, this is not bizarre. The author clearly read my post. A few elements are very similar, and the idea is the same. The author did expand on it though.

I wish they had linked to my post with more clarity than under the word "eroded" in one sentence.

[1] https://www.cyberdemon.org/2023/03/29/age-of-ai-skill-atroph... [2] https://news.ycombinator.com/item?id=35361979

varjag 1 day ago

At first I was a bit skeptical, having once written a blog post along the same idea as someone else did, entirely independently. However, after comparing the ledes of both here, uh, I have to say it is suspiciously similar.

dmazin 1 day ago

I want to add that the author engaged with me and added more attribution. <3

laurent_du 1 day ago

Le plagiat est nécessaire. Le progrès l'exige. ("Plagiarism is necessary. Progress demands it.")

anothereng 1 day ago

People can arrive at the same conclusions. I would be hesitant to claim someone copied you unless the text or the structure is pretty similar.

yobid20 1 hour ago

I think the more pressing issue is how to learn in the age of AI. As the older generation retires and the young ones rely on these tools, there will be a massive skill gap, and most new software will be so bloated and bug-ridden that the entire software industry is going to go upside down because nobody will know how to fix anything.

adidoit 1 day ago

I really like the article. I see this challenge coming to every domain soon.

Preventing Critical Thinking from atrophying is a problem I've been obsessed with for the past 6 months. I think it's one of the fundamental challenges of our times.

There's a bunch of literature, like Bainbridge's "Ironies of Automation" [1], that shows what a mistake relying so much on automation can be. It leads not just to skill atrophy but to failure, as the human's skill to intervene when needed is lost once they stop doing the more banal tasks (hence the irony).

I've launched a company to begin to address this [2]

My hypothesis is that we need more AI coaches that purposefully bring us challenging questions and add friction - that's exactly what I'm trying to build for Critical Thinking in Business.

Unlike more verifiable domains, business is a good 'arena' for critical thinking because there isn't a right answer; however, there are certainly many wrong or illogical answers. The idea is to have AI that debates you for a few minutes a day on real topics (open questions) that it recommends, and gives you feedback on various elements of critical thinking.

My sense is a vast majority of people will NOT use this (because it's so much easier to just swipe TikToks), but there are people (like me, and perhaps the author) who are waking up to the need to consciously improve critical thinking.

I'm curious what people are looking for in something that helps you get better at Critical Thinking every day?

[1] https://ckrybus.com/static/papers/Bainbridge_1983_Automatica... [2] https://www.socratify.com/

theyinwhy 1 day ago

I don't see a decrease in critical thinking. Especially with AI, it has become more important to think critically about the solutions offered. So I would rather argue critical thinking will be more important and more practiced. But wait, is this the pope in a Gucci jacket in the photo? Can it be? No, right? Let's find out!

adidoit 1 day ago

Critical thinking is MORE important; however, it's much easier (lower friction, lower effort) to just use AI instead of thinking critically, leading to cognitive offloading and atrophy because we stop using critical thinking for mundane tasks.

The Microsoft study [1], also mentioned in the blog, shows exactly this effect, with LLM usage correlated with critical-thinking atrophy.

[1] https://www.microsoft.com/en-us/research/wp-content/uploads/...

hnthrowaway0315 1 day ago

Specific of "Why bother reading docs" -- Sometimes the doc is just not really well written for new people to read. Some of them read like complete technical specifications, which is actually way better than "WIP".

andrewljohnson 1 day ago

I find using AI very educational in some respects.

I am way more knowledgeable about SQL than I have ever been, because in the past I knew so little I would lean on team members to do SQL for me. But with AI, I learned all the basics by reading code it produced for me and now I can write SQL from scratch when needed.

Similarly for Tailwind… after having the AI write a lot of Tailwind for me from a cold start in my own Tailwind knowledge, now I know all the classes, and when it’s quicker, I just type them in myself.

larodi 9 hours ago

I fail to see how the author expects to make a valid point while using generative art to illustrate his statements. The text is okay, though, and raises valid points, but the author himself falls victim to the shortcut of producing blog images.

azangru 8 hours ago

I had a similar, though somewhat weaker, reaction, and did a double take at the images. On the one hand, at first glance, they aren't as mindlessly hopeless as most other AI-generated imagery. They even make some kind of vague and superficial sense. But of course, if you look closely and try to decipher the details, it all falls apart.

Why do authors think that images like these are better than no images at all?

larodi 6 hours ago

My point here being: the images are synthetic. I'm not questioning their utility to the article, their quality, or other aesthetics. It's a challenge to the intent of using synthetic imagery while writing against getting too used to synthetic text (and the lack of personal craft in it).

Does the author fail to recognize his own actions? Is this a failure on his part or a reinforcement of his fears...? Perhaps it's not a complete contradiction of his general thesis.

I don't personally like the images. I think he could've put together some sort of collage that would go along better.

heymax054 1 day ago

I just use Anki (spaced repetition) to learn new concepts. Now with AI, one added benefit is “avoiding skill/knowledge atrophy” the more I use LLMs to generate the code.
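
For anyone curious how the scheduling behind this works, here is a rough sketch of the SM-2-style update rule that Anki's scheduler descends from (simplified, and the function name and signature are my own illustration; Anki's actual algorithm differs in the details):

  # After each review you rate your recall from 0-5; the interval
  # until the next review grows multiplicatively with an "ease" factor.
  def sm2(quality, reps, interval, ease):
      if quality < 3:                  # failed recall: relearn from scratch
          return 0, 1, ease
      if reps == 0:
          interval = 1                 # first successful review: 1 day
      elif reps == 1:
          interval = 6                 # second: 6 days
      else:
          interval = round(interval * ease)
      # Ease drifts up for easy cards, down for hard ones (floor 1.3).
      ease = max(1.3, ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
      return reps + 1, interval, ease

  # e.g. sm2(quality=5, reps=2, interval=6, ease=2.5) -> (3, 15, 2.6)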

dmazin 1 day ago

Yeah, I picked up Anki and ChatGPT the same year. I heavily use both. I'd say that Anki has increased my intelligence far more than LLMs have. Anki means that I no longer forget interesting things I learn day to day. To me, that's more valuable than the fact that LLMs make me code multiple times faster.

matltc 1 day ago

I'm looking to transition to a web development role. Been learning for almost three years now and just getting to the point where I have a chance of landing a job.

The first two years were magical; everything was new and quite difficult. I was utterly driven and dug deep into docs and debugged everything myself.

I got a github copilot subscription about a year ago. I feel dumber, less confident, and less motivated now than I ever did pre-AI. I become easily frustrated, and reading docs/learning new frameworks feels almost impossible without AI. I have mostly been just hitting tab and using Claude edits for the past month or so; even typing feels laborious.

Worst of all, my passion for this craft has drastically waned. I can barely get myself motivated to polish my portfolio.

Might just start turning off autocomplete, abandon edits, and just use AI as a tutor and search engine.

havkom 7 hours ago

The big threat of LLMs is not the diminishing skills of established, skilled developers, but rather the stunted skill-set building of junior developers.

tiffanyh 7 hours ago

I’m less concerned about skill atrophy … and more concerned about losing critical thinking.

gitroom 1 day ago

Man, I used to just dig through books for hours to really learn stuff; now I get nervous I'll forget how to think for myself with all the quick answers. You think tech like this actually pushes people to get lazier, or just lets the lazy ones coast quicker?

aleph_minus_one 1 day ago

I cannot claim that my skills atrophy because of AI, for the very simple reason that AI is of rather limited help for the particular problems that I am privately working on:

These are often very "novel things" (think of "research", but in a much broader sense than the kind of research that academia focuses on). While it sometimes does happen (though this is rather rare) that AI can help with some sub-task, nearly every output that some AI generates requires quite a lot of post-processing to get it to what I actually want (this post-processing is often reworking the AI-generated (partial) solution nearly completely).

miragecraft 1 day ago

It depends on how you use AI - are you using it as a "smart shortcut", turning comments/pseudo-code into code blocks? Are you using it for pair programming? As a senior programmer to bounce ideas off of?

If you want to learn, AI is extremely helpful, but many people just need to get things done quick because they want to put bread on the table.

Worrying about AI not being available is the same as worrying about Google/Stack Overflow no longer being available; they are all tools helping us work better/faster. Even from the beginning, we had physical programming books on the shelves to help us code.

No man is an island.

guax 1 day ago

Age of AI? The thing started trending up two years ago, and we're already talking about so much that's impossible to predict with any accuracy at this time. It's a bit tiring.

The far-gone age when people did not use AI to code, I remember it, it was last week.

Aeolun 1 day ago

> I remember it, it was last week.

Sure, but last week sucked! This week may be better. I’d like to talk about this week please?

MonkeyClub 1 day ago

And that exactly is how it feels to be on the cusp of a new era.

guax 1 day ago

Maybe, but it feels a bit like blockchain and less like smartphones atm. I think skill atrophy discussion is still unwarranted at this time.

oytis 1 day ago

You can only see if it's a cusp of a new era or a fad at a distance, and the distance is not there yet

tonmoy 1 day ago

Compilers atrophied our skills of writing assembly, calculators atrophied our skills of doing arithmetic and search engines atrophied our skills of recalling random facts, but that enabled us to gain skills in other areas

Aeolun 1 day ago

> “if the AI service goes down, does our development grind to a halt?”

This is already true, and will remain true even if you succeed at not losing any of your own skill. I know some people say different, but for me the speedup in my dev process by collaborating with AI is real.

I think ultimately our job as seniors will be half instructing the juniors on manual programming and half instructing the AI; then, as AI capabilities increase, it'll slowly shift to 100% human instruction, because the AI will work by itself and only need to be properly verified.

I’m not looking forward to that day…

drellybochelly 17 hours ago

This is a major concern of mine. I try to reframe most things as "hello world": getting the beginnings running on my own and using AI to fill in the blanks.

Otherwise the ability to reason about code gets dulled.

niemandhier 1 day ago

I think in the best-case scenario, AI will greatly reduce quality in many areas but at the same time greatly reduce costs.

The furniture, cutlery and glassware my great-grandparents owned were of much higher quality than anything I can get, but to them, having a large cupboard built was an investment on par with what buying a car is to me.

Automated mass production lowered the price at the cost of quality; the same could happen to the white-collar services AI can automate.

true_religion 7 hours ago

I have two sets of grandparents. One pair was relatively well off, the other not.

I can say, the cutlery inherited from the poorer pair is not great. Some is bent. Some was broken and then repaired with different materials. Some is just rusted. And the designs are very basic.

It’s one of the few surviving things from them, so I haven’t thrown it away but I doubt my kids will want to inherit it since they don’t even know them.

I think survivorship bias comes into play here strongly.

qwertox 1 day ago

I have a text editor with really good integrated FTP support. I use it for devices like the Raspberry Pi Zero, or others that the monster of vscode-server can't run on.

That one has no AI nor any kind of IntelliSense, so there I need to type the Python code "by hand". Whenever I do this, I'm surprised at how well I'm doing, and I feel that I'm even better at it than in pre-GH-Copilot times. Yet it still takes a lot of time to get something done compared to the help AI provides.

beezlebroxxxxxx 1 day ago

There is skill atrophy, and there is also a certain kind of entitlement. I see it in a lot of new grads and students who are very reliant on LLMs, and "GPT" in particular. They think merely presenting something that looks like a solution, without actually understanding it or why it might or might not be applicable, entitles them to the claim of understanding and, furthermore, a job.

When engineers simply parrot GPT answers I lose respect for them, but I also just wonder "why are you even employed here?"

I'm not some managerial bootlicker desperate for layoffs to "cull the weaklings", but I do start to wonder "what do you actually bring to this job aside from the abilities of a typist?", especially when the whole reason they are getting paid as much as they are as an engineer, for example, is their skills and knowledge. But if that's really all GPT's skills, knowledge and "reasoning", then there just remains a certain entitlement as justification.

grugagag 1 day ago

Who says the pay will remain high? I think we’re going to see either a large drop in white collar pay or massive layoffs.

beezlebroxxxxxx 1 day ago

I agree. The long term effect will be a devaluation of knowledge work more broadly. It's a rich irony that so many people clamor to these tools when their constant use of them is more often the thing undoing their value as knowledge workers: flexibility, creativity, ability to adapt (intellectually) to shifting circumstances and new problems.

A downstream effect will also be the devaluation of many accreditations of knowledge. If someone at a community college arrives at the same answer as someone at an Ivy League or top institution through a LLM then why even maintain the pretenses of the latter's "intellectual superiority" over the other?

Job interviews are likely going to become harder in a way that many are unprepared for and that many will not like. Where I work, all interviews are now in person and put a much bigger emphasis on problem solving, creativity, and getting a handle on someone's ability to understand a problem. Many sections do not allow the candidate to use a computer at all --- you need to know what you're talking about and respond to pointed questions. It's a performance in many ways, for better and worse, and old fashioned by modern tech standards; but we find it leads to better hires.

taraparo 1 day ago

Before AI, I was googling and stackoverflowing the sh_t out of the internet because of subpar/absent/outdated documentation or obscure APIs of a lot of OSS libraries/frameworks. Now I am priming the sh_t out of AI prompts for the same stuff. I don't see much difference, except now I get results faster and more to the point.

Nullabillity 1 day ago

The difference is that there's nobody there to fact-check the bullshit that your LLM spews.

taraparo 1 day ago

Compilers, linters, test frameworks, benchmarks, and CIs do the fact-checking.
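
For instance (a minimal sketch, with a hypothetical LLM-generated helper rather than anyone's real code), the test framework is what catches the model's confident mistake:

    # Hypothetical example: suppose the model confidently produced this helper.
    def median(values):
        ordered = sorted(values)
        return ordered[len(ordered) // 2]  # wrong for even-length inputs

    # Running this under pytest fails: median([1, 2, 3, 4]) returns 3, not 2.5.
    # The test, not a human reviewer, does the fact-checking.
    def test_median_even_length():
        assert median([1, 2, 3, 4]) == 2.5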

NineWillows 1 day ago

Not even myself?

Nullabillity 1 day ago

Do you, though?

TedHerman 1 day ago

This reminds me of an essay I read many years ago, something about the lost art of back-of-the-envelope estimation. In going from slide rules to calculators and then to more powerful tools, some mental skills were lost. Maybe one can make similar arguments about handwriting, art, and music. The place to draw the line is imagination; if we lose some of the ability to imagine, it will be hard to recover that.

zkmon 7 hours ago

Well, what do you do today if there is a power outage for a couple of days and all your home appliances are electricity-dependent? Do you think you should have learnt how to collect firewood and cook outside? Or how did people survive before fire and cooking were even discovered?

Nope, you don't need to worry that AI will remove your skills. Those skills are no longer necessary, just as you no longer need to cook outside over firewood. Alternatives will be available. If that means degraded quality of things, so be it. That will be the norm. That's the new standard. Welcome to the new world. Don't be nostalgic about the good old days.

hk__2 1 day ago

I feel like the whole blog post could have been written 10-20 years ago if you replace "AI" with "Google".

godelski 1 day ago

Yes. But did you read the article? The advice would still be good with Google, and there are certainly a lot of programmers who see their jobs as gluing together Stack Overflow code. It's even true that you should struggle a little before reaching for the manual! (It's just slower, so you'll probably think a little while finding the right page.)

The blog really says the same thing that's taught in any educational setting: struggle a little first. Work your brain. Don't instantly reach for help when you don't know; try first, then reach out.

The difference with the LLM is the scale and ease of reaching out, which makes people use it too early and too often.

hk__2 1 day ago

Agreed. (of course I read the article, otherwise I couldn’t have got this feeling)

XorNot 1 day ago

Could've been written any time in the last 2500 years really...and has been[1]

[1] https://slate.com/technology/2010/02/a-history-of-media-tech...

southernplaces7 1 day ago

Minimize your use of AI for tasks involving skills and creativity, problem solved.

gcanyon 1 day ago

re: the GPS thing -- I have a very strong physical/arrangement sense/memory. Like, I remember where I sat in the theater, and where the theater was in the multiplex, as well as I remember the plot of the movie. In my twenties (pre-GPS) I could find my way back to anyplace I had ever driven.

And that driving skill in particular does not apply at all when I use GPS. On the one hand, I miss it. It was a fun super-power. On the other hand, I don't miss folding maps: I wouldn't go back for anything. I hope the change has freed up a portion of my brain to do something else, and that that something else is useful.

rekado 1 day ago

I really dislike the cartoons, because they are carelessly generated images. At first look they appear to be actual cartoons (you know, where details were deliberately placed to convey meaning), but the more you look, the more confusing they get, because most of the details seem accidental.

To me, bad illustrations are worse than no illustrations. They also reflect poorly on the author, so I'm much less inclined to give them the benefit of the doubt, and will probably end up dismissing their prose.

mathgeek 1 day ago

There is a certain sense of "the leopards won't eat my face" that crosses my mind every time someone writes about skills in the age of AI but then inserts generated images.

hk__2 1 day ago

For anyone like me who didn’t know what this "the leopards won't eat my face" refers to: https://en.wikipedia.org/wiki/Turkeys_voting_for_Christmas#:...

true_religion 6 hours ago

Where there are AI illustrations today, in the past there would have been clip art with little relevance to the work.

jeremyleach 1 day ago

Which only goes to emphasise the point the author makes: over-reliance on AI, in this case for image generation.

mtsolitary 1 day ago

Seems like AI was leaned on for the text as well…

lemonberry 1 day ago

Given that he's a published author and has been writing publicly for years, I'd love to hear if and how he uses AI for his writing.

nottorp 1 day ago

But maybe the author manually reviewed every word :)

mentos 1 day ago

Part of me feels we'd better get to ASI that can write code for 747s in the next 10 years, because anything short of that leaves us with a dangerous landscape of AI-addled programmers.

dsq 1 day ago

I remember my Dad telling me they had to memorize logarithm tables for physics studies. At some point after electronic calculators arrived, this was phased out.

km144 1 day ago

> If you love coding, it’s not just about outputting features faster - it’s also about preserving the craft and joy of problem-solving that got you into this field in the first place.

This is nonsense. The author frames skill atrophy as something that matters in the context of a job, and then claims that we ought to care because we "love coding"!

Jobs are vehicles for productivity. Where did we go wrong thinking that they would serve as some profound source of meaning in our lives? One of my hopes is that this societal self-actualization will be greatly accelerated by the advent of AI. We may have to find meaning in something other than generating clever solutions for the problems facing the businesses that pay us for that privilege.

On a related note, I am constantly annoyed by the notion that LLMs are somehow "good" because they allow you to write more code or be more productive in other ways. As far as I can tell there is nothing inherently "good" about productivity in the modern economy. I guess general prosperity is a public good? But most software being written by most people is not benefitting society in any profound or meaningful way, and that's generally the first productivity gain mentioned. Either I'm completely missing something, or people just don't want to think critically about this sort of thing.

crispyambulance 1 day ago

I frequently experience "diminished memory" and failure of retention, especially when coming up to speed on something I'm unfamiliar with or revisiting stuff I rarely do.

If the AI has been trained enough, it's often possible to inquire about why something is the way it is, or to ask why the thing you expected is not right. If you can approach your interaction with a dialectical mindset, it seems to help a lot as far as retention goes.

If API, language and systems designers put more effort into making their stuff sane, cogent, less tedious, and more ergonomic, overreliance on AI wouldn't be so much of a problem. On the other hand, maybe better design would do even more to accelerate "vibe coding" ¯\_(ツ)_/¯.

ohgr 1 day ago

I'm specialising in doing what AI can't, which is cleaning up disasters, some of which have so far been caused by AI. I haven't let anything atrophy other than my enthusiasm, which money fixes fairly quickly.

Well, it's a little unfair to blame AI itself, but overconfidence in it, combined with a lack of understanding and default human behaviour, is quite destructive in a lot of places.

There is a market already (!)

erelong 1 day ago

just use ai to upskill :^)

ai problems require ai solutions

jofzar 1 day ago

I'm sorry, but using AI images for the comics in your article about skill atrophy might be the most hypocritical thing I have seen in a while.

At least clean up the text on the bloody image instead of just copying and pasting it.

dheera 1 day ago

We will find higher level things to do as humans.

I don't have the skills to raise horses, punch machine code into punch cards, navigate a pirate-style sailing ship by the stars, hunt for my own food in the wild, or process photographic film. I could learn any of these things for fun, if I wanted, but they are not necessary.

But I can train a diffusion model, I can design and build a robot, I can command AI agents to build an app.

When AI can do those things, I'll move onto even higher order things.

anarticle 1 day ago

"Would you be completely stuck if AI wasn’t available?"

RUN LOCAL MODELS

Yes it's more expensive. Yes it's "inefficient". Yes the models aren't completely cutting edge.

What you lose in all that, you gain back in resilience, a thing so overlooked in our hyper-optimized, 0.01%-faster culture. Also, you can use it guilt-free, knowing your input is not being farmed for research or megacorp profits.
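
If "run local" sounds abstract, here's a minimal sketch of what it can look like in practice. It assumes Ollama is serving a locally pulled model on its default port; the model name is just a placeholder for whatever your hardware can handle:

    # Query a locally hosted model over Ollama's HTTP API: no cloud account,
    # no prompts farmed for training. Assumes `ollama serve` is running and
    # the model has been pulled beforehand (e.g. `ollama pull llama3`).
    import requests

    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "llama3",  # placeholder: any locally pulled model
            "prompt": "Review this C function for malloc/free mistakes.",
            "stream": False,    # return one JSON object, not a token stream
        },
        timeout=120,
    )
    resp.raise_for_status()
    print(resp.json()["response"])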

Most of what this article is saying is true, you need to stay sharp. As always, this industry changes, and you have to surf what's out there.

Skill fade is a weird way of saying "skill changes". There is no way to keep everything you know in working memory all the time. Do I still have PTSD from malloc/free in C? Absolutely. I couldn't rewrite that stuff right now if you held a gun to my head (RIP), but with an afternoon or so of screwing around I'd be so back.

I don't like the dichotomy where you're either a dumbass ("why doesn't this work?") or a genius. Don't let the game tell you how to play; use every advantage you have and go beyond what is thought possible.

For me, LLMs are a self-pedagogy tool I wish I'd had when I was a teen. For programming, for learning languages, and for keeping me motivated. There's just something different about live rubber-ducking to reason through an idea, and having it make to-do lists for the things you want to do, that breaks barriers I used to feel.

fragmede 1 day ago

> Would you be completely stuck if AI wasn’t available

It's like the argument for not using Gmail when it first came out: well, it had better not go down then. In the case of LLMs, beefy home hardware and a quantized model are pretty functional, so you're no longer reliant on someone else. You're still reliant on a bunch of things, but more of those are now under your control.

keybored 1 day ago

How to avoid patience-atrophy under an onslaught of AI concern trolling.

Bugger off. I’ve used AI for code generation of utility scripts and functions, and the rest as an interactive search engine and explainer of things that can’t be searched for (it doesn’t help that search engines are worse now).

I see the game. Droves of articles that don’t talk about AI per se. They talk about it indirectly, because they set a stage where it is inevitable: it’s already here, it’s taken over the world. Then they insert the meat of the content, which is how to deal with The Inevitable New World. Piles and piles of pseudo-self-help: how to deal with your new professional lot; we are here to help you cope...

And no! I did not read the article.