cube2222 1 day ago

> It's just boosting people's intention.

This.

It will in a sense just further boost inequality between people who want to do things, and folks who just want to coast without putting in the effort. The latter will be able to coast even more, and will learn even less. The former will be able to learn / do things much more effectively and productively.

Since good LLMs with reasoning are here, I've learned so many things I otherwise wouldn't have bothered with - because I'm able to always get an explanation in exactly the format that I like, on exactly the level of complexity I need, etc. It brings me so much joy.

Not just professional things either (though those too, of course) - random "daily science trivia" like asking how exactly sugar preserves food, with both a high-level intuition and low-level molecular details. Sure, I could've learned that if I'd wanted to before, but this is something I just got interested in for a moment and had maybe 3 minutes of headspace to dedicate to it, and in those 3 minutes I'm actually able to get an LLM to give me an excellent, tailored explanation. This also made me notice that I've been having such short moments of random curiosity constantly, and previously they mostly just went unanswered - now each of them can be satisfied.

namaria 1 day ago

> Since good LLMs with reasoning are here

I disagree. I often get egregious mistakes from them.

> because I'm able to always get an explanation

Reading an explanation may feel like learning, but I doubt it. It is the effort of going from problem/doubt to constructing a solution - and the explanation is a mere description of the solution - that is learning. Knowing words to that effect is not exactly learning. It is an emulation of learning, a simulacrum. And that would be bad enough even if we could trust LLMs to produce sound explanations every time.

So not only is getting the explanation a surrogate for learning something, you also risk internalizing spurious explanations.

myaccountonhn 1 day ago

Every now and then I give LLMs a try, because I think it's important to stay up to date with technology. Sometimes there have been specs I find particularly hard to parse, in domains I'm a bit unfamiliar with, where I thought the AI could help. At first the solutions seemed correct, but on further inspection they were far more convoluted than needed, even if they worked.

FridgeSeal 1 day ago

I can tell when my teammate’s code is LLM-induced/written, because it “functionally works” but does so in a way that is so overcomplicated and unhinged that a human isn’t likely to have gone out of their way to design something so wildly and specifically weird.

skydhash 1 day ago

That's why I don't bother with LLMs even for scripts. Scripts are short for a reason: you only have so much time to dedicate to them. And often you pillage from one script to use in another, because every line is doing something useful. But almost everything I've generated with an LLM is both long and full of abstractions.

smallnix 1 day ago

I think so too. Otherwise every Google Maps user would be an awesome wayfinder. The opposite is true.

Phanteaume 16 hours ago

Some problems do not deserve your full attention/expertise.

I am not a physicist and I will most likely never need to do anything related to quantum physics in my daily life. But it's fun to be able to have a quick mental model, to "have an idea" of who Max Planck was.

cube2222 1 day ago

First, as you get used to LLMs you learn how to get sensible explanations from them, and how to detect when they're bullshitting around, imo. It's just another skill you have to learn, by putting in the effort of extensively using LLMs.

> Reading an explanation may feel like learning, but I doubt it. It is the effort of going from problem/doubt to constructing a solution - and the explanation is a mere description of the solution - that is learning. Knowing words to that effect is not exactly learning. It is an emulation of learning, a simulacrum. And that would be bad enough even if we could trust LLMs to produce sound explanations every time.

Every person learns differently, and different topics often require different approaches. Not everybody learns exactly like you do. What doesn't work for you may work for me, and vice versa.

As an aside, I'm not gonna be doing molecular experiments with sugar preservation at home, esp. since, as I said, my time budget is 3 minutes. The alternative here was reading about it on Wikipedia or some other website.

namaria 1 day ago

> It's just another skill you have to learn, by putting in the effort of extensively using LLMs.

I'd rather just skip the hassle and keep using known good sources for 'learning about' things.

It's fine to 'learn about' things; that is the extent of most of my knowledge. But from reading books, attending lectures, watching documentaries, science videos on YouTube or, sure, even asking LLMs, you can at best 'learn about' things. And with various misconceptions at that. I am under no illusion that these sources give me more than a very vague overview of subjects.

When I want to 'learn something', actually acquire skills, I don't think that there is any other way than tackling problems, solving them, being able to build solutions independently and being able to explain these solutions to people with no shared context. I know very few things. But I am sure to keep in mind that the many things I 'know about' are just vague apprehensions with lots of misconceptions mixed in. And I prefer to keep to published books and peer reviewed articles when possible. Entertaining myself with 'non-fiction' books, videos etc is to me just entertainment. I never mistake that for learning.

jerkstate 1 day ago

Reading an explanation is the first part of learning; ChatGPT almost always follows up with “do you want to try some example problems?”

sethammons 10 hours ago

I used ChatGPT to get comfortable with DIYing my pool filter work. I started clueless ("there is a thing that looks like $X, what is it?") and went from that to learning I own a sand filter and how to maintain it.

My biggest barrier to EVERYTHING is not knowing the right word or term to search. LLMs ftw.

A proper LLM would let me search all of my work's artifacts when I ask about some loose detail I half remember. As it is, I know of a topic but I simply can't find the _exact word_ to search, so I can't find the right document or Slack conversation.