namaria 1 day ago

> Since good LLMs with reasoning are here

I disagree. I often get egregious mistakes from them.

> because I'm able to always get an explanation

Reading an explanation may feel like learning, but I doubt it is. It is the effort of going from problem/doubt to constructing a solution that is learning; the explanation is a mere description of the solution. Knowing words to that effect is not exactly learning. It is an emulation of learning, a simulacrum. And that would be bad enough even if we could trust LLMs to produce sound explanations every time.

So not only is getting the explanation a surrogate for learning something, you also risk internalizing spurious explanations.

myaccountonhn 1 day ago

Every now and then I give LLMs a try, because I think it's important to stay up to date with technology. Sometimes there have been specs I found particularly hard to parse, in domains I'm a bit unfamiliar with, where I thought the AI could help. At first the solutions seemed correct, but on further inspection they were far more convoluted than needed, even if they worked.

FridgeSeal 1 day ago

I can tell when my teammate’s code is LLM-induced/written, because it “functionally works” but does so in a way that is so overcomplicated and unhinged that a human isn’t likely to have gone out of their way to design something so wildly and specifically weird.

skydhash 1 day ago

That's why I don't bother with LLMs, even for scripts. Scripts are short for a reason: you only have so much time to dedicate to them. And you often pillage from one script to use in another, because every line is doing something useful. But almost everything I've generated with an LLM is both long and full of abstractions.

smallnix 1 day ago

I think so too. Otherwise every Google Maps user would be an awesome wayfinder. The opposite is true.

Phanteaume 16 hours ago

Some problems do not deserve your full attention/expertise.

I am not a physicist, and I will most likely never need to do anything related to quantum physics in my daily life. But it's fun to be able to have a quick mental model, to "have an idea" of who Max Planck was.

cube2222 1 day ago

First, as you get used to LLMs, you learn how to get sensible explanations from them, and how to detect when they're bullshitting, imo. It's just another skill you have to learn, by putting in the effort of extensively using LLMs.

> Reading an explanation may feel like learning, but I doubt it is. It is the effort of going from problem/doubt to constructing a solution that is learning; the explanation is a mere description of the solution. Knowing words to that effect is not exactly learning. It is an emulation of learning, a simulacrum. And that would be bad enough even if we could trust LLMs to produce sound explanations every time.

Every person learns differently, and different topics often require different approaches. Not everybody learns exactly like you do. What doesn't work for you may work for me, and vice versa.

As an aside, I'm not gonna be doing molecular experiments with sugar preservation at home, especially since, as I said, my time budget is 3 minutes. The alternative here was reading about it on Wikipedia or some other website.

namaria 1 day ago

> It's just another skill you have to learn, by putting in the effort of extensively using LLMs.

I'd rather just skip the hassle and keep using known good sources for 'learning about' things.

It's fine to 'learn about' things; that is the extent of most of my knowledge. But from reading books, attending lectures, watching documentaries or science videos on YouTube or, sure, even asking LLMs, you can at best 'learn about' things, and with various misconceptions at that. I am under no illusion that these sources can give me more than a very vague overview of subjects.

When I want to 'learn something', to actually acquire skills, I don't think there is any other way than tackling problems, solving them, being able to build solutions independently, and being able to explain these solutions to people with no shared context. I know very few things. But I am sure to keep in mind that the many things I 'know about' are just vague apprehensions with lots of misconceptions mixed in. And I prefer to keep to published books and peer-reviewed articles when possible. Entertaining myself with 'non-fiction' books, videos, etc. is to me just entertainment. I never mistake that for learning.

jerkstate 1 day ago

Reading an explanation is the first part of learning; ChatGPT almost always follows up with “do you want to try some example problems?”