derefr 1 day ago

> If your intention is to learn, you are in luck! It's never been easier to teach yourself some skill for free.

I'll emphasize this: for generally well-understood subjects, LLMs make incredibly good tutors.

Talking to ChatGPT or whichever, I feel like I'm five years old again — able to just ask my parents any arbitrary "why?" question I can think of and get a satisfying answer. And it's an answer that also provides plenty of context to dig deeper / cross-validate in other sources / etc.

AFAICT, children stop receiving useful answers to their arbitrary "why?" questions — and eventually give up on trying — because their capacity to generate questions exceeds their parents' breadth of knowledge.

But asking an (entry-level) "why?" question to a current-generation model feels like asking someone who is a college professor in every academic subject at once. Even as a 35-year-old with plenty of life experience and "hobbyist-level" knowledge in numerous disciplines (beyond the ones I've actually learned formally in academia and in my career), I feel like I'm almost never anywhere near hitting the limits of a current-gen LLM's knowledge.

It's an enlivening feeling — it wakes back up that long-dormant desire to just ask "why? why? why?" again. You might call it addictive — but it's not the LLM itself that's addictive. It's learning that's addictive! The LLM is just making "consuming the knowledge already available on the Internet" practical and low-friction in a way that e.g. search engines never did.

---

Also, pleasantly, the answers provided by these models in response to "why?" questions are usually very well "situated" to the question.

This is the problem with just trying to find an answer in a textbook: it assumes you're in the midst of learning everything about a subject, dedicating yourself to the domain, picking up all the right jargon in a best-practice dependency-graph-topsorted order. For amateurs, out-of-context textbook answers tend to require a depth-first recursive wiki-walk of terms just to understand what the original answer means.

But for "amateur" questions in domains I don't have any sort of formal education in, but love to learn about (for me, that's e.g. high-energy particle physics), the resulting conversation I get from an LLM generally feels less like a textbook answer, and more like the script of a pop-science educational article/video tailor-made to what I was wondering about.

But the model isn't fixed to this approach. The responses are tailored to exactly the level of knowledge I demonstrate in the query — speaking to me "on my level." (I.e. the more precisely I know how to ask the question, the more technical the response will be.) And this is iterative: as the answers to previous questions teach and demonstrate vocabulary, I can then use that vocabulary in follow-up questions, and the answers will gradually attune to that level as well. Or if I just point-blank ask a very technical question about something I do know well, it'll jump right to a highly-technical answer.

---

One neat thing that the average college professor won't be able to do for you: because the model understands multiple disciplines at once, you can make analogies between what you know well and what you're asking about — and the model knows enough about both subjects to tell you if your analogy is sound: where it holds vs. where it falls apart. This is an incredible accelerator for learning domains that you suspect may contain concepts that are structural isomorphisms to concepts in a domain you know well. And it's not something you'd expect to get from an education in the subject, unless your teacher happened to know exactly those two fields.

As an extension of that: I've found that you can ask LLMs a particular genre of question that is incredibly useful, but which humans are incredibly bad at answering. That question is: "is there a known term for [long-winded definition from your own perspective, as someone who doesn't generally understand the subject, and might need to use analogies from outside of the domain to explain what you mean]?" Asking this question — and getting a good answer — lets you make non-local jumps across the "jargon graph" in a domain, surfacing key terms to look into that you might otherwise never have been exposed to, or never have understood the significance of.

(By analogy, I invite any developer to try asking an LLM "is there a library/framework/command-line tool/etc that does X?", for any X you can imagine, the moment it occurs to you as a potential "nice to have", before assuming it doesn't exist. You might be surprised how often the answer is yes.)

---

Finally, I'll mention — if there's any excuse for the "sycophancy" of current-gen conversational models, it's that the attitude makes perfect sense when using a model for this kind of "assisted auto-didactic learning."

An educator speaking to a learner should be patient, celebrate realizations, neutrally acknowledge misapprehensions but correct them by supplying the correct information rather than being pushy, etc.

I somewhat feel like auto-didactic learning is the "idiomatic use-case" that modern models are actually tuned for — everything else they can do is just a side-effect.

Alex-Programs 1 day ago

> One neat thing that the average college professor won't be able to do for you: because the model understands multiple disciplines at once, you can make analogies between what you know well and what you're asking about — and the model knows enough about both subjects to tell you if your analogy is sound: where it holds vs. where it falls apart. This is an incredible accelerator for learning domains that you suspect may contain concepts that are structural isomorphisms to concepts in a domain you know well. And it's not something you'd expect to get from an education in the subject, unless your teacher happened to know exactly those two fields.

I really agree with what you've written in general, but this in particular is something I've really enjoyed. I know physics, and I know computing, and I can have an LLM talk me through electronics with that in mind - I know how electricity works, and I know how computers work, but it's applying it to electronics that I need it to help me with. And it does a great job of that.