datahack 6 days ago

There is a tremendous lack of understanding between how Gen X and millennial teachers see and use AI, and how younger people are using it.

Kids use AI like an operating system, seamlessly integrated into their workflows, their thinking, their lives. It’s not a tool they pick up and put down; it’s the environment they navigate, as natural as air. To them, AI isn’t cheating; it’s just how you get things done in a world that’s always been wired, always been instant. They do not make major life decisions without consulting their systems. They use them like therapists. It’s already far more than a Google replacement or a writing tool.

This author’s fixation on “desirable difficulty” feels like a sermon from a bygone era, steeped in romanticized notions of struggle as the only path to growth. It’s yet another “you can’t use a calculator because you won’t always have one”: the same tired dogma that once insisted pen-and-paper arithmetic was the pinnacle of intellectual rigor, even after calculators arrived (and they have, in fact, been with us every day since).

The Butlerian Jihad metaphor is clever but deeply misguided: casting AI as some profane mimicry of the human mind ignores how it’s already reshaping cognition, not replacing it.

The author laments students bypassing the grind of traditional learning, but what if that grind isn’t the sacred rite they think it is? What if “desirable difficulty” is just a fetishized relic of an agrarian education system designed to churn out obedient workers, not creative thinkers?

The reality is, AI’s not going away, and clutching pearls about its “grotesque” nature won’t change that. Full stop.

Students aren’t “cheating” when they use it… they’re adapting to a world where information is abundant and synthesis is king. The author’s horror at AI-generated essays misses the point: the problem isn’t the tech, it’s the assignments (and maybe your entire approach).

If a chatbot can ace your rhetorical analysis, maybe the task itself is outdated, testing rote skills instead of real creativity or critical thinking.

Why are we still grading students on formulaic outputs when AI can do that faster?

The classroom should be a lab for experimentation, not a shrine to 19th-century pedagogy, which it most definitely is. I was recently lectured by a teacher about how he tries to make every one of his students a mathematician, and he became enraged when I gently asked him how he’s dealing with the disruption AI systems are currently causing to mathematics as a profession. There is an adversarial response underneath a lot of teachers’ thin veneers of “dealing with the problem of AI” that is just wrong and such a cope.

That obvious projection leads directly to this “adversarial” grading dynamic. The author’s chasing a ghost, trying to police AI use with Google Docs surveillance or handwritten assignments. That’s not teaching. What it is is standing in the way of civilizational AI progress because it doesn’t fit your ideas. I know there are a lot of passionate teachers out there, and some even get it, but most definitely do not.

Kids will find workarounds, just like they always have, because they’re not the problem; the system is. If students feel compelled to “cheat” with AI, it’s because the stakes (GPAs, scholarships, future prospects) are so punishingly high that efficiency becomes survival.

Instead of vilifying them, why not redesign assessments to reward originality, process, and collaboration over polished products? AI could be a partner in that, not an enemy.

The author’s call for a return to pen and paper feels like surrender dressed up as principle, and it’s ridiculously out of touch.

It’s not about fostering “humanity” in the classroom; it’s about clinging to a nostalgic ideal of education that never served everyone equally anyway.

Meanwhile, students are already living in the future, where AI is as foundational as electricity.

The real challenge isn’t banning the “likeness bots” but teaching kids how to wield them critically, ethically, and creatively.

Change isn’t coming. It is already here. Resisting it won’t make us more human; it’ll just leave us behind.

Edit: sorry for so many edits. Many typos.

goatlover 6 days ago

ChatGPT is only 2.5 years old. How are kids using AI like it's always been around? I really hope they aren't making major life decisions by consulting chatbots from big tech companies instead of their relatives, teachers, and friends. I'm old enough to recall when social media was viewed as this incredibly positive tech for humanity. How things have changed. One wonders how we'll view the impact of AIs in a few years.

belZaah 4 days ago

I teach Enterprise Architecture at the graduate level. I would absolutely not mind people using AI as an OS or an information source or a therapist. I would not mind them looking things up in an encyclopedia, so why mind them using AI?

What I do mind is:

- The incredibly generic slop AI generates: let’s improve communication, make a better strategy, improve culture.

- The unwavering belief in AI. I tell my students why using AI will not give them a good grade. They get a case solved by all major LLMs, graded, with thorough feedback and a bad grade. I tell them that writing literally anything at all as the answer would not score much worse. And still they go and use AI and get bad grades.

- The incredible intellectual laziness it seems to foster. I criticize TOGAF in my course (let’s not get into that) and explicitly state it to be outside the course material. Repeatedly, in writing and verbally. And what do the students do? They ask an LLM, which inevitably starts referring to TOGAF. And the answer is copied into the case analysis without even an attempt to actually utilize TOGAF or to justify the choice made.

My students actually get worse grades and are worse off in terms of being able to solve actual real-life problems because they use AI. Getting a degree should increase their intellectual capabilities, but people actively choose not to, thus wasting their time. And that I’m not OK with.

blackbear_ 6 days ago

How do you test "real creativity" and "critical thinking" in a way that is both scalable and reliably tells apart those who get it and those who don't?

fallinditch 6 days ago

It's interesting to note that your comment and mine ended up right at the end, having been downvoted, with no downvoters commenting on why they disagree with your points or mine.

I assume it's because many of the commenters of this post are skewed towards academia, and perhaps view the disruption by AI to the traditional methods of grading student work as a challenge to their profession.

As we have seen many times throughout history, when disruptive forces arrive, whether technological change, demographic shifts, or a new set of market forces, incumbents often struggle to adapt to the new situation.

Established traditional education is a massive ship to turn around.

Your comments contain much food for thought and deserve to be debated. I agree with you that educators should not be branding students as cheaters. Using AI in an educational context is a rational and natural thing to do, especially for younger students.

> ... AI as some profane mimicry of the human mind ignores how it’s already reshaping cognition, not replacing it.

- Yes, this is such an important point and it's why we need enlightened policy making leading to meaningful education reform.

I do disagree with you about pen-and-paper activities, though: I think incorporating more of them would provide balance and some important key skills.

No doubt AI poses challenges to many areas of society, especially education. I'm not saying it's a wonderful thing we don't need to worry about, but we do need to think deeply about its impacts and about how we can harness its strengths to radically improve teaching and learning outcomes. It's not about locking students in exam rooms with high-tech surveillance.

It's disappointing that, when it comes to AI, the prevalent opinions of many educators seem stuck, struggling to adapt.

Meanwhile society will move on.

Edit: good to see you got a response!

TychoCelchuuu 6 days ago

Decades of research into learning show that "desirable difficulty" is not, as you put it, "just a fetishized relic of an agrarian education system designed to churn out obedient workers, not creative thinkers." Rather, difficulty means you are encountering things you do not already understand. If you are not facing difficulties, then your time is being wasted. The issue is that AI allows people to avoid facing difficulties and thus allows them to waste their time.

You think we will make progress by learning to use AI in certain ways, and that assignments can be crafted to inculcate this. But a moment's acquaintance with people who use AI will show you that there is a huge divide between some uses of AI and others, and that some people use AI in ways which are not creative, and so on. Ideally this would prompt you to reflect on what characteristics incline people towards using AI in certain ways, and what we can do to promote the characteristics that incline people to use AI in productive and interesting ways, etc. The end result of such an inquiry will be something like what the author of this piece has arrived at, unfortunately. Any assignment you think is immune to lazy AI use is probably not. The only real solution is the adversarial approach the author adopts.