I teach math at a large university (30,000 students) and have also gone “back to the earth”, to pen-and-paper, proctored exams.
Students don’t seem to mind this reversion. The administration, however, doesn’t like this trend. They want all evaluation to be remote-friendly, so that the same course with the same evaluations can be given to students learning in person or enrolled online. Online enrollment is a huge cash cow, and fattening it up is a very high priority. In-person, pen-and-paper assessment threatens their revenue growth model. Anyways, if we have seven sections of Calculus I, and one of these sections is offered online/remote, then none of the seven are allowed any in person assessment. For “fairness”. Seriously.
I think you've identified the main issue here:
LLMs aren't destroying the University or the essay.
LLMs are destroying the cheap University or essay.
Cheap can mean a lot of things, like money or time or distance. But, if Universities want to maintain a standard, then they are going to have to work for it again.
No more 300+ person freshman lectures (where everyone cheated anyways). No more take-home zoom exams. No more professors checked out. No more grad students doing the real teaching.
I guess, I'm advocating for the Oxbridge/St. John's approach with under 10 class sizes where the proctor actually knows you and if you've done the work. And I know, that is not a cheap way to churn out degrees.
>I guess, I'm advocating for the Oxbridge/St. John's approach with under 10 class sizes where the proctor actually knows you and if you've done the work. And I know, that is not a cheap way to churn out degrees.
I could understand US tuition if that were the case. These days, overworked adjuncts make it McDonalds at Michelin-star prices.
Funnily enough, I only had 10-person classes when I paid $125 for summer courses at a community college between expensive uni semesters.
This matches my experience. I attended the local community college, which works closely with Ohio State University and matches its curriculum. The same classes, with the same content, were taught at both schools.
The biggest difference was that the community college offered class sizes of about 20 people, while the university equivalent was taught in a lecture hall with hundreds of students, and cost significantly more.
Given that the adjuncts often aren't paid all that much better than the McDonalds workers...
Believe it or not, 300-person freshman lectures can be done well. They just need a talented instructor who's willing to put in the prep, and good TAs leading sections. And if the university fosters the right culture, the students mostly won't cheat.
But yeah, if the professor is clearly checked out and only interested in his research, and the students are being told that the only purpose of their education is to get a piece of paper to show to potential employers, you'll get a cynical death-spiral.
(I've been on both sides of this, though back when copy-pasting from Wikipedia was the way to cheat.)
> though back when copy-pasting from Wikipedia was the way to cheat
Back when I was teaching part time, I had a lot of fun looking at the confused looks on my students' faces when I said "you cannot use Wikipedia, but you'll find a lot of useful links at the bottom of any article there..."
Over here in Finland, higher education is state funded, and the funding is allocated to universities mostly based on how many degrees they churn out yearly. Whether the grads actually find employment or know anything is irrelevant.
So, it's pretty hard for universities over here to maintain standards in this GenAI world, when the paying customer only cares about quantity, and not quality. I'm feeling bad for the students, not so much for foolish politicians.
Gosh, I'm so myopic here. I'm mostly talking about US based systems.
But, of course, LLMs are affecting the whole world.
Yeah, I'd love to hear more about how other countries are affected by this tool. For Finland, I'd imagine that the feedback loop is the voters, but that's a bit too long, and the incentives and desires of the voting public get a bit too condensed into a few choices to matter [0].
What are you seeing out there as to how students feel about LLMs?
[0] funnily enough, like how the nodes in the neural net of an LLM get too saturated if they don't have enough parameters.
After a short stint as a faculty member at a McU institution, I agree with much of this.
Provide machine problems and homework as exercises for students to learn from, but assign a very low weight to these as part of the overall grade. Butt-in-seat assessments should make up the majority of the course grade for many courses.
>> (where everyone cheated anyways)
This is depressing. I'm late GenX, I didn't cheat in college (engineering, RPI), nor did my peers. Of course, there was very little writing of essays so that's probably why, not to mention all of our exams were in-person, paper-and-pencil (and this was 1986-1990, so no phones). Literally impossible to cheat. We did have study groups where people explained the homework to each other, which I guess could be called "cheating", but since we all shared, we tended to oust anyone who didn't bring anything to the table. Is cheating through college a common millennial / gen Z thing?
Even before LLMs, if you walked into any frat and asked to see their test bank, you'd get thousands of files. Though not technically cheating, having every test a professor ever gave was a huge advantage. Especially since most profs would just reuse tests and HWs without any changes anyway.
To my generation, it wasn't that cheating was a 'thing' as much as it was impossible to avoid. Profs were so lazy that any semi-good test prep would have you discover that the profs were phoning it in and had been for a while. Things like leaving the course page up with all the answers on it were unfortunately common. You could go and tell the prof, and most of us did, but then you'd be at a huge disadvantage relative to your peers who did download the answer key. Especially since the prof would still not update the questions! I want to make it clear: this was a common thing at R1 universities before LLMs.
The main issue is that at most R1s, the prof isn't really graded on their classes. That's maybe 5% of their tenure review. The thing they are most incentivized by is the amount of money they pull in from grants. I'm not all that familiar with R2 and below, but I'd imagine they have the same incentives (correct me if I'm wrong!). And with only ~35% of students going to R2 and below, the incentives for the profs teaching the other ~65% of students aren't well correlated with actually teaching those students.
Seems to me that studying a collection of every test over the years, without knowing what questions will be on the exam is... actually learning? >_<
It's a lot easier to memorize AABBCCBDDADBADABCCABAD than the actual information.
Did you have a lot of multiple choice tests in higher education? I know Americans used them a lot in high school, but didn't realise that extended to college.
Not really. I had fellow students who understood nothing, could not program at all, but could tell you the answer to question 6 of the 2015 Java exam because they had memorized it all.
Then I would hire that person to be a requirements & specifications archival expert! ;)
Don’t know about frats, but I went to a lowly ranked “third tier” university and a “top 10” one.
While most of the classes were taught pretty well at both, the third tier ones were taught much better. Just couldn’t get an interview upon graduation despite near 4.0…
It is utterly bizarre that we use graduate research dollars to evaluate the quality of undergraduate education.
Here's how cheating advanced since then.
1. People in the Greek system would save all homework sets and exams in a "library" for future members taking a given course. While professors do change (and a single professor will try to mix up problems), with enough time you eventually have an inventory of all the possible problems, to either copy outright or study.
2. Eventually a similar thing moved online, first with "black market" hired help, then with the likes of Chegg Inc.
3. All the students in a course join a WhatsApp or Discord group and text each other the answers. (HN had a good blog about this from a data science professor, but I can't find it now. College cheating has been mentioned many times on HN).
Cheap "universities" are fine for accreditation. Exams can be administered via in-person proctoring services, which test the bare minimum. The real test would be when students are hired, in the probationary period. While entry-level hires may be unreliable, and even in the best case not help the company much, this is already a problem (perhaps it can be solved by the government or some other outside organization paying the new hire instead of the company, although I haven't thought about it much).
Students can learn for free via online resources, forums, and LLM tutors (the less-trustworthy forums and LLMs should primarily be used to assist understanding the more-trustworthy online resources). EDIT: students can get hands-on-experience via an internship, possibly unpaid.
Real universities should continue to exist for their cutting-edge research and tutoring from very talented people, because that can't be commodified. At least until/if AI reaches expert competence (in not just knowledge but application), but then we don't need jobs either.
> Real universities should continue to exist for their cutting-edge research and tutoring from very talented people, because that can't be commodified. At least until/if AI reaches expert competence (in not just knowledge but application), but then we don't need jobs either.
Okay, woah, I hadn't thought of that. I'm sitting here thinking that education for its own sake is one of the reasons that we're trying to get rid of labor and make LLMs. Like, I enjoy learning and think my job gets in the way of that.
I hadn't thought that people would want to just not do education of any sort anymore.
That's a little mind blowing.
Some people go to college to learn, some go just to get a job. I think colleges should still exist for the former, but the latter should be able to instead use online resources then get accredited (which they'd do if it gave them the same job prospects).
That would also let professors devote more time towards teaching the former, and less time grading and handling grade complaints (from either group, since the former can also be graded by the accreditation and, if they get a non-academic job, in their probationary period).
I'm an autodidact. I've found leaked copies of university degree plans, pirated and read textbooks on all kinds of subjects, talk to experts for fun when I can etc.
American universities mostly get in the way of doing this sort of thing. You need a degree to be credentialed so you can get your "3 years of experience" that lets you apply for jobs. That's pretty much all its for these days.
> I'm an autodidact. I've found leaked copies of university degree plans, pirated and read textbooks on all kinds of subjects, talk to experts for fun when I can etc.
The last decade+ has been a goldmine for this, especially in computing-related topics. Between textbooks, school course sites, MOOCs etc, there's lifetimes of stuff out there.
> I enjoy learning and think my job gets in the way of that
Spot on, this gave me ideas, thank you for that!
There are excellent 1000-student lecture courses and shitty 15-student lecture courses. There are excellent take-home exams and shitty in-class exams. There are excellent grad student teaching assistants and shitty tenured credentialed professors. You can't boil quality down to a checklist.
No, but you can observe and react to trends. Remote courses have me sitting directly at the Distraction 9000 (my computer) and rely entirely on "self-discipline" for me to get anything out of them. This is fine for annual training that's utterly braindead and requires nothing from me but completing a basic quiz I get unlimited attempts at, so my employer can tell whatever government agency that I did the thing. If I want to actually get trained, however, I always do it in person, both because my employer covers those expenses (and who in the world turns down free travel?), and because I retain nothing from remote learning. Full stop.
Of course that's only my experience and I can't speak for all of humanity. I'm sure people exist who can engage in and utilize remote learning to its full potential. That said, I think it's extremely tempting to lean on it to get out of providing classrooms and providing equipment, and colleges have been letting the education part of their school rot for decades now in favor of sports and administrative bloat, so forgive me if I'm not entirely trusting them to make the "right" call here.
Edit: Also, on further consideration, remote anything (teaching very much included) requires a level of tech literacy that, at least in my experience, is still extremely optimistic to assume. The number of times we have to walk people through configuring a microphone, aiming a webcam, or sharing to the meeting, or the number of participants missed because Teams logged them out, or Zoom bugged out on their machine, or whatever. It just adds a ton of frustration.
On the edit: maybe two-way remote. One-way (read: remoting into conferences, music festivals, etc.) has been a revelation, and no more difficult to access than any other streaming service. I'm going to be sad to see YouTube's coverage of Coachella go away in a few years; losing SXSW was already quite painful.
I gather that that's not necessarily what you were referring to, but with the way that people tend to lump all remote experiences in the "inferior" basket together, I just wanted to point out that, in many cases, that kind of accessibility is better than the actual alternative: missing out.
I think this is where it's going to end up.
The masses get the cheap AI education. The elite get the expensive, small class, analog education. There won't be a middle class of education, as in the current system - too expensive for too little gain.
10 is a small number. There's a middle ground. When I studied, we had lectures for all students, and a similar amount of time in "work groups," as they were called. That resembled secondary education: one teacher, around 30 students, but those classes were mainly focused on applying the newly acquired knowledge, doing exercises, asking questions, checking homework, etc. Later, I taught such classes for programming 101, and it was perfectly doable. Work group teachers were also responsible for reviewing their students' tests.
But that commercially oriented boards are ruining education, that's a given. That they would stoop to this level is a bit surprising.
Very common. Large lecture with a professor, and small "discussion sections" with a grad student for Q/A, homework help, exam review.
All of my classes with a dozen students were better than all of my classes with 2 dozen. My favorite class had 7 students.
All degrees are basically the same, though, and 95% of the value is signaling; nobody really cares about the education part.
I see that pressure as well. I find that a lot of the problems we have with AI are in fact AI exposing problems in other aspects of our society. In this case, one problem is that the people who do the teaching and know what needs to be learned are the faculty, but the decisions about how to teach are made by administrators. And another problem is that colleges are treating "make money" as a goal. These problems existed before AI, but AI is exacerbating them (and there are many, many more such cases).
I think things are going to have to get a lot worse before they get better. If we're lucky, things will get so bad that we finally fix some shaky foundations that our society has been trying to ignore for decades (or even centuries). If we're not lucky, things will still get that bad but we won't fix them.
Instructors and professors are required to be subject matter experts but many are not required to have a teaching certification or education-related degree.
So they know what students should be taught but I don't know that they necessarily know how any better than the administrators.
I've always found it weird that you need teaching certification to teach basic concepts to kindergartners but not to teach calculus to adults.
> Instructors and professors are required to be subject matter experts but many are not required to have a teaching certification or education-related degree.
I attended two universities to get my computer science degree. The first was somewhat famous/prestigious, and I found most of the professors very unapproachable; they cared little about "teaching well". The second was a no-name second-tier public uni, but I found the professors much more approachable, and they made more effort to teach well. I am still very conflicted about that experience. Sadly, the students were way smarter at the first uni, so the intellectual rigor of discussions was much higher than at my second uni. My final thoughts: "You win some; you lose some."

This is universal. I’ve had largely the same experience. There are several reasons for this.
1. Stupider people are better teachers. Smart people are too smart to have any empathy for what it’s like to not get something. They assume the world is smart like them, so they gloss over topics they found trivial but most people find confusing.
2. They don’t need to teach. If the student body is that smart, then the students themselves can learn without being taught.
3. Since students learn so well, there’s no way to differentiate them. So institutions make the material harder. They do this to differentiate students and give rankings. Inevitably this makes the education worse.
It's simpler than that. "Prestigious" universities emphasize research prestige over all else on faculty. Faculty optimize for it and some even delight in being "hard" (bad) teachers because they see it as beneath them.
Less "prestigious" universities apply less of that pressure.
It can also be different within the same university, by department. I graduated from a university with a highly ranked and research oriented engineering department. I started in computer engineering which was in the college of engineering but ended up switching to computer science which was in the college of arts and sciences. The difference in the teachers and classroom experience was remarkable. It definitely seemed like the professors in the CS department actually wanted to teach and actually enjoyed teaching as compared to the engineering professors who treated it like it was wasting their time and expected you to learn everything from the book and their half-assed bullet point one way lectures. Unfortunately or fortunately, depending on your view, it also meant having to take more traditional liberal arts type electives in order to graduate.
I did once have a Physics lecturer say "When I took Quantum Mechanics back in my undergrad, I got an A but didn't actually understand anything", and then in the same lecture, 20 minutes later: "What part of this do you not understand?" when the entire class was just blankly looking at the whiteboard.
At least at the undergrad level, it's not impossible to get an "A" without actually learning anything. Especially Freshman/Sophomore level classes. You just cram for the exams and regurgitate what you memorized. Within a few months' time it's mostly gone.
Seriously, what's so non-understandable in the first 20 minutes of QM?
Probably depends on how it’s explained, no?
I could make arithmetic incomprehensible, let alone QM.
They never implied it was the first 20 minutes of the entire course
That's been my experience too, and I think it actually makes perfect sense from an evolutionary perspective - if the students are smart enough to learn well regardless of the level of the instruction, then the professors don't face any pressure to improve.
Taking this to the extreme, I think that a top-tier university could do very well for itself by only providing a highly selective admission system, good facilities and a rigorous assessment process, while leaving the actual learning to the students.
Universities don’t pick professors because they are good teachers, they pick them for their research publications. The fact that some professors end up being good teachers is almost coincidental.
For the most part, at most universities, that is true. I was dissatisfied with the quality of my undergrad college education, and had the resources to try other universities. After two state schools, I figured out that Boston is The University City, with 700,000 college students in the larger Boston area, when I attended Boston University, MIT and Harvard. I found that Boston's oversized undergraduate population created a credit-sharing system across all the Boston-area colleges, and if one wanted, they could just walk onto another campus and take the same class offered at their own university. So, of course, I took all the classes I could at Harvard. I was formally an engineering student at BU, but as far as the professors at Harvard and MIT knew, I was a student at their school. What I found was that at Harvard, and about 75% of the time at MIT, the professors are incredibly good; they are the educational best, self-actualizing as teachers. Every single Harvard professor took a personal interest in my learning their subject. I saw that nowhere else.
Yeah at that level you’re basically optimizing for all around excellence, and it’s hard to be a leader in your field without also being deeply interested in it at all levels — and being reasonably charismatic.
I’ve only taken classes at state schools, and my experience was that I’d often get a professor that was clearly brilliant at publishing but lacked even the most rudimentary teaching skills. Which is insightful in its own way…just not optimal for teaching.
This is true for research universities. There are many excellent teaching colleges where professors are hired to teach, and don't do research.
Sounds more like the unfortunate differences between teaching professors and research professors. Unfortunately some research schools force professors to teach N credits per semester even if that is not their speciality.
Your approach sounds too elitist to me. I think you simply figure out the core skills of your professors. Maybe some teach undergrads well, others only advanced degrees. Maybe some should just be left to research with minimal classroom time, etc.
I rather think it is an elitist concept of "I am a highly respected professor at an elite uni, how dare you bother me with your profane questions!"
I was at a uni aiming for and then gaining "Elite" status in Germany, and I did not like the concept or the changes.
I like high-profile debates. As high as possible. But I don't like snobbery. We all started as newbs.
>I've always found it weird that you need teaching certification to teach basic concepts to kindergartners but not to teach calculus to adults.
There is a lot more on the plate when you are a kindergarten teacher, as the kids need a lot of supervision and teaching outside the "subject" matter: basic life skills, learning to socialize.
Conversely, at a university the students should generally handle their life without your supervision, you can trust that all of them are able to communicate and to understand most of what you communicate to them.
So the subject-matter expertise in kindergartens is how to teach stuff to kids. It's not about holding a fork, or not pulling someone's hair. Just as the subject-matter expertise in a university can be maths. You rarely have both, and I don't understand how you suggest people get both a PhD in maths, do enough research to become a professor, and at the same time get a degree in education.
I was an instructor for a college-credit-eligible certification course. While I think an education degree is more than you need, providing effective and engaging instruction is a skill and is part of actual teaching at any level. Concepts like asking a few related open-ended, no-right-answer questions at the beginning of a new topic to prime students’ thinking about that topic. Asking specific students “knowledge check” or “summarize/restate this topic” questions throughout keeps students from checking out. Alternating instruction with application-type exercises helps solidify concepts. Lesson plans/exercises/projects that build on each other and reincorporate previous topics. Consideration of how to assess students between testing and projects, for example a final vs. a capstone project.
If you are just providing materials and testing, you aren’t actually teaching. Of course there are a ton of additional skills that go into childhood development, but saying that adults should just figure it out and that regurgitating material counts as “teaching” is BS.
Just watch out for who is certifying how things should be taught. It’s honestly one reason education is so bad and so slow to change.
Edit: and why perfectly capable professionals can’t be teachers without years of certification
> I've always found it weird that you need teaching certification to teach basic concepts to kindergartners but not to teach calculus to adults.
I think this is partially due to the age of the students, by the time you hit college the expectation is you can do a lot of the learning yourself outside of the classroom and will seek out additional assistance through office hours, self study, or tutors/classmates if you aren't able to understand from the lecture alone.
It's also down to cost cutting: instead of having entirely distinct teaching and research faculty, universities require all professors to teach at least one class a semester. Usually, though, the large freshman and sophomore classes do get taught by quasi-dedicated 'teaching' professors instead of a researcher ticking a box.
>don't know that they necessarily know how any better than the administrators.
If someone is doing something day in and day out, they do gain knowledge of what works and what doesn't. So just by doing that, the professors typically know much more about how people should be taught than the administrators do. Further, the administrators' incentives are not aligned towards ensuring proper instruction. They are aligned with increasing student enrollment and then cashing out whenever they personally can.
This is very different in France. Studying to be a teacher at university level is a big deal.
Since the reform of University administration circa 2011, there has been a big push towards 'evaluation continue' (basically regular tests), which now lasts until your third year at some universities, to make public universities more like private schools, and away from 'partiels' (two big batteries of standardized in-person tests, with thousands of students in the same area, pen and paper only, one in early January and the second in May, every year, over a week).
That push was accelerated because of COVID, but with the 'AI homework' problem, it gave teachers a possibility to argue against that move, and the trend seemed to have stopped last year (I don't know yet if it has reverted). In any case, I hope this AI trend will give more freedom to teachers, and maybe new ways of teaching.
And I'm not a big LLM fan in general, but in my country, in higher education, it seems good overall.
Ah, that's interesting, thanks. I lived in France in 2001-2 and was friends with someone who was studying for his partiels to become a chemistry teacher.
Once you're an adult some of the best lessons come from having bad teachers.
Nobody knows "how" things should be taught. Pedagogy is an utter disaster.
I am pretty sure that early childhood education (until fifth grade) is a very active area of research in all highly developed nations. Almost by definition, if you want to (a) become or (b) stay a highly developed nation, you need to have a high-quality public education system.
My mother was a first grade teacher for 30+ years. In her school system, first grade is the year that students learn to read. Each year, she was also required to take professional training classes for a certain number of days. She told me that, in her career, there were many changes and improvements and new techniques developed to help children learn how to read. One thing that changed a lot: The techniques are way more inclusive, so non-normie kids can learn to read better at an earlier age.
A PhD was historically a teaching degree: that’s what the D stands for.
No?
PhD - Philosophy Doctor
Doctor is Latin for teacher; cf. "doctrine", "docent".
The instructors may not know the absolute best way to teach, but I think they do know more than the administrators. All my interaction with teacher training suggests to me that a large proportion of it is basically vacuous. On dimensions like the ones under discussion here (e.g., "should we use AI", "can we do this class online"), there is not really anything to "know": it's not like anyone is somehow a super expert on AI teaching. Teacher training in such cases is mostly just fads with little substantive basis.
Moreover, the same issues arise even outside a classroom setting. A person learning on their own from a book vs. a chatbot faces many of the same problems. People have to deal with the problem of AI slop in office emails and restaurant menus. The problem isn't really about teaching, it's about the difficulty of using AI to do anything involving substantive knowledge and the ease of using AI to do things involving superficial tasks.
I totally agree. I think the neo-liberal university model is the real culprit. Where I live, Universities get money for each student who graduates. This is up to 100k euros for a new doctorate. This means that the University and its admin want as many students to graduate as possible. The (BA&MA) students also want to graduate in target time: if they do, they get a huge part of their student loans forgiven.
What has AI done? I teach a BA thesis seminar. Last year, when AI wasn't used as much, around 30% of the students failed to turn in their BA theses. A 30% drop-out rate was normal. This year: only 5% dropped out, while the amount of ChatGPT-generated text has skyrocketed. I think there is a correlation: ChatGPT helps students write their theses, so they're not as likely to drop out.
The University and the admins are probably very happy that so many students are graduating. But also, some colleagues are seeing an upside to this: if more graduate, the University gets more money, which means fewer cuts to teaching budgets, which means that the teachers can actually do their job and improve their courses, for those students who are actually there to learn. But personally, as a teacher, I'm at a loss as to what to do. Some theses had hallucinated sources, some had AI slop blogs as sources, the texts are robotic and boring. But should I fail them, out of principle on what the ideal University should be? Nobody else seems to care. Or should I pass them, let them graduate, and reserve my energy to teach those who are motivated and are willing to engage?
I think one of the outcomes might be a devaluation of the certifications offered in the public job marketplace.
I can say from some working experience in the United States that way too many jobs require a university degree. I remember, as an intern or at my first job after uni (which I struggled a great deal to complete), looking around and thinking: "There is no way that all of these people need a uni degree to do their jobs." I couldn't believe how easy work was compared to my uni studies (it was hell). I felt like I was playing at life with a cheat code (infinite lives, or whatever). I don't write that to brag; I am sure many people here feel the same. So many jobs at mega corps require little more than common sense: Come to work on time, dress well, say your pleases and thank yous, be compliant, do what is asked, etc. Repeat and you will have a reasonable middle class life.
Then there's Europe, where making it easy to get a master's degree just led to jobs requiring people to waste time getting yet another unneeded degree.
This entire situation is predictable, and I personally called it out years ago - not because of some unique ability, but because this is what happened in India and China decades upon decades ago.
There are only so many jobs that give you a good salary.
So everyone had to become a doctor, lawyer, or engineer. Business degrees were seen as washouts.
Even for the job of a peon, you had to be educated.
So people followed incentives and got degrees - in any way or form they could.
This meant that degrees became a measure, and they were then ruthlessly optimized for, till they stopped having any ability to indicate that people were actually engineers.
So people then needed more degrees and so on - to distinguish their fitness amongst other candidates.
Education is what liberal arts colleges were meant to provide - but this worked only in an economy that could still provide employment for all the people who never wanted to be engineers, lawyers or doctors.
This mess will continue constantly, because we simply cannot match/sort humans, geographies, skills, and jobs well enough - and verifiably.
Not everyone is meant to be a startup founder. Or a doctor. Or a plumber, or a historian or an architect or an archaeologist.
It’s a jobs-market problem, and has been ever since the American economy stopped being able to match people with money for their skills.
Yep, it's a job market problem. Only degrees that are somehow limited in their supply will continue to hold value, the rest approach worthlessness. Neither the state nor universities have any interest to limit the supply.
In my country, doctors earn huge salaries and have 100% job security, because their powerful interest groups have successfully lobbied to limit the number of grads below the job market's demand. Other degrees don't come even close.
I agree. I tend to think though that the best way forward is to ignore all of these education issues and just focus on raising the floor. The difference between a "good-paying job" and a "not-so-good-paying job" should be small, and everyone should be able to have a good life regardless of what job they have. Then people can choose to go to college if they want to learn about things, and maybe to learn about subjects related to a job they want, but not because they think it's a way to make more money.
Well, see Germany. They do it pretty well. The expected lifetime earnings of university graduates and of those who took the trade/apprenticeship route are very similar. Does anyone know of other countries that are similar? Is it also true in Austria or Switzerland?
> Some theses had hallucinated sources, some had AI slop blogs as sources, the texts are robotic and boring. But should I fail them, out of principle on what the ideal University should be?
No, you should fail them for turning in bad theses, just like you would before AI.
That's probably what should happen, but it's not what happens in reality. In grading I have to follow a very detailed grading matrix (made by some higher-ups) and the requirements for passing and getting the lowest grade are so incredibly low that it's almost impossible to fail, if the text even somewhat resembles a thesis. The only way I could fail a student, is if they cheated, plagiarised or fabricated stuff.
The person who used the AI slop blog for sources, we asked them to just remove them and resubmit. The person who hallucinated sources is, however, getting investigated for fabrication. But this is an incredibly long process to go through, which takes away time and energy from actual teaching / research / course prep. Most of the faculty is already overworked and on the verge of burnout (or recovering post-burnout), so everybody tries to avoid it if they can. Besides, playing a cop is not what anybody wants to do, and it's not what teaching should be about, as the original blog post mentioned. If the University as an institution had some standards and actually valued education, it could be different. But it's not. The University only cares about some imaginary metrics, like international rankings and money. A few years ago they built a multi-million datacenter just for gathering data from everything that happens in the University, so they could make more convincing presentations for the ministry of education — to get more money and to "prove" that the money had a measurable impact. The University is a student-factory (this is a direct quote from a previous principal).
Yeah, our information and training systems are kinda failing at dealing with the reality of our actual information environment.
Take law and free speech, for example - a central tenet of a functional democracy is having effective ways to trade ideas.
A core response in our structure to falsehoods and rhetoric is counter speech.
But I can show you that counter speech fails. We have reams upon reams of data inside tech firms and online communities that shows us the mechanics of how our information economies actually work, and counter speech does diddly squat.
Education is also stuck in a bind. People need degrees to be employable today, but the idea of education is tied up with the idea of being a good educated thinking human being.
Meaning you are someone who is engaged with the ideas and concepts of your field, and have a mental model in your head, that takes calories, training and effort to use to do complex reasoning about the world.
This is often overkill for many jobs - the issue isn’t doing high-level stats in a data science role, it’s doing boring data munging and actually getting the data in the first place. (Just an example.)
High quality work is hard, and demanding, and in a market with unclear signals, people game the few systems that used to be signals.
Which eventually deteriorated the signal till you get this mess.
We need jobs that give a living wage, or provide a pathway to achieving mastery while working, so that the pressure on the education lever can be reduced and spread elsewhere.
> A core response in our structure to falsehoods and rhetoric is counter speech.
> But I can show you that counter speech fails
Could you show me that? What's your definition of failure?
I get the feeling that you aren’t asking for the short version, because most people wouldn’t latch onto that point and create an account for it.
Hmmm.
An example - the inefficacy of fact-checking efforts. Fact-checking is quintessentially counter speech, and we know that it has failed to stop the uptake and popularity of falsehoods. And I say this after speaking to people who work at fact-checking orgs.
However, this is in itself too simple an example.
The mechanics of online forums are more interesting to illustrate the point - Truth is too expensive to compete with cheaper content.
Complex articles can be shared on a community, which debunk certain points, but the community doesn’t read it. They do engage heavily on emotional content, which ends up supporting their priors.
I struggle to make this point nicely, but the accuracy of your content is secondary to its value as an emotional and narrative utility for the audience.
People are not coming online to be scientists. They are coming online to be engaged. Counter speech solves the issue of inaccuracy, and is only valuable if inaccuracy is a negative force.
It is too expensive a good to produce, vs alternatives. People will coalesce around wounds and lacunae in their lives, and actively reject information that counters their beliefs. Cognitive dissonance results in mental strife and will result in people simply rejecting information rather than altering their priors.
Do note - this is a point about the efficacy of this intervention in upholding the effectiveness of the market where we exchange ideas. There will be many individual exchanges where counter speech does change minds.
But at a market level, it is ineffective as a guardian and tonic against the competitive advantage of falsehoods against facts.
——
Do forgive the disjointed quality in the response. It’s late here, and I wish I could have just linked you to a bunch of papers, but I dont think that would have been the response you are looking for.
I think this 3-part essay might be relevant to your argument: https://www.e-flux.com/journal/147/623330/society-of-the-psy...
I’ve been recommending Network Propaganda recently. The book has the data that makes the case better than I can about structural issues in the information ecosystem.
Also started going through this legal essay (paper?) recently, Lies, Counter-lies, and Disinformation in the Marketplace of Ideas
https://www.repository.law.indiana.edu/cgi/viewcontent.cgi?a...
The book "Nexus" by Yuval Noah Harari essentially makes this same point. The way he phrases it is that information's primary role throughout history hasn't necessarily been to convey objective truth but to connect people and enable large scale cooperation. So more information is not necessarily better.
Worth a read or you can check out one of his recent podcast appearances for a quicker download.
> The University is a student-factory
In The Netherlands, we have a three-tier tertiary system: MBO (practical job education / trades), HBO (college job education / applied college) and WO (scientific education / university).
A lot of the fancy jobs require WO. But in my opinion, WO is much too broad a program, because it tries to both create future high tier workers as well as researchers. The former would be served much better by a reduced, focused programme, which would leave more bandwidth for future researchers to get the 'true' university education they need.
> In grading I have to follow a very detailed grading matrix (made by some higher-ups) and the requirements for passing and getting the lowest grade are so incredibly low that it's almost impossible to fail, if the text even somewhat resembles a thesis. The only way I could fail a student, is if they cheated, plagiarised or fabricated stuff.
This is another example of "AI is exacerbating existing problems". :-) That kind of grading policy is absurd and should never have existed in the first place, but now AI is really making that obvious.
I've talked with professors at a major US research university. The Master's students are all paying a lot of money to get a credential. That's the transaction. The professors don't really care about cheating as long as the students go through the motions of completing the assigned work. It's just a given, and like you say, it takes more time than they have to go through the academic dishonesty process for all the students who are getting outside help or (now) using AI.
> The person who used the AI slop blog for sources
That phrase is so utterly dystopian. I am laughing, but not in a good way. You should fail them.
The larger work that the intellectual and academic forces of a liberal democracy do is that of “verification”.
A core part of the output is showing that the output is actually what it claims to be.
The reproducibility crisis is a problem precisely because a standard was missed.
In a larger perspective, we have mispriced facts and verification processes.
They are treated as public goods, when they are hard to produce and uphold.
Yet they compete with entertainment and “good enough” output, that is cheaper to produce.
The choice to fail or pass someone doesn’t address the mispricing of the output. We need new ways to address that issue.
Yet a major part of the job you do is to hold the result up to a standard.
You and the institutions we depend on will continue to be crushed by these forces. Dealing with that is a separate discussion from the pass or fail discussion.
Fail them. Only let AI-generated text that has been verified and edited to be true pass.
If they want to use AI make them use it right.
> Some theses had hallucinated sources, some had AI slop blogs as sources, the texts are robotic and boring. But should I fail them, out of principle on what the ideal University should be?
I don't think you should fail them - instead, give them feedback on how to improve their thesis themselves, and how to make better use of tools like ChatGPT.
If, instead of flat out failing to turn in their thesis, they are submitting work that needs more iteration due to bad use of AI, that sounds like a net win to me. The latter can be turned into something useful.
In Australia, universities that offer remote study have places in large cities where people can sit proctored exams. The course is done remotely, but the exam, which is often 50%+ of the final grade, is done at a place that offers proctored exams as a service.
Can't this be done in the US as well ?
The Open University in the UK started in 1969. Their staff have a reputation for good interaction with students, and I have seen very high quality teaching materials produced there. I believe they have always operated on the basis of remote teaching but on-site evaluation. The Open University sounds like an all-round success story and I'm surprised it isn't mentioned more in discussions of remote education.
Variations in this system are in active use in the US as well.
Do you feel it is effective?
It seems to me that there is a massive asymmetry in the war here: proctoring services have tiny incentives to catch cheaters. Cheaters have massive incentives to cheat.
I expect the system will only catch a small fraction of the cheating that occurs.
> I expect the system will only catch a small fraction of the cheating that occurs.
The main kind of cheating we need them to prevent is effective cheating - the kind that can meaningfully improve the cheater's score.
Requiring cheaters to put their belongings in a locker, use proctor-provided resources, and sit in a monitored, proctor-provided room puts substantial limits on effective cheating. That's pretty much the minimum that any proctor does.
It may not stop 100% of effective cheating 100% of the time, but it would make a tremendous impact in eliminating LLM-based cheating.
If you're worried about corrupt proctors, that's another matter. National brands that are both self- and externally-policed and depend on a good reputation to drive business from universities would help.
With this system, I expect that it would not take much to avoid almost all the important cheating that now occurs.
Remote proctoring programs, at least, are pretty rough these days. Their environment requirements are pretty exacting, and then they expect you to just stare at the screen and think for basically the whole exam. Minor, normal webcam problems can invalidate the entire exam through no fault of the examinee, and if you look around or fidget a lot it can trigger their cheat detection as well. I'm glad I finished my test-taking time before it became the norm.
I had to retake a multi-hour proctored test (and only got to do so after a ridiculous amount of back and forth with the school) because my cat jumped up on my computer table while I was taking it, and I looked over at her and gave her a few pets before looking back at the screen. Not joking in the least. It was maddening.
What do they do if you don't have a webcam? Or if your webcam is broken? Or if you don't feel comfortable sharing your video?
They tell you to come back when you’re ready to take the test. This can’t be surprising…
Most institutions use a remote test proctoring service. Before the test begins, the service verifies that all the requirements are met.
Anything short of all the requirements and they do not administer the test. In the case something is broken, there is usually a window rather than a specific time when you can take the test, so you have an opportunity to resolve it.
If a webcam is required, students are informed about this requirement before enrolling in the institution, and are usually required to purchase hardware capable of doing this as a condition of enrollment.
You get a working web cam. It's a requirement for many remote proctoring services and if you don't have access to one you're screwed.
I get why they use it, without it there's no way to know you're not on your phone or another device cheating since they can only really see what's on the device you've installed the proctor software/rootkit on.
Sadly, the Linus Tech Tips video of him taking the CompTIA A+ exam has been taken down after threatening letters from CompTIA, but they demanded a basically barren room, 360° photos, and spotless webcams.
Webcam is broken is now pretty universally interpreted as I don’t want to be on video.
But it also only catches cheating on exams. For homework/projects, you can't really have that be in person.
My take:
- Make “homework” ungraded. Many college classes already do this, and it has been easy to cheat on it way before AI by sharing solutions. Knowledge is better measured in exams and competence in projects. My understanding is that homework is essentially just practice for exams, and it’s only graded so students don’t skip it then fail the exams; but presumably now students cheat on it then fail exams, and for students who don’t need as much practice it’s busywork.
- Make take-home projects complex and creative enough that they can’t be done by AI. Assign one large project with milestones throughout the semester. For example, in a web development class, have students build a website, importantly that is non-trivial and theoretically useful. If students can accomplish this in good quality with AI, then they can build professional websites so it doesn’t matter (the non-AI method is obsolete, like building a website without an IDE or in jQuery). Classes where a beyond-AI-quality project can’t be expected in reasonable time from students (e.g. in an intro course, students probably can’t make anything that AI couldn’t), don’t assign any take-home project.
- If exams (and maybe one large project) aren’t enough, make in-class assignments and projects, and put the lectures online to be watched outside class instead. There should be enough class time, since graded assignments are only to measure knowledge and competence; professors can still assign extra ungraded assignments and projects to help students learn.
In summary: undergraduate college’s purpose is to educate and measure knowledge and competence. Students’ knowledge and competence should be measured via in-class assignments/exams and, in later courses, advanced take-home projects. Students can be educated via ungraded out-of-class assignments/projects, as well as lectures, study sessions, tutoring, etc.
> My understanding is that homework is essentially just practice for exams
There are a LOT of people that don't take exams well. When you combine that with the fact that the real world doesn't work like exams in 90% of cases, it makes a lot of sense for grades to _not_ be based on exams (as much as possible). Going the other direction (based on nothing _but_ exams) is going to be very painful for a lot of people; people that do learn the material but don't test well.
I made another comment on this thread about that. Exams should test important knowledge (not computation or trick questions), so they should be easy for students who learned the material, even those who traditionally have trouble with exams. Most of the grade should be frequent in-class assignments or long take-home projects, which test almost if not exactly the same skills students would use professionally (e.g. debug a simulated server failure in class; develop a small filesystem with a novel feature at home).
The in-class assignments should also be easier than the take-home projects (although not as easy as the exams). In-class assignments and exams would be more common in earlier classes, and long projects would be more common in later classes.
> proctoring services have tiny incentives to catch cheaters. Cheaters have massive incentives to cheat.
If they don’t catch them they don’t have a business model. They have one job. The University of London, Open University and British Council all have 50+ years experience on proctoring university exams for distance learning students and it’s not like Thomson Prometric haven’t thought about how to do it either, even if they (mostly?) do computerised exams.
The problem is that when you outsource compliance (in this case, catching cheaters), the business model only requires being able to say that everyone did their best, and you don't necessarily need to actually do your best in order to say that.
Well, they do need to catch someone, but they don't have much incentive to avoid false positives. Catching someone who did not cheat but failed to follow all the draconian rules is probably a lot easier than catching an actual cheater.
And I daresay most of the corporate certs from companies like Microsoft and Red Hat probably have pretty well-proctored exams too. To what degree their processes are applicable to a University environment, I don't know.
I took one last year. Microsoft offer a choice of remote proctoring or in-person at a Pearson VUE test centre. I chose the latter.
You put your stuff in a locker. They compare your face to some official photo ID and take your photo. You sit the test. They print out your results along with your mugshot. That's it. It was very painless.
Teachers typically also have years, sometimes decades, of experience running exams. Yet I've never seen a teacher that is any good at stopping cheating. And that's in person for the class that they are teaching.
> Teachers typically also have years, sometimes decades, of experience running exams. Yet I've never seen a teacher that is any good at stopping cheating. And that's in person for the class that they are teaching.
The difference is running exams is a small part of a teacher's job, and almost certainly not the part they're passionate about.
Also proctors demand things I've seen no teacher at any level demand (or be able to demand).
> I expect the system will only catch a small fraction of the cheating that occurs.
It'll depend a lot on who/where/how is doing the screening and what tools (if any) are permitted.
Remember that bogus program for TI8{3,4} series calculators that would clear the screen and print "MEMORY CLEAR"? If the proctor was just looking for that string and not actually jumping through the hoops to _actually_ clear the memory then it was trivial to keep notes / solvers ... etc on the calculator.
It's actually somewhat of a challenge to display "Mem cleared" without access to the lowercase font. You have access to any uppercase character, spaces, and BASIC functions. With stat vars, you also get lowercase "a" "b" "c" "d" "e" and "r". And you can display text at a specific row and column.
I ended up displaying "M" "e" "min(" "c" "log(" "e" "a" "r" "e" "d". Then covered up the "in(" with spaces.
Then you lower your contrast for the full effect.
Was I at university in a small window in time when the TI-89 and TI-92 were allowed?
In the years since, I’ve only ever heard mention of older models, not newer ones which makes me wonder if this is a special case and situation where technology is frozen in time intentionally to foster learning.
I was in such a window. TI-89 was allowed by mistake, we were allowed to keep using it since it was expensive. Next year they were back on TI-83s.
Oh yes, they're frozen in time, but since the people who pay for them are not the same people who demand they must be used, they're not frozen in price. It's the most expensive kilobytes you'll ever buy.
They are not "older models", just lower end. The TI-92 came out in 1995 and was discontinued in 1998. The TI-83 was introduced in 1996 and discontinued in 2004. The TI-89 came out in 1998 and was discontinued in 2004.
At my high school we were allowed to have TI-83s but not TI-89s, because the 89s had a built-in CAS (computer algebra system) and could do your algebra homework for you. When I went to college I already had an 83, so I didn't feel the need to upgrade.
I wasn’t allowed anything more complex than a Casio FX-300ES. Even my 991ES wasn’t allowed, let alone something like a TI83/4. This (from what I’ve heard) is pretty standard in Canadian universities for calc 1-3, linear algebra, discrete, etc.
Supposed to be the same thing in the UK but no one cares. In fact most of our students (undergrad mathematics) appear to have HP Prime now which has a full CAS built in. The questions are designed to break the CAS sometimes. Try expanding (a-2b)^1000 on a calculator to get a coefficient out. It gets stuck and hoses the whole calculator.
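For what it's worth, the coefficient itself drops straight out of the binomial theorem, no CAS needed; here is a minimal sketch in Python (the exponent k below is just an illustrative choice, not taken from any actual exam question):

    # Coefficient of a^k * b^(1000-k) in (a - 2b)^1000,
    # i.e. C(1000, k) * (-2)^(1000-k) by the binomial theorem.
    from math import comb

    k = 3  # illustrative exponent of a
    coefficient = comb(1000, k) * (-2) ** (1000 - k)
    print(coefficient)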
You can't stop people from hiring someone who looks similar to sit the exam, or from receiving messages in Morse code via Bluetooth. It's hard to stop a palm card.
But it stops a casual cheater from having ChatGPT on a second device.
You can.
I did a remote proctored exam for the NREMT last year. They had me walk the camera around the room, under the desk, etc. All devices had to be in my backpack. No earbuds. They made me unplug the conference TV on the wall, lift picture frames, etc. I had to keep my hands above the table the whole time; I couldn't even look down to scratch an itch. They installed rootkit software and closed down all of the apps other than the browser running the test. They killed a custom daemon I run on my own PCs. They record from the webcam the whole time and have it angled so they can see you. They record audio the whole time. I accidentally alt-tabbed once and muted the mic with a stray keyboard shortcut; those were my first and second warnings within 5 seconds.
When you take the test in a proctored testing center location they lock all of your stuff in a locker, check your hands, pockets, etc. They give you earplugs. You use their computer. They record you the whole time. They check your driver's license and take a fingerprint.
Those methods would stop a large % of your attack vectors.
As do the repercussions:
A candidate who violates National Registry policies, or the test center's regulations or rules, or engages in irregular behavior, misconduct and/or does not follow the test administrator's warning to discontinue inappropriate behavior may be dismissed from the test center. Exam fees for candidates dismissed from a test center will not be refunded. Additionally, your exam results may be withheld or canceled. The National Registry of EMTs may take other disciplinary action such as denial of National EMS Certification and/or disqualification from future National Registry exams.
At a minimum you're paying the $150 fee again, waiting another month to get scheduled and taking another 3 hours out of your day.
I'd rather go take the test in person than subject myself to such extreme surveillance of my own premises.
I'd agree, but I did it at work in a conference room. And I was able to schedule a day out virtually instead of a month out for in person, and I didn't want them taking my fingerprint.
I used a spare laptop I wipe.
Making them surveil your employer instead is not a bad idea either.
I'd pit my megacorp's security against theirs any day of the week, but as I said I just used and wiped a laptop just for the test.
I took 3 CompTIA certification tests at a community college testing center. This was the procedure, more or less.
> When you take the test in a proctored testing center location they lock all of your stuff in a locker, check your hands, pockets, etc. They give you earplugs. You use their computer. They record you the whole time. They check your driver's license and take a fingerprint.
While attending there, I also took a virtual Calculus class. The instructor was based in the satellite campus, several miles away. The virtual class required a TI graphing calculator, used Pearson textbook & video lectures, and all the tests and quizzes were in Canvas. I worked from home or the main campus, where there was a tutoring center, full of students and tutors making the rounds to explain everything. I received tutoring every other week.
But then our instructor posted the details of our final exam. We were expected to arrive in person, for the first time in the semester, at that satellite campus at specified times.
I protested, because everything I'd ever done was on the main campus, and I rode public transit, and the distance and unfamiliarity would be a hardship. So the disability services center accommodated me.
They shut me into a dimly lit one-person room with a desk, paper, and pencil (I believe there was a camera), and no calculator was required. The instructor had granted an extended period to complete the exam, and I finished at the last possible moment. I was so thankful to be done and have good results, because I had really struggled to understand Calculus.
wow, that’s intense. i wonder how much actual cheating they must have caught to arrive at such a draconian model. it would be interesting if they published their statistics to make it clear whether all these things are truly necessary.
What stats would convince you? A woman was jailed in the UK last week for taking in person tests on behalf of others. She wore a variety of wigs to fool test centre staff. Where there's demand there's people who will try to supply it.
i guess i would expect them to publish some rates of disciplinary actions per sitting and the type of attempted behavior.
ex “1% of test takers were disciplined for attempting to contact someone for help using a disallowed electronic device surreptitiously”
minimally, as deterrence
The remote proctored exam is a major invasion of privacy, but nevertheless, there's at least a dozen ways you could cheat despite all of that.
I fear that remote proctoring is liable to produce more false positives if they flag actions that "might" indicate cheating behavior but can't reach in and unveil your secret cheat sheet or identify your accomplice. I don't know the whole process after the remote proctor flags something, but it would seem more difficult for the student to defend their innocence.
It's quite unfair of them to basically say "we're not competent enough as proctors to come up with evidence of guilt, so we'll use a guilty-until-proven-innocent system instead."
Both of those are so hard and so expensive that usually just learning the material is more practical.
LLMs and remote exams changed the equation so now cheating is incredibly easy and super effective compared to trying to morse code someone with a button in your shoe.
From what I've seen it works.
There is definitely a war between cheaters and people catching them. But a lot of people can't be bothered and if learning the material can be made easier than cheating then it will work.
You can imagine proctoring halls of the future being Faraday cages with a camera watching people do their test.
Local LLMs are almost here, no Internet needed!
Almost?
I've been running a programming LLM locally with a 200k context length, using system RAM.
It's also an abliterated model, so I get none of the moralizing or forced ethics either. I ask, and it answers.
I even have it hooked up to my HomeAssistant, and can trigger complex actions from there.
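For the curious, one common way to do this (an assumption on my part; the commenter doesn't say which stack they use) is to serve a model locally with Ollama and hit its HTTP API on localhost, with no internet connection involved. The model name below is just a placeholder.

    import json
    import urllib.request

    def ask_local_llm(prompt, model="qwen2.5-coder"):
        # POST to the local Ollama server; nothing leaves the machine.
        body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
        req = urllib.request.Request(
            "http://localhost:11434/api/generate",
            data=body,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["response"]

    print(ask_local_llm("Write a Python function that reverses a string."))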
Way back, like 25 years ago, in what we call high school in the US, my statistics teacher tried her damnedest to make final exams fair. I sat next to someone I had a huge crush on, and offered to take their exam for them. I needed a ‘c’ to ace the class, and she needed an ‘a’ to pass. 3 different tests and sets of questions/scantrons. I got her the grade she needed; she did not get me the grade I needed.
So to your point, it’s easy to cheat even if the proctor tries to prevent it.
I am confused by your pronouns and other plot holes.
You wanted to "ace the class", which is an "A" on your final report card? But your crush's exam tanked your grade? You passed the class anyway, right?
Did you swap Scantrons, then, and your crush sat next to you, writing answers on the dgfitz forms?
She wouldn't pass without an "A" on the exams, so her running point total was circling the drain, and your effort gave her a "C-" or something?
In what ways did your teacher make the exams "fair"? What percentage of the grade did they comprise?
Were the 3 tests administered on 3 separate occasions, so nobody caught you repeatedly cheating the same way?
> Were the 3 tests administered on 3 separate occasions, so nobody caught you repeatedly cheating the same way?
I imagine that it would be utterly trivial for two people to nearly-undetectably cheat in this way, by both of them simply writing the other person's name on their exam.
My impression was that in high school, girls and boys had pretty distinct handwriting.
Not sure if that impression is accurate though, or if it's true of mathematical writing.
Yes, high school boys and girls have clearly distinct handwriting.
If you're just filling in bubbles on a scantron, then handwriting isn't very visible and each person can just write their own name on the scantron they're submitting as their own.
If you've been to one of these testing centers, you'd realize it's not easy to cheat, and the companies that run them take cheating seriously. The audacity of someone to cheat in that environment would be exceptionally high, and just from security theater alone I suspect almost no actual cheating takes place.
I did a proctored exam for Harvard Extension at the British Council in Madrid. The staff proctor exams year-round for their in-house stuff, so, their motivation notwithstanding, they know what they're doing.
Where I'm studying, it's proctored online. They have a custom browser and take over your computer while you're doing the exam. Creepy AF, but it saves travelling 1,300 km to sit an exam.
Wouldn't spending $300 on a laptop to cheat on an exam for a class you're paying thousands for make sense? It would probably improve your grade more than the textbook.
You have to install an app that is a browser and that, at the same time, locks down the entire computer. Only this browser works. Install it, give it the admin permissions it needs, and participate in your test, or don't. This is also used in Australian schools for NAPLAN https://www.nap.edu.au/naplan/understanding-online-assessmen...
Can you tell us: is "remote study" a relatively recent phenomenon in AU -- COVID era, or much older? I am curious to learn more. And what is the history behind it? Was it created/supported because AU is so vast and many people in a state might not live near a campus?
Also: I think your suggestion is excellent. We may see this happen in the US if AI cheating gets out of control (which it well might).
It definitely existed before, particularly as a revenue stream for some of the smaller universities such as USQ. I think for the big ones it was a bit beneath them, then suddenly COVID came and we had lockdown for a long time in Melbourne. Now it's an expectation that students can access everything from home, but the flipside is everyone complains about how much campus life has declined. Students are paying more for a lower quality education and less amenity.
The same thing exists in South Africa; the university is called UNISA [1]. It has existed for a long time - since my parents' time. Lots of people who can't afford to go to university (as in, they need to earn an income) study with them.
Not even just large cities. Decent sized towns have them too, usually with local high school teachers or the like acting as proctors.
Proctoring services done well could be valuable, but it’s smaller rural and remote communities that would benefit most. Maybe these services could be offered by local schools, libraries, etc.
> Students don’t seem to mind this reversion.
Those I ask are unanimously horrified that this is the choice they are given. They are devastated that the degree for which they are working hard is becoming worthless yet they all assert they don't want exams back. Many of them are neurodivergent who do miserably in exam conditions and in contrast excel in open tasks that allow them to explore, so my sample is biased but still.
They don't have a solution. As the main victims they are just frustrated by the situation, and at the "solutions" thrown at it by folks who aren't personally affected.
It is always interesting to me when people say they are "bad test takers". You mean you are bad at the part where we find out how much you know? Maybe you just don't know the material well enough.
caveat emptor - I am not ND so maybe this is a real concern for some, but in my experience the people who said this did not know the material. And the accommodations for tests are abused by rich kids more than they are utilized by those that need them.
As a self-proclaimed bad test taker, it's not that I don't know the information. It's that I second-guess myself in a particular way: I can build a logical framework that suggests another direction or answer.
This presents as being a bad test taker; I rarely ever got above a B+ on any difficult test material. But put me in a lab, and that same skillset becomes a major advantage.
Minds come in a variety of configurations; I'd suggest considering that before taking your own experience as definitive.
datum: I'm ND, but I'm a good test-taker. There were plenty of tests for subjects where I didn't need to study because I was adept at reading the question and correctly assuming what the test-creator wanted answered, and using deduction to reduce possibilities down enough that I could be certain of an answer - or by using meta-knowledge of where the material from the recent lectures was to narrow things down, again, not because I knew the material all that well but because I could read the question. Effectively, I had a decent grasp of the "game" of test-taking, which is rather orthogonal to the actual knowledge of the class material.
I think the reverse exists as well. I think I am a much better test taker than average, and this has very clearly given me some advantages that come from the structure of exam-focused education. Exam taking is a skill and it's possible to be good at it, independent of the underlying knowledge. Of course knowing the material is still required.
However you are correct in noticing that there are an anomalously high number of "bad test takers" in the world. Many students are probably using this as a flimsy excuse for poor performance. Overall I think the phenomenon does exist.
Tests are just a proxy for understanding and/or application of a concept. Being good at the proxy doesn’t necessarily mean you understand the concept, just like not being good at the proxy doesn’t mean you don’t. Finding other proxies we can use allows for decoupling knowledge from a specific proxy metric.
If I was evaluating the health of various companies, I wouldn’t use one metric for all of them, as company health is kind of an abstract concept and any specific metric would not give me a very good overall picture and there are multiple ways for a company to be healthy/successful. Same with people.
There are lots of different ways to utilize knowledge in real world scenarios, so someone could be bad at testing and bad at some types of related jobs but good at other types of related jobs. So unless “test taking” as a skill is what is being evaluated, it isn’t necessary to be the primary evaluation tool.
I don't think I understand, as a terrible test taker myself.
The solution I use when teaching is to let evaluation primarily depend on some larger demonstration of knowledge. Most often these are CS classes (e.g. Machine Learning), so I don't put much weight on homework and tests, and am project-driven instead. I don't care if they use GPT or not. The learning happens by them doing things.
This is definitely harder in other courses. In my undergrad (physics) our professors frequently gave take-home exams. Open book, open notes, open anything but your friends and classmates. This did require trust, but it was usually pretty obvious when people worked together. They cared more about trying to evaluate and push us if we cared than about whether we cheated. The exams required multiple days' worth of work, and you can bet every student was coming to office hours (we had much more access during that time too). The trust and understanding that effort mattered actually resulted in very little cheating. We felt respected, there was a mutual understanding, and tbh, it created healthy competition among us.
Students cheat because they know they need the grade and that at the end of the day they won't actually be evaluated on what they learned, but rather on what arbitrary score they got. Fundamentally, this requires a restructuring, but that's been a long time coming. The cheating literally happens because we treated Goodhart's Law as a feature instead of a bug. AI is forcing us to contend with metric hacking; it didn't create it.
> Many of them are neurodivergent who do miserably in exam conditions
Isn't this part of life? Learning to excel anyway?
Life doesn't tend to take place under exam conditions, either.
I believe parent is making a more general point, and as someone who would also be considered "neurodivergent" I would agree with that point. There were plenty of times growing up where special consideration would have been a huge help for me, but I'm deeply grateful that I learned in a world where "sometimes life is unfair" was considered a valuable lesson.
In my adult life I had a coworker who constantly demanded that she be given special consideration in the work environment: more time to complete tasks, not working with coworkers who moved too quickly, etc. She was capable, but refused to recognize that even if you have to do things in a way that doesn't work for you, sometimes you either have to succeed that way or find something else to do.
Today she's homeless, living out of her car, but still demands that to be hired she must be allowed to work as slowly as she needs, and that she will need special consideration to help her complete daily tasks, etc.
We recently lived through an age of incredible prosperity, but that age is wrapping up and competition is heating up everywhere. When things are great, there is enough for everyone, but right now I know top performers that don't need special consideration when doing their job struggling to find work. In this world if you learned to always get by with some extra help, you are going to be in for a very rude awakening.
Had I grown up in the world as it has been the last decade I would have a much easier adolescence and a much harder adult life. I've learned to find ways to maximize my strengths as well as suck it up and just do it when I'm faced with challenges that target my weaknesses and areas I struggle. Life isn't fair, but I don't think the best way to prepare people for this is to try to make life more fair.
On the other hand, I look at it in a more “a rising tide raises all boats” situation. Learning how to accommodate people who fall outside the norm not only helps them, but helps everyone, much like the famous sidewalk “curb cuts” for wheelchairs ended up helping everyone with luggage, strollers, bikes, etc.
We as a society have a lot of proxies for evaluating real world value. Testing is a proxy for school knowledge. Interviews are a proxy for job performance. Trying to understand and decouple actual value from the specific proxies we default to can unlock additional value. You said yourself that you do have strengths, so if there are ways society can maximize those and minimize proxies you aren’t strong in, that is a win win.
Your coworker sounds like they have an issue with laziness and entitlement more than an issue with neurodivergence. Anyone can be lazy and entitled. Even someone who has a weakness with quick-turnaround production but excels at more complex or abstract long-term projects could be a value-add for a company. Shifting workloads so that employees do more of the tasks they are suited for, rather than keeping a more rigid system, could end up helping all employees maximize productivity by reducing the cognitive load they were wasting on tasks they were not as suited for, but did just because that was the way it was always done and they never struggled enough for it to become an actual "issue".
I really like your take on this, but disagree with your conclusion. I do think that trying to "make life more fair" is essentially the main goal of civilization, codified as early (and probably much earlier) as The Code of Hammurabi.
My take is that we need to tread a thin line such that we teach young people to accept that life is inherently unfair, while at the same time doing what we can as a society to make it more fair.
> My take is that we need to tread a thin line such that we teach young people to accept that life is inherently unfair, while at the same time doing what we can as a society to make it more fair.
Agreed. Teaching that life is unfair (and how to succeed despite that) is an important lesson. But there's an object-meta distinction that's important to make there. Don't teach people that life is unfair by being unfair to them in their education and making them figure it out themselves. Teach a class on the topic and what they're likely to encounter in society, a couple times over the course of their education.
The important parts of life (like interviews) do.
> The important parts of life (like interviews)
Interviews shouldn't be "exam conditions" either. See the ten thousand different articles that regularly show up here about why not to do the "invert a binary tree on a whiteboard" style of interview.
There are much better ways to figure out people's skills. And much better things to be using in-person interview time on.
You're confusing the way things are with the way things ought to be.
The reality is life is full of time boxed challenges.
Other than a subset of interviews, what do you have in mind that has a structure similar to an exam? Because I'd agree with the comment at https://news.ycombinator.com/item?id=44106325 .
> what do you have in mind that has a structure similar to an exam?
All of life! An exam is a time boxed challenge. Sometimes it's open notes, sometimes it's not. I've had exams where I have to write an essay, and I've had exams where I've had to solve math problems. All things I've had to do in high pressure situations in my job.
Solving problems with no help and a clock ticking happens a million times per day.
We even assign grades in life, like "meets expectations" and "does not meet expectations".
Even still, you missed the point of my comment. You keep focusing on how interviews should be done, not how they're conducted in reality.
I understood the point of your comment; I disagreed with it. I think there's a meaningful distinction between high-pressure situations at work and exams in school, sufficiently so that the latter is poor preparation for the former. More to the point, everyone is subjected to the latter, while "thrives under pressure" is not a universal quality everyone is expected to have or use. It's a useful skill, and it's more useful to have than to not have, but the same can be said of a thousand skills, and many of them are things I'd prioritize higher in a colleague or employee, given the choice.
> I think there's a meaningful distinction between high-pressure situations at work and exams in school
Sure, in school there is no real consequence. That's why it's important. School exams are orders of magnitude easier than the real world.
> "thrives under pressure" is not a universal quality everyone is expected to have or use
School isn't intended to imbue everyone with universal qualities. Some people will excel and some won't. The ones that excel will go on to work in situations where you must thrive under pressure.
> It's a useful skill, and it's more useful to have than to not have, but the same can be said of a thousand skills
That's a different discussion, then.
It seems like you have equated "excel" with "must thrive under pressure". That is precisely the point I am disputing. It's a skill, like any other. It is not the single most important skill everyone must have and everyone must be filtered on.
> It seems like you have equated "excel" with "must thrive under pressure"
Thriving and excelling are not that far apart :).
Thrive: grow or develop well.
Excel: be exceptionally good at or proficient in an activity.
> It is not the single most important skill
Nobody said it was!
> everyone must be filtered on
It's a data point. Exam scores don't matter when you apply for a job, or do anything else in life.
It’s really just interviews, and even those are nothing like any exam I’ve ever taken. They’re closest, in terms of the kind of stress and the skills required to look good, to some kind of solo public speaking performance.
… which most people come out of 17+ years of school having done very little of, with basically a phobia of it, and being awful at it.
They are probably something like oral exams that a few universities use heavily, or the teaching practices of many elite prep schools.
[edit] oh and interviews in most industries aren’t like that. Tech is especially grueling in the interview phase.
It's more about the meta-skill of learning to adapt. Learning to be uncomfortable sometimes.
Right. The hardest things you encounter in your life will not adapt themselves to make you more comfortable, so it's critical that you gain experience in doing things outside of your comfort zone. Getting stressed during an exam is nothing compared to some of the bumps life will throw at you.
And it'll make you happier in the long run.
I don't think so? I teach maths, not survival or social pressure. If a student in my class is a competent mathematician why should they not be acknowledged to be that?
real life first, math second. taking tests is a skill that must be learned, especially now with AI faking quite literally everything that can be shown on a screen. (unless your students are learning purely for the joy of it and not for having a chance to get hired anywhere.)
> taking tests is a skill that must be learned
Why? It's a useless skill that you will literally never have to use after your schooling.
>taking tests is a skill that must be learned
"I had to suffer so you must too."
You understand the world actually has difficult problems, right? Like life and death challenges, without video game restarts. You don't get to pause things when it gets hard.
Yes, working under pressure is a skill that should be learned. It's best to learn it on a history exam when nobody is at risk.
IMO exams should be on the easier side and not require much computing (mainly knowledge, and not unnecessary memorization). They should be a baseline, not a challenge for students who understand the material.
Students are more accurately measured via long, take-home projects, which are complicated enough that they can’t be entirely done by AI.
Unless the class is something that requires quick thinking on the job, in which case there should be “exams” that are live simulations. Ultimately, a student’s GPA should reflect their competence in the career (or possible careers) they’re in college for.
> They should be a baseline, not a challenge for students who understand the material.
You've made this normative statement but not explained why.
I think exams should not require huge amounts of computation (I agree) but should contain a range of questions - from easy to very difficult - so that the best and average students can be differentiated.
Specifically, I think they should minimally penalize students who know the material and could apply it professionally, but don't do well on exams in general. Otherwise GPA isn't a useful metric for employers* (and I don't know who else it would be a metric for), because the best students are the best test-takers, not the best employees.
So, maybe not a baseline. The exam could have some difficult knowledge-based questions, as long as that "knowledge" when memorized would make the student a better professional; or if the exam is open-book, it can have knowledge that would be difficult to search for. It shouldn't require students to memorize obscure things that are unlikely to be used professionally (e.g. unimportant dates for history, or complex formulas for math that one would look up or reference by name), because then you're prioritizing students who handle rare edge-cases over those who probably accomplish more amortized.
* "Employer" and "professional" also including "PI" and "academic"
We have an Accessible Testing Center that will administer and proctor exams under very flexible conditions (more time, breaks, quiet/privacy, …) to help students with various forms of neurodivergence. They’re very good and offer a valuable service without placing any significant additional burden on the instructor. Seems to work well, but I don’t have first hand knowledge about how these forms of accommodations are viewed by the neurodivergent student community. They certainly don’t address the problem of allowing « explorer » students to demonstrate their abilities.
Yes I think the issue is as much that open tasks make learning interesting and meaningful in a way that exams hardly can do.
This is the core of the issue really. If we are in the business of teaching, as in making people learn, exams are a pretty blunt and ineffective instrument. However since our business is also assessing, proctoring is the best if not only trustworthy approach and exams are cheap in time, effort and money to do that.
My take is that we should just (properly) assess students at the end of their degree. Spend time (say, a full day) with them but do it only once in the degree (at the end), so you can properly evaluate their skills. Make it hard so that the ones who graduate all deserve it.
Then the rest of their time at university should be about learning what they will need.
Exams aren't for learning, they're for measuring. Projects and lecture are for learning.
The problem with this "end of university exam" structure is that you have the same problems as before but now that exam is weighted like 10,000% that of a normal exam.
> If we are in the business of teaching, as in making people learn, exams are a pretty blunt and ineffective instrument.
I'm curious: what is fulfilling in your job as a math teacher? When students learn? When they're assigned grades that accurately reflect their performance? When they learn something with minimal as opposed to significant effort? Some combination?
I always thought teacher motivations were interesting. I'm sure there are fantastic professors who couldn't care less as to what grades they gave out at the end.
> what is fulfilling in your job as a math teacher?
Many things. The most fulfilling for me is taking a student from hating maths to enjoying it. Or when they realise that in fact they're not bad at maths. Students changing their opinions about themselves or about maths is such a fulfilling experience that it's my main motivation.
Then working with students who like and are good at maths, and challenging them a bit to expand their horizons, is a lot of fun.
> When students learn?
At a high level yes (that maths can be fun, enjoyable, doable). Them learning "stuff" not so much, it's part of the job.
> When they're assigned grades that accurately reflect their performance?
Yes but not through a system based on counting how many mistakes they make, like exams do. If I can design a task that enables a student to showcase competency accurately it's great. A task that enables the best ones to extend themselves (and achieve higher marks) is great.
> When they learn something with minimal as opposed to significant effort?
Not at all. If there is no effort I don't believe much learning is happening. I like to give an opportunity for all students to work hard and learn something in the process no matter where they start from.
I only care about the grade as feedback to students. It is a way for me to tell them how far they've come.
You can’t expect all students to learn without being forced to, no matter how much that’s literally the point of them being there.
They’re kids, and they should be treated as such, in both good and bad ways. You might want to make exceptions for the good ones, but absolutely not for the average or bad ones.
How many people would work their current job if money wasn’t a thing?
People of all ages seek rewards — and assessments gate the payoffs. Like a boss fight in a video game gates the progress from your skill growth.
I’ve had access to that at my school and it’s night and day. Not being as stressed about time and being in a room alone bumps me up by a grade letter at least.
> Many of them are neurodivergent who do miserably in exam conditions
I mean, for every neurodivergent person who does miserably in exam conditions you have one that does miserably in homework essays because of absence of clear time boundaries.
Autism vs. ADHD
>Many of them are neurodivergent
if "many" are "divergent" then... are they really divergent? or are they the new typical?
Many of the students I talk to. I don't claim they form a representative sample of the student cohort, on the contrary. I guess that the typical student is typical but I have not gone to check that.
I think having one huge exam at the end is the problem. An exam and assessment every week would be best.
Less stress at the end of the term, and the student can't leave everything to the last minute, they need to do a little work every week.
Too much proctoring and grading, not enough holding students' hands for stuff they should have learned from reading the textbook.
In my undergraduate experience, the location of which shall remain nameless, we had ample access to technology, but the professors were fairly hostile to it and insisted on pencil and paper for all technical classes. There were some English or History classes here and there that allowed a laptop for writing essays during an "exam" that was a 3-hour experience with the professor walking around the whole time. Anyway, when I was younger I thought the pencil-and-paper thing was silly. Why would we eschew brand new technology that can make us faster! And now that I'm an adult, I'm so thankful they did that. I have such a firm grasp of the underlying theory and the math precisely because I had to write it down, on my own, from memory. I see what these kids do today and they have been so woefully failed.
Teachers and professors: you can say "no". Your students will thank you in the future.
I have a Software Engineering degree from Harvard Extension and I had to take quite a few exams in physically proctored environments. I could very easily manage in Madrid and London. It is not too hard for either the institution or the student.
I am now doing an Online MSc in CompSci at Georgia Tech. The online evaluation and proctoring is fine. I’ve taken one rather math-heavy course (Simulation) and it worked. I see the program however is struggling with the online evaluation of certain subjects (like Graduate Algorithms).
I see your point that a professor might prefer to have physical evaluation processes. I personally wouldn’t begrudge the institution as long as they gave me options for proctoring (at my own expense even) or the course selection was large enough to pick alternatives.
Professional proctored testing centers exist in many locations around the world now. It's not that complicated to have a couple people at the front, a method for physically screening test-takers, providing lockers for personal possessions, providing computers for test administration, and protocols for checking multiple points of identity for each test taker.
This hybrid model is vastly preferable to "true" remote test taking in which they try to do remote proctoring to the student's home using a camera and other tools.
That’s what I did at HES and it was fine. Reasonable and not particularly stressful.
is it ok for students to submit images of hand-written solutions remotely?
seriously it reminds me of my high school days when a teacher told me i shouldn’t type up my essays because then they couldn’t be sure i actually wrote them.
maybe we will find our way back to live oral exams before long…
I attended Purdue. Since I graduated, it launched its "Purdue Global" online education. Rankings don't suggest it's happened yet, but I'm worried it will cheapen the brand and devalue my degree.
I remember sitting with the faculty in charge of offering online courses when I visited as an alum back in 2014. They seemed to look at it as a cash cow in their presentation. They were eager to be at the forefront of online CS degrees at the time.
Business models rule us all. Have you tested what kind of pushback you'll receive if you happen to flout the remote rule?
Centralization and IT-ification has made flouting difficult. There’s one common course site on the institution’s learning management system for all sections where assignments are distributed and collected via upload dropbox, where grades are tabulated and communicated.
So far, it's still possible to opt out of this coordinated model, and I have been. But I suspect the ability to opt out will soon come under attack (the pretext will be 'uniformity == fairness'). I never used to be an academic freedom maximalist who viewed the notion in the widest sense, but I'm beginning to see my error.
Sorry to hear this. And thanks for sharing this warning to other educators. I hope you find a way through.
Higher ups say yes to remote learning and no to remote work. Interesting to see this side by side like this.
Remote learning also opens up a lot of opportunities to people that would not otherwise be able to take advantage of them. So it's not _just_ the cash cow that benefits from it.
Yeah, the thing about AI cheating is that it seems inherent not to teaching itself, but to what mechanical, bureaucratic, for-profit teaching and universities have become.
Some US universities do this remotely via proctoring software. They require pencil and paper to be used with a laptop that has a camera. Some do mirror scans, room scans, hand scans, etc. The Georgia Tech OMS CS program used to do this for the math proofs course and algorithms (leet code). It was effective and scalable. However, the proctoring seems overly Orwellian, but I can understand the need due to cheating as well as maintaining high standards for accreditation.
> seems overly Orwellian
Wow.
Maybe we should consider the possibility that this isn't a good idea? Just a bit? No? Just ignore how obviously comparable this is to the most famous dystopian fiction in literary history?
Just wow. If you're willing to do that, I don't know what to tell you.
Stanford requires pen & paper exams for their remote students; the students first need to nominate an exam monitor (a person) who in turn receives and prints the assignments, meets the student at an agreed upon place, the monitor gives them the printed exams and leaves, then collects the exam after allotted time, scans it and sends it back to Stanford.
So just have test centers, and flip the classroom.
Thanks for sharing this anecdote. It’s easy to forget the revenue / business side of education and that universities are in a hard spot here.
Thank you for not giving in. The slide downhill is so ravenous and will consume so much of our future until the wise intervene.
why not pay for students to take the pen-and-paper exams at some proctored location, perhaps independent of the university?
Capitalism and the constant thirst for growth is killing society. Since when did universities care almost solely about revenue and growth?
> Since when did universities care almost solely about revenue and growth?
Since endowments got huge.
Could you explain this more? At first glance, a large endowment should either free you from worrying about revenue or move your focus to managing an endowment with a school as a side hustle.
> a large endowment should either free you
A large endowment attracts greedy people who then want to make it larger, that is true regardless where you go.
A large endowment requires more management, so you bring in a finance department to manage it. With what metric are those employees going to be evaluated, given that there is always going to be someone who will want the stability of a government job? It's not going to be "just maintain, don't worry". It's going to be "who can get us the most ROI?".
That’s a magnifier but it shouldn’t be the cause; for that you need a shift in management culture from optimizing for academic missions to optimizing for careers/influence of management and trustees.
Large endowments cause that unless you have very strict rules around them, like the Nobel prize endowment. You can see how every large charity starts to focus on growing larger rather than on its mission; Mozilla is a good example of that.
I see a correlation in some orgs, but can you explain the mechanics? I think I’ve also seen cases where the original mission continues to be the primary compass, so maybe the mechanics of the failure mode are what I don’t have a clear picture of yet.
With the US government now going after their funding, they may have to start caring even more.
When it was generally accepted by our society that the goal of all work is victory, not success. Capitalism frames everything as a competition, even when collaboration is obviously superior. Copyright makes this an explicit rule.
Hand written essays are inherently ableist. I would be at a massive disadvantage. I grew up during the 60's, but handwriting was always slow and error-prone for me. As soon as I could use a word processor I blossomed.
It's probably not as bad for mathematical derivations. I still do those by hand since they are more like drawing than expression.
> Hand written essays are inherently ableist
So is testing; people who don't have the skills don't do well. Hell, the entire concept of education is ableist towards learning impaired kids. Let's do away with it entirely.
Would you hire someone as a writer who is completely illiterate? Of course that's an extreme edge case, but at some point equality stops and the ability to do the work is actually important.
Most people would be happy to hire a writer with no consideration of how good their handwriting was.
I was a slow handwriter, too. I always did badly on in-class essay exams because I didn't have time to write all that I knew needed to be said. What saved my grade in those classes was good term papers.
Having had much occasion to consider this issue, I would suggest moving away from the essay format. Most of the typical essay is fluff that serves to provide narrative cohesion. If knowledge of facts and manipulation of principles are what is being evaluated, presentation by bullet points should be sufficient.
> Hand written essays are inherently ableist
Doing anything is inherently based on your ability to do it. Running is inherently ableist. Swimming is ableist. Typing is inherently ableist.
Pointing this out is just a thought terminating cliche. Ok, it's ableist. So?
> As soon as I could use a word processor I blossomed.
You understand this is inherently ableist to people that can't type?
> I still do those by hand since they are more like drawing than expression.
Way to do ableist math.
> Hand written essays are inherently ableist.
yes.
> I would be at a massive disadvantage.
yes.
...but.
how would you propose to filter out able cheaters instead? there's also the in-person one-on-one verbal exam, but the economics and logistics of that are insanely unfavorable (see also: job interviews.)
Handwriting essays doesn't filter out cheaters though? It didn't even filter out cheaters before ChatGPT; back then it was just a person writing the essay for you that you would copy.