> it gets work done quickly and poorly
This is only temporary. In time it will be able to code as well as anyone. The only way around this will be requiring in-person coding, and only in elementary courses. Everyone in business will be using AI to code, so that will become the norm in most university courses as well.
IMO no amount of AI should be used during an undergrad education, but I can see why people react more strongly to its use in these intro to programming courses. I don't think there's as much of an issue with using it to churn out some C for an operating systems course or whatever. The main issue with it in programming education is when learning the rudiments of programming IS the point of the course. Same with using it to crank out essays for freshman English courses. These courses are designed to introduce the fundamental raw skills that everything else builds on. Someone's ability to write good code isn't as big a deal in classes like OS, algorithms, compilers, or ML, where the main concepts of the course are what matter.
It already can. I'm flabbergasted that people still haven't figured out how good Gemini 2.5 is.
Claude 3.7 and 4 are better for me than Gemini 2.5 for vibing with legacy code. Gemini 2.5 has some great solutions if you handhold it, but it tends to make too many assumptions about what would be better, which can tear things up when it's running as an agent, IMO. In other words, Gemini is smarter, but less practical when working with existing code, from what I've experienced. To each their own, though.
The Claudes are a lot worse at even mildly challenging algorithmic problems than Gemini 2.5 Pro.
However, most legacy code is fairly primitive on that level, so my observation doesn't contradict yours.