turtletontine 17 days ago

I’m similarly worried about businesses all making “rational” decisions to replace their employees with “AI”, wherever they think they can get away with it. (Note that’s not the same thing as wherever “AI” can do the job well!)

But I think one place where this hits a wall is liability and accountability. Lots of low-stakes things will be enshittified by “AI” replacements for actual human work. But for things like airline pilots, cancer diagnoses, and heart surgery, the cost of mistakes is so large that humans in the loop are absolutely necessary. If nothing else, at least as an accountability shield. A company that makes a tumor-detector black box wants it to be an assistive tool that improves doctors’ “efficiency”, not the actual front line of medical care. If the tool makes a mistake, they want no liability. They want all the blame on the doctor for trusting their tool and not double-checking its opinion. I hear that’s why a lot of “AI” tools in medicine are actually reducing productivity: double-checking an “AI’s” opinion is more work than just thinking and evaluating with your own brain.

Nasrudith 17 days ago

The funny thing is my first thought was "maybe reduced nominal productivity through increased thoroughness is exactly what we need when evaluating potential tumors". Keeping doctors off autopilot, not so narrowly focused that radiologists fail to see hidden gorillas in X-rays. And yes, that was a real study.

eternauta3k 17 days ago

No, we already have autonomous cars driving around even though they've killed people.

RationPhantoms 16 days ago

This is a poor take. They are objectively safer drivers than their human counterparts. Yes, with those unfortunate deaths included.