The fundamental question that AI raises for me, but nobody seems to answer:
In our competitive, profit-driven world--what is the value of a human being, and of having human experiences?
AI is neither inevitable nor necessary--but it looks like the next step in reducing the value of a human life to its 'outputs'.
Someone needs to experience the real world and translate it into LLM training data.
ChatGPT can’t know if the cafe around the corner has banana bread, or how it feels to lose a friend to cancer. It can’t tell you anything unless a human being has experienced it and written it down.
It reminds me of that scene from Good Will Hunting: https://www.imdb.com/de/title/tt0119217/quotes/?item=qt04081...
IMO you're coming at it from the wrong angle.
Capitalism barely concerns itself with humans; whether human experiences exist or not is largely irrelevant to it. As far as capitalism is concerned, humans are nothing but a noisy set of knobs that regulate how much profit one can extract from a situation. While tongue-in-cheek, this SMBC comic [1] about the Ultimatum game illustrates the kind of paradoxes you get when looking at life exclusively from an economics perspective.
The question is not "what's the value of a human under capitalism?" but rather "how do we avoid reducing humans to their economic output?". Or in different terms: it is not the blender's job to care about the pain of whatever it's blending, and if you find yourself asking "what's the value of pain in a blender-driven world?" then you are solving the wrong problem.
I’m similarly worried about businesses all making “rational” decisions to replace their employees with “AI”, wherever they think they can get away with it. (Note that’s not the same thing as wherever “AI” can do the job well!)
But I think one place where this hits a wall is liability and accountability. Lots of low-stakes things will be enshittified by “AI” replacements for actual human work. But for things like airline pilots, cancer diagnoses, or heart surgery, the cost of mistakes is so large that humans in the loop are absolutely necessary, if nothing else as an accountability shield. A company that makes a tumor-detector black box wants it to be an assistive tool that improves doctors’ “efficiency”, not the actual front-line medical care. If the tool makes a mistake, they want no liability. They want all the blame on the doctor for trusting their tool and not double-checking its opinion. I hear that’s why a lot of “AI” tools in medicine actually reduce productivity: double-checking an “AI’s” opinion is more work than just thinking and evaluating with your own brain.
The funny thing is my first thought was "maybe reduced nominal productivity through increased thoroughness is exactly what we need when evaluating potential tumors". It keeps radiologists off autopilot, not so narrowly focused that they fail to see hidden gorillas in x-rays. And yes, that was a real study.
No, we already have autonomous cars driving around even though they've killed people.
This is a poor take. They are objectively safer drivers than their human counterparts. Yes, with those unfortunate deaths included.
The "value of a human" - same in this age as it has always been - is our ability to be truly original and to think outside the box. (That's also what makes us actually quite smart, and what makes current cutting-edge "AI" actually quite dumb).
AI is incapable of producing anything that's not basically a statistical average of its inputs. You'll never get an AI Da Vinci, Einstein, Kant, Pythagoras, Tolstoy, Kubrick, Mozart, Gaudi, Buddha, nor (most ironically?) Turing. Just to name a few historical humans whose respective contributions to the world are greater than the sum of the world's respective contributions to them.
Have you tried image generation? It can easily apply high level concepts from one area to another area and produce something that hasn't been done before.
Unless you loosen the meaning of statistical average so much that it ends up including human creativity. At the end of the day it's basically the same process of applying an idea from one field to another.
Most humans are not Da Vinci, Einstein, Kant, etc. Does that make them not valuable as humans?
Yes, I've tried AI image generation, and while it's impressive, it's also - at the end of the day - just as bland and unoriginal a mashup of existing material as AI text generation is.
All humans (I believe!) have the potential to be that amazing. And all humans come up with amazing ideas and produce amazing works in their lives; it's just that 99% of us aren't appreciated as much as the famous 1% are. We're all valuable.
You should determine your own value if you don't want to be controlled by anyone else.
If you don't want to determine your own value, you're probably no worse off letting an AI do that than anything else. Religion is probably more comfortable, but I'm sure AI and religion will mix before too long.