Seems comparable to chess, where it's well established that a human + a computer is much more skilled than either one individually.
This was the Centaur hypothesis in the early days of chess programs and it hasn't been true for a long time.
Chess programs of course have a well-defined algorithm. "AI" would be incapable of even writing /bin/true without having seen it before (see the sketch below).
It certainly wouldn't have been able to write Redis.
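For reference, /bin/true is about as minimal as a program gets. The GNU coreutils version also handles --help and --version, but its entire observable behaviour fits in a couple of lines of C:

```c
/* A minimal /bin/true: do nothing and report success. */
int main(void)
{
    return 0;   /* exit status 0 means "true" to the shell */
}
```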
> This was the Centaur hypothesis in the early days of chess programs and it hasn't been true for a long time.
> Chess programs of course have a well-defined algorithm.
Ironically, that also "hasn't been true for a long time". The best chess engines humans have written with "defined algorithms" were bested by RL engines (AlphaZero) a long time ago. The best of the best are now NNUE evaluation + classical search algorithms (latest Stockfish; sketched below). And even then, pure-NN engines (Leela Chess Zero) can occasionally take some games from Stockfish. NNs are scarily good, and the bitter lesson is bitter for a reason.
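To make "NNUE + algos" concrete, here's a toy sketch, not Stockfish code: the search half is still hand-written alpha-beta, and the learned part is confined to the leaf evaluation (in Stockfish, a small incrementally updated network). `Position`, `nn_evaluate`, and the move stubs below are placeholders so the sketch actually runs.

```c
#include <stdio.h>

/* Placeholder position type standing in for a real engine's board state. */
typedef struct { int ply; } Position;

/* Stub "network" eval: in a real NNUE engine this is the net's forward
 * pass (from the side to move's perspective), updated incrementally. */
static int nn_evaluate(const Position *p) { return 100 - 7 * p->ply; }

/* Stub move generation/make/unmake over a fixed branching factor,
 * just so the search below has something to recurse over. */
#define BRANCHING 3
static void make_move(Position *p, int m)   { (void)m; p->ply++; }
static void unmake_move(Position *p, int m) { (void)m; p->ply--; }

/* Classic hand-written alpha-beta (negamax form): the "algos" half.
 * The NNUE change is only that the leaf evaluation is a small neural
 * net instead of hand-tuned terms. */
static int alphabeta(Position *p, int depth, int alpha, int beta)
{
    if (depth == 0)
        return nn_evaluate(p);          /* NN only at the leaves */
    for (int m = 0; m < BRANCHING; m++) {
        make_move(p, m);
        int score = -alphabeta(p, depth - 1, -beta, -alpha);
        unmake_move(p, m);
        if (score >= beta) return beta; /* fail-hard beta cutoff */
        if (score > alpha) alpha = score;
    }
    return alpha;
}

int main(void)
{
    Position root = {0};
    printf("best score: %d\n", alphabeta(&root, 4, -32000, 32000));
    return 0;
}
```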
No, the AlphaZero papers used an outdated version of Stockfish for comparison, and their results have always been disputed.
Stockfish NNUE was announced as roughly 80 Elo stronger than the default hand-written evaluation. I don't find that frustrating: NNs excel at detecting patterns in a well-defined search space.
Writing evaluation functions by hand is tedious. A net doing it better isn't a sign of NN intelligence.
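For a feel of the tedium: a hand-written evaluation is hundreds of terms like the one sketched here, material values plus piece-square tables, every number a guess to be re-tuned after each change. The values below are illustrative, not from any real engine; NNUE replaces exactly this part while the search around it stays hand-written.

```c
#include <stdio.h>

enum { PAWN, KNIGHT, BISHOP, ROOK, QUEEN, KING, NPIECES };

/* Hand-picked material values in centipawns (illustrative). */
static const int material[NPIECES] = { 100, 320, 330, 500, 900, 0 };

/* One 64-entry table per piece type, a1 = 0, rank-major. Real engines
 * also split by game phase and add mobility, king safety, pawn
 * structure, and so on -- each with its own hand-tuned weights. */
static const int pawn_pst[64] = {
     0,  0,  0,  0,  0,  0,  0,  0,
     5, 10, 10,-20,-20, 10, 10,  5,
     5, -5,-10,  0,  0,-10, -5,  5,
     0,  0,  0, 20, 20,  0,  0,  0,
     5,  5, 10, 25, 25, 10,  5,  5,
    10, 10, 20, 30, 30, 20, 10, 10,
    50, 50, 50, 50, 50, 50, 50, 50,
     0,  0,  0,  0,  0,  0,  0,  0,
};

/* Score one white pawn on a given square; a full evaluation is
 * hundreds of such terms summed together. */
static int eval_white_pawn(int square)
{
    return material[PAWN] + pawn_pst[square];
}

int main(void)
{
    /* e4 with a1 = 0, rank-major indexing: 3*8 + 4 = 28 */
    printf("white pawn on e4: %d\n", eval_white_pawn(28));
    return 0;
}
```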
That hasn't been true for a while now -- computers are just that much better.
Can humans really give useful input to computers? I thought we had reached the point where computers do things no human can understand and will crush human players.