I really do respect the engineering efforts.
But object stores are embarrassingly parallel, so if such a migration is possible anywhere without downtime, it's object stores.
Where would you make the cut that takes advantage of object store parallelism?
That is, at what layer of the stack do you start migrating some stuff to the new strongly consistent system on the live service?
You can't really do it on a per-bucket basis, since existing buckets already have data in the old system.
You can't do it at the key-prefix level for the same reason.
You can't run both systems in parallel, trying the new one and falling back to the old one when a key isn't in it, because that opens up violations of the very consistency guarantees you're trying to add.
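To make the violation concrete, here's a minimal sketch of that fallback read path (all names are hypothetical, and the two systems are collapsed into in-memory maps):

```go
package main

import "fmt"

// Hypothetical in-memory stand-ins: newStore is the strongly
// consistent system, oldStore the eventually consistent one.
type memStore map[string][]byte

func (m memStore) Get(key string) ([]byte, bool) {
	v, ok := m[key]
	return v, ok
}

var (
	newStore = memStore{}                        // keys written since cutover
	oldStore = memStore{"k": []byte("stale v1")} // pre-existing data
)

// read tries the new system first and falls back to the old one.
// The race: another client can commit "k" -> v2 into newStore after
// this reader's miss but before the fallback completes, so the reader
// returns stale v1, violating exactly the read-after-write guarantee
// the migration is supposed to introduce.
func read(key string) ([]byte, bool) {
	if v, ok := newStore.Get(key); ok {
		return v, true
	}
	return oldStore.Get(key)
}

func main() {
	v, _ := read("k")
	fmt.Printf("read k = %s\n", v)
}
```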
Seems trickier than one might think.
Obviously it depends on how they delivered read-after-write.
Likely they don't have to physically move object data; instead, the layer that writes and reads coordinates through some versioning guarantee. In database land, MVCC is a prominent paradigm for this. They'd need a distributed transactional KV store that tells every reader what the latest version of an object is and where to read it from.
An object write is only acknowledged as finished once the data is written and the KV store is updated with the new version.
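A minimal sketch of that write/read protocol, with made-up names (versionKV standing in for the distributed transactional KV store, blobStore for wherever object bytes land, and a single mutex standing in for the distributed transaction):

```go
package main

import (
	"fmt"
	"sync"
)

// blobStore holds immutable object bytes keyed by (object, version);
// versionKV records the latest committed version per object.
type versionedKey struct {
	object  string
	version int64
}

var (
	mu        sync.Mutex
	blobStore = map[versionedKey][]byte{}
	versionKV = map[string]int64{}
)

// putObject writes the bytes first, then commits the new version to
// the KV store. The write is only acknowledged (returns) after both
// steps succeed, so any subsequent read is guaranteed to see it.
func putObject(object string, data []byte) int64 {
	mu.Lock()
	defer mu.Unlock()
	v := versionKV[object] + 1
	blobStore[versionedKey{object, v}] = data // 1. write the data
	versionKV[object] = v                     // 2. commit the version
	return v
}

// getObject asks the KV store for the latest committed version, then
// reads exactly that version; readers never see a half-finished write.
func getObject(object string) ([]byte, bool) {
	mu.Lock()
	defer mu.Unlock()
	v, ok := versionKV[object]
	if !ok {
		return nil, false
	}
	data, ok := blobStore[versionedKey{object, v}]
	return data, ok
}

func main() {
	putObject("photos/cat.jpg", []byte("v1"))
	putObject("photos/cat.jpg", []byte("v2"))
	data, _ := getObject("photos/cat.jpg")
	fmt.Printf("latest = %s\n", data) // always the last acknowledged write
}
```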
They could do this bucket by bucket in parallel since buckets are isolated from each other.
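And a rough sketch of how one per-bucket cutover might be sequenced; everything here (the mode flags, the dual-commit and backfill steps) is my extrapolation, not anything they've described:

```go
package main

import "fmt"

// Hypothetical migration states for a single bucket. Buckets are
// isolated from each other, so many can run this sequence in parallel.
type bucketMode int

const (
	modeOld      bucketMode = iota // served entirely by the old system
	modeBackfill                   // new writes dual-committed; old keys being backfilled
	modeNew                        // fully on the strongly consistent path
)

type bucket struct {
	name    string
	mode    bucketMode
	oldKeys []string // keys that existed before the migration began
}

// migrate sketches the cutover for one bucket:
//  1. start dual-committing new writes to both systems,
//  2. backfill each pre-existing key's latest version into the KV store,
//  3. atomically flip the bucket onto the new read path.
func (b *bucket) migrate(backfill func(key string)) {
	b.mode = modeBackfill
	for _, key := range b.oldKeys {
		backfill(key) // copy the latest version pointer into the KV store
	}
	b.mode = modeNew // one atomic flip per bucket
}

func main() {
	b := &bucket{name: "photos", oldKeys: []string{"a.jpg", "b.jpg"}}
	b.migrate(func(key string) { fmt.Println("backfilled", key) })
	fmt.Println("bucket", b.name, "now in mode", b.mode)
}
```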