Google Cloud Storage had it for eons before S3. GCS comes across as a much better thought-out and built product.
S3 is probably the largest object store in the world. The fact that they can upgrade a system like that to add a feature as complex as read-after-write with no downtime and working across 200+ exabytes of data is really impressive to me.
I really do respect the engineering efforts.
But object stores are embarrassingly parallel, so if a migration like that is possible anywhere without downtime, it's in object stores.
Where would you make the cut that takes advantage of object store parallelism?
That is, at what layer of the stack do you start migrating some stuff to the new strongly consistent system on the live service?
You can't really do it on a per-bucket basis, since existing buckets already have data in the old system.
You can't do it at the key-prefix level for the same reason.
You can't run both systems in parallel and try the new one first, falling back to the old one if the key isn't there, because that opens up violations of the very consistency rules you're trying to add.
Seems trickier than one might think.
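To make one failure mode concrete, here's a minimal sketch (Python, with plain dicts standing in for the two metadata systems; every name is hypothetical, not anything S3 actually does) of why a "try the new system, fall back to the old" reader can't deliver the guarantee on its own:

```python
# Hypothetical migration state: new strongly consistent index plus legacy index.
new_index = {}                     # key -> latest version, for migrated/new writes
old_index = {"photo.jpg": "v1"}    # pre-existing objects still live only here

def put(key, version):
    # New writes land only in the new system.
    new_index[key] = version

def delete(key):
    # Naive delete: only the new system is told about it.
    new_index.pop(key, None)

def get(key):
    # Fallback read: prefer the new system, fall back to the old one.
    return new_index.get(key, old_index.get(key))

delete("photo.jpg")       # client deletes an object that predates the migration
print(get("photo.jpg"))   # -> "v1": the delete appears to be lost, so
                          # read-after-delete consistency is already broken
```

To patch that you'd have to write tombstones into the new system for keys it never owned (and answer listings across both), at which point the two systems are coordinating anyway.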
Obviously it depends on how they delivered read-after-write.
Likely they didn't have to physically move object data; instead, the layer that handles writes and reads coordinates through some versioning guarantee, e.g. in database land MVCC is a prominent paradigm. They'd need a distributed transactional KV store that tells every reader what the latest version of an object is and where to read it from.
An object write is only acknowledged as finished once the data is written and the KV store is updated with the new version.
They could do this bucket by bucket in parallel since buckets are isolated from each other.
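As a rough illustration of that write/read protocol, here's a minimal sketch in Python; the dicts and lock stand in for a distributed transactional KV store and a blob store, and all names are hypothetical, not S3's actual internals:

```python
import threading
import uuid
from typing import Optional

# Hypothetical stand-ins: in a real system these would be a distributed,
# transactional KV store (metadata) and a blob store (object bytes).
kv_lock = threading.Lock()
kv_store = {}    # key -> (version, blob_id); the source of truth for "latest"
blob_store = {}  # blob_id -> bytes; immutable, so readers never see partial data

def put_object(key: str, data: bytes) -> str:
    """Write the bytes first, then publish the new version pointer.
    The write is acknowledged only after the KV commit succeeds."""
    blob_id = uuid.uuid4().hex
    blob_store[blob_id] = data             # 1. persist the data (not yet visible)
    version = uuid.uuid4().hex
    with kv_lock:                          # 2. transactional pointer swap
        kv_store[key] = (version, blob_id)
    return version                         # 3. only now ack to the client

def get_object(key: str) -> Optional[bytes]:
    """Strongly consistent read: ask the KV store for the latest version,
    then fetch exactly those bytes."""
    with kv_lock:
        entry = kv_store.get(key)
    if entry is None:
        return None
    _version, blob_id = entry
    return blob_store[blob_id]

put_object("bucket/key.txt", b"hello")
print(get_object("bucket/key.txt"))        # b"hello" immediately after the ack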
Sure, but whose (compatible) API is GCS using again? Also keep in mind that S3 is creeping up on 20 years old, so retrofitting a change like that is incredible.
Not just 20 years old - an almost flawless 20 years at massive scale.
It's funny that pinnacles of human engineering like this exist where the general public has no idea they even exist, even though they (most likely) use them every single day.
I find Red Dead Redemption 2 more impressive. I don’t know why. It sounds stupid, but S3 on the surface has the simplest api and it’s just not impressive to me when compared to something like that.
I’m curious which one is actually more impressive in general.
Simple to use from the external interface, yes, but the backend is wildly impressive.
Some previous discussion https://news.ycombinator.com/item?id=36900147
> S3 on the surface has the simplest api and it’s just not impressive [...]
Reminded of the following comment from not too long ago.
That's the strangest comparison I have seen. What axis are you really comparing here? Better graphics? Sound?
Complexity and sheer intelligence and capability required to build either.
And what is the basis for your claim? You are not impressed by AWS's complexity and intelligence and capability to build and manage 1-2 zettabytes of storage near flawlessly?
I’m more impressed by Red Dead Redemption 2 or Baldur’s Gate 3.
There is no “basis” other than my gut feeling. Unless you can get quantified metrics to compare, that’s all we’ve got. For example, if you had lines of code for both, or average IQ, either would lead towards a “basis”, which neither you nor I have.
AWS has said that the largest S3 buckets are spread over 1 million hard drives. That is quite impressive.
Red Dead Redemption 2 is likely on over 74 million hard drives.
I think you misunderstood. They're not saying S3 uses a million hard drives, they're saying that there exist some large single buckets that use a million hard drives just for that one bucket/customer!
Actually, data from more than one customer would be stored on those million drives. But one customer's data is spread over 1 million drives to get the needed IOPS from spinning hard drives.
GCS's metadata layer was originally implemented with Megastore (the precursor to Spanner). That was seamlessly migrated to Spanner (in roughly small-to-large "region" order), as Spanner's scaling ability improved over the years. GCS was responsible for finding (and helping to knock out) quite a few scaling plateaus in Spanner.
> GCS comes across as a much better thought-out and built product
I've worked with AWS and GCS for a while, and I have the opposite opinion. GCS is what you get if you let engineers dictate to customers how they're allowed to work, and then give them shit interfaces, poor documentation, and more complexity than the added value justifies.
There's "I engineered the ultimate engineery thing", and then there's "I made something people actually like using".
Maybe. But Google does have a reputation that makes selecting them for infrastructure a risky endeavor.
From my POV, Amazon designs its services from a "trust nothing, prepare for the worst case" perspective, eventual consistency included. Sometimes that's useful, and most of the time it's a PITA.