bakugo 7 days ago

4chan is a special case, because all of its content pages are static HTML files being served by nginx that are rewritten on the server every time someone makes a post. There's nothing dynamic, everyone is served the exact same page, which makes it much easier to scale.
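A minimal sketch of that regenerate-on-write pattern in Python (the file layout, names, and rendering are my own guesses for illustration, not 4chan's actual code): every write rewrites one static file, and nginx serves it with no application code in the read path.

```python
import html
from pathlib import Path

# Hypothetical directory that nginx serves directly as static files.
WEB_ROOT = Path("/var/www/board")

def render_thread(thread_id: int, posts: list[str]) -> str:
    # Render the whole thread into one self-contained HTML document.
    body = "\n".join(
        f"<div class='post'>{html.escape(p)}</div>" for p in posts
    )
    return f"<!doctype html><html><body>{body}</body></html>"

def handle_new_post(thread_id: int, posts: list[str], new_post: str) -> None:
    # On every post, rewrite the thread's static file in full.
    # Every reader then gets the exact same bytes straight from nginx.
    posts.append(new_post)
    out = WEB_ROOT / f"thread_{thread_id}.html"
    out.write_text(render_thread(thread_id, posts), encoding="utf-8")
```

The read path is just nginx serving a file from disk, which is why this scales so cheaply: all the application work happens on the (much rarer) writes.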

donnachangstein 6 days ago

It's not a special case at all. 20 years ago this was standard architecture (hell, HN still caches static versions of pages for logged-out users).

No, what changed is that the industry devolved into over-reliance on mountains of 'frameworks' and other garbage that no one person fully understands.

Things have gotten worse, not better.

pmdr 6 days ago

The "this won't scale" dogma pushed by cloud providers via frameworks has scared people into believing they need far more resources than they actually do to display information on the web.

It's really dumbfounding that most devs fell for it even as raw computing power has gotten drastically cheaper.

haiku2077 6 days ago

I was having a conversation with some younger devs about hosting websites for our photography hobbies. One was convinced hosting the photos on your own domain would bankrupt you in bandwidth costs. It's wild.

sgarland 6 days ago

I very much enjoyed the Vercel fanboys posting their enormous bills on Twitter, and then daring people to explain how they could possibly run it on, you know, a server for anything close to the price.

I took the bait once and analyzed a $5000 bill. IIRC, it worked out to about the compute provided by an RPi 4. “OK, but what about when your site explodes in popularity?” “I dunno, take the other $4900 and buy more RPis?”

nssnsjsjsjs 6 days ago

Or get a hundred Hetzner dedis

DrillShopper 6 days ago

Sounds like the real web scale was all of the AWS bills we paid along the way

actuallyalys 6 days ago

Static HTML and caching aren't special cases by any means, but a message board where literally nothing changes between users certainly seems like a special case, even twenty years ago. You don't need that in order to make a site run fast, of course, but that limitation certainly simplifies things.

haiku2077 6 days ago

I worked at a company near the top of https://en.wikipedia.org/wiki/List_of_the_largest_software_c... for a while. It was extremely common for web services to use only about 1/20th of a CPU core's timeshare. These were dynamic web services/APIs. (We did have to allocate more CPU than that in practice to improve I/O latency, but that was to let the CPU idle so it could react quickly to incoming network traffic.)

This was many years ago on hardware several times slower than the current generation of servers.

agumonkey 6 days ago

There go all your software engineering classes. It's so bare it's hilarious

bawolff 6 days ago

I wouldn't call that a special case, just using a good tool for the job.

mschuster91 6 days ago

... which, again, shows just how much power you can get out of a 10 year old server if you're not being a sucker for the "latest and greatest" resume-driven-development crap.

Just look at New Reddit, it's an insane GraphQL abomination.