How PAPER magazine's sysadmins scaled their backend for Kim Kardashian's ass-plosion


They wouldn’t be the first site to get a sudden flood of hits and then go back to normal. Somebody oughta start a business just handling the overflow for sites that get flooded. A one-hit wonder can’t afford to build a big server farm.

Oh, Paper. My, how you’ve grown from that little black and white zine I used to pick up in SoHo.

You too, bb, you little neurozine.

1 Like

Highly recommend the “Reply All” podcast’s coverage of this (based on the Medium article):

http://gimletmedia.com/episode/12-backend-trouble/

I hate podcasts. Fine for an airplane - if you get air-sick from reading - otherwise a waste of time.
(Most people can read around four times as fast as the average speaker can enunciate, some up to ten times as fast.)
No offense intended of course, I’m sure somebody out there listens to them on their way to work or something.

2 Likes

Christ, what an ass…

3 Likes

Like AWS? CloudFlare?

1 Like

That’s all I could think about during the episode: reinventing the wheel. It’s more fun to roll your own, but it’s unlikely anyone would nail it the first time. Either he massively overbuilds and spends way too much on infrastructure or, worse, underestimates and the site goes down.

I’d tap that backend.

1 Like

This photo rules. It’s obviously an homage - but only if you’re even aware of the original - which I, despite having spent years looking at and studying art, had never seen. I don’t buy into the racism row that surrounded the picture (protip: art doesn’t care what you think), but I do think this image is more interesting with the context.

3 Likes

While I follow many things on Medium, I never would have found this on Ars Technica or Slashdot - if it hadn’t been run here on BoingBoing, I’d have missed it. It matters that it’s being covered here, because this kind of scaling is hard, and really poorly understood.
Sysadmins don’t get a lot of well-written headlines, do they?
@xeni - that’s a damn awesome headline in need of a mixtape.

Maybe - I’m not an IT guy, and I’m sure I’m not the first one to think of it.

As you probably guessed, this is not a new problem. The earliest examples were probably sites getting “slashdotted”: back in the ’90s, when slashdot.org was one of the largest websites, it would sometimes link to someone’s server - often just a small computer sitting in a closet - and that server would die. (BoingBoing has managed to do this to small sites as well.) As such, there are a bunch of methods to mitigate the problem; a rough sketch of one is below.
The main problem is how much notice you get. In the case in the article, the sysadmins had a couple of weeks to prepare, but for some people the first they know about it is their ISP ringing them up to tell them their contract has just been terminated.
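For what it’s worth, a lot of that mitigation is off-the-shelf these days. A minimal sketch, assuming AWS with boto3 - the Auto Scaling group and policy names here are invented, not anything from the article - of the kind of target-tracking rule that adds web servers when the flood hits and removes them when it passes:

```python
import boto3

# Hedged sketch: assumes an existing Auto Scaling group already sits behind a
# load balancer. "paper-web-asg" and "scale-on-cpu" are made-up names.
autoscaling = boto3.client("autoscaling", region_name="us-east-1")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="paper-web-asg",
    PolicyName="scale-on-cpu",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        # Add instances when average CPU across the group climbs past 50%,
        # and scale back in once the spike subsides.
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        "TargetValue": 50.0,
    },
)
```

The catch, as noted above, is notice: a policy like this only helps if the group, the machine image, and the load balancer were set up before the flood arrived.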

1 Like

Weird that he’d use GlusterFS rather than S3, since he’s using all Amazon stuff anyway. Gluster is a bit of a pig, so I can’t imagine it was for performance reasons.

The rest seemed straight-out-of-the-box simple enough from my view. Personally I would have just hosted the pages in question on a sub-domain as static content and let the internet hammer the cache to death, but maybe he figured people would stick around and check out the rest of the site? Seems unlikely, though.
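To illustrate that static route - a rough sketch assuming S3 and boto3, with invented bucket and file names rather than anything from the article - publishing the pages with long cache lifetimes is about this much work, and then a CDN in front soaks up nearly all of the traffic:

```python
import boto3

# Hedged sketch: "paper-static-example" and the page names are illustrative only.
s3 = boto3.client("s3")

for page in ["index.html", "break-the-internet.html"]:
    s3.upload_file(
        Filename=f"site/{page}",
        Bucket="paper-static-example",
        Key=page,
        ExtraArgs={
            "ContentType": "text/html",
            # Long-lived cache header so CDN and browser caches absorb repeat hits.
            "CacheControl": "public, max-age=86400",
        },
    )
```

With headers like that, most requests never reach the origin at all, which is roughly what “let the internet hammer the cache to death” amounts to.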

This topic was automatically closed after 5 days. New replies are no longer allowed.