Are you implying that given the specs, hundreds of thousands of messages per day is not good enough? I think you are, or at least that is what I was thinking myself.
True. Adding an extra, capable person to an incredibly small number of people who've considered a problem is a huge increase in total throughput on it. Might be that there isn't much more to it.
8000 messages a day, tops? That’s about 5 a minute. Does that warrant “infrastructure”? I think a Game Boy’s Z80-like CPU could handle that load.
I don’t want to be dismissive, but I often see these big numbers being posted, like “14M messages” or “thousands of messages”, followed by “per year” or something, which brings it down to toy-level load.
Even the first “serious” example is about “thousands of messages” per minute. Say 5K a minute. That’s 83 per second; call it 100. That seems... not that interesting?
Am I being too dismissive? I think I am. I am not seeing something right. Can anybody say something to widen my perspective?
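For what it’s worth, the rate conversions in the comment above check out. A quick back-of-envelope sketch (the figures are the ones quoted above, not from any particular system):

```python
# Sanity-check the message-rate arithmetic from the comment above.

# "8000 messages a day, tops" -> per-minute rate
per_day = 8_000
per_minute = per_day / (24 * 60)  # ~5.6 messages/minute

# "thousands of messages per minute, say 5K" -> per-second rate
serious_per_minute = 5_000
serious_per_second = serious_per_minute / 60  # ~83 messages/second

print(round(per_minute, 1), round(serious_per_second))
```

So yes: 8K/day is a handful of messages per minute, and even 5K/minute is under a hundred per second.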
It's quite possibly because they're getting a lot of traffic. As people are realizing that it's actually functional I'm sure they've got more telnet connections than all the MUDs in the world combined.
It's a nice stat to see, but I think this sort of comparison, "we moved our infrastructure of undisclosed age and unknown bloat to a new infrastructure built for the current problem domain", doesn't really do much for the ongoing conversation.
The article is touted as praise for a stack, but my gut says it's really a smart restructuring of how they serve mobile. Either way, good on them for the efficiency boost.
Is it "servicing millions of customer requests a day", or some fraction of the "50,000" mentioned later in the article?
The example doesn't seem compelling to me: it hasn't rearranged the delivery, and it isn't solving any substantial problem in the co-dialog. What does this prove?
80,000 messages a day is still ridiculously minuscule. It's a rounding error. Average them how you like; assume exponential growth; whatever. It's still around the same number of messages that flow through, say, a single channel on Freenode.
Not to diminish anything, but 100 to 200k messages per day is not really huge in the relative scale of things. Several private services and government orgs in China and India easily handle 10x or more that volume.
It's a nice accomplishment to build a user-facing feature at that scale, but I'm not all that impressed by the raw number. Let's break it down: 1.3T requests per day is roughly 15M req/sec, and assuming each of the approximately 300 PoP locations [0] serves this via anycast, that's 'only' 50k req/sec per PoP, easily doable by a couple of hosts, a handful at most, in each location.

Small request and response sizes increase the total request rate but keep the overall bit rate down, so a small number of hosts can handle a large number of requests. Obviously these numbers will vary by location, higher or lower depending on overall PoP utilization, but they're rough enough to plan with. Even with encryption, these are well within reach of single-digit machine counts in each location.
I don't think the raw number is difficult, the difficult problem was building up the supporting infrastructure, both technical and product-wise.
My source for the above? I've built at-scale, geo-distributed internet-facing services such as anycast DNS, video streaming delivery and the like.
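The back-of-envelope math in the parent comment can be reproduced in a few lines (using the figures quoted there; the ~300-PoP count is the commenter's assumption, not a verified fact):

```python
# Rough capacity estimate from the parent comment's figures.
requests_per_day = 1.3e12   # "1.3T requests per day" (quoted figure)
seconds_per_day = 86_400
pops = 300                  # approximate PoP count assumed in the comment

global_req_per_sec = requests_per_day / seconds_per_day  # ~15M req/sec
req_per_sec_per_pop = global_req_per_sec / pops          # ~50k req/sec/PoP

print(round(global_req_per_sec / 1e6), round(req_per_sec_per_pop, -3))
```

At ~50k req/sec per PoP, and assuming small payloads, a handful of well-tuned hosts per location is indeed plausible.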