
Eh, it's a useful proxy. 1000 lines a day isn't better than 100 lines a day but both tell you that it's being actively worked on.



Are you implying that given the specs, hundreds of thousands of messages per day is not good enough? I think you are, or at least that is what I was thinking myself.

Averaging 680 lines every single day is still much, much too high.

That's not impressive when those lines are just API calls to a hosted service that does all of the actual work.

Because it would change a massive amount of infrastructure and get about 10 hits a day.

(OK, 10 hits a day is probably an exaggeration but shockingly little of one.)


5 million lines really isn't that much in the grand scheme of things.

True. Adding an extra, capable person to an incredibly small number of people who've considered a problem is a huge increase in total throughput on it. Might be that there isn't much more to it.

It's not 100 users though, it's 250 million users and the infrastructure to balance them.

Whether or not you think my analysis is more accurate than your estimate of the cost, for an organisation that relies on soft revenue, this is serious money.


8000 messages a day, tops? That’s roughly 5 a minute. Does that warrant “infrastructure”? I think a Game Boy’s Z80-style CPU could handle that load.

I don’t want to be dismissive, but I often see these big numbers being posted, like “14M messages” or “thousands of messages” and then adding “per year” or something, which brings it down to toy level load.

Even the first “serious” example is about “thousands of messages” per minute. Say 5K a minute. That’s 83 per second; call it 100. That seems... not that interesting?

Am I being too dismissive? I think I am. I am not seeing something right. Can anybody say something to widen my perspective?
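The conversions in the comments above (8000/day to per-minute, 5K/minute to per-second) are easy to sanity-check. A minimal sketch; the message counts are the commenters' hypothetical figures, not measurements:

```python
# Back-of-envelope throughput conversions.

def per_minute(messages_per_day: float) -> float:
    """Convert a daily message count to messages per minute."""
    return messages_per_day / (24 * 60)

def per_second(messages_per_minute: float) -> float:
    """Convert a per-minute message count to messages per second."""
    return messages_per_minute / 60

print(round(per_minute(8_000), 1))   # ~5.6 messages/minute
print(round(per_second(5_000), 1))   # ~83.3 messages/second
```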


Right, but 10,000 requests in a quarter is about 769 per week (13 weeks to a quarter). 2/769 is 0.26%, or 99.74% uptime that week.
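That availability figure follows from treating failed requests as downtime. A quick sketch, assuming a 13-week quarter and the comment's 10,000-requests-per-quarter, 2-failures-per-week numbers:

```python
# Availability as (1 - failures / weekly requests).
requests_per_quarter = 10_000
weeks_per_quarter = 13
failures_in_week = 2

weekly_requests = requests_per_quarter / weeks_per_quarter  # ~769 requests/week
failure_rate = failures_in_week / weekly_requests           # ~0.0026
uptime_pct = (1 - failure_rate) * 100                       # ~99.74%

print(round(weekly_requests), round(failure_rate * 100, 2), round(uptime_pct, 2))
```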

Is 500 million lines a reasonable number for this? I've never worked on anything remotely like this, but 500 million seems like a crazy high number.

It's quite possibly because they're getting a lot of traffic. As people are realizing that it's actually functional I'm sure they've got more telnet connections than all the MUDs in the world combined.

It's a nice stat to see but I think this sort of comparison with "we moved our infrastructure of undisclosed age and unknown bloat to this new infrastructure built for the current problem domain" doesn't really do much for the ongoing conversation.

The article is touted as praise for a stack, but my gut says that it's really a smart restructuring of how they serve mobile. Either way, good on them for the efficiency boost.


Is it "servicing millions of customer requests a day", or some fraction of the "50,000" mentioned later in the article?

The example doesn't seem compelling to me: it hasn't rearranged the delivery, and it's not solving any substantial problem in the co-dialog. What does this prove?


80,000 messages a day is still ridiculously minuscule. It's a rounding error. Average them how you like. Assume exponential growth. Whatever. It's still around the same number of messages that go on in say a single channel on freenode.

When the guy from Sequoia, who has the inside numbers on the top companies, says it is unheard of, it's probably unheard of.

The backend service is "rather simple" for 50 billion messages a day? Hmm, not so much, I'm thinking.


The counterpoint is that 151M records represents about 0.0041% of global call records in 2016.

(Based on an estimate of 10B calls per day worldwide)
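Using the comment's own estimate of 10B calls per day, the share works out as follows (the call-volume figure is the commenter's estimate, not a verified statistic):

```python
# Share of global annual call records represented by 151M records.
records = 151e6
calls_per_day = 10e9           # commenter's worldwide estimate
annual_calls = calls_per_day * 365   # ~3.65 trillion calls/year

share_pct = records / annual_calls * 100
print(round(share_pct, 4))     # ~0.0041 (percent)
```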


The OP clearly states that it's about 22M responding IP addresses.

Not to diminish anything, but 100 to 200k messages per day is not really huge in the grand scheme of things. Several private services and government orgs in China and India easily send 10x or more of that number.

It's a nice accomplishment to build a user-facing feature to that scale, but I'm not all that impressed by the raw number. Let's break it down: 1.3T requests per day is roughly 15M req/sec, and assuming that each of the approximately 300 PoP locations [0] serves this via anycast, that's 'only' 50k req/sec per PoP, easily doable by a couple of hosts (a handful at most) in each location. Small request and response sizes increase the total request rate but keep the overall bit rate down, so one can handle a larger number of requests with a small number of hosts. Obviously these numbers will vary by location, higher and lower based on overall PoP utilization, but they're rough enough to plan with. Even with encryption, these are well within reach of single-digit machine counts in each location.

I don't think the raw number is difficult, the difficult problem was building up the supporting infrastructure, both technical and product-wise.

My source for the above? I've built at-scale, geo-distributed internet-facing services such as anycast DNS, video streaming delivery and the like.

[0] https://www.cloudflare.com/network/
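The per-PoP breakdown above can be reproduced directly. A minimal sketch; the 1.3T req/day and ~300 PoP figures are the commenter's assumptions, and the even anycast split is a simplification:

```python
# Per-PoP load assuming requests are spread evenly across PoPs by anycast.
requests_per_day = 1.3e12
seconds_per_day = 86_400
pops = 300

global_rps = requests_per_day / seconds_per_day  # ~15M req/sec globally
per_pop_rps = global_rps / pops                  # ~50k req/sec per PoP

print(round(global_rps / 1e6, 1), "M req/sec global")
print(round(per_pop_rps / 1e3, 1), "k req/sec per PoP")
```

In practice anycast traffic is far from evenly distributed, which is why the comment hedges that the numbers vary by location.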

