Yes but the data is crazy expensive for that use case.
Companies that offer this aggregate connectivity go through carriers across many countries, whose rates can vary widely, so they have to price it to protect themselves in case someone only uses it in (insert most expensive region).
So you're paying a premium for it to work in many places. And it's intended for the small data chunks that would be sent by things like IoT sensor clusters.
If you know you're only going to use it in (these N regions) you could possibly work with them to get a better rate and have them blacklist your traffic in the more expensive regions.
Except that there's only so much bandwidth to go around. It's not that simple. More users (who don't run nodes themselves) means the same bandwidth among more users = slower for everyone = less usable for people who actually need it.
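The arithmetic behind that is simple division — here with made-up numbers for the aggregate capacity, just to illustrate the shape of the problem:

```python
# Illustrative only: a fixed pool of relay capacity shared among users
# who consume bandwidth but don't run nodes themselves.
total_mbit = 10_000  # assumed aggregate bandwidth of volunteer nodes
for users in (1_000, 10_000, 100_000):
    print(f"{users:>7} users -> {total_mbit / users:.2f} Mbit/s each")
```

Same pool, 100x the users, 1/100th the per-user throughput.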
I guess I don't see how 20 installations would be nearly enough to track all or even most internet communications in the US. Given that core routers are on the very edge of feasible performance, I don't see how you could do substantial spying there, which means your spying has to be closer to the edges, which means a lot more than 20 installations.
Moreover, such a large-scale program really wouldn't centralize data in a giant data center in Utah. When you're dealing with this much data, you need to process it locally, because the cost of transmitting data volumes that large is prohibitive.
That’s a good question, tbh I have no idea. Yeah, presumably if the user base increased then the number of nodes would also grow, but it’s definitely unclear to me what level of latency (if any) is required for privacy.
This is more about maintaining 10,000,000 open connections than about actually sending useful data to this many people or even allowing input from this many connections.
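A back-of-envelope sketch of why just holding the connections open is the expensive part — the per-socket overhead figure is my own assumption, not a measured number:

```python
# Rough memory cost of keeping 10M mostly-idle sockets open.
# per_conn_kb (kernel + userspace buffers per connection) is an assumption.
connections = 10_000_000
per_conn_kb = 20
total_gb = connections * per_conn_kb / 1_000_000
print(f"~{total_gb:.0f} GB just to hold {connections:,} idle connections")
```

Even with no useful data flowing, the buffers and file descriptors alone dominate the machine's resources.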
In this case, you're only limited by your network speed in the event of a data-destroying event. In the normal case it's completely local, which is much less of a headache than any network option most of the time.
You're assuming roughly a 50 MBit/sec connection. What about people on mobile connections, or in regions where 1 MBit/sec is barely achievable? Drop them over the edge? Ignore them? That might be a valid decision for your project, but for some people, bandwidth still matters.
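To put numbers on how much that assumption matters — time to pull a 5 MB page at each link speed (the page size is an illustrative pick):

```python
# Time to fetch a 5 MB page at different link speeds.
# The page size and the two speeds are illustrative assumptions.
page_mbits = 5 * 8  # 5 MB expressed in megabits
for mbit_per_s in (50, 1):
    seconds = page_mbits / mbit_per_s
    print(f"{mbit_per_s:>2} Mbit/s -> {seconds:.1f} s")
```

Sub-second versus most of a minute, for the same page.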
I wonder how many people are donating their bandwidth to this. Will it become unsustainable in the near future as more and more Chinese users (a huge number of people) adopt it?
Then you’d probably be shocked to find out that they already do that for voice servers. I’m talking about the other part, which doesn’t need many geo-distributed locations, only a couple.
I think there are circumstances where it is very valuable - it depends on your target audience.
For example, on mobile: connection speeds in major cities in developed countries may be rocketing, but substantial areas still need to make a lot happen with very little bandwidth. So it can be a small competitive advantage.
The title of the post makes this sound frivolous, but it actually seems like a worthwhile endeavor. 60 ms * the number of packets sent back and forth is significant; and, as the article points out, it'll also add redundancy to the cable network between Asia and the rest of the world.
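Rough numbers on why that adds up — 60 ms saved per round trip, compounded over a chain of sequential round trips (the round-trip counts are illustrative):

```python
# Cumulative savings from shaving 60 ms off each sequential round trip.
# The round-trip counts are illustrative assumptions.
saving_ms = 60
for round_trips in (1, 5, 20):
    print(f"{round_trips:>2} round trips -> {round_trips * saving_ms} ms saved")
```

A handshake-heavy page load with 20 sequential round trips saves over a second.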
That's absolutely true: the closer you get to the EU's root nodes, the cheaper the bandwidth is. Heck, in Romania you can even get 10 Gbit in a small datacenter for a few thousand €.
So sure, if the larger nodes can outweigh the smaller ones in a distributed network, then it might be plausible.