Why is storage at a (space-limited) cell tower more interesting than storage/compute at the ISP or packet core (or whatever's at the other end of the backhaul)?
How much latency do you think is incurred between the ISP and cell tower?
Storing my data at my house makes sense, especially if upkeep of the box is left to the end user. Storing my data at the tower in my neighborhood instead of at a regional center seems like a large increase in maintenance cost for a minimal decrease in latency.
Accessing the tower is expensive in time, and equipment that runs at the tower is exposed to a wider variety of temperatures and RF stress than in a nice warehouse somewhere in the metro area.
It's possible the right caching at towers could reduce the backhaul bandwidth requirements, but it seems iffy.
True, but the interesting use case is probably low population density areas where building towers is not practical. In that situation, you probably wouldn't need many concurrent connections.
It's usually the latency requirements that rule out distributed systems. The total amount of data isn't particularly crazy; it's just that it comes in very concentrated bursts.
But that would mean 24x7x365 Internet (WiFi?) access, on top of the costs the ISP imposes for downstream traffic. Local storage, or maybe "clanned"-style data serving (i.e. groups of cell networks operating within a few miles of each other) would be better.
The real difference is in latency: I tend to use the same operator for the fixed and mobile lines, with WireGuard.
SSH, SMB and Matrix respond almost as if you were on the LAN.
Once I add in the cost of data-center bandwidth and storage, the economic choice is obvious.
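If you want to check the near-LAN feel rather than take it on faith, here is a minimal sketch that compares TCP connect times over the tunnel and to a data-center host. The hostnames are placeholders I made up, not anything from this thread:

    # Minimal latency comparison sketch. The two hostnames are placeholders;
    # substitute the home box reachable over WireGuard and any data-center host.
    import socket
    import time

    def connect_latency_ms(host, port, samples=5):
        """Average TCP handshake time in milliseconds."""
        total = 0.0
        for _ in range(samples):
            start = time.perf_counter()
            with socket.create_connection((host, port), timeout=5):
                pass
            total += time.perf_counter() - start
        return total / samples * 1000

    for host in ("homebox.wg.internal", "server.example-datacenter.com"):
        print(f"{host}: {connect_latency_ms(host, 22):.1f} ms")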
* Most home connections are asymmetric and lack upload bandwidth.
* Most personal devices are battery powered and intermittently operated.
* Many are mobile in terms of physical location and logical network (home WiFi, work/school WiFi, cellular).
Having a third, highly available, high-upload-capacity location to stage data between production and consumption wins because it is a genuinely effective solution to problems inherent in sharing data between end users.
I'm not the OP, but the thing that's always scared me off from those types of services is that I'd expect high latency, whereas with a DB you really need low latency. Is that not true?
I don't think this is done for the sake of efficiency, but rather latency.
For example, if you had an underwater data center sitting in the middle of the Atlantic Ocean, halfway between New York and London, you could do some serious trading with that capability.
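The back-of-the-envelope numbers (rough figures of my own, not from the comment) bear this out:

    # One-way propagation time in optical fiber, using rough figures:
    # ~5,570 km great-circle distance New York <-> London, and light in
    # fiber at roughly 200 km per millisecond (about 2/3 c).
    NY_LONDON_KM = 5_570
    FIBER_KM_PER_MS = 200

    full_hop = NY_LONDON_KM / FIBER_KM_PER_MS        # ~28 ms one way
    half_hop = (NY_LONDON_KM / 2) / FIBER_KM_PER_MS  # ~14 ms one way

    print(f"NY <-> London: ~{full_hop:.0f} ms one way")
    print(f"Either city <-> mid-Atlantic midpoint: ~{half_hop:.0f} ms one way")

Cutting the one-way trip roughly in half is exactly the kind of edge that matters for trading between the two exchanges.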
This could make the backhaul problem even worse: in order to get that 35x bandwidth, you have to transport ALL of that data to a large group of base stations, unlike today, where you transport data only to the relevant base station.
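To make the concern concrete, here's a rough sketch, under my own assumption that the cooperative scheme needs every base station in the cluster to receive the full user data stream, and with hypothetical numbers:

    # Rough illustration: if the 35x gain requires every base station in a
    # cooperating cluster to carry the full user data stream, backhaul demand
    # scales with cluster size. Both numbers below are hypothetical.
    user_rate_mbps = 50   # per-user downlink rate
    cluster_size = 10     # cooperating base stations serving that user

    today = user_rate_mbps                       # data goes to one serving cell
    cooperative = user_rate_mbps * cluster_size  # data duplicated across the cluster

    print(f"Backhaul per user today: {today} Mb/s")
    print(f"Backhaul per user with a {cluster_size}-cell cluster: {cooperative} Mb/s")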
On the other hand, if a company invented a new wireless technology, it's probably smart enough to be aware of the backhaul problem.
Count me among those who think the storage is doable. The transmission may be a bottleneck: effectively you're doubling the bandwidth requirements for phone traffic.
With local tapping and storage facilities, plus some mechanism for cache-and-forward (including the enduring favorite: a station wagon full of tapes, or perhaps a panel van full of flash drives), this remains within the realm of possibility.
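For what it's worth, the usual sneakernet arithmetic (illustrative numbers are my own) shows why the van wins on raw throughput:

    # Effective bandwidth of a panel van full of flash drives driven across
    # a metro area. All numbers are hypothetical.
    drives = 10_000            # number of 1 TB flash drives in the van
    capacity_tb = drives * 1.0 # total payload, terabytes
    trip_hours = 2             # drive time across the metro

    bits = capacity_tb * 8e12  # terabytes -> bits
    seconds = trip_hours * 3600
    print(f"Effective throughput: {bits / seconds / 1e12:.1f} Tb/s")

The catch, of course, is that the "latency" of that link is the length of the drive.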