This is a solution to a problem that doesn't exist. In database-heavy apps, latency is the least of the concerns, except when the architecture turns out to be suboptimal for meeting compliance requirements in some cases.
Eh, sure, the latency is suboptimal. But if you have an LLM in the mix, that latency will dominate the overall response time. At that point you might not care how performant your index is, and since performance/cost is non-linear, that can translate into very significant savings.
I think latency also matters a lot for client-side DB connection pooling. If the DB is a 10ms round-trip away, and queries take 10ms, the process on the DB side is only really busy half the time.
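Rough back-of-envelope in Python (made-up numbers, and assuming one query in flight per connection, no pipelining):

    # DB-side utilization when each client connection runs one
    # synchronous query at a time (hypothetical numbers).
    rtt_ms = 10.0    # network round trip to the DB
    query_ms = 10.0  # time the DB actually spends executing

    # The connection is occupied for rtt + query time, but the DB
    # only does useful work during query_ms of that window.
    utilization = query_ms / (rtt_ms + query_ms)
    print(f"utilization per connection: {utilization:.0%}")  # 50%

    # Connections needed to keep one DB worker saturated.
    print(f"connections to saturate: {(rtt_ms + query_ms) / query_ms:.0f}")  # 2

So the pool needs roughly (rtt + query) / query connections per DB worker just to compensate for the wire time.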
There’s no network latency involved as it’s an in-memory, embedded database.
Also, max throughput here is ultimately a function of how fast the memory allocator is and of memory bandwidth, which, even with a poor implementation, would be orders of magnitude higher than what you can do with standard NICs and the Linux kernel networking stack.
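A crude way to eyeball that claim (loopback skips the physical NIC but still goes through the kernel stack, so if anything it flatters the network side; numbers vary a lot by machine):

    import socket, threading, time

    SIZE = 64 * 1024 * 1024  # 64 MiB payload
    buf = bytes(SIZE)

    # In-memory path: one allocation plus a full copy, so roughly
    # allocator speed + memory bandwidth.
    t0 = time.perf_counter()
    copy = bytearray(buf)    # bytearray() forces a real copy
    mem_s = time.perf_counter() - t0

    # Kernel networking path: same bytes over a loopback TCP socket.
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)

    def drain():
        conn, _ = srv.accept()
        while conn.recv(1 << 20):
            pass

    threading.Thread(target=drain, daemon=True).start()
    cli = socket.create_connection(srv.getsockname())
    t0 = time.perf_counter()
    cli.sendall(buf)
    cli.close()
    net_s = time.perf_counter() - t0

    print(f"copy: {SIZE / mem_s / 1e9:.1f} GB/s, loopback: {SIZE / net_s / 1e9:.1f} GB/s")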
Moreover, as a consequence, far less context switching is involved.
Your site keeps most of the logic on the backend and only makes a single round trip to the server per user interaction. That makes you significantly less subject to latency problems than the trendy PWAs that implement most of their logic in the frontend and then have to issue multiple queries to the backend to load the necessary data.
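The latency accounting is simple (hypothetical numbers):

    # Hypothetical latency budget per user interaction.
    rtt_ms = 80.0   # user <-> server round trip
    work_ms = 15.0  # backend work per request

    one_roundtrip = rtt_ms + work_ms     # backend-heavy site
    chatty_pwa = 4 * (rtt_ms + work_ms)  # 4 dependent requests
    print(one_roundtrip, chatty_pwa)     # 95.0 vs 380.0 ms

And that PWA figure assumes the requests are dependent; if they can be fired in parallel it degrades back toward a single round trip.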
Yeah, those response times are in the realm of “why would I even consider that”. If I am trying to tune my DB queries to a p95 latency of 1ms (for example), there's no way I would choose an architecture that then threw it all out the window with ~100ms of network latency: the end-to-end time is ~101ms either way, so the tuning contributes less than 1% of it.
Hopefully I am misunderstanding those numbers somehow.
Network latency is the No. 1 bottleneck for every modern device; everything else is a distant second. Also, you can optimize everything else, but you can't make MPAs navigate without a network round trip.
As mentioned in the article, there is no magic: latency, throughput, and memory consumption are connected, and optimizing for one means sacrificing a bit in the other areas.
Correct me if I am wrong, but won't latency be an issue when connecting to a DB via an API? I would imagine that would be a deal-breaker for most (especially for me).
Bandwidth and memory are probably both worse, since JSON adds overhead. But round-trip latency only stacks up if you wait for the results of one query before sending the next.
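Quick illustration with asyncio, using a sleep as a stand-in for a hypothetical 10 ms round trip (no real DB driver here):

    import asyncio, time

    RTT = 0.010  # pretend each query costs a 10 ms round trip

    async def query(sql: str) -> str:
        await asyncio.sleep(RTT)  # stand-in for network + execution
        return f"rows for {sql!r}"

    async def main() -> None:
        t0 = time.perf_counter()
        for sql in ("q1", "q2", "q3"):  # dependent: wait, then send next
            await query(sql)
        sequential = time.perf_counter() - t0

        t0 = time.perf_counter()
        await asyncio.gather(*(query(s) for s in ("q1", "q2", "q3")))
        overlapped = time.perf_counter() - t0

        # ~30 ms vs ~10 ms: latency only stacks when queries are dependent.
        print(f"sequential {sequential*1000:.0f} ms, overlapped {overlapped*1000:.0f} ms")

    asyncio.run(main())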
The fact that you have a whole other process was my line of thinking. If the scheduling doesn't play nice, then latency will suffer. I don't really know how it works out in practice, though.
There are other cloud-based database services out there - and from my (fairly limited) experience the latency involved is rather noticeable.
It might be different if you do bulk uploads of data and then spend a lot of time querying it and retrieving relatively small reports - but that's probably not what most applications spend their time doing.
Right, I replicate the database. Latency hasn't been a problem for our volume. During our peak hours we get 30 requests/second, so it's pretty manageable.
Regarding latency, it's mostly related to the distance between the user and the database. The routing strategy has nothing to do with that.