
Works if latency isn't an issue between app and DB, which it often is.



Heh, my DB and app live in the same data center, and the latency is still more than 30-40 ms.

This is a solution to a problem that does not exist. In database-heavy apps, latency is the least of the concerns, except in cases where the architecture has to be suboptimal to meet compliance requirements.

Correct me if I am wrong, but won't latency be an issue if connecting to a DB via an API? I would imagine that would be a deal breaker for most (especially for me).

And functions often have multiple DB calls, not just one. The latency would be a deal-breaker, no?

I think latency also matters a lot for client-side DB connection pooling. If the DB is a 10ms round-trip away, and queries take 10ms, the process on the DB side is only really busy half the time.
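
A rough back-of-the-envelope sketch of that arithmetic (the 10 ms figures are just the ones assumed above):

    # How busy is the DB-side process when each query pays a network round trip?
    round_trip_ms = 10.0   # client <-> DB network round trip
    query_exec_ms = 10.0   # time the DB actually spends executing the query

    # With one query in flight per connection, each query occupies the
    # connection for round_trip_ms + query_exec_ms, but the DB process
    # only does work for query_exec_ms of that window.
    utilization = query_exec_ms / (round_trip_ms + query_exec_ms)
    print(f"DB-side utilization per connection: {utilization:.0%}")  # 50%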

How are you imagining solving latency issues? Many people want their DB to be close to their app servers.

edit: and if it's a hosted service, I'm assuming decent networking options (IPsec et al.)?


Wouldn't this drastically increase database latency?

Unfortunately it's a database that's gotten big, so I need low latency :/

I’d think the latency between the app servers and the DBs would be a problem, given they didn’t live in the same data center. How was that mitigated?

Oh cool, so now I can host my app in one data center and have it make DB calls across the open internet to another DB server! But wait, there's more! It's over a stateless protocol: HTTP, with really poor multiplexing/pipelining support.

Latency is a feature, right? Like "slow your roll, cowboy, let's not have a heart attack here".


SimpleDB latency is bad, but not that bad.

Yes, OP is assuming that. The popular use-case for a "global DB" is end-users.

And geographic network latency is, by the usual definition, a function of distance and the speed of light, not of the speed at which an app talks to the DB.
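
To put a rough number on that, a minimal sketch of the physical floor on round-trip time, assuming signals travel through fiber at about two thirds of c (the example distance is hypothetical):

    # Theoretical lower bound on round-trip latency set by physics.
    SPEED_OF_LIGHT_KM_S = 300_000
    FIBER_FRACTION = 2 / 3  # typical refractive-index penalty for optical fiber

    def min_round_trip_ms(distance_km: float) -> float:
        one_way_s = distance_km / (SPEED_OF_LIGHT_KM_S * FIBER_FRACTION)
        return 2 * one_way_s * 1000

    # Example: app on the US east coast, DB in Europe (~5,600 km great-circle).
    print(f"{min_round_trip_ms(5600):.0f} ms minimum round trip")  # ~56 ms

No amount of protocol cleverness between the app and the DB gets under that floor.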


Latency is quite poor; I wouldn't recommend running high-performance database loads there.

"One cool, super-advanced trick you can use to reduce latency for simple queries by 95%!"

Then you read the article and it's just running the app and DB on the same physical server, and communicating over local sockets rather than a TCP/IP stack through various virtualization layers and across several networks, real and virtual—so, partying like it's 1999.
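
For what it's worth, "local sockets" here usually means a Unix domain socket. A minimal sketch of the difference with psycopg2, assuming Postgres and a made-up database name; the socket directory varies by distro:

    import psycopg2

    # Over TCP, even to localhost, every query goes through the TCP/IP stack.
    tcp_conn = psycopg2.connect(dbname="app", host="127.0.0.1", port=5432)

    # Over a Unix domain socket (libpq treats a host starting with "/" as a
    # socket directory), app and DB on the same machine skip the network
    # stack entirely. /var/run/postgresql is common on Debian/Ubuntu.
    local_conn = psycopg2.connect(dbname="app", host="/var/run/postgresql")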


Inter-datacenter network latency is also a limiting factor. Round trips between an app server and a database add up quickly when each one is slow.

It's absolutely an issue when multiple applications or users write to the same data from different locations, or when users move around.

If the application only lives in one place, you are already paying the latency for the remote client to reach the application in the first place, even if the latency from the app to the database is "fast".

FaunaDB gives you local-latency reads and minimum global write latency from everywhere.


If latency and performance are an issue, there are also embedded solutions like RocksDB or LevelDB.
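
A minimal sketch of that embedded approach, assuming the plyvel LevelDB bindings and a throwaway path; the point is that the store lives inside the application process, so there is no network round trip at all:

    import plyvel  # LevelDB bindings; the store runs inside the app process

    # Open (or create) a local on-disk key-value store; no server, no network hop.
    db = plyvel.DB("/tmp/example-leveldb", create_if_missing=True)

    db.put(b"user:42", b'{"name": "alice"}')   # write goes to the local memtable/disk
    value = db.get(b"user:42")                 # read is a local lookup, no round trip
    print(value)

    db.close()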

This is obviously sometimes the case. But more often I've seen IO-bound apps spending all their time on network round-trip latency, i.e., not a few poorly performing SQL queries, but a thousand queries that each take a millisecond or two.
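
A minimal sketch of that pattern and what batching buys, assuming psycopg2 and a made-up users table:

    import psycopg2

    conn = psycopg2.connect(dbname="app")  # hypothetical local database
    cur = conn.cursor()

    ids = list(range(1000))

    # N+1 pattern: a thousand tiny queries, each paying its own round trip.
    for user_id in ids:
        cur.execute("SELECT name FROM users WHERE id = %s", (user_id,))
        cur.fetchone()

    # Batched: one round trip that returns all 1000 rows at once.
    cur.execute("SELECT id, name FROM users WHERE id = ANY(%s)", (ids,))
    rows = cur.fetchall()

At 1-2 ms per round trip, the loop spends one to two seconds just waiting on the network, while the batched version pays that cost once.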

Neat idea, but what about my database? Where does that live?

Do you do anything to speed up latency from the edge to the database?

