Well it’s easy to make a GC with consistent latency if your attitude is ‘don’t collect things that need inconsistent latency to collect.’


Yeah, I’m nominally familiar with these, but I can’t understand why one of these low latency collectors wouldn’t be the default GC unless they impose other significant tradeoffs.

But a low-latency GC won't solve that problem.

You'd be surprised. With an allocator and collector that are aware of real-time constraints, GC can actually be a pretty huge advantage for achieving low latency.

What I meant is that low-latency GC is sort of commodity/mainstream now. I'm not advising the use of Java or any specific JIT+collector.

When latency matters, just use pool allocation, which makes the GC irrelevant.
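
A rough Go sketch of the pooling idea (using sync.Pool rather than a hand-rolled arena; the handler and buffer here are made up): buffers get reused across requests instead of becoming garbage, so steady-state work produces almost nothing for the collector to do.

    package main

    import (
        "bytes"
        "fmt"
        "sync"
    )

    // bufPool hands out reusable scratch buffers. Because buffers are
    // recycled rather than dropped, the hot path allocates almost nothing,
    // so GC behaviour largely stops mattering for request latency.
    var bufPool = sync.Pool{
        New: func() any { return new(bytes.Buffer) },
    }

    func handleRequest(payload string) string {
        buf := bufPool.Get().(*bytes.Buffer)
        buf.Reset()
        defer bufPool.Put(buf) // return the buffer for reuse instead of discarding it

        buf.WriteString("processed: ")
        buf.WriteString(payload)
        return buf.String()
    }

    func main() {
        fmt.Println(handleRequest("hello"))
    }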

Very good point. Performance folds in both latency (determinism) and throughput, and there is usually a trade-off between the two. GC might handle throughput reasonably, but latency is a bit tougher. You are sort of left tweaking knobs on a black box, hoping to get good results in the end.

Digression here, but I like the "(nearly)". This bit always amuses me about garbage collection wonks. The pauseless bit is a real-time requirement. Saying your GC has great latencies or is "(nearly) pauseless" is tantamount to telling a real-time engineer your system only fails some of the time. It makes you look dumb.

GC is great. GC makes a ton of things simpler. GC as implemented in popular environments still sucks for real-time use.


You are always going to have some kind of latency spike with a sweeping GC, even if that spike is tiny.

GC often gets more flak than is deserved.

Often such flak ignores the differences between throughput and latency.

For long-lived processes you'll end up writing some kind of garbage collection system.


That goal is referring to latency spikes. GC is usually much less than 20% of the time on average, but it may take, say, 200ms in one go and then run for 2 seconds without another collection. It seems their goal is to ensure there are no large pauses like that, so the 200ms would be spread across the 2 seconds as smaller time slices instead.
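
One rough way to observe this from inside a Go program (just an illustration, not anything the project itself ships): runtime.ReadMemStats exposes a ring buffer of recent stop-the-world pause durations, so you can see whether collection shows up as one big pause or many small ones.

    package main

    import (
        "fmt"
        "runtime"
        "time"
    )

    var sink []byte

    func main() {
        // Churn the heap so the collector has work to do.
        for i := 0; i < 1_000_000; i++ {
            sink = make([]byte, 1024)
        }
        _ = sink

        var ms runtime.MemStats
        runtime.ReadMemStats(&ms)

        // PauseNs is a circular buffer of recent stop-the-world pause times.
        // Many small entries rather than a few large ones means the GC work
        // is being spread out the way the goal describes.
        n := int(ms.NumGC)
        if n > len(ms.PauseNs) {
            n = len(ms.PauseNs)
        }
        for i := 0; i < n; i++ {
            fmt.Printf("pause %d: %v\n", i+1, time.Duration(ms.PauseNs[i]))
        }
    }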

Can you link to more info about that latency-optimized GC?

Are there any production-ready GCs that have strictly bounded latency? I'm not aware of any, and that means that GC is still too immature for use in about 95% of the computers in my house, the deeply embedded ones.

Sorry, I was referring to the implementation of safepoints, not GC latency.

It is really good to see the claims about Go's garbage collector backed up in the real world. In my experience GC pauses are a real problem, so I hope this really works.

As a niggle, it's a shame when people report averaged latencies. Since latency is almost certainly not normally distributed, averages are of little value. The percentiles are good.
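
For illustration, a small Go sketch with made-up latency samples showing why the mean smears a GC-style outlier into a mildly bad number while a tail percentile exposes it plainly:

    package main

    import (
        "fmt"
        "math"
        "sort"
        "time"
    )

    // percentile returns the p-th percentile (0-100) of already-sorted
    // samples using the nearest-rank method.
    func percentile(sorted []time.Duration, p float64) time.Duration {
        if len(sorted) == 0 {
            return 0
        }
        rank := int(math.Ceil(p / 100 * float64(len(sorted))))
        if rank < 1 {
            rank = 1
        }
        return sorted[rank-1]
    }

    func main() {
        // Hypothetical request latencies: mostly fast, one GC-induced outlier.
        samples := []time.Duration{
            1 * time.Millisecond, 2 * time.Millisecond, 1 * time.Millisecond,
            2 * time.Millisecond, 3 * time.Millisecond, 1 * time.Millisecond,
            2 * time.Millisecond, 250 * time.Millisecond,
        }
        sort.Slice(samples, func(i, j int) bool { return samples[i] < samples[j] })

        var sum time.Duration
        for _, d := range samples {
            sum += d
        }

        // The mean blurs the pause; the high percentile shows it directly.
        fmt.Println("mean:", sum/time.Duration(len(samples)))
        fmt.Println("p50: ", percentile(samples, 50))
        fmt.Println("p99: ", percentile(samples, 99))
    }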


The GC is low latency, but its throughput isn’t great. The key is avoiding the heap by mostly brute forcing trivial data structures on the stack (which is why you see so many repetitive O(n) loops).
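
A hedged illustration of that style (not taken from the Go runtime, just the general pattern): keeping a small lookup as a plain slice and scanning it with an O(n) loop avoids building heap-heavy structures like maps on every call, which keeps allocation, and therefore GC work, down.

    package main

    import "fmt"

    // pair is a trivial key/value record kept in a plain slice.
    type pair struct {
        key   string
        value int
    }

    // lookup does a brute-force O(n) scan. It allocates nothing, so calling
    // it in a hot path adds no garbage for the collector to deal with,
    // unlike building a map per request would.
    func lookup(table []pair, key string) (int, bool) {
        for _, p := range table {
            if p.key == key {
                return p.value, true
            }
        }
        return 0, false
    }

    func main() {
        table := []pair{{"a", 1}, {"b", 2}, {"c", 3}}
        v, ok := lookup(table, "b")
        fmt.Println(v, ok)
    }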

It's perfectly passable for web serving and other such high-latency tasks. In a GC, the goals of throughput and latency are diametrically opposed: optimizing for one makes the other worse. The present simplistic GC is a typical throughput-oriented design.

Unpredictable latency is a huge no-no in these days of responsive server applications. Long GC pauses are just not acceptable to many server developers, so I don't think it's correct to consider it only a client-side problem. If you look at the Mechanical Sympathy mailing list, you'll see a large group of Official Knob Turners for the Java garbage collectors discussing their trade.

They are presenting a new GC implementation as solving the low-latency problem that people have been working on for decades, without offering any real proof.

Even a cascading deallocation is still deterministic. It’s true that it doesn’t have hard latency guarantees, though (and true GC with hard latency guarantees is arguably more useful in many cases).