[Submitted on 15 Dec 2021]
Abstract: Despite the long history of garbage collection (GC) and its prevalence in modern programming languages, there is surprisingly little clarity about its true cost. Even when evaluated using modern, well-established methodologies, crucial tradeoffs made by GCs can go unnoticed, which leads to misinterpretation of evaluation results. In this paper, we 1) develop a methodology that allows us to place a lower bound on the absolute overhead (LBO) of GCs, and 2) expose key performance tradeoffs of five production GCs in OpenJDK 17, a high-performance Java runtime. We find that with a modest heap size and across a diverse suite of modern benchmarks, production GCs incur substantial overheads, spending 7-82% more wall-clock time and 6-92% more CPU cycles relative to a zero-cost GC scheme. We show that these overheads can be masked by concurrency and generous provisioning of memory/compute. In addition, we find that newer low-pause GCs are significantly more expensive than older GCs, and sometimes even deliver worse application latency than stop-the-world GCs. Our findings reaffirm that GC is by no means a solved problem and that a low-cost, low-latency GC remains elusive. We recommend adopting the LBO methodology and using a much wider range of cost metrics in future GC evaluations. This will not only help the community more comprehensively understand the performance characteristics of different GCs, but also reveal opportunities for future GC optimizations.
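As an illustrative sketch only (not the paper's methodology, which uses rigorous benchmark suites and hardware performance counters): OpenJDK ships Epsilon, a no-op collector that allocates but never reclaims, which can serve as a zero-cost baseline of the kind the abstract describes. The `Alloc` class below is a hypothetical micro-workload; running it under Epsilon and then under a production collector, with the same heap size, gives a rough sense of the wall-clock component of GC overhead.

```java
// Hypothetical allocation stressor. Run it once per collector and compare
// elapsed times, e.g. (flags are real OpenJDK options):
//   java -Xmx1g -XX:+UnlockExperimentalVMOptions -XX:+UseEpsilonGC Alloc   # zero-cost baseline
//   java -Xmx1g -XX:+UseG1GC Alloc
//   java -Xmx1g -XX:+UseZGC Alloc
// Epsilon never frees memory, so -Xmx must cover the workload's total allocation.
public class Alloc {
    // Allocate n short-lived byte arrays; the checksum keeps the
    // allocations observable so they are not optimized away.
    static long allocLoop(int n) {
        long checksum = 0;
        for (int i = 0; i < n; i++) {
            byte[] b = new byte[64];
            checksum += b.length + (i & 7);
        }
        return checksum;
    }

    public static void main(String[] args) {
        long start = System.nanoTime();
        long checksum = allocLoop(5_000_000);
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("checksum=" + checksum + " elapsedMs=" + elapsedMs);
    }
}
```

Note that such a microbenchmark captures only one cost metric; the paper's point is precisely that a broad range of metrics (CPU cycles, latency, memory provisioning) is needed to see the full tradeoff.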
Submission history
From: Zixian Cai [view email]
Wed, 15 Dec 2021 04:51:49 UTC (165 KB)