JDK-8253871 : 7% regression in Derby-ParGC, apparently after JDK-8247281
  • Type: Bug
  • Component: hotspot
  • Sub-Component: gc
  • Affected Version: 16
  • Priority: P3
  • Status: Closed
  • Resolution: Duplicate
  • Submitted: 2020-09-30
  • Updated: 2021-06-30
  • Resolved: 2021-06-30
JDK 17 : Resolved
Related Reports
Duplicate : JDK-8210100
Relates :  
Relates :  
Description
The regression only shows up in the ParallelGC run, not in the G1 run.
Comments
Only issues that are associated with a changeset should be closed as fixed.
30-06-2021

Fixed by JDK-8210100
30-06-2021

[~ayang] Please run the option set where you achieved the 2% result in Aurora and paste it here so we can see it. Otherwise, Derby-ParGC is still 4.69% slower than 15-b36.21 (re-baselined on the new HW).
29-06-2021

According to the logs for various runs, the score reported by Derby is positively correlated with the number of GC cycles; in other words, scores are higher when there are more GC cycles. (Maybe more GC cycles improved the program's locality.)

ParallelGC divides the heap into two spaces, new and old. The new space is further divided into three spaces: eden, survivor1 and survivor2. The logs show different steady-state sizes for them before and after JDK-8247281:

    Before: eden 10206M, survivors 16M, 16M
    After : eden 10210M, survivors 14M, 14M

The total new-space size is the same (10238M, which is also the value of `MaxNewSize` on that machine). The difference in the survivor spaces is very likely due to the strength change (from strong to weak) of some roots in JDK-8247281: fewer objects are kept alive, so the adaptive size policy shrinks the survivor spaces. The eden space just takes whatever is left in the new space, and the slightly larger eden results in fewer GC cycles, hence lower scores.

I have manually set `MaxNewSize` to 10234M (10206+14+14) so that the eden space is the same size as before, and the regression drops to ~2%. I think 2% is small enough, and this ticket can be closed.
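
For illustration, the experiment above can be reproduced roughly as follows; the benchmark jar name is hypothetical, while the GC and logging flags are standard HotSpot options:

    # Pin the new-space size so eden returns to its pre-JDK-8247281 size
    # (10234M = 10206M eden + 14M + 14M survivors), and log space sizes.
    java -XX:+UseParallelGC \
         -XX:MaxNewSize=10234m \
         -Xlog:gc+heap=debug \
         -jar derby-benchmark.jar   # hypothetical benchmark invocation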
29-06-2021

Hmm. The reason is very likely that ParallelGC now performs weak processing in a single thread, whereas it used to process the in-use lists in parallel. G1 performs the weak processing in parallel, which explains why no regression was found with G1. Weak processing was done in a single thread with Parallel because it did not have a worker gang, making it hard to plug into the shared parallel GC code. Nowadays it does have a worker gang, so we can just plug it in. Gonna go ahead and do that, and hope this problem goes away.
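
To make the intended fix concrete, here is a minimal, self-contained sketch (an assumed illustration, not HotSpot code) of moving weak-root processing from a single thread onto a worker gang; every name in it is hypothetical:

    // Sketch: each worker in a gang atomically claims the next unprocessed
    // weak-root list, so the lists are striped across the workers instead
    // of being walked by one thread.
    #include <atomic>
    #include <cstdio>
    #include <thread>
    #include <vector>

    struct WeakRootList {            // stand-in for one "in-use list"
      std::vector<int> roots;
    };

    static void weak_oops_do_parallel(std::vector<WeakRootList>& lists,
                                      unsigned num_workers) {
      std::atomic<size_t> next_list{0};
      auto worker = [&]() {
        for (size_t i = next_list.fetch_add(1); i < lists.size();
             i = next_list.fetch_add(1)) {
          for (int& root : lists[i].roots) {
            root += 1;               // placeholder for is_alive/keep_alive work
          }
        }
      };
      std::vector<std::thread> gang;
      for (unsigned w = 0; w < num_workers; ++w) gang.emplace_back(worker);
      for (auto& t : gang) t.join();
    }

    int main() {
      std::vector<WeakRootList> lists(16, WeakRootList{std::vector<int>(1000, 0)});
      weak_oops_do_parallel(lists, 4);   // previously: a single-threaded loop
      std::printf("first root after processing: %d\n", lists[0].roots[0]);
      return 0;
    }

With a single worker this degenerates to the old serial behavior, which is why reusing the existing shared code on top of the gang is a low-risk change.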
01-10-2020

ILW = HLM = P3
30-09-2020