JDK-8134802 : LCM register pressure scheduling
  • Type: Enhancement
  • Component: hotspot
  • Sub-Component: compiler
  • Affected Version: 9
  • Priority: P3
  • Status: Resolved
  • Resolution: Fixed
  • OS: generic
  • CPU: x86
  • Submitted: 2015-08-31
  • Updated: 2017-03-10
  • Resolved: 2015-09-16
Fix Version: JDK 9 : 9 b89 (Fixed)
Description
These changes calculate register pressure at the entry and exit of each basic block, and incrementally while scheduling, using an efficient algorithm that recalculates pressure only on an as-needed basis. Heuristics switch the scheduler to a pressure-based algorithm to reduce spills for int and float registers, with a separate threshold for each class. Per-register-class weights are also used to bias ready-list candidate choice while scheduling so that register pressure is reduced where possible. Once either threshold is exceeded, we start trying to mitigate pressure on the affected register class that is over the limit; this can happen on both register classes together or separately for each. We switch back to latency scheduling when pressure is alleviated. As before, we obey hard ordering constraints such as barriers and fences. The overhead of constructing and providing liveness information, plus the additional algorithmic work, is very small, so compile time is affected only minimally.
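As a rough illustration of the threshold-switching heuristic described above, here is a minimal standalone C++ sketch. All names (Candidate, PressureState, pick_candidate, the weight fields) are hypothetical placeholders rather than HotSpot C2 identifiers, and the weights are assumed to model the net pressure change a candidate causes when it is scheduled.

```cpp
#include <cstddef>
#include <vector>

// Illustrative placeholders only; not the HotSpot C2 API.
struct Candidate {
  int latency;        // latency priority (higher = schedule sooner)
  int int_weight;     // net change to int register pressure if scheduled
  int float_weight;   // net change to float register pressure if scheduled
};

struct PressureState {
  int int_pressure;   // current int pressure inside the block
  int float_pressure; // current float pressure inside the block
  int int_limit;      // spill threshold for the int register class
  int float_limit;    // spill threshold for the float register class

  bool over_limit() const {
    return int_pressure > int_limit || float_pressure > float_limit;
  }
};

// Score a candidate under pressure scheduling: how much pressure relief it
// gives on the register class(es) that are currently over their threshold.
static int pressure_relief(const Candidate& c, const PressureState& p) {
  int relief = 0;
  if (p.int_pressure   > p.int_limit)   relief -= c.int_weight;
  if (p.float_pressure > p.float_limit) relief -= c.float_weight;
  return relief;
}

// Pick the next instruction from the ready list (assumed non-empty).  Below
// both thresholds we keep the usual latency order; once either class goes
// over its limit we prefer the candidate that relieves that class the most,
// and fall back to latency order again when pressure is alleviated.
static std::size_t pick_candidate(const std::vector<Candidate>& ready,
                                  const PressureState& p) {
  std::size_t best = 0;
  for (std::size_t i = 1; i < ready.size(); i++) {
    bool better;
    if (p.over_limit()) {
      better = pressure_relief(ready[i], p) > pressure_relief(ready[best], p);
    } else {
      better = ready[i].latency > ready[best].latency;
    }
    if (better) best = i;
  }
  return best;
}

// After scheduling the chosen node, update the running pressure so the next
// pick sees the incremental state rather than a full recomputation.
static void apply_choice(PressureState& p, const Candidate& c) {
  p.int_pressure   += c.int_weight;
  p.float_pressure += c.float_weight;
}
```

The sketch only shows the switching logic; in the actual change the thresholds and per-class weights are derived from the liveness information computed at block boundaries and maintained incrementally during scheduling.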
Comments
Code review link: http://cr.openjdk.java.net/~mcberg/8134802/webrev.04/ Added code to restrict liveness analysis to methods that contain blocks with more than 10 instructions, and applied the same screening in LCM for each block we process (a sketch of this screening follows below). This decreases overhead to levels where the register-allocation speedup makes up for the scheduling cost in the vast majority of cases on x86 and x64, so overhead is no longer an issue. Also changed the node flag to share its value with the reductions flag.
16-09-2015
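A minimal sketch of the screening described in the comment above, under the assumption that block size is simply the instruction count; the names Block, number_of_nodes and needs_liveness are placeholders, not HotSpot's API.

```cpp
#include <vector>

// Placeholder block representation; not the HotSpot Block class.
struct Block {
  int number_of_nodes;   // instructions in this basic block
};

// Threshold taken from the comment above: more than 10 instructions.
static const int kMinBlockSizeForPressureScheduling = 10;

// Only pay for liveness analysis (and pressure-aware scheduling) when the
// method has at least one block big enough to benefit; the same per-block
// check can be applied again when each block is scheduled in LCM.
static bool needs_liveness(const std::vector<Block>& blocks) {
  for (const Block& b : blocks) {
    if (b.number_of_nodes > kMinBlockSizeForPressureScheduling) {
      return true;
    }
  }
  return false;
}
```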