JDK-8058221 : Rounding in log output makes evaluation difficult
  • Type: Enhancement
  • Component: hotspot
  • Sub-Component: gc
  • Affected Version: 9
  • Priority: P4
  • Status: Open
  • Resolution: Unresolved
  • Submitted: 2014-09-11
  • Updated: 2019-02-11
Description
In the GC log output, heap sizes are typically rounded to some (rather arbitrary) K/M/G values for better human readability.

This introduces subtle rounding problems when calculating derived values from them (like the number of promoted bytes coming out negative for no apparent reason).
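
To make the problem concrete, here is a minimal standalone sketch (not HotSpot code, with invented byte values) assuming the common derivation promoted = (young freed) - (heap freed) and "%.1f" megabyte rounding of each logged size: every occupancy is off by at most ~0.05M after rounding, but the individual errors can combine so that the derived promotion comes out negative even though actual promotion was positive.

    #include <cstdio>
    #include <cmath>

    static const double M = 1024.0 * 1024.0;

    // Mimic the log output: round a byte count to one decimal place in megabytes.
    static double logged_m(double bytes) {
      return std::round(bytes / M * 10.0) / 10.0;
    }

    int main() {
      // Hypothetical exact occupancies around one young GC (in bytes).
      double young_before = 512 * M + 50 * 1024;   // logged as 512.0M
      double young_after  =  64 * M + 52 * 1024;   // logged as  64.1M
      double heap_before  = 812 * M + 52 * 1024;   // logged as 812.1M
      double heap_after   = 364 * M + 152 * 1024;  // logged as 364.1M

      // Promoted bytes = bytes removed from young minus bytes removed from the heap.
      double exact   = (young_before - young_after) - (heap_before - heap_after);
      double derived = ((logged_m(young_before) - logged_m(young_after))
                      - (logged_m(heap_before)  - logged_m(heap_after))) * M;

      std::printf("exact promoted bytes:       %.0f\n", exact);    //  100352 (positive)
      std::printf("derived from rounded sizes: %.0f\n", derived);  // -104858 (negative)
      return 0;
    }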

A possibility is to add an option to allow the size format output to always be written in bytes.

This feature may also be automatically enabled by other options like G1LogLevel=finest.

Comments
Regression test required.
15-09-2014

Comments from Kirk P. from hotspot-gc-dev: The problem of the subtle rounding error is due to the fact that log records, instead of sharing common formats, define their own. So while one set of records shares EXT_SIZE_FORMAT (g1CollectorPolicy.cpp: #define EXT_SIZE_FORMAT "%.1f%s"), other records use a completely separate integer definition. In my experience most people use either Censum, one of the OSS tools, or a home-grown tool to look at these logs, which implies that human readability is less important. And if humans want values to be rounded, a flag like -XX:PrintMemoryRounding=[G,M,K,B] could be introduced.

I'm not a fan of G1LogLevel=finest, as this turns on bags of stuff that may or may not be relevant to the problem at hand and increases the complexity of parsing the log files. This is especially true in light of the corruption that often occurs in both CMS and G1 logs, which can be difficult to untangle.

Format isn't typically that much of a problem for the home-grown folks, as they only deal with one log format: theirs. If it changes when they change JVM, they can simply change their parser. However, this is very painful for the few of us who actually try to maintain tools that work across all versions and support a variety of collector and flag combinations. As you can probably imagine, the fact that this rounding error exists means that it needs to be compensated for in the tooling for as long as these versions of the JVM are in play.
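
For illustration, a small sketch of what the suggested -XX:PrintMemoryRounding=[G,M,K,B] flag could mean for size formatting; both the flag and the helper below are hypothetical, not existing HotSpot code. The B setting prints an exact byte count (what log parsers want), while the other settings keep today's human-friendly rounding.

    #include <cstdio>

    // Hypothetical helper: format 'bytes' according to the chosen rounding unit.
    static void format_size(char unit, size_t bytes, char* buf, size_t buflen) {
      switch (unit) {
        case 'G': std::snprintf(buf, buflen, "%.1fG", bytes / (1024.0 * 1024.0 * 1024.0)); break;
        case 'M': std::snprintf(buf, buflen, "%.1fM", bytes / (1024.0 * 1024.0)); break;
        case 'K': std::snprintf(buf, buflen, "%.1fK", bytes / 1024.0); break;
        default : std::snprintf(buf, buflen, "%zuB", bytes); break;  // 'B': exact byte count
      }
    }

    int main() {
      const size_t occupancy = 364u * 1024u * 1024u + 155648u;  // example value
      char buf[32];
      for (const char* u = "GMKB"; *u != '\0'; ++u) {
        format_size(*u, occupancy, buf, sizeof(buf));
        std::printf("PrintMemoryRounding=%c -> %s\n", *u, buf);
      }
      return 0;
    }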
12-09-2014

Capacities are always rounded to some multiple of some granularity. The interesting values are the actual values anyway; when they are used to calculate derived values, these rounding errors tend to propagate. I agree that the unified logging work is probably the best place/opportunity to do this.
12-09-2014

Two thoughts on this:
* We should probably introduce this type of support in the unified logging work rather than doing it as a separate change before that.
* In G1 a lot of sizes are related to the region size, which is a minimum of 1M. I doubt that it is more readable to have those numbers printed as bytes.
12-09-2014