JDK-8219789 : [TESTBUG] TestOptionsWithRanges.java produces hs_err_pidXXXXX.log file for VMThreadStackSize=9007199254740991
Type: Enhancement
Component: hotspot
Sub-Component: runtime
Affected Version: 12
Priority: P4
Status: Resolved
Resolution: Fixed
Submitted: 2019-02-26
Updated: 2019-09-11
Resolved: 2019-02-28
TestOptionsWithRanges.java technically passes for VMThreadStackSize=9007199254740991, but the test run produces hs_err_pidXXXXX.log files, as reported by [tschatzl].
Comments
Fix Request
I would like to bring this to jdk11.0.3 and jdk12.
The huge stacks seem to cause problems on our Solaris systems.
11.0.3, as this is a pure test bug.
No risk; the patch applied cleanly.
04-03-2019
We need to exclude VMThreadStackSize=9007199254740991 (the maximum of the option's range) from TestOptionsWithRanges.java.
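A minimal, standalone sketch of the idea, not the actual patch: skip launching the VM with the upper bound of VMThreadStackSize when the test iterates over max-of-range values. The map contents and names below are hypothetical; the real test may already provide an exclusion helper (e.g. something like excludeTestMaxRange), but treat that as an assumption.

import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

public class ExcludeMaxRangeSketch {
    // Options whose maximum range value should not actually be launched,
    // because e.g. a VM thread stack of ~2^53 KB exhausts system resources.
    private static final Set<String> EXCLUDE_MAX = Set.of("VMThreadStackSize");

    public static void main(String[] args) {
        // optionName -> maximum value allowed by the option's range (hypothetical data)
        Map<String, Long> maxOfRange = new LinkedHashMap<>();
        maxOfRange.put("VMThreadStackSize", 9007199254740991L);
        maxOfRange.put("ThreadStackSize", 1_048_576L); // made-up entry for illustration

        maxOfRange.forEach((name, max) -> {
            if (EXCLUDE_MAX.contains(name)) {
                System.out.println("Skipping max-range launch for -XX:" + name + "=" + max);
            } else {
                System.out.println("Would launch: java -XX:" + name + "=" + max + " ...");
            }
        });
    }
}

Excluding only the maximum keeps coverage of the lower bound and of in-range values for the same option.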
26-02-2019
Reported by [tschatzl]
With the latest JDK 13, I am sometimes(?) getting a test failure with runtime/CommandLine/OptionsValidation/TestOptionsWithRanges.java. In any case, even if the test passes, there are hs_err files in the scratch directories.
I.e.
$ rm -rf JT*; export JT_JAVA=java && ~/Downloads/jtreg/bin/jtreg -J-Djavatest.maxOutputSize=1000000 -testjdk:[...]/build/linux-x86_64-server-fastdebug/images/jdk/ -v -conc:8 runtime/CommandLine/OptionsValidation/TestOptionsWithRanges.java
[...]
Passed. Execution successful
Test results: passed: 10
but
$ find JTwork -name 'hs_err*'
JTwork/scratch/3/hs_err_pid1216.log
JTwork/scratch/3/hs_err_pid1184.log
JTwork/scratch/3/hs_err_pid1209.log
I.e. the test should probably not be testing the bounds of VMThreadStackSize.
Contents of one of the hs_err files:
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Cannot create worker GC thread. Out of system resources.
# Possible reasons:
# The system is out of physical RAM or swap space
# The process is running with CompressedOops enabled, and the Java Heap may be blocking the growth of the native heap
# Possible solutions:
# Reduce memory load on the system
# Increase physical memory or swap space
# Check if swap backing store is full
# Decrease Java heap size (-Xmx/-Xms)
# Decrease number of Java threads
# Decrease Java thread stack sizes (-Xss)
# Set larger code cache with -XX:ReservedCodeCacheSize=
# This output file may be truncated or incomplete.
#
# Out of Memory Error (...src/hotspot/share/gc/shared/workerManager.hpp:91), pid=1184, tid=1190
#
# JRE version: (13.0) (fastdebug build )
# Java VM: OpenJDK 64-Bit Server VM (fastdebug 13-internal+0-adhoc.tschatzl.openjdk, mixed mode, aot, sharing, tiered, compressed oops, g1 gc, linux-amd64)
# Core dump will be written. Default location: Core dumps may be processed with "/usr/share/apport/apport %p %s %c %d %P" (or dumping to /home/tschatzl/Downloads/openjdk/test/hotspot/jtreg/JTwork/scratch/3/core.1184)
#
--------------- S U M M A R Y ------------
Command Line: -Xmx1024m -XX:-ZapUnusedHeapArea -XX:+UseG1GC -XX:VMThreadStackSize=9007199254740991 optionsvalidation.JVMStartup
Note the really large VMThreadStackSize value.
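For scale, a rough back-of-the-envelope check, assuming VMThreadStackSize is interpreted in kilobytes: the value is 2^53 - 1, so each VM worker thread would try to reserve on the order of 8 EiB of stack. A small sketch of that arithmetic:

public class StackSizeMagnitude {
    public static void main(String[] args) {
        // Value taken from the failing command line; VMThreadStackSize is
        // assumed here to be specified in kilobytes.
        long stackSizeKb = 9_007_199_254_740_991L;

        System.out.println("2^53 - 1         = " + ((1L << 53) - 1)); // same number
        double bytes = stackSizeKb * 1024.0;                          // ~9.2e18 bytes
        double exbibytes = bytes / Math.pow(2, 60);
        System.out.printf("Requested stack ~= %.1f EiB per VM thread%n", exbibytes);
    }
}

That would explain why the hs_err file reports "Cannot create worker GC thread. Out of system resources." rather than a range-check failure: the value is within the declared range, but the native thread stack presumably cannot be allocated at thread creation time.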