JDK-6799919 : Recursive calls to report_vm_out_of_memory are handled incorrectly
  • Type: Bug
  • Component: hotspot
  • Sub-Component: runtime
  • Affected Version: 7
  • Priority: P4
  • Status: Closed
  • Resolution: Fixed
  • OS: generic
  • CPU: generic
  • Submitted: 2009-02-02
  • Updated: 2014-06-26
  • Resolved: 2013-02-21
Fix Versions: JDK 8: Fixed in 8; Other: Fixed in hs25
Description
In debug.cpp, report_vm_out_of_memory is defined as follows:

void report_vm_out_of_memory(const char* file_name, int line_no, size_t size, const char* message) {
  if (Debugging || assert_is_suppressed(file_name, line_no))  return;

  // We try to gather additional information for the first out of memory
  // error only; gathering additional data might cause an allocation and a
  // recursive out_of_memory condition.

  const jint exiting = 1;
  // If we succeed in changing the value, we're the first one in.
  bool first_time_here = Atomic::xchg(exiting, &_exiting_out_of_mem) != exiting;

  if (first_time_here) {
    Thread* thread = ThreadLocalStorage::get_thread_slow();
    VMError(thread, size, message, file_name, line_no).report_and_die();
  }

  // Dump core and abort
  vm_abort(true);
}

The detection of recursive calls at this level is, as far as I can see, unnecessary: a second call to report_and_die() will itself detect that an error report is already in progress and block the calling thread. Further, the logic above is undesirable because if a second OOM condition is encountered in another thread, that thread will cause an immediate abort and core dump while the first thread is still trying to report details of the first error.

Comments
Refresh READ_ME
21-02-2013

Upload test case files
21-02-2013

Attached the webrev from Code Review Round 0 for HSX-25.
20-02-2013

Add an e-mail discussing the history of the code being modified by this bug.
20-02-2013

We tried to consume memory with 4k block allocations, but it did not take it all; this approach left a chunk of less than 4k.

import java.util.ArrayList;

public class MemEater {
    public static void main(String[] args) {
        System.out.println("Hello from MemEater!");
        ArrayList memBlocks = new ArrayList();
        while (true) {
            try {
                memBlocks.add(new char[1024 * 1024]);
            } catch (OutOfMemoryError oome) {
                // cannot allocate another block
                break;
            }
        }
        System.out.println("Allocated " + memBlocks.size() + " blocks before OutOfMemoryError.");
    }
}
11-02-2013

There is an existing command line option:

notproduct(uintx, ErrorHandlerTest, 0,                                      \
          "If > 0, provokes an error after VM initialization; the value"    \
          "determines which error to provoke. See test_error_handler()"     \
          "in debug.cpp.")

This option looks useful:

$ cat Hello.java
public class Hello {
    public static void main(String[] args) {
        System.out.println("Hello world!");
    }
}
$ $JAVA_HOME/fastdebug/bin/java -XX:ErrorHandlerTest=8 Hello
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 4096 bytes for ChunkPool::allocate
# An error report file with more information is saved as:
# /tmp/hs_err_pid3132.log

The interesting parts from the hs_err_pid file:

# Out of Memory Error (/tmp/workspace/jdk8-2-build-solaris-i586-product/jdk8/hotspot/src/share/vm/utilities/debug.cpp:363), pid=3132, tid=2
#
# JRE version: Java(TM) SE Runtime Environment (8.0-b72) (build 1.8.0-ea-fastdebug-b72)
# Java VM: Java HotSpot(TM) Server VM (25.0-b14-fastdebug mixed mode solaris-x86 )
# Core dump written. Default location: /tmp/core or core.3132
#

--------------- T H R E A D ---------------

Current thread (0x0806b800):  JavaThread "main" [_thread_in_native, id=2, stack(0xfc43f000,0xfc48f000)]

Stack: [0xfc43f000,0xfc48f000],  sp=0xfc48e1c0,  free space=316k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
V  [libjvm.so+0x1db4c39]  void VMError::report(outputStream*)+0x92d
V  [libjvm.so+0x1db5ebe]  void VMError::report_and_die()+0x57a
V  [libjvm.so+0xb5ab59]  void test_error_handler(unsigned)+0xcd5
V  [libjvm.so+0x11559be]  JNI_CreateJavaVM+0x2e2
C  [libjli.so+0x84a7]  InitializeJVM+0x103
C  [libjli.so+0x64ed]  JavaMain+0x65
C  [libc.so.1+0xa71d0]  _thr_setup+0x4e
C  [libc.so.1+0xa74c0]  __moddi3+0x60

line 363:

case 8: vm_exit_out_of_memory(num, "ChunkPool::allocate");

The vm_exit_out_of_memory macro calls report_vm_out_of_memory() in debug.cpp.
11-02-2013

D 1.168.1.1 05/07/29 11:29:29 sbohne 339 337 00031/00000/00930
MRs:
COMMENTS:
5073464 improve error handling for create_itable_stub out of memory

This is the bug that this code was added for, which came after the recursive error handling was added to VMError::report_and_die(). But I think it was part of a bigger fix.
20-12-2012