JDK-7026932 : G1: No need to abort VM when card count cache expansion fails

Details
Type: Enhancement
Submit Date: 2011-03-11
Status: Closed
Updated Date: 2011-04-24
Project Name: JDK
Resolved Date: 2011-04-24
Component: hotspot
Sub-Component: gc
OS: generic
CPU: generic
Priority: P4
Resolution: Fixed
Affected Versions: 7
Fixed Versions: hs21 (b08)

Description
Steffan Friberg reported that he experienced the following crash while doing some G1 performance runs:

#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 9831128 bytes for CardEpochCacheEntry in /HUDSON/workspace/jdk7-2-build-linux-i586-product/jdk7/hotspot/src/share/vm/gc_implementation/g1/concurrentG1Refine.cpp
# Possible reasons:
#   The system is out of physical RAM or swap space
#   In 32 bit mode, the process size limit was hit
# Possible solutions:
#   Reduce memory load on the system
#   Increase physical memory or swap space
#   Check if swap backing store is full
#   Use 64 bit Java on a 64 bit OS
#   Decrease Java heap size (-Xmx/-Xms)
#   Decrease number of Java threads
#   Decrease Java thread stack sizes (-Xss)
#   Set larger code cache with -XX:ReservedCodeCacheSize=
# This output file may be truncated or incomplete.
#
#  Out of Memory Error (allocation.inline.hpp:44), pid=27123, tid=3022015376
#
# JRE version: 6.0_25-b03
# Java VM: Java HotSpot(TM) Server VM (21.0-b02 mixed mode linux-x86 )
# Core dump written. Default location: /localhome/tests/specjapp04/wls1032/sthx6434/wlsdomain/wls103/specdomain/core or core.27123
#

---------------  T H R E A D  ---------------

Current thread (0x08dfd000):  GCTaskThread [stack: 0x00000000,0x00000000] [id=27125]

Stack: [0x00000000,0x00000000],  sp=0xb42017b0,  free space=2951173k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
V  [libjvm.so+0x6c709b]  VMError::report_and_die()+0x19b
V  [libjvm.so+0x2c8c2e]  report_vm_out_of_memory(char const*, int, unsigned int, char const*)+0x4e
V  [libjvm.so+0x14a93b]  AllocateHeap(unsigned int, char const*)+0x4b
V  [libjvm.so+0x2967a2]  ConcurrentG1Refine::clear_and_record_card_counts()+0x92
V  [libjvm.so+0x35bd9c]  G1RemSet::oops_into_collection_set_do(OopsInHeapRegionClosure*, int)+0x13c
V  [libjvm.so+0x347808]  G1CollectedHeap::g1_process_strong_roots(bool, SharedHeap::ScanningOption, OopClosure*, OopsInHeapRegionClosure*, OopsInGenClosure*, int)+0x288
V  [libjvm.so+0x34f26d]  G1ParTask::work(int)+0x98d
V  [libjvm.so+0x6d6289]  GangWorker::loop()+0x99
V  [libjvm.so+0x6d5c08]  GangWorker::run()+0x18
V  [libjvm.so+0x587411]  java_start(Thread*)+0x111
C  [libpthread.so.0+0x5832]  abort@@GLIBC_2.0+0x5832

The full hs_err is attached.


Comments
EVALUATION

The VM ran out of C heap while attempting to expand the data structures used to manage the hot card cache. We shouldn't abort in this case; instead, we should continue execution with the existing data structures.
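A minimal sketch of that approach, using hypothetical names (CardCountCache, CardCountCacheEntry, expand); this is not the actual HotSpot change, but it illustrates reporting failure from the expansion attempt instead of taking the fatal out-of-memory path, so the caller keeps using the current cache:

// Hypothetical sketch only; names and layout do not match concurrentG1Refine.cpp.
#include <cstdlib>
#include <cstring>

struct CardCountCacheEntry {
  unsigned count;
  unsigned card_index;
};

class CardCountCache {
  CardCountCacheEntry* _entries;
  size_t               _capacity;

public:
  CardCountCache() : _entries(NULL), _capacity(0) {}

  // Try to grow the cache. On allocation failure, keep the existing
  // (smaller) cache and report failure instead of aborting the VM.
  bool expand(size_t new_capacity) {
    if (new_capacity <= _capacity) {
      return true;  // nothing to do
    }
    // Plain malloc so the failure is recoverable; the point is to avoid the
    // fatal out-of-memory path that the original allocation call took.
    CardCountCacheEntry* new_entries = static_cast<CardCountCacheEntry*>(
        std::malloc(new_capacity * sizeof(CardCountCacheEntry)));
    if (new_entries == NULL) {
      return false;  // caller continues with the current entries
    }
    std::memset(new_entries, 0, new_capacity * sizeof(CardCountCacheEntry));
    std::free(_entries);
    _entries  = new_entries;
    _capacity = new_capacity;
    return true;
  }

  size_t capacity() const { return _capacity; }
};

In this sketch the caller would treat a false return as "expansion skipped for now" and simply carry on with the existing card count cache, possibly retrying during a later GC.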
2011-03-11
EVALUATION

http://hg.openjdk.java.net/jdk7/hotspot-gc/hotspot/rev/02f49b66361a
2011-03-30


