Bug ID: JDK-6930581 G1: assert(ParallelGCThreads > 1 || n_yielded() == _hrrs->occupied(),"Should have yielded all the ..

Details
Type:
Bug
Submit Date:
2010-02-27
Status:
Closed
Updated Date:
2012-02-01
Project Name:
JDK
Resolved Date:
2011-03-08
Component:
hotspot
OS:
generic,solaris_10
Sub-Component:
gc
CPU:
x86,generic
Priority:
P4
Resolution:
Fixed
Affected Versions:
hs17,hs19,7
Fixed Versions:
hs19 (b06)


Description
VM crashes when -XX:ParallelGCThreads=0 is used

Here is the report:
;; Using jvm: "/net/jse-st01.russia/export4/java/re/jdk/7/promoted/ea/b84/binaries/linux-x64/fastdebug/jre/lib/amd64/server/libjvm.so"
#
# A fatal error has been detected by the Java Runtime Environment:
#
#  Internal Error (/BUILD_AREA/jdk7/hotspot/src/share/vm/gc_implementation/g1/heapRegionRemSet.cpp:1286), pid=5662, tid=140352559626512
#  Error: assert(ParallelGCThreads > 1 || n_yielded() == _hrrs->occupied(),"Should have yielded all the cards in the rem set (in the non-par case).")
#
# JRE version: 7.0-b84
# Java VM: Java HotSpot(TM) 64-Bit Server VM (17.0-b09-fastdebug mixed mode linux-amd64 )
# If you would like to submit a bug report, please visit:
#   http://java.sun.com/webapps/bugreport/crash.jsp
#

---------------  T H R E A D  ---------------

Current thread (0x000000000249a000):  VMThread [stack: 0x00007fa66065b000,0x00007fa66075c000] [id=5668]

Stack: [0x00007fa66065b000,0x00007fa66075c000],  sp=0x00007fa660751320,  free space=3d80000000000000018k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
V  [libjvm.so+0xa061af];;  VMError::report(outputStream*)+0x138f
V  [libjvm.so+0xa0654a];;  VMError::report_and_die()+0x2da
V  [libjvm.so+0x462b7e];;  report_assertion_failure(char const*, int, char const*)+0x6e
V  [libjvm.so+0x56f91a];;  HeapRegionRemSetIterator::has_next(unsigned long&)+0xea
V  [libjvm.so+0x521c26];;  ScanRSClosure::doHeapRegion(HeapRegion*)+0x136
V  [libjvm.so+0x4f8ff8];;  G1CollectedHeap::collection_set_iterate_from(HeapRegion*, HeapRegionClosure*)+0x98
V  [libjvm.so+0x51e241];;  HRInto_G1RemSet::scanRS(OopsInHeapRegionClosure*, int)+0x171
V  [libjvm.so+0x51e915];;  HRInto_G1RemSet::oops_into_collection_set_do(OopsInHeapRegionClosure*, int)+0x3f5
V  [libjvm.so+0x5035a4];;  G1CollectedHeap::g1_process_strong_roots(bool, SharedHeap::ScanningOption, OopClosure*, OopsInHeapRegionClosure*, OopsInHeapRegionClosure*, OopsInGenClosure*, int)+0x334
V  [libjvm.so+0x50dd14];;  G1ParTask::work(int)+0x4a4
V  [libjvm.so+0x503aae];;  G1CollectedHeap::evacuate_collection_set()+0x28e
V  [libjvm.so+0x5011fd];;  G1CollectedHeap::do_collection_pause_at_safepoint()+0xc5d
V  [libjvm.so+0xa22bba];;  VM_G1IncCollectionPause::doit()+0xca
V  [libjvm.so+0xa21adf];;  VM_Operation::evaluate()+0x8f
V  [libjvm.so+0xa1fc10];;  VMThread::evaluate_operation(VM_Operation*)+0xc0
V  [libjvm.so+0xa205ab];;  VMThread::loop()+0x24b
V  [libjvm.so+0xa20afe];;  VMThread::run()+0xae
V  [libjvm.so+0x83b380];;  java_start(Thread*)+0xf0
#  Internal Error (/tmp/jprt/P1/B/045005.jcoomes/source/src/share/vm/gc_implementation/g1/heapRegionRemSet.cpp:1284), pid=30925, tid=3037698960
#  assert(ParallelGCThreads > 1 || n_yielded() == _hrrs->occupied()) failed: Should have yielded all the cards in the rem set (in the non-par case).
#
# JRE version: 7.0-b99
# Java VM: OpenJDK Client VM (19.0-b02-201007020450.jcoomes.gc-tasks-fastdebug mixed mode linux-x86 )
# If you would like to submit a bug report, please visit:
#   http://java.sun.com/webapps/bugreport/crash.jsp
#

---------------  T H R E A D  ---------------

Current thread (0x0912a800):  GCTaskThread [stack: 0x00000000,0x00000000] [id=30929]

Stack:
[error occurred during error reporting (printing stack bounds), id 0xe0000000]

Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
V  [libjvm.so+0x6dfa17];;  _ZN7VMError6reportEP12outputStream+0x1207
V  [libjvm.so+0x6dfc9b];;  _ZN7VMError14report_and_dieEv+0x18b
V  [libjvm.so+0x2e0a98];;  _Z15report_vm_errorPKciS0_S0_+0x68
V  [libjvm.so+0x6dfacf];;  _ZN7VMError6reportEP12outputStream+0x12bf
V  [libjvm.so+0x6dfc9b];;  _ZN7VMError14report_and_dieEv+0x18b
V  [libjvm.so+0x2e0a98];;  _Z15report_vm_errorPKciS0_S0_+0x68
V  [libjvm.so+0x37f1d4];;  _ZN24HeapRegionRemSetIterator8has_nextERj+0xf4
V  [libjvm.so+0x34db16];;  _ZN13ScanRSClosure12doHeapRegionEP10HeapRegion+0x1a6
V  [libjvm.so+0x3289ab];;  _ZN15G1CollectedHeap27collection_set_iterate_fromEP10HeapRegionP17HeapRegionClosure+0x15b
V  [libjvm.so+0x34ab4d];;  _ZN15HRInto_G1RemSet6scanRSEP23OopsInHeapRegionClosurei+0xfd
V  [libjvm.so+0x34b1a0];;  _ZN15HRInto_G1RemSet27oops_into_collection_set_doEP23OopsInHeapRegionClosurei+0x1f0
V  [libjvm.so+0x32689c];;  _ZN15G1CollectedHeap23g1_process_strong_rootsEbN10SharedHeap14ScanningOptionEP10OopClosureP23OopsInHeapRegionClosureP16OopsInGenClosurei+0x29c
V  [libjvm.so+0x33b381];;  _ZN9G1ParTask4workEi+0x9b1
V  [libjvm.so+0x7090c0];;  _ZN10GangWorker4loopEv+0x130
V  [libjvm.so+0x7079e8];;  _ZN10GangWorker3runEv+0x18
V  [libjvm.so+0x59eaa9];;  _ZL10java_startP6Thread+0xf9
C  [libpthread.so.0+0x5832]



http://sqeweb.sfbay.sun.com/nfs/results/vm/gtee/JDK7/NIGHTLY/VM/2010-07-06/G1_GC_Baseline/vm/linux-i586/client/mixed/linux-i586_client_mixed_vm.gc.testlist_129AB4939B1/ResultDir/Churn4//hs_err_pid30925.log


Rerun file:
http://sqeweb.sfbay.sun.com/nfs/results/vm/gtee/JDK7/NIGHTLY/VM/2010-07-06/G1_GC_Baseline/vm/linux-i586/client/mixed/linux-i586_client_mixed_vm.gc.testlist_129AB4939B1/ResultDir/Churn4/rerun.sh
Both reports above are on Linux (one 32-bit client, the other 64-bit server).
I don't know whether that is mere coincidence. I have not yet tried reproducing
it anywhere.
gc.memory.Churn.Churn4.Churn4
gc/memory/Churn/Churn4/Churn4
http://sqeweb.sfbay.sun.com/nfs/tools/gtee/results/JDK7/NIGHTLY/VM/2010-07-29/G1_GC_Baseline/vm/linux-i586/server/mixed/linux-i586_vm_server_mixed_vm.gc.testlist/ResultDir/Churn1//hs_err_pid11724.log

gc.memory.Churn.Churn1.Churn1
gc/memory/Churn/Churn1/Churn1


Comments
SUGGESTED FIX

We need to have the sparse table iterator iterate over the table referenced by the _next field. We cannot simply capture that pointer when the iterator is initialized, because _cur and _next may point to the same RHashTable instance at that point, which will no longer be the case after expansion.

We can do this either by using an extra level of indirection (i.e., initializing the iterator with the address of the _next field), or by re-initializing the iterator during expansion.

Looking at the code, the premise seems to be that updates happen to the _next RHashTable instance while scans and iteration operate on the _cur instance, and that the two tables are reconciled prior to RSet updating. An alternative might be to prevent sparse table expansion during a pause.
                                     
2010-07-28
EVALUATION

During RSet updating, the sparse table for a region's remembered set can get expanded. After expansion, subsequent cards are added to the RHashTable referenced by the _next field, while subsequent RSet scanning iterates over the RHashTable referenced by the _cur field, which may no longer match the one referenced by _next. As a result, cards added post-expansion can be missed during RSet scanning.
                                     
2010-07-28
EVALUATION

http://hg.openjdk.java.net/jdk7/hotspot-gc/hotspot/rev/a03ae377b2e8
                                     
2010-08-06


