JDK-6901609 : CMS crash in GCTaskThread BinaryTreeDictionary::getChunkFromTree
  • Type: Bug
  • Component: hotspot
  • Sub-Component: gc
  • Affected Version: 6u5-rev
  • Priority: P3
  • Status: Closed
  • Resolution: Duplicate
  • OS: linux_redhat_5.2
  • CPU: generic
  • Submitted: 2009-11-16
  • Updated: 2010-07-29
  • Resolved: 2010-01-11

OS: Red Hat Enterprise Linux Server release 5.2 (Tikanga)
vm_info: Java HotSpot(TM) 64-Bit Server VM (10.0-b19) for linux-amd64 JRE (1.6.0_05-b13)

Please find the complete hs_err_pid.log file attached.

Demangled stack trace :

Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
V  [libjvm.so+0x1cf160] BinaryTreeDictionary::getChunkFromTree(unsigned long, FreeBlockDictionary::Dither, bool)+0x20
V  [libjvm.so+0x1d0371] BinaryTreeDictionary::getChunk(unsigned long, FreeBlockDictionary::Dither)+0x11
V  [libjvm.so+0x2389d1] CompactibleFreeListSpace::getChunkFromDictionaryExact(unsigned long)+0x21
V  [libjvm.so+0x23b880] CFLS_LAB::alloc(unsigned long)+0x50
V  [libjvm.so+0x25790b] ConcurrentMarkSweepGeneration::expand_and_par_lab_allocate(CMSParGCThreadState*, unsigned long)+0x3b
V  [libjvm.so+0x253fbb] ConcurrentMarkSweepGeneration::par_promote(int, oopDesc*, markOopDesc*, unsigned long)+0x1ab
V  [libjvm.so+0x514b65] ParNewGeneration::copy_to_survivor_space_avoiding_promotion_undo(ParScanThreadState*, oopDesc*, unsigned long, markOopDesc*)+0x3b5
V  [libjvm.so+0x5156be] ParScanClosure::do_oop_work(oopDesc**, bool, bool)+0xbe
V  [libjvm.so+0x2f5968] instanceKlass::oop_oop_iterate_nv(oopDesc*, ParScanWithBarrierClosure*)+0xa8
V  [libjvm.so+0x512904] ParScanThreadState::trim_queues(int)+0xc4
V  [libjvm.so+0x51302e] ParEvacuateFollowersClosure::do_void()+0x1e
V  [libjvm.so+0x5132b3] ParNewGenTask::work(int)+0x123
V  [libjvm.so+0x678cfd] GangWorker::loop()+0xad
V  [libjvm.so+0x678c14] GangWorker::run()+0x24
V  [libjvm.so+0x505eea] java_start(Thread*)+0x14a

PUBLIC COMMENTS

I see that the customer is using -Xms128m -Xmx1024M, and the failure happens as the heap is being expanded:

Heap
 par new generation   total 19136K, used 19136K [0x00002aaaae200000, 0x00002aaaaf6c0000, 0x00002aaab6200000)
  eden space 17024K, 100% used [0x00002aaaae200000, 0x00002aaaaf2a0000, 0x00002aaaaf2a0000)
  from space  2112K, 100% used [0x00002aaaaf2a0000, 0x00002aaaaf4b0000, 0x00002aaaaf4b0000)
  to   space  2112K, 100% used [0x00002aaaaf4b0000, 0x00002aaaaf6c0000, 0x00002aaaaf6c0000)
 concurrent mark-sweep generation total 320128K, used 287303K [0x00002aaab6200000, 0x00002aaac9aa0000, 0x00002aaaee200000)
 concurrent-mark-sweep perm gen total 55420K, used 33535K [0x00002aaaee200000, 0x00002aaaf181f000, 0x00002aaaf3600000)

If we get a core file or a complete pstack of the core, we might be able to tell more, but one strong possibility is that the customer is running into 6642634, for which the stated workaround is to fix the size of the heap (both perm gen and java heap), in the customer's case via -Xms1024M -Xmx1024M -XX:MaxPermSize=64m -XX:PermSize=64m, so as to avoid the generation-expansion race fixed in 6642634. Alternatively, the customer should try 6u10 or later (where that bug was fixed). In fact, since a handful of other CMS bugs (related to work-queue overflow) were fixed in 6u12 and later, we would strongly encourage moving to 6u17 and, when it is available, 6u18.
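The workaround above amounts to launching the JVM with the initial and maximum sizes pinned to the same value for both the java heap and the perm gen. A minimal sketch of such a launch command follows; the class name MyApp is a placeholder for the customer's actual application, not something from this report:

```shell
# Pin the java heap (-Xms == -Xmx) and the perm gen (PermSize == MaxPermSize)
# so both are committed at full size up front and never expand at runtime,
# side-stepping the generation-expansion race described in 6642634.
# "MyApp" is a hypothetical placeholder for the application's main class.
java -Xms1024M -Xmx1024M \
     -XX:PermSize=64m -XX:MaxPermSize=64m \
     MyApp
```

With a fixed-size heap the expansion path that appears in the crash stack (ConcurrentMarkSweepGeneration::expand_and_par_lab_allocate) is never taken, at the cost of committing the full 1 GB heap at startup.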