JDK-6976528 : PS: assert(!limit_exceeded || softrefs_clear) failed: Should have been cleared
  • Type: Bug
  • Component: hotspot
  • Sub-Component: gc
  • Affected Version: hs19,hs20
  • Priority: P4
  • Status: Resolved
  • Resolution: Fixed
  • OS: generic
  • CPU: generic
  • Submitted: 2010-08-12
  • Updated: 2015-02-02
  • Resolved: 2013-03-19
Fixed in:
  • JDK 7: 7u80
  • JDK 8: 8 b82
  • Other: hs25
Description
I'll need to check whether this has been seen on platforms other than amd64 and on
baselines other than the comp_baseline below. This looks like the first time this
failure has been noted by a human, although I need to check whether it was noted in
the nightly tests before but missed by us. (I'll update this space when I find out
for sure one way or another.)

http://sqeweb.sfbay.sun.com/nfs/results/vm/gtee/JDK7/NIGHTLY/VM/2010-08-10/Comp_Baseline/vm/windows-amd64/server/comp/windows-amd64_vm_server_comp_nsk.stress.testlist/ResultDir/jck122012/hs_err_pid6060.log

#
# A fatal error has been detected by the Java Runtime Environment:
#
#  Internal Error (C:\temp\jprt\P1\B\191642.never\source\src\share\vm\gc_implementation\parallelScavenge\parallelScavengeHeap.cpp:470), pid=6060, tid=2624
#  assert(!limit_exceeded || softrefs_clear) failed: Should have been cleared
#
# JRE version: 7.0
# Java VM: OpenJDK 64-Bit Server VM (19.0-b04-201008101916.never.6975027-fastdebug compiled mode windows-amd64 )
# If you would like to submit a bug report, please visit:
#   http://java.sun.com/webapps/bugreport/crash.jsp
#

---------------  T H R E A D  ---------------

Current thread (0x000000000a1a5800):  JavaThread "Thread-261" [_thread_in_vm, id=2624, stack(0x000000001c520000,0x000000001c620000)]

Stack: [0x000000001c520000,0x000000001c620000]
[error occurred during error reporting (printing stack bounds), id 0xe0000000]


[error occurred during error reporting (printing native stack), id 0xe0000000]


Not unexpectedly, the heap is very nearly full when the error occurs:-

VM state:synchronizing (normal execution)

VM Mutex/Monitor currently owned by a thread:  ([mutex/lock_event])
[0x000000000056be08] Threads_lock - owner thread: 0x00000000060f9800
[0x000000000056c628] Heap_lock - owner thread: 0x000000000b03e000
[0x000000000056cf18] MethodData_lock - owner thread: 0x0000000006150000

Heap
 PSYoungGen      total 466048K, used 233088K [0x00000000da960000, 0x0000000105400000, 0x0000000105400000)
  eden space 233088K, 100% used [0x00000000da960000,0x00000000e8d00000,0x00000000e8d00000)
  from space 232960K, 0% used [0x00000000e8d00000,0x00000000e8d00000,0x00000000f7080000)
  to   space 232960K, 0% used [0x00000000f7080000,0x00000000f7080000,0x0000000105400000)
 PSOldGen        total 1398144K, used 1398126K [0x0000000085400000, 0x00000000da960000, 0x00000000da960000)
  object space 1398144K, 99% used [0x0000000085400000,0x00000000da95b8f0,0x00000000da960000)
 PSPermGen       total 21248K, used 8898K [0x0000000080000000, 0x00000000814c0000, 0x0000000085400000)
  object space 21248K, 41% used [0x0000000080000000,0x00000000808b0838,0x00000000814c0000)


and for the errant thread:-


Java frames: (J=compiled Java code, j=interpreted, Vv=VM code)
v  ~RuntimeStub::_new_array_Java
J  java.lang.StringBuffer.<init>(Ljava/lang/String;)V
j  javasoft.sqe.tests.vm.classfmt.lmt.stcksz001.stcksz00101m1.stcksz00101m1.run([Ljava/lang/String;Ljava/io/PrintStream;)I+62
v  ~StubRoutines::call_stub
J  sun.reflect.NativeMethodAccessorImpl.invoke0(Ljava/lang/reflect/Method;Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;
j  sun.reflect.NativeMethodAccessorImpl.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;+87
j  sun.reflect.DelegatingMethodAccessorImpl.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;+6
j  java.lang.reflect.Method.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object;+161
J  nsk.stress.share.StressTest$TestThread.runTest(I)V
j  nsk.stress.share.StressTest$TestThread.run()V+27
v  ~StubRoutines::call_stub

=>0x000000000a1a5800 JavaThread "Thread-261" [_thread_in_vm, id=2624, stack(0x000000001c520000,0x000000001c620000)]

The assert in question appears to have been added as part of:-

6858496 Clear all SoftReferences before reaching the GC overhead limit

so I have added that CR to the "See also" field.
gc/gctests/FinalizeLock
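
For context, the failing assert encodes a simple implication: !limit_exceeded || softrefs_clear is "limit_exceeded implies softrefs_clear", i.e. any collection that pushed GC time past the overhead limit should also have cleared all soft references (the policy 6858496 introduced). Below is a minimal standalone sketch of that check and of the flag combination seen here; the plain bools are stand-ins for the real size_policy()/collector_policy() state, not HotSpot's own code.

#include <cstdio>

int main() {
  // The combination behind this failure: the overhead limit has been
  // reached, but the soft-refs-clear flag (which is reset after every
  // GC) is false again by the time the allocating thread tests it.
  const bool limit_exceeded = true;
  const bool softrefs_clear = false;

  // The assert checks "!A || B", i.e. "A implies B": a collection that
  // exceeded the limit must have been a clear-all-soft-refs collection.
  const bool invariant = !limit_exceeded || softrefs_clear;
  std::printf("assert(!limit_exceeded || softrefs_clear) %s\n",
              invariant ? "holds" : "fires: Should have been cleared");
  return 0;
}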

Comments
This assertion failed in nsk/stress/stack/stack018 as well.
05-03-2013

The assertion says that if the GC time limit has been reached, then all soft refs have been cleared. The flag _all_soft_refs_clear is reset after each GC, and it is not obvious that it will always be set at the point of the assertion. In any event, the test below the assertion, if (limit_exceeded && softrefs_clear), is the correct test, so I'm deleting the assertion.
04-03-2013
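
To illustrate the fix, here is a small self-contained model of that slow path after the change; the loop, stub policy struct, and names are illustrative stand-ins, not the actual parallelScavengeHeap.cpp code. The point is that with the flag reset after every GC, the implication the old assert demanded need not hold between collections, while the guarded test below it stays correct:

#include <cstdio>

struct Policy {
  bool gc_overhead_limit_exceeded = false;
  bool all_soft_refs_clear        = false;  // reset after every GC
};

// Stub collection: only every other GC clears all soft refs, so the two
// flags are not always set together.
static void do_collection(Policy& p, int n) {
  p.gc_overhead_limit_exceeded = true;        // limit reached from now on
  p.all_soft_refs_clear        = (n % 2 == 1);
}

int main() {
  Policy p;
  for (int gc = 0; gc < 4; ++gc) {
    do_collection(p, gc);
    // The old code asserted !limit_exceeded || softrefs_clear here, which
    // fires whenever a non-clearing GC runs after the limit is reached.
    // The fixed code keeps only the guarded test:
    if (p.gc_overhead_limit_exceeded && p.all_soft_refs_clear) {
      std::printf("GC %d: limit exceeded and soft refs clear -> give up\n", gc);
      return 0;
    }
    std::printf("GC %d: retry the allocation\n", gc);
  }
  return 0;
}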