Bug ID: JDK-6920346 G1: "must avoid base_memory and AliasIdxTop"

Details
Type:
Bug
Submit Date:
2010-01-26
Status:
Closed
Updated Date:
2011-03-08
Project Name:
JDK
Resolved Date:
2011-03-08
Component:
hotspot
OS:
solaris
Sub-Component:
compiler
CPU:
sparc
Priority:
P3
Resolution:
Fixed
Affected Versions:
hs17
Fixed Versions:
hs17 (b09)

Description
When testing with the dacapo test suite I've hit the following assert:

# To suppress the following error report, specify this argument
# after -XX: or in .hotspotrc:  SuppressErrorAt=/memnode.cpp:4035
#
# A fatal error has been detected by the Java Runtime Environment:
#
#  Internal Error (/java/east/u2/ap31282/hotspot-g1-race-7/src/share/vm/opto/memnode.cpp:4035), pid=2974, tid=63
#  Error: assert(alias_idx >= Compile::AliasIdxRaw || alias_idx == Compile::AliasIdxBot && Compile::current()->AliasLevel() == 0,"must avoid base_memory and AliasIdxTop")
#
# JRE version: 7.0-b28
# Java VM: OpenJDK 64-Bit Server VM (17.0-b06-internal-fastdebug mixed mode solaris-sparc )
# An error report file with more information is saved as:
# /java/east/u2/ap31282/gc_test_suite_core/dacapo/hs_err_pid2974.log
#
# If you would like to submit a bug report, please visit:
#   http://java.sun.com/webapps/bugreport/crash.jsp
#

I hit it every time I run the bloat test, either on its own or when run along with the other dacapo tests. I can reproduce it with a solaris / sparcv9 / fastdebug build on a 16-way UltraIV, as well as a Niagara 1 box. It doesn't seem to happen with a 32-bit sparc JVM. I also tried an amd64 build on my workstation and again I couldn't reproduce the issue.

Here's the incantation that uncovers the failure:

java -XX:+UnlockExperimentalVMOptions -XX:+UseG1GC -XX:-ReduceInitialCardMarks -d64 -XX:+PrintGCTimeStamps -XX:+PrintGCDetails -jar dacapo-2006-10.jar -s default bloat


Comments
EVALUATION

The fix for 6877254 added a new field to StoreCMNode, but the size_of,
hash, and cmp routines were never updated to account for it, which is
required for node cloning and hashing to work correctly.  This
particular failure is caused by size_of reporting too small a size, so
the new field in the clone is left uninitialized.  The fix is simply to
implement the needed routines in the obvious way.  Tested with the
dacapo bloat test.
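To illustrate the failure mode, here is a minimal, self-contained C++ sketch — not actual HotSpot code; the class is only modeled on the Node/StoreCMNode pattern, and the semantics of the added field are simplified — of how a raw-memcpy clone based on size_of() silently truncates a subclass that adds a field, unless size_of, hash, and cmp are overridden:

```cpp
#include <cassert>
#include <cstdlib>
#include <cstring>

// Base node: cloned by copying size_of() bytes, the same scheme
// HotSpot's Node::clone() uses.
struct Node {
  int _op;
  explicit Node(int op) : _op(op) {}
  virtual ~Node() {}

  virtual size_t size_of() const { return sizeof(Node); }
  virtual unsigned hash() const { return (unsigned)_op; }
  virtual bool cmp(const Node& n) const { return _op == n._op; }

  Node* clone() const {
    void* mem = malloc(size_of());
    memcpy(mem, (const void*)this, size_of());  // raw byte copy, vptr included
    return (Node*)mem;
  }
};

// Subclass adding a field, as the fix for 6877254 added _oop_alias_idx
// to StoreCMNode (field name taken from the real fix, behavior simplified).
// Without overriding size_of(), clone() would copy only sizeof(Node) bytes,
// leaving the clone's new field uninitialized; without hash()/cmp(), value
// numbering would treat nodes differing only in that field as equal.
struct StoreCMLikeNode : Node {
  int _oop_alias_idx;
  StoreCMLikeNode(int op, int idx) : Node(op), _oop_alias_idx(idx) {}

  // The repaired routines, "implemented in the obvious way":
  virtual size_t size_of() const { return sizeof(StoreCMLikeNode); }
  virtual unsigned hash() const {
    return Node::hash() + (unsigned)_oop_alias_idx;
  }
  virtual bool cmp(const Node& n) const {
    return Node::cmp(n) &&
           _oop_alias_idx == ((const StoreCMLikeNode&)n)._oop_alias_idx;
  }
};
```

With the three overrides in place, a clone carries the added field across the byte copy; deleting them reintroduces the bug, since the clone's _oop_alias_idx would then be uninitialized memory — the condition the assert above tripped on.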
2010-01-26
EVALUATION

ChangeSet=http://hg.openjdk.java.net/jdk7/hotspot-comp/hotspot/rev/8d9bfe6a446b,ChangeRequest=6920346
2010-01-29


