JDK-6885584 : A particular class structure causes large allocation spike for jit

Details
Type:
Bug
Submit Date:
2009-09-25
Status:
Closed
Updated Date:
2011-03-08
Project Name:
JDK
Resolved Date:
2011-03-08
Component:
hotspot
OS:
windows_2008
Sub-Component:
compiler
CPU:
x86
Priority:
P3
Resolution:
Fixed
Affected Versions:
6u4
Fixed Versions:
hs17 (b04)

Related Reports
Backport:
Backport:
Relates:

Sub Tasks

Description
FULL PRODUCT VERSION :
1.6_9, 1.6_10, 1.6_14, 1.6_16, and 1.7 (latest as of time of report)

ADDITIONAL OS VERSION INFORMATION :
Windows Vista x64 (multiple) and Windows 2008 (multiple).

EXTRA RELEVANT SYSTEM CONFIGURATION :
This requires the x64 version of java to be run. No other special considerations are required. It has been shown on multiple machines of different configurations and hardware.

A DESCRIPTION OF THE PROBLEM :
The following cutdown, when compiled with standard javac, produces a class file that, when run, consumes over 2 GB of memory after the program has stopped executing (i.e. after the final println).

The amount of memory consumed changes with the initial value of the variable k. However, changing the number of iterations of the last loop makes no difference, other than that it must be a loop. Altering the number and referencing of the fields tends to make the bug stop. Lowering the initial value of j below 14563 makes the bug go away.

Running with -Xint stops the bug, so we believe it lies in the JIT.
-Xmx has no effect; this is almost certainly native memory.

The consumption of memory is so fast that we have seen it overwhelm the OS and leave a machine unresponsive for many minutes, and in some cases require a reboot.

public class crashJava
{

    public void crashJava()
    {
        // This number must be more than 14562
        for(int j = 14563; j != 0; j--)
        {
        }
       
        // This must reference a field
        System.out.println(i1);

        // The resource leak is roughly proportional to this initial value
        for(int k = 20000000; k != 0; k--)
        {
            if(i2 > i3)i1 = k;
            if(k==(20000000-1))break;
        }

        System.out.println("program ended :)");
    }

    public static void main(String args[])
    {
        (new crashJava()).crashJava();
    }
   
    private int i1;
    private int i2;
    private int i3;
}

Please note: we have tested this on several machines and seen exactly the same behaviour. The consumption of memory has been monitored using Process Explorer from Microsoft. Because it happens outside the execution of the Java code itself, we cannot give any more information at this time.

Thanks -AJ

For any more information please contact me at ###@###.###. This bug was found by members of our language development team.

STEPS TO FOLLOW TO REPRODUCE THE PROBLEM :
Compile and run the cutdown supplied using x64 java on windows.

EXPECTED VERSUS ACTUAL BEHAVIOR :
EXPECTED -
The program prints two lines of text and exits having consumed minimal resources.
ACTUAL -
The program runs and then, when you expect it to exit, it starts to consume massive amounts of memory and CPU. If you use -Xprof, this consumption occurs _after_ the profiler information is produced.

REPRODUCIBILITY :
This bug can be reproduced always.

---------- BEGIN SOURCE ----------
public class crashJava
{

    public void crashJava()
    {
        // This number must be more than 14562
        for(int j = 14563; j != 0; j--)
        {
        }
       
        // This must reference a field
        System.out.println(i1);

        // The resource leak is roughly proportional to this initial value
        for(int k = 20000000; k != 0; k--)
        {
            if(i2 > i3)i1 = k;
            if(k==(20000000-1))break;
        }

        System.out.println("program ended :)");
    }

    public static void main(String args[])
    {
        (new crashJava()).crashJava();
    }
   
    private int i1;
    private int i2;
    private int i3;
}
---------- END SOURCE ----------

Release Regression From : 6u3
The above release value was the last known release where this 
bug was not reproducible. Since then there has been a regression.


Comments
EVALUATION

During CCP we end up propagating a -1 around the loop repeatedly, which produces all the intermediate range types between 19999999 and 0..19999999, causing the allocation spike.  If you don't have enough memory we will run out of memory and swap or die.  I can reproduce this starting with 6u4, which is hs10.  I don't see the root cause of the change though.  The widen bits are supposed to cause termination but they aren't for some reason.
                                     
2009-09-25
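The allocation behavior described above can be illustrated with a toy model (all names here are illustrative sketches, not HotSpot's actual TypeInt code): each CCP pass meets the loop phi's interval type with one more decremented value, and because type objects are immutable, each pass allocates a fresh one. Without effective widening, a countdown from n produces n intermediate types.

```java
// Toy model of the CCP allocation spike (a sketch, not HotSpot code):
// every analysis pass narrows the phi's interval by one and allocates
// a new immutable type object, so a countdown from n makes n types.
public class CcpSpikeDemo {
    // An immutable interval [lo, hi], standing in for a C2 TypeInt.
    record Interval(int lo, int hi) {}

    // Count type objects allocated before the phi's type reaches its
    // fixpoint [0, n] when widening never kicks in.
    static long typesWithoutWidening(int n) {
        Interval t = new Interval(n, n);  // phi starts at the loop's init value
        long allocated = 0;
        while (t.lo() > 0) {
            t = new Interval(t.lo() - 1, t.hi()); // one new type per pass
            allocated++;
        }
        return allocated;                 // == n
    }

    public static void main(String[] args) {
        // With the bug's loop bound, that is 20 million type objects.
        System.out.println(typesWithoutWidening(20_000_000));
    }
}
```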
EVALUATION

With 6u3 you can see the widen bits kick in and send it all the way to int:

int:200000000  329      Phi     ===  326  135  318  [[ 318  319  321 ]]  #int !Note=0x0816c4d0
int:199999999..200000000  329   Phi     ===  326  135  318  [[ 318  319  321 ]]  #int !Note=0x0816c4d0
int:199999998..200000000  329   Phi     ===  326  135  318  [[ 318  319  321 ]]  #int !Note=0x0816c4d0
int:199999997..200000000  329   Phi     ===  326  135  318  [[ 318  319  321 ]]  #int !Note=0x0816c4d0
int  329        Phi     ===  326  135  318  [[ 318  319  321 ]]  #int !Note=0x0816c4d0

But starting with 6u4 it will happily walk all the way through all the intermediate ranges.
                                     
2009-09-25
EVALUATION

This is caused by the fix for 6467870.  There is new logic in TypeInt::widen that attempts to move the limit to max_int/min_int, but since it moves the hi limit to the extreme, the code in PhaseCCP::saturate moves the limit back down.

(dbx) p new_type->dump()
int:199999964..200000000:www
(dbx) p old_type->dump()
int:199999965..200000000:www

 const Type* wide_type = new_type->widen(old_type);

(dbx) p wide_type->dump()
int:>=199999964:www

 1506     if (wide_type != new_type) {          // did we widen?
 1507       // If so, we may have widened beyond the limit type.  Clip it back down.
 1508       new_type = wide_type->filter(limit_type);
 1509     }

(dbx) p new_type->dump()
int:199999964..200000000:www

The code in widen should be looking at which boundaries are moving when deciding how to saturate the bounds.
                                     
2009-09-30
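The widen/saturate interaction in the dbx session above can be sketched with a toy model (again illustrative names, not HotSpot's TypeInt::widen or PhaseCCP::saturate, and the real code gates widening with a counter): the phi's lower bound is the one moving downward, but widen pushes the upper bound to max_int; filtering against the limit type clips that upper bound right back, so each pass makes only one step of progress. Widening the bound that is actually moving terminates immediately.

```java
// Toy model of the bug: widening the wrong (static) bound is undone
// by the limit-type clip, so CCP walks every intermediate range.
public class WidenClipDemo {
    record Interval(int lo, int hi) {}

    // Buggy flavor: widen hi (the bound that is NOT moving).
    static long passesWideningWrongBound(int n) {
        Interval limit = new Interval(0, n);  // the limit type
        Interval t = new Interval(n, n);
        long passes = 0;
        while (t.lo() > 0) {
            Interval next = new Interval(t.lo() - 1, t.hi());
            // widen: send hi to the extreme ...
            Interval wide = new Interval(next.lo(), Integer.MAX_VALUE);
            // saturate: clip back to the limit type -- widening is undone
            t = new Interval(Math.max(wide.lo(), limit.lo()),
                             Math.min(wide.hi(), limit.hi()));
            passes++;
        }
        return passes;                        // == n: no help from widening
    }

    // Fixed flavor: widen lo, the bound that is actually moving.
    static long passesWideningMovingBound(int n) {
        Interval limit = new Interval(0, n);
        Interval t = new Interval(n, n);
        long passes = 0;
        while (t.lo() > 0) {
            // widen: lo is moving down, so send it to the extreme
            Interval wide = new Interval(Integer.MIN_VALUE, t.hi());
            // saturate: clip to the limit type -- lands on the fixpoint
            t = new Interval(Math.max(wide.lo(), limit.lo()),
                             Math.min(wide.hi(), limit.hi()));
            passes++;
        }
        return passes;                        // 1 pass
    }

    public static void main(String[] args) {
        System.out.println(passesWideningWrongBound(100));   // 100
        System.out.println(passesWideningMovingBound(100));  // 1
    }
}
```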
EVALUATION

http://hg.openjdk.java.net/jdk7/hotspot-comp/hotspot/rev/03b336640699
                                     
2009-10-08


