JDK-6815790 : G1: Missing MemoryPoolMXBeans with -XX:+UseG1GC

Details
Type:
Bug
Submit Date:
2009-03-11
Status:
Closed
Updated Date:
2012-02-01
Project Name:
JDK
Resolved Date:
2011-03-08
Component:
hotspot
OS:
windows_xp
Sub-Component:
gc
CPU:
x86
Priority:
P4
Resolution:
Fixed
Affected Versions:
6u14
Fixed Versions:
hs17 (b06)

Related Reports
Backport:
Backport:
Relates:
Relates:

Description
FULL PRODUCT VERSION :
6u14-b01

A DESCRIPTION OF THE PROBLEM :
With -XX:+UnlockExperimentalVMOptions -XX:+UseG1GC, ManagementFactory.getMemoryPoolMXBeans() does not return any MXBeans with type MemoryType.HEAP.

STEPS TO FOLLOW TO REPRODUCE THE PROBLEM :
Run the provided class with and without "-XX:+UnlockExperimentalVMOptions -XX:+UseG1GC".


EXPECTED VERSUS ACTUAL BEHAVIOR :
EXPECTED -
(without  -XX:+UnlockExperimentalVMOptions -XX:+UseG1GC)

name = Code Cache type: NON-HEAP
name = Eden Space type: HEAP
name = Survivor Space type: HEAP
name = Tenured Gen type: HEAP
name = Perm Gen type: NON-HEAP
name = Perm Gen [shared-ro] type: NON-HEAP
name = Perm Gen [shared-rw] type: NON-HEAP


ACTUAL -
(with -XX:+UnlockExperimentalVMOptions -XX:+UseG1GC)

name = Code Cache type: NON-HEAP

REPRODUCIBILITY :
This bug can always be reproduced.

---------- BEGIN SOURCE ----------
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryType;

public class A {
  public static void main(String[] args) {
    for (final MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
      final String name = pool.getName();
      System.out.println("name = " + name + " type: " + (pool.getType() == MemoryType.HEAP ? "HEAP" : "NON-HEAP"));
    }
  }
}

---------- END SOURCE ----------


Comments
EVALUATION

The monitoring and management support for G1 is yet to be implemented.
2009-04-23
SUGGESTED FIX

I introduced three memory pools that represent the G1 young gen space, G1 survivor space, and G1 old gen space, as well as a fourth memory pool for the G1 perm gen space. They are included in the new services/g1MemoryPool.{hpp,cpp} files. Here's a comment from g1MemoryPool.hpp that explains why I implemented them the way I did and discusses some issues that arise.

// This file contains the three classes that represent the memory
// pools of the G1 spaces: G1EdenPool, G1SurvivorPool, and
// G1OldGenPool. In G1, unlike our other GCs, we do not have a
// physical space for each of those spaces. Instead, we allocate
// regions for all three spaces out of a single pool of regions (that
// pool basically covers the entire heap). As a result, the eden,
// survivor, and old gen are considered logical spaces in G1, as each
// is a set of non-contiguous regions. This is also reflected in the
// way we map them to memory pools here. The easiest way to have done
// this would have been to map the entire G1 heap to a single memory
// pool. However, it's helpful to show how large the eden and survivor
// get, as this does affect the performance and behavior of G1, which
// is why we introduce the three memory pools implemented here.
//
// The above approach introduces a couple of challenging issues in the
// implementation of the three memory pools:
//
// 1) The used space calculation for a pool is not necessarily
// independent of the others. We can easily get from G1 the overall
// used space in the entire heap, the number of regions in the young
// generation (includes both eden and survivors), and the number of
// survivor regions. So, from that we calculate:
//
//  survivor_used = survivor_num * region_size
//  eden_used     = young_region_num * region_size - survivor_used
//  old_gen_used  = overall_used - eden_used - survivor_used
//
// Note that survivor_used and eden_used are upper bounds. To get the
// actual value we would have to iterate over the regions and add up
// ->used(). But that'd be expensive. So, we'll accept some lack of
// accuracy for those two. But, we have to be careful when calculating
// old_gen_used, in case we subtract from overall_used more than the
// actual number and our result goes negative.
//
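The point 1 arithmetic can be sketched in plain Java (illustrative constants and method names, not the actual HotSpot code; the clamp guards against the negative old_gen_used case noted above):

```java
public class G1UsedEstimate {
    // Illustrative region size; real G1 derives it from the heap size.
    static final long REGION_SIZE = 1024 * 1024;

    static long survivorUsed(long survivorNum) {
        return survivorNum * REGION_SIZE;
    }

    static long edenUsed(long youngRegionNum, long survivorNum) {
        return youngRegionNum * REGION_SIZE - survivorUsed(survivorNum);
    }

    static long oldGenUsed(long overallUsed, long youngRegionNum, long survivorNum) {
        long diff = overallUsed - edenUsed(youngRegionNum, survivorNum)
                                - survivorUsed(survivorNum);
        // eden_used and survivor_used are upper bounds, so the subtraction
        // can go negative; clamp at zero instead of reporting a bogus value.
        return Math.max(diff, 0L);
    }

    public static void main(String[] args) {
        long overallUsed = 40L * REGION_SIZE;   // 40 MB used across the heap
        long youngRegionNum = 10;               // eden + survivor regions
        long survivorNum = 2;
        System.out.println("survivor_used = " + survivorUsed(survivorNum));
        System.out.println("eden_used     = " + edenUsed(youngRegionNum, survivorNum));
        System.out.println("old_gen_used  = " + oldGenUsed(overallUsed, youngRegionNum, survivorNum));
    }
}
```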
// 2) Calculating the used space is straightforward, as described
// above. However, how do we calculate the committed space, given that
// we allocate space for the eden, survivor, and old gen out of the
// same pool of regions? One way to do this is to use the used value
// as the committed value for the eden and survivor spaces as well, and
// then calculate the old gen committed space as follows:
//
//  old_gen_committed = overall_committed - eden_committed - survivor_committed
//
// Maybe a better way to do that would be to calculate used for eden
// and survivor as a sum of ->used() over their regions and then
// calculate committed as region_num * region_size (i.e., what we use
// to calculate the used space now). This is something to consider
// in the future.
//
// 3) Another decision that is again not straightforward is what the
// max size is that each memory pool can grow to. Right now, we use
// the committed size for the eden and the survivors and calculate
// the old gen max as follows (basically, it's a similar pattern to
// what we use for the committed space, as described above):
//
//  old_gen_max = overall_max - eden_max - survivor_max
// 4) Now, there is a very subtle issue with all the above. The
// framework will call get_memory_usage() on the three pools
// asynchronously. As a result, each call might get a different value
// for, say, survivor_num, which will yield inconsistent values for
// eden_used, survivor_used, and old_gen_used (as survivor_num is used
// in the calculation of all three). This would normally be
// ok. However, it's possible that this might cause the sum of
// eden_used, survivor_used, and old_gen_used to go over the max heap
// size and this seems to sometimes cause JConsole (and maybe other
// clients) to get confused. There's not really an easy / clean
// solution to this problem, due to the asynchronous nature of the
// framework.
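The point 4 race can be made concrete with the same arithmetic: if survivor_num changes between the asynchronous per-pool reads, the three values come from different snapshots and their sum can overshoot the overall figure (illustrative numbers, not real G1 internals):

```java
public class G1AsyncSkew {
    // Sum of the three per-pool used values when the survivor pool is read
    // with one survivor_num and the other two pools with another.
    // All quantities are in regions, for readability.
    static long skewedSumInRegions(long overallUsed, long youngRegionNum,
                                   long survivorNumAtFirstRead,
                                   long survivorNumAtLaterReads) {
        long survivorUsed = survivorNumAtFirstRead;
        long edenUsed     = youngRegionNum - survivorNumAtLaterReads;
        long oldGenUsed   = overallUsed - edenUsed - survivorNumAtLaterReads;
        return survivorUsed + edenUsed + oldGenUsed;
    }

    public static void main(String[] args) {
        // 40 regions used overall, 10 young regions; the survivor count drops
        // from 4 to 2 between the asynchronous reads, so the per-pool values
        // sum to 42 regions against an overall figure of 40.
        long sum = skewedSumInRegions(40, 10, 4, 2);
        System.out.println("per-pool sum = " + sum + " regions, overall used = 40 regions");
    }
}
```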
2009-11-24
EVALUATION

http://hg.openjdk.java.net/jdk7/hotspot-gc/hotspot/rev/db0d5eba9d20
2009-11-24