JDK-8016825 : Large pages for the heap broken on Windows for compressed oops
  • Type: Bug
  • Component: hotspot
  • Sub-Component: gc
  • Affected Version: hs24,hs25
  • Priority: P3
  • Status: Resolved
  • Resolution: Fixed
  • Submitted: 2013-06-18
  • Updated: 2013-09-24
  • Resolved: 2013-09-11
  • Fixed in: JDK 8 (hs25)
When we run with compressed oops we pick an address where we want to map the Java heap. This address gets passed down the chain, and on Windows we eventually end up in os::reserve_memory_special() if large pages are enabled.

The os::reserve_memory_special() method has two branches: one for UseLargePagesIndividualAllocation, which seems to work properly, and one used when only UseLargePages is set. The latter makes this OS call:

VirtualAlloc(NULL, bytes, flag, prot);

The first parameter to VirtualAlloc is the address where you want the memory mapped. We get this passed in as the parameter addr, but the call passes NULL to VirtualAlloc regardless.

This means that we will not get memory mapped where we want it, so ReservedSpace::initialize() will call failed_to_reserve_as_requested(), which throws away the large page mapping and retries with small pages.
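The shape of the problem and of a fix can be sketched as follows. This is illustrative only, not the actual HotSpot patch; `addr`, `bytes`, `flag`, and `prot` are the locals already present in os::reserve_memory_special():

```cpp
// Buggy branch (UseLargePages without individual allocation):
// the requested address is ignored, so the heap cannot land where
// compressed oops want it.
char* res = (char*) VirtualAlloc(NULL, bytes, flag, prot);

// Sketch of a fix: pass the requested address through. If the
// mapping at addr fails, VirtualAlloc returns NULL and the caller
// (ReservedSpace::initialize()) already handles the fallback.
char* res_fixed = (char*) VirtualAlloc(addr, bytes, flag, prot);
```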

Impact=M (no crash, but likely loss of performance)
Likelihood=M (Windows only. Large pages and compressed oops only.)
Workaround=M (Use UseLargePagesIndividualAllocation to get some large pages at least)


There is a pmap-like tool for Windows called VMMap from MS: http://technet.microsoft.com/en-us/sysinternals/dd535533.aspx Not sure how scriptable it is.

To reproduce the issue, you need to enable large page use in Windows. Follow the instructions here: http://msdn.microsoft.com/en-us/library/ms190730.aspx Additionally, in my tests the VM had to be run in an elevated shell (admin shell); otherwise the VM complained about insufficient rights when using -XX:+UseLargePages.

There is some description at the bottom of the MSDN page for the VirtualQueryEx function (http://msdn.microsoft.com/en-us/library/windows/desktop/aa366907(v=vs.85).aspx):
1) Lock the pages using VirtualLock() to make them resident in memory.
2) Use QueryWorkingSetEx() to retrieve page information for the given virtual addresses.
3) A sub-structure of the returned values (PSAPI_WORKING_SET_EX_BLOCK) indicates whether a given pointer references a large page.
4) Call VirtualUnlock() to unlock the pages.
There may be issues due to access rights.
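The steps above can be sketched as a small helper (untested, Windows-only; `is_large_page` is a hypothetical name, and the address passed in would be e.g. the base of the Java heap):

```c
#include <windows.h>
#include <psapi.h>   /* link with psapi.lib */

/* Returns 1 if the page backing p is a large page, 0 if not,
   -1 if the check could not be performed (e.g. access rights). */
static int is_large_page(HANDLE process, void* p) {
    PSAPI_WORKING_SET_EX_INFORMATION info;
    info.VirtualAddress = p;

    /* 1) Make the page resident; may fail without sufficient rights. */
    if (!VirtualLock(p, 1)) {
        return -1;
    }

    /* 2) Query per-page attributes for the given virtual address. */
    int large = -1;
    if (QueryWorkingSetEx(process, &info, sizeof(info)) &&
        info.VirtualAttributes.Valid) {
        /* 3) The LargePage bit lives in PSAPI_WORKING_SET_EX_BLOCK. */
        large = (int) info.VirtualAttributes.LargePage;
    }

    /* 4) Unlock the page again. */
    VirtualUnlock(p, 1);
    return large;
}
```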

SQE is OK to defer this

Basically all you need to do is run with large pages and compressed oops on a 64-bit Windows machine and verify that you did not get large pages for the heap. The first part is easy: just run with -XX:+UseLargePages and -XX:+UseCompressedOops.

The second part, verifying whether or not you got large pages, is more difficult. I have not found a good way of doing that. On Solaris you can use pmap and on Linux you can use /proc/<pid>/maps, but on Windows I have not found a tool that will show me the mappings. The way I verified it was to compare the "private data" and the "working set" for the process. Large pages are not counted in the working set, so if we get large pages for the heap you should notice a corresponding difference between these values.

It would be good if we could find a better way of checking this. Best would be an automatic way of checking that we get large pages when we ask for them; that way we could write a test that verifies that large pages work. Right now it seems we have more or less no large page testing, at least not on Windows.
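For reference, the run itself is just the two flags together (assuming large pages are already enabled in Windows and the shell is elevated, as described above; the target class is a placeholder):

```shell
# Reproduce: both flags on a 64-bit Windows JVM.
java -XX:+UseLargePages -XX:+UseCompressedOops SomeApp
```

Then compare "private data" against "working set" for the process (e.g. in VMMap or Task Manager) as described above.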

Bengt, would you provide some clues on how to reproduce this bug. SQE needs this to verify the fix. Thanks!

7u40-defer-request justification: As far as I can tell, this is not a regression compared to previous releases. The problem happens if we use large pages in combination with compressed oops. Applications won't crash, they will just experience reduced performance. But since it is not a regression, there are probably no applications out there that will notice the lack of performance.