JDK-8079449 : Improve os::attempt_reserve_memory_at() fallback coding on Linux, BSD, Solaris
  • Type: Enhancement
  • Component: hotspot
  • Sub-Component: runtime
  • Affected Version: 9
  • Priority: P4
  • Status: Closed
  • Resolution: Won't Fix
  • Submitted: 2015-05-06
  • Updated: 2020-02-11
  • Resolved: 2020-02-11
Description
The Linux implementation of os::attempt_reserve_memory_at() contains a fallback path which, if the first mmap(req_addr) does not yield the requested address, repeatedly allocates memory using mmap(NULL) without freeing it, in the hope that at some point one of these reservations will intersect the requested address:

os_linux.cpp:

int i;
for (i = 0; i < max_tries; ++i) {
  base[i] = reserve_memory(bytes);

  if (base[i] != NULL) {
    // Is this the block we wanted?
    if (base[i] == requested_addr) {
      size[i] = bytes;
      break;
    }

    // Does this overlap the block we wanted? Give back the overlapped
    // parts and try again.

    ptrdiff_t top_overlap = requested_addr + (bytes + gap) - base[i];
    if (top_overlap >= 0 && (size_t)top_overlap < bytes) {
      unmap_memory(base[i], top_overlap);
      base[i] += top_overlap;
      size[i] = bytes - top_overlap;
    } else {
      ptrdiff_t bottom_overlap = base[i] + bytes - requested_addr;
      if (bottom_overlap >= 0 && (size_t)bottom_overlap < bytes) {
        unmap_memory(requested_addr, bottom_overlap);
        size[i] = bytes - bottom_overlap;
      } else {
        size[i] = bytes;
      }
    }
  }
}

// Give back the unused reserved pieces.

for (int j = 0; j < i; ++j) {
  if (base[j] != NULL) {
    unmap_memory(base[j], size[j]);
  }
}

By default, it makes up to 10 attempts (max_tries).

This code assumes that the requested address lies somewhere in the vicinity of the addresses mmap(NULL) returns by default. It also assumes that mmap(NULL) hands out memory at monotonically increasing addresses.

Both assumptions should be checked, and if they do not hold, the fallback should be skipped.
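
To make the suggested check concrete, here is a minimal sketch, assuming a single mmap(NULL) probe is an acceptable way to test both assumptions; fallback_worth_trying() and MAX_FALLBACK_TRIES are hypothetical names and this is not the proposed JDK change itself:

#include <sys/mman.h>
#include <stddef.h>
#include <stdint.h>

static const int MAX_FALLBACK_TRIES = 10;

// Probe where the kernel places anonymous mappings by default and decide
// whether the fallback loop above has any realistic chance of succeeding.
static bool fallback_worth_trying(char* requested_addr, size_t bytes) {
  void* probe = ::mmap(NULL, bytes, PROT_NONE,
                       MAP_PRIVATE | MAP_ANON, -1, 0);
  if (probe == MAP_FAILED) {
    return false;
  }
  ::munmap(probe, bytes);

  // Assumption 1: the requested address is in the vicinity of the probe,
  // i.e. reachable within MAX_FALLBACK_TRIES allocations of this size.
  uintptr_t distance = (requested_addr > (char*)probe)
                           ? (uintptr_t)(requested_addr - (char*)probe)
                           : (uintptr_t)((char*)probe - requested_addr);
  if (distance > (uintptr_t)MAX_FALLBACK_TRIES * bytes) {
    return false;
  }

  // Assumption 2: mmap(NULL) hands out increasing addresses, so the fallback
  // can only ever reach addresses above the probe; anything below it is
  // unreachable and the loop should be skipped.
  return requested_addr >= (char*)probe;
}

A check along these lines would avoid up to ten useless reservations whenever the requested address lies far away from, or below, the kernel's default mapping area.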

This is mainly a performance issue, since os::attempt_reserve_memory_at() is called a large number of times when the heap is allocated at an address chosen to optimize for compressed oops.

See: http://mail.openjdk.java.net/pipermail/hotspot-runtime-dev/2015-April/014600.html
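
For context on the call volume, the sketch below shows that calling pattern in simplified form. reserve_heap_for_compressed_oops() is an illustrative driver, the attempt_reserve_memory_at() stub only mimics the first-attempt behaviour of the real os::attempt_reserve_memory_at(size_t, char*), and the candidate-address scheme (stepping down from the 32 GB zero-based compressed-oops limit) is likewise simplified:

#include <sys/mman.h>
#include <stddef.h>
#include <stdint.h>

static const size_t GB = (size_t)1024 * 1024 * 1024;

// Hypothetical stand-in: mmap with the requested address as a hint and give
// the mapping back if the kernel placed it elsewhere (the real function then
// enters the fallback loop shown in the description).
static char* attempt_reserve_memory_at(size_t bytes, char* requested_addr) {
  void* p = ::mmap(requested_addr, bytes, PROT_NONE,
                   MAP_PRIVATE | MAP_ANON, -1, 0);
  if (p == MAP_FAILED) {
    return NULL;
  }
  if (p != (void*)requested_addr) {
    ::munmap(p, bytes);
    return NULL;
  }
  return (char*)p;
}

// Illustrative caller: try candidate bases below the 32 GB limit for
// zero-based compressed oops, one attempt_reserve_memory_at() call each.
static char* reserve_heap_for_compressed_oops(size_t heap_size) {
  for (uintptr_t base = 32 * GB - heap_size; base >= heap_size; base -= heap_size) {
    char* addr = attempt_reserve_memory_at(heap_size, (char*)base);
    if (addr != NULL) {
      return addr;
    }
  }
  return NULL;  // caller would then accept a base that needs oop encoding
}

With the current fallback, every failed candidate in such a loop can trigger up to ten additional mmap(NULL) reservations, which is where the reported overhead comes from.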



Comments
Runtime Triage: This is not on our current list of priorities. We will consider this feature if we receive additional customer requirements.
11-02-2020

In the proposed change (http://cr.openjdk.java.net/~stuefe/webrevs/8079449/webrev.00/webrev/), the reserve_memory() and some of the unmap_memory() calls in os_bsd.cpp were changed to anon_mmap() and anon_munmap(). However, some unmap_memory() calls remain in the BSD version of os::can_commit_large_page_memory(). Since unmap_memory() calls are tracked by NMT, the remaining ones would cause issues in NMT. All unmap_memory() usage in os::can_commit_large_page_memory() should be changed to anon_munmap(). The Linux version does not have this problem.
11-01-2016
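
As an illustration of the NMT concern raised above: anon_mmap()/anon_munmap() are raw wrappers around mmap()/munmap() that NMT never sees, whereas os::unmap_memory() records the release with Native Memory Tracking. The sketch below mirrors that distinction but is illustrative rather than a verbatim copy of the HotSpot helpers:

#include <sys/mman.h>
#include <stddef.h>

// Raw anonymous-mapping helpers: straight to the kernel, invisible to NMT.
static char* anon_mmap(char* requested_addr, size_t bytes) {
  void* addr = ::mmap(requested_addr, bytes, PROT_NONE,
                      MAP_PRIVATE | MAP_ANON, -1, 0);
  return (addr == MAP_FAILED) ? NULL : (char*)addr;
}

static int anon_munmap(char* addr, size_t size) {
  return ::munmap(addr, size) == 0;
}

Pairing a reservation made with the untracked anon_mmap() with a release through the NMT-tracked os::unmap_memory() leaves NMT with a release it cannot match to any recorded reservation, which is why the remaining unmap_memory() calls in the BSD code should also be switched to anon_munmap().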