JDK-8152193 : GC heap memory usage does not take into consideration overcommit_memory setting
  • Type: Bug
  • Component: hotspot
  • Sub-Component: gc
  • Affected Version: 9
  • Priority: P3
  • Status: Closed
  • Resolution: Duplicate
  • OS: linux
  • Submitted: 2016-03-18
  • Updated: 2016-06-14
  • Resolved: 2016-03-22
JDK 9 : Resolved
Related Reports
Duplicate :  
Duplicate :  
Description
I ran into the following failure on one of our linux-x64 systems while running runThese:

Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x00007f6f32000000, 1409286144, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 1409286144 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /scratch/local/aurora/sandbox/results/runThese/hs_err_pid2277.log

Stefan K and Ioi helped determine that the overcommit_memory setting seems to be the trigger. If a system has /proc/sys/vm/overcommit_memory set to 2, it means:

"Setting to 2, means that processes can only allocate upto (RAM+swap) and will start getting allocation failure or OOM messages when it goes beyond that amount."

On the failing system we have:

$ cat /proc/sys/vm/overcommit_memory 
2
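
For illustration, below is a minimal C sketch (not HotSpot source) of the reserve-then-commit pattern HotSpot uses on Linux: reserve address space with PROT_NONE and MAP_NORESERVE, then commit by remapping it read/write. With overcommit_memory=2 the commit step is charged in full against the kernel's commit limit, so it can fail with errno=12 (ENOMEM) exactly as in the log above even though the reservation succeeded.

/* overcommit_repro.c -- sketch of the reserve/commit pattern on Linux */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void) {
    size_t size = 1409286144; /* the 1344 MiB commit from the log above */

    /* Reserve address space only. MAP_NORESERVE asks the kernel not to
     * charge this mapping against the commit limit. */
    void *base = mmap(NULL, size, PROT_NONE,
                      MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    if (base == MAP_FAILED) {
        perror("reserve");
        return 1;
    }

    /* Commit: remap the reservation read/write. Under overcommit_memory=2
     * the full size now counts against CommitLimit, and the call fails
     * with ENOMEM (errno=12) if it does not fit. */
    if (mmap(base, size, PROT_READ | PROT_WRITE,
             MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0) == MAP_FAILED) {
        fprintf(stderr, "commit failed: %s (errno=%d)\n",
                strerror(errno), errno);
        return 1;
    }
    puts("commit succeeded");
    munmap(base, size);
    return 0;
}

Compiling and running this on the failing box should reproduce the same errno without starting a VM at all.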

Comments
I'm ok with closing. I just wanted to make sure we at least had some discussion here in case someone runs into this problem again. I think the info you've given is sufficient.
21-03-2016

There are already several issues open in the bug tracker that deal with more sensible behavior in case of a native out-of-memory (e.g. JDK-8022662, JDK-6912330, and others I did not find quickly), which would make this issue a duplicate of those.

As for the second idea, I think this has already been discussed several times too; there may even be an RFE out there for exactly this. Imho it is not reasonable for the VM to try to second-guess what the user or other applications are going to do in these situations. Merely detecting them would require the VM to continuously query the state of all other processes in the system and react according to some very OS-specific policies. Even if the VM tried to fix user errors that way, it would not guarantee better behavior, but it would introduce rather tight dependencies on the current behavior of the OS. The amount of available memory can also change at any second, so even then a native OOM cannot be prevented; startup-time guessing in particular does not help here at all. Now consider multiple VMs trying to be "clever" like that at the same time: that would not only affect system throughput (querying the OS is not particularly cheap) but could also decrease the stability of the whole system significantly.

In any case this would not be a bug but a major RFE spanning all components of the VM, since the problem can occur on any allocation and is not specific to GC.
21-03-2016

I would ask two things here in terms of where this bug lies (VM or host config), and both relate to potential solutions. First, can the JVM recover from this failure to commit the requested memory? In other words, does the heap HAVE to grow in order for the JVM to keep running, or is this just a case of the GC heuristics deciding it would be good to expand the heap to maintain performance? Second, could the JVM detect at startup that the max heap size exceeds what the Linux overcommit_memory rules will support, and either fail at startup (suggesting a smaller heap size) or limit the max heap size automatically?
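
The second question would amount to a check along the lines of the sketch below. This is hypothetical only (HotSpot performs no such check; the helper names and the policy here are made up for illustration): read CommitLimit and Committed_AS from /proc/meminfo and refuse, or clamp, a max heap that does not fit in the remaining headroom.

/* heap_fit_check.c -- hypothetical startup check, illustration only */
#include <stdio.h>
#include <string.h>

/* Read a field such as "CommitLimit" from /proc/meminfo, in bytes. */
static unsigned long long meminfo_bytes(const char *field) {
    FILE *f = fopen("/proc/meminfo", "r");
    char line[256];
    unsigned long long kb = 0;
    size_t n = strlen(field);
    if (!f) return 0;
    while (fgets(line, sizeof line, f)) {
        if (strncmp(line, field, n) == 0 && line[n] == ':') {
            sscanf(line + n + 1, "%llu", &kb);
            break;
        }
    }
    fclose(f);
    return kb * 1024ULL;
}

static int overcommit_mode(void) {
    FILE *f = fopen("/proc/sys/vm/overcommit_memory", "r");
    int mode = 0;
    if (f) { if (fscanf(f, "%d", &mode) != 1) mode = 0; fclose(f); }
    return mode;
}

int main(void) {
    unsigned long long max_heap = 1409286144ULL; /* stand-in for -Xmx */

    if (overcommit_mode() != 2) {
        puts("no strict commit limit enforced; nothing to check");
        return 0;
    }
    unsigned long long limit = meminfo_bytes("CommitLimit");
    unsigned long long used  = meminfo_bytes("Committed_AS");
    if (used + max_heap > limit) {
        fprintf(stderr, "max heap %llu exceeds commit headroom %llu; "
                        "consider a smaller -Xmx\n",
                max_heap, limit > used ? limit - used : 0);
        return 1;
    }
    puts("requested heap fits within the current commit limit");
    return 0;
}

As the comment above points out, though, such a check is inherently racy: Committed_AS can change at any moment, so passing it at startup guarantees nothing about later commits.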
21-03-2016

Sure, that configuration is a perfectly valid linux config. But so is a linux config that does not allow allocating more than X MB of memory for the Java process, where X is too small to fit the Java heap. The VM can't do anything about this situation other than error out: it obviously can't allocate less memory, because without these allocations it won't work. It is nice to know that the /proc/sys/vm/overcommit_memory setting is another knob that may make the VM process go OOM, but imho the only thing that can be done is to fix the environment in this case. Which means that this issue is a machine setup error, if anything.
21-03-2016

@Thomas: I don't see why this should be considered an environment setup issue. Setting /proc/sys/vm/overcommit_memory to 2 is a perfectly valid linux config, and I believe it is meant to avoid (or reduce the likelihood of) the linux OOM killer taking action. As for what the VM should do, I'm not really sure; I'm not an expert in this area. I'm just passing along the issue I ran into.
21-03-2016

Could you elaborate on why a problem with the environment setup should be a VM issue? And what is the expected reaction of the VM? It seems to require a successful commit of that memory and apparently can't continue without it.
21-03-2016