JDK-6515172 : Runtime.availableProcessors() ignores Linux taskset command
  • Type: Bug
  • Component: hotspot
  • Sub-Component: runtime
  • Affected Version: 6,7,8,9
  • Priority: P3
  • Status: Resolved
  • Resolution: Fixed
  • OS: linux,solaris_8
  • CPU: x86
  • Submitted: 2007-01-19
  • Updated: 2024-06-04
  • Resolved: 2016-01-29
Fix Versions:
  • JDK 8: 8u121 (Fixed)
  • JDK 9: 9 b107 (Fixed)
Description
FULL PRODUCT VERSION :
java version "1.6.0"
Java(TM) SE Runtime Environment (build 1.6.0-b105)
Java HotSpot(TM) 64-Bit Server VM (build 1.6.0-b105, mixed mode)

ADDITIONAL OS VERSION INFORMATION :
#1 SMP Tue Aug 29 10:40:40 UTC 2006 x86_64 x86_64 x86_64 GNU/Linux

A DESCRIPTION OF THE PROBLEM :
When the available CPUs are limited with the Linux 'taskset' command, the JVM obeys the limitation, but Runtime.availableProcessors() always returns the total number of CPUs the hardware has.


REPRODUCIBILITY :
This bug can be reproduced always.
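
For reference, here is a minimal standalone C++ sketch (not from the original report) that makes the mismatch visible: "online" is what the unfixed availableProcessors() effectively returned, while "affinity" is what taskset actually allows. It assumes glibc (CPU_COUNT needs glibc 2.6+):

#define _GNU_SOURCE 1
#include <sched.h>
#include <unistd.h>
#include <cstdio>

int main() {
  // All online processors - what the unfixed availableProcessors() reported.
  long online = sysconf(_SC_NPROCESSORS_ONLN);
  // The current affinity mask - what taskset actually allows this process.
  cpu_set_t cpus;
  int affinity = -1;
  if (sched_getaffinity(0, sizeof(cpus), &cpus) == 0) {
    affinity = CPU_COUNT(&cpus);
  }
  printf("online=%ld affinity=%d\n", online, affinity);
  return 0;
}

Run under, say, 'taskset -c 0-1 ./a.out' on a larger machine: "affinity" prints 2 while "online" still reports the whole machine.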

Comments
This fix also needs the fix for JDK-8161993, which in turn requires JDK-8147910.
30-01-2017

URL: http://hg.openjdk.java.net/jdk9/jdk9/hotspot/rev/c5480d4abfe4 User: lana Date: 2016-02-24 20:06:28 +0000
24-02-2016

URL: http://hg.openjdk.java.net/jdk9/hs-rt/hotspot/rev/c5480d4abfe4 User: dholmes Date: 2016-01-29 12:29:24 +0000
29-01-2016

Here is the new version of the code. I will send out a RFR without the testing code soon.

// Get the current number of available processors for this process.
// This value can change at any time during a process's lifetime.
// sched_getaffinity gives an accurate answer as it accounts for cpusets.
// If it appears there may be more than 1024 processors then we do a
// dynamic check - see 6515172 for details.
// If anything goes wrong we fallback to returning the number of online
// processors - which can be greater than the number available to the process.
int os::active_processor_count() {
  cpu_set_t cpus;  // can represent at most 1024 (CPU_SETSIZE) processors
  cpu_set_t* cpus_p = &cpus;
  int cpus_size = sizeof(cpu_set_t);
  int configured_cpus = processor_count();  // upper bound on available cpus
  int cpu_count = 0;
  if (configured_cpus >= CPU_SETSIZE || UseNewCode) {
    // kernel may use a mask bigger than cpu_set_t
    log_trace(os)("active_processor_count: using dynamic path - configured processors: %d",
                  configured_cpus);
    cpus_p = CPU_ALLOC(configured_cpus);
    if (cpus_p != NULL && !UseNewCode2) {
      cpus_size = CPU_ALLOC_SIZE(configured_cpus);
      // zero it just to be safe
      CPU_ZERO_S(cpus_size, cpus_p);
    } else {
      // failed to allocate so fallback to online cpus
      int online_cpus = ::sysconf(_SC_NPROCESSORS_ONLN);
      log_trace(os)("active_processor_count: "
                    "CPU_ALLOC failed (%s) - using "
                    "online processor count: %d",
                    strerror(errno), online_cpus);
      return online_cpus;
    }
  } else {
    log_trace(os)("active_processor_count: using static path - configured processors: %d",
                  configured_cpus);
  }
  // pid 0 means the current thread - which we have to assume represents the process
  if (sched_getaffinity(0, cpus_size, cpus_p) == 0 && !UseNewCode3) {
    if (cpus_p != &cpus) {
      cpu_count = CPU_COUNT_S(cpus_size, cpus_p);
    } else {
      cpu_count = CPU_COUNT(cpus_p);
    }
    log_trace(os)("active_processor_count: sched_getaffinity processor count: %d", cpu_count);
  } else {
    cpu_count = ::sysconf(_SC_NPROCESSORS_ONLN);
    warning("sched_getaffinity failed (%s) - using online processor count (%d) "
            "which may exceed available processors", strerror(errno), cpu_count);
  }
  if (cpus_p != &cpus) {
    CPU_FREE(cpus_p);
  }
  assert(cpu_count > 0 && cpu_count <= processor_count(), "sanity check");
  return cpu_count;
}
22-01-2016

> I thought that the cpus are numbered 0 to 1023, but the maximum count is 1024.

I thought so too, but the discussion refers to a 1023 maximum for some reason.

> Using _SC_NPROCESSORS_CONF instead of _SC_NPROCESSORS_ONLN is quite reasonable, but then there's a new fear - that the former could be a very large integer
> (who knows how many cpus can be added at runtime?) that was also discussed in the libc-alpha thread.

My testing on small systems shows that configured and online cpus seem to be the same (in the past I'm sure some platforms used to report some hardwired maximum - 1024 or something - which would be bad as it would force us down the dynamic path). Further, as all I want to do is count the cpus, not create per-cpu data structures (as referenced in the discussion), it is only the CPU_ALLOC that I need to worry about. At one bit per cpu I'm not allocating very much.

> Adding new hardware (including CPUs) while the system is running is apparently quite common on IBM mainframes - "zero downtime" is an enterprise feature!

Sure, but it all depends on the details of the architecture (and OS) as to whether that would be considered a change in configured processors, or a change in online processors. Anyway, one step at a time. Right now we want Docker cpu sets to work correctly. :) Maybe next year we need to worry about 1024+ processors. Thanks.
21-01-2016

I thought that the cpus are numbered 0 to 1023, but the maximum count is 1024. Using _SC_NPROCESSORS_CONF instead of _SC_NPROCESSORS_ONLN is quite reasonable, but then there's a new fear - that the former could be a very large integer (who knows how many cpus can be added at runtime?) that was also discussed in the libc-alpha thread. Adding new hardware (including CPUS) while the system is running is apparently quite common on IBM mainframes - "zero downtime" is an enterprise feature!
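
(To check that relationship on a given machine, here is a trivial sketch - not from any patch under discussion - that prints both sysconf values:)

#include <unistd.h>
#include <cstdio>

int main() {
  // _SC_NPROCESSORS_CONF: processors configured (may include offline ones).
  // _SC_NPROCESSORS_ONLN: processors currently online.
  printf("configured=%ld online=%ld\n",
         sysconf(_SC_NPROCESSORS_CONF),
         sysconf(_SC_NPROCESSORS_ONLN));
  return 0;
}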
21-01-2016

I plan to use the configured CPUs rather than online for the 1023 check (that discussion indicates 1023 and not 1024 is the actual maximum??) to trigger the dynamic case. I can easily imagine the number of online cpus changing due to sys admin actions (whether that involves hotswap or not I am not sure). I find it hard to imagine the case where someone is plugging chips into empty sockets while the system is running, so I will draw the line there.

The glibc versus manpage/kernel documentation situation is really a travesty. Seriously, how do these people expect developers to actually write decent code!

I will avoid the EINVAL question by using the configured cpus to select dynamic or not. If sched_getaffinity fails for any reason after that decision then it falls back to the number of online cpus. Thanks for all the input!
21-01-2016

We can expect big trouble if we try to get things perfectly correct in a "cpu hotplug" environment. A list (i.e. bitmask) is not a good concurrent data structure. More correct would be to check for EINVAL in the dynamic case, and keep resizing, to handle hotplug additions. But that race would be insanely rare.
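
(For illustration, a sketch of that resize-to-handle-EINVAL approach, assuming glibc's dynamic CPU-set macros; the function name and the 1 << 20 upper bound are made up for the example, not from any proposed patch:)

#define _GNU_SOURCE 1
#include <sched.h>
#include <cerrno>
#include <cstddef>

// Grow a dynamically allocated cpu set until sched_getaffinity stops
// failing with EINVAL (i.e. until our mask is at least as big as the
// kernel's). Returns the number of allowed CPUs, or -1 on other failures.
static int count_allowed_cpus() {
  for (int ncpus = CPU_SETSIZE; ncpus <= (1 << 20); ncpus *= 2) {
    cpu_set_t* set = CPU_ALLOC(ncpus);
    if (set == NULL) return -1;
    size_t size = CPU_ALLOC_SIZE(ncpus);
    if (sched_getaffinity(0, size, set) == 0) {
      int count = CPU_COUNT_S(size, set);
      CPU_FREE(set);
      return count;
    }
    int err = errno;
    CPU_FREE(set);
    if (err != EINVAL) return -1;  // anything but "mask too small": give up
  }
  return -1;
}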
21-01-2016

See below for my own local try (but I'm targeting a more predictable environment):

int os::active_processor_count() {
  // See:
  // http://www.gnu.org/software/libc/manual/html_node/CPU-Affinity.html
  // http://man7.org/linux/man-pages/man2/sched_setaffinity.2.html
  // http://man7.org/linux/man-pages/man3/CPU_SET.3.html
  // https://bugs.openjdk.java.net/browse/JDK-6515172
  //   Runtime.availableProcessors() ignores Linux taskset command
  // https://sourceware.org/ml/libc-alpha/2013-07/msg00288.html
  //
  // There is no need to call CPU_ZERO. Although the linux kernel system
  // call does not initialize all the bytes, the glibc wrapper function does.
  const int online_cpus = ::sysconf(_SC_NPROCESSORS_ONLN);
  assert(online_cpus > 0 && online_cpus <= processor_count(), "sanity check");
  int available_cpus = 0;
  if (online_cpus <= CPU_SETSIZE) {
    cpu_set_t cpus;  // can represent at most 1024 (CPU_SETSIZE) processors
    if (sched_getaffinity(0, sizeof(cpu_set_t), &cpus) == 0)
      available_cpus = CPU_COUNT(&cpus);
  } else {
    cpu_set_t *cpusetp = CPU_ALLOC(online_cpus);
    if (cpusetp != NULL) {
      const size_t size = CPU_ALLOC_SIZE(online_cpus);
      if (sched_getaffinity(0, size, cpusetp) == 0)
        available_cpus = CPU_COUNT_S(size, cpusetp);
      CPU_FREE(cpusetp);
    }
  }
  return (available_cpus > 0 && available_cpus <= online_cpus)
      ? available_cpus
      : online_cpus;  // fallback
}
21-01-2016

> But I also see now what you mean by the EINVAL situation. I think my logic for detecting the > 1024 processor case is wrong. I had assumed that on a system
> with > 1024 processors I would get a full mask and so should check again using a larger dynamic mask. But it appears that on such a system the original
> call to sched_getaffinity will fail with EINVAL. It is unclear whether a larger mask will always be used on such systems, or whether it will only be used
> if needed. But I may never be in a position to test that, so will have to assume the worst and always check for EINVAL and then take the CPU_ALLOC path.

Yes, it's difficult and murky. The glibc manual does not document EINVAL, but the man page does:

  EINVAL (sched_getaffinity() and, in kernels before 2.6.9, sched_setaffinity())
         cpusetsize is smaller than the size of the affinity mask used by the kernel.

Part of the discrepancy is that the man page is documenting the *kernel*, not the library, and even today, "Linux" is not part of the "GNU system".
21-01-2016

> For the record I also have a bug in the dynamic path as I'm assuming the memory from CPU_ALLOC was zeroed, which is not stated. So I have to zero it first.

Amusingly, in my local backport I added a CPU_ZERO, but later thought better of it. The kernel does not zero all the requested memory, but the glibc implementation does. And this is documented .... hmmmm .... here:
http://www.gnu.org/software/libc/manual/html_node/CPU-Affinity.html
"""If successful, the function always initializes all bits in the cpu_set_t object and returns zero."""
(but ... you could quibble that the glibc manual doesn't document the dynamic case at all!) The man page and the glibc manual appear to be maintained by different folks.
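
(A sketch of the belt-and-braces zeroing being discussed, assuming the glibc dynamic macros; count_with_dynamic_set is an illustrative name, not code from any patch:)

#define _GNU_SOURCE 1
#include <sched.h>
#include <cstddef>

// Per the glibc manual, a successful sched_getaffinity initializes every
// bit of the set, so the CPU_ZERO_S below is defensive rather than required.
static int count_with_dynamic_set(int ncpus) {
  cpu_set_t* set = CPU_ALLOC(ncpus);
  if (set == NULL) return -1;
  size_t size = CPU_ALLOC_SIZE(ncpus);
  CPU_ZERO_S(size, set);  // harmless even if redundant
  int count = (sched_getaffinity(0, size, set) == 0)
                  ? CPU_COUNT_S(size, set) : -1;
  CPU_FREE(set);
  return count;
}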
21-01-2016

I have now read the entire tree of conversation from:
https://sourceware.org/ml/libc-alpha/2013-07/msg00288.html

It is intriguing, confusing and ultimately inconclusive. It also shed no light on what impact the SuSE change might have. I'm now inclined to change tactics on this - even though it is impossible to determine what a completely correct solution would be. Based on the approach outlined here:
https://sourceware.org/ml/libc-alpha/2013-07/msg00425.html
I will first check the number of configured processors (already available via processor_count()) and if that is > 1023 then use CPU_ALLOC with that size. If I get an EINVAL after that I will simply fall back to the number of online processors.

For the record, I also have a bug in the dynamic path, as I'm assuming the memory from CPU_ALLOC was zeroed, which is not stated. So I have to zero it first.
21-01-2016

On my system the manpage reads differently:

  sched_getaffinity() writes the affinity mask of the process whose ID is pid into the cpu_set_t structure pointed to by mask. The cpusetsize argument
  specifies the size (in bytes) of mask. If pid is zero, then the mask of the calling process is returned.

but alas that is dated 2010-11-06. So it looks like they changed from "process" to "thread" at some point. The source code still refers to processes, but as Linux threads are kind-of-processes that may not mean anything. So in this regard I will have to assume that the calling thread is representative of the process and has not been individually modified via sched_setaffinity.

But I also see now what you mean by the EINVAL situation. I think my logic for detecting the > 1024 processor case is wrong. I had assumed that on a system with > 1024 processors I would get a full mask and so should check again using a larger dynamic mask. But it appears that on such a system the original call to sched_getaffinity will fail with EINVAL. It is unclear whether a larger mask will always be used on such systems, or whether it will only be used if needed. But I may never be in a position to test that, so will have to assume the worst and always check for EINVAL and then take the CPU_ALLOC path. This is really painful.
21-01-2016

It's surprisingly difficult to program this robustly, allowing everything to work well (fallback) on systems without CPU_ALLOC, and taking into account Suse's decision to change __CPU_SETSIZE. It seems reasonable to use CPU_ALLOC for jdk9 and not try to backport to jdk8. There is no API to get the number of processors available to the current process or thread. The designers of these mechanisms seem to suffer from the usual "C programmer's disease".
20-01-2016

>> Strictly speaking, the affinity is per-thread, but different threads in the JDK are unlikely to differ.
> Passing 0 as the tid gets the affinity for the process not the thread.

According to http://man7.org/linux/man-pages/man2/sched_setaffinity.2.html
"""If pid is zero, then the mask of the calling thread is returned."""
20-01-2016

Muddying the waters further, this message: https://sourceware.org/ml/libc-alpha/2013-07/msg00296.html suggests that misguided Suse Linux maintainers changed the value of __CPU_SETSIZE from 1024 to 4096, which is an ABI change! With luck, binaries compiled on a non-Suse system will work on a Suse system, but the reverse direction may result in buffer overflow (though this may not be a problem in practice - who does that?). If we want to be extra careful, should we use the dynamic interface with CPU_ALLOC for sizes > 1024?
20-01-2016

> Would it be clearer if we said
> // This value can change at any time during a process's lifetime, using sched_setaffinity.

They can also change because they are taken offline. I did not want to enumerate all the possible ways the number of available processors might change.

> Strictly speaking, the affinity is per-thread, but different threads in the JDK are unlikely to differ.

Passing 0 as the tid gets the affinity for the process, not the thread.

> Should we be checking for EINVAL from sched_getaffinity, as suggested in the man page?

If there is any failure we fall back to using the number of online processors.

> It seems that CPU_ALLOC is much more recent than sched_getaffinity. I would want the JDK to be portable to systems without this support, both at compile and runtime.

We have minimal OS version requirements for building and execution. Our official build systems are still quite "old" in relation to the latest distributions and support this capability. My manpage for it is dated 2010-09-10, so this support is at least 5 years old. But if we find there are issues with this then we can look at it again. JDK 8 will need a simpler, less capable version. Thanks for the feedback.
20-01-2016

// Get the current number of available processors for this process.
// This value can change at any time during a process's lifetime.

Would it be clearer if we said:

// This value can change at any time during a process's lifetime, using sched_setaffinity.

---

Strictly speaking, the affinity is per-thread, but different threads in the JDK are unlikely to differ.

---

Should we be checking for EINVAL from sched_getaffinity, as suggested in the man page?

---

It seems that CPU_ALLOC is much more recent than sched_getaffinity. I would want the JDK to be portable to systems without this support, both at compile and runtime.

---

I should probably do a local backport to jdk8 myself - we have far fewer compatibility problems.
19-01-2016

Backporting this to JDK 8u will not be done directly, as the build platforms do not support the CPU_COUNT or CPU_ALLOC related functionality - and it may be that the runtime platforms also don't have the necessary support. For JDK 8 we will have to manually count the cpus in the cpu_set_t and limit ourselves to being correct only on systems with <= 1024 cpus.
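
(A sketch of what such a manual count might look like - illustrative only, not the actual 8u backport - using just the fixed-size API and CPU_ISSET:)

#define _GNU_SOURCE 1
#include <sched.h>
#include <unistd.h>

// JDK 8-style fallback: fixed-size cpu_set_t only, counting bits by hand
// since CPU_COUNT/CPU_ALLOC may be missing on old build platforms.
// Only correct when the system has <= CPU_SETSIZE (1024) processors.
static int legacy_active_processor_count() {
  cpu_set_t cpus;
  if (sched_getaffinity(0, sizeof(cpus), &cpus) != 0) {
    return (int) sysconf(_SC_NPROCESSORS_ONLN);  // fallback to online count
  }
  int count = 0;
  for (int i = 0; i < CPU_SETSIZE; i++) {
    if (CPU_ISSET(i, &cpus)) count++;
  }
  return count > 0 ? count : (int) sysconf(_SC_NPROCESSORS_ONLN);
}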
19-01-2016

This is a testing version of the code (using UseNewCodeN to force different paths to be taken, and artificially considering the maximum processor count to be 8).

// Get the current number of available processors for this process.
// This value can change at any time during a process's lifetime.
// sched_getaffinity gives an accurate answer as it accounts for cpusets.
// If it appears there may be more than 1024 processors then we do a
// second, dynamic check.
// If anything goes wrong we fallback to returning the number of online
// processors - which can be greater than the number available to the process.
int os::active_processor_count() {
  cpu_set_t cpus;  // can represent at most 1024 (CPU_SETSIZE) processors
  int cpu_count = 0;
  if (sched_getaffinity(0, sizeof(cpu_set_t), &cpus) == 0) {
    cpu_count = CPU_COUNT(&cpus);
    int max_count = UseNewCode ? 8 : CPU_SETSIZE;
    if (cpu_count == max_count) {
      log_trace(rt)("active_processor_count: checking for more than %d processors", max_count);
      // we may have more cpus than can be represented in the static cpu_set_t
      // so perform a dynamic check. Note that the number of online
      // processors may have changed in the meantime
      cpu_count = ::sysconf(_SC_NPROCESSORS_ONLN);
      if (cpu_count > max_count && !(UseNewCode2 && UseNewCode3)) {
        log_trace(rt)("active_processor_count: number of online processors: %d", cpu_count);
        cpu_set_t* dcpus = CPU_ALLOC(cpu_count);
        if (dcpus != NULL && !UseNewCode2) {
          size_t size = CPU_ALLOC_SIZE(cpu_count);
          if (sched_getaffinity(0, size, dcpus) == 0 && !UseNewCode3) {
            cpu_count = CPU_COUNT_S(size, dcpus);
            log_trace(rt)("active_processor_count: dynamically-sized sched_getaffinity "
                          "processor count: %d", cpu_count);
          } else {
            // failed for some reason so fallback to online cpus
            log_trace(rt)("active_processor_count: dynamically-sized "
                          "sched_getaffinity failed (%s) - using "
                          "online processor count: %d",
                          strerror(errno), cpu_count);
          }
          CPU_FREE(dcpus);
        } else {
          // failed to allocate so fallback to online cpus
          log_trace(rt)("active_processor_count: "
                        "CPU_ALLOC failed (%s) - using "
                        "online processor count: %d",
                        strerror(errno), cpu_count);
        }
      } else {
        // online cpus must have changed so fallback to online cpus
        log_trace(rt)("active_processor_count: online processor count (%d) "
                      "no longer exceeds maximum (%d) - using online processor count",
                      cpu_count, max_count);
      }
    } else {
      log_trace(rt)("active_processor_count: sched_getaffinity processor count: %d", cpu_count);
    }
  } else {
    warning("sched_getaffinity failed (%s) so using online processor count "
            "which may exceed available processors", strerror(errno));
    cpu_count = ::sysconf(_SC_NPROCESSORS_ONLN);
  }
  assert(cpu_count > 0 && cpu_count <= processor_count(), "sanity check");
  return cpu_count;
}

Here are examples of the tracing output for different runs on a 24 cpu system.
Three calls to active_processor_count are internal to VM initialization; the fourth is the test program, which prints Runtime.availableProcessors().

Case 1: simple usage, no triggering of the dynamic path as the processor count is too low:

> taskset -c 0-6 ./b0/se-linux-i586/images/jdk/bin/java -cp ../../tests -Xlog:rt=trace Processors
[0.001s][trace ][rt] active_processor_count: sched_getaffinity processor count: 7
[0.012s][trace ][rt] active_processor_count: sched_getaffinity processor count: 7
[0.058s][trace ][rt] active_processor_count: sched_getaffinity processor count: 7
[0.089s][trace ][rt] active_processor_count: sched_getaffinity processor count: 7
7

Case 2: simple usage, no triggering of the dynamic path as we didn't enable UseNewCode:

> taskset -c 0-7 ./b0/se-linux-i586/images/jdk/bin/java -cp ../../tests -Xlog:rt=trace Processors
[0.001s][trace ][rt] active_processor_count: sched_getaffinity processor count: 8
[0.012s][trace ][rt] active_processor_count: sched_getaffinity processor count: 8
[0.059s][trace ][rt] active_processor_count: sched_getaffinity processor count: 8
[0.077s][trace ][rt] active_processor_count: sched_getaffinity processor count: 8
8

Case 3: enable the dynamic allocation path:

> taskset -c 0-7 ./b0/se-linux-i586/images/jdk/bin/java -XX:+UnlockDiagnosticVMOptions -XX:+UseNewCode -cp ../../tests -Xlog:rt=trace Processors
[0.001s][trace ][rt] active_processor_count: checking for more than 8 processors
[0.001s][trace ][rt] active_processor_count: number of online processors: 24
[0.001s][trace ][rt] active_processor_count: dynamically-sized sched_getaffinity processor count: 8
[0.012s][trace ][rt] active_processor_count: checking for more than 8 processors
[0.012s][trace ][rt] active_processor_count: number of online processors: 24
[0.012s][trace ][rt] active_processor_count: dynamically-sized sched_getaffinity processor count: 8
[0.061s][trace ][rt] active_processor_count: checking for more than 8 processors
[0.061s][trace ][rt] active_processor_count: number of online processors: 24
[0.061s][trace ][rt] active_processor_count: dynamically-sized sched_getaffinity processor count: 8
[0.081s][trace ][rt] active_processor_count: checking for more than 8 processors
[0.081s][trace ][rt] active_processor_count: number of online processors: 24
[0.081s][trace ][rt] active_processor_count: dynamically-sized sched_getaffinity processor count: 8
8

Case 4: simulate CPU_ALLOC failure:

> taskset -c 0-7 ./b0/se-linux-i586/images/jdk/bin/java -XX:+UnlockDiagnosticVMOptions -XX:+UseNewCode -XX:+UseNewCode2 -cp ../../tests -Xlog:rt=trace Processors
[0.001s][trace ][rt] active_processor_count: checking for more than 8 processors
[0.001s][trace ][rt] active_processor_count: number of online processors: 24
[0.001s][trace ][rt] active_processor_count: CPU_ALLOC failed (Cannot allocate memory) - using online processor count: 24
[0.012s][trace ][rt] active_processor_count: checking for more than 8 processors
[0.012s][trace ][rt] active_processor_count: number of online processors: 24
[0.012s][trace ][rt] active_processor_count: CPU_ALLOC failed (File exists) - using online processor count: 24
[0.064s][trace ][rt] active_processor_count: checking for more than 8 processors
[0.064s][trace ][rt] active_processor_count: number of online processors: 24
[0.065s][trace ][rt] active_processor_count: CPU_ALLOC failed (No such file or directory) - using online processor count: 24
[0.083s][trace ][rt] active_processor_count: checking for more than 8 processors
[0.083s][trace ][rt] active_processor_count: number of online processors: 24
[0.083s][trace ][rt] active_processor_count: CPU_ALLOC failed (No such file or directory) - using online processor count: 24
24

Note the reasons for failing are random samplings of errno.

Case 5: simulate sched_getaffinity failure using the dynamically sized cpu_set_t:

> taskset -c 0-7 ./b0/se-linux-i586/images/jdk/bin/java -XX:+UnlockDiagnosticVMOptions -XX:+UseNewCode -XX:+UseNewCode3 -cp ../../tests -Xlog:rt=trace Processors
[0.001s][trace ][rt] active_processor_count: checking for more than 8 processors
[0.001s][trace ][rt] active_processor_count: number of online processors: 24
[0.001s][trace ][rt] active_processor_count: dynamically-sized sched_getaffinity failed (No such file or directory) - using online processor count: 24
[0.010s][trace ][rt] active_processor_count: checking for more than 8 processors
[0.010s][trace ][rt] active_processor_count: number of online processors: 24
[0.010s][trace ][rt] active_processor_count: dynamically-sized sched_getaffinity failed (File exists) - using online processor count: 24
[0.063s][trace ][rt] active_processor_count: checking for more than 8 processors
[0.063s][trace ][rt] active_processor_count: number of online processors: 24
[0.063s][trace ][rt] active_processor_count: dynamically-sized sched_getaffinity failed (No such file or directory) - using online processor count: 24
[0.083s][trace ][rt] active_processor_count: checking for more than 8 processors
[0.083s][trace ][rt] active_processor_count: number of online processors: 24
[0.083s][trace ][rt] active_processor_count: dynamically-sized sched_getaffinity failed (No such file or directory) - using online processor count: 24
24

Case 6: simulate the number of online processors dropping below the maximum (note the numbers won't make sense as we forced this path):

> taskset -c 0-7 ./b0/se-linux-i586/images/jdk/bin/java -XX:+UnlockDiagnosticVMOptions -XX:+UseNewCode -XX:+UseNewCode3 -XX:+UseNewCode2 -cp ../../tests -Xlog:rt=trace
[0.001s][trace ][rt] active_processor_count: checking for more than 8 processors
[0.001s][trace ][rt] active_processor_count: online processor count (24) no longer exceeds maximum (8)
[0.012s][trace ][rt] active_processor_count: checking for more than 8 processors
[0.012s][trace ][rt] active_processor_count: online processor count (24) no longer exceeds maximum (8)
[0.066s][trace ][rt] active_processor_count: checking for more than 8 processors
[0.066s][trace ][rt] active_processor_count: online processor count (24) no longer exceeds maximum (8)
[0.086s][trace ][rt] active_processor_count: checking for more than 8 processors
[0.086s][trace ][rt] active_processor_count: online processor count (24) no longer exceeds maximum (8)
24

The "warning path" was also simulated separately:

Java HotSpot(TM) Server VM warning: sched_getaffinity failed (Cannot allocate memory) so using online processor count which may exceed available processors
Java HotSpot(TM) Server VM warning: sched_getaffinity failed (File exists) so using online processor count which may exceed available processors
Java HotSpot(TM) Server VM warning: sched_getaffinity failed (No such file or directory) so using online processor count which may exceed available processors
Java HotSpot(TM) Server VM warning: sched_getaffinity failed (No such file or directory) so using online processor count which may exceed available processors
24
19-01-2016

David: Given your successful experience using sched_getaffinity, I can't think of any reason not to use this on all linux libc systems. Can you port your code to openjdk9 or provide a patch? This is useful enough for Google (and probably others) to backport or maintain a private patch, simply because it makes heuristics based on availableProcessors more efficient.
30-12-2015

>>> For JDK 9 we know sched_getaffinity is supported on our supported build and runtime platforms so we should not need the dynamic lookups. I'm not aware of any currently supported non-glibc linuxes.

My own instinct is to always use autoconf machinery and/or dynamic lookup, because I love portability, but it's true that hotspot is full of platform-specific code, so supporting arbitrary unix-like platforms is impractical.
18-12-2015

From: https://sourceware.org/bugzilla/show_bug.cgi?id=15630 "Fix use of cpu_set_t with sched_getaffinity when booted on a system with more than 1024 possible cpus."

Snippet:

There are 3 ways to determine the correct size of the possible cpu mask size:
(a) Read it from sysfs /sys/devices/system/cpu/online, which has the actual number of possibly online cpus.
(b) Interpret /proc/cpuinfo or /proc/stat.
(c) Call the kernel syscall sched_getaffinity with increasingly larger values for cpusetsize in an attempt to manually determine the cpu mask size.

Methods (a) and (b) are already used by sysconf(_SC_NPROCESSORS_ONLN) to determine the value to return. Method (c) is used by sched_setaffinity to determine the size of the kernel mask and then reject any bits which are set outside of the mask and return EINVAL. Method (c) is recommended by a patched RHEL man page [1] for sched_getaffinity, but that patch has not made it upstream to the Linux Kernel man pages project.

The goal is therefore to make using a fixed cpu_set_t work at all times, but only support the first 1024 cpus. To support more than 1024 cpus you need to use the dynamically sized macros and method (a) (if you want all the cpus). In order to make a fixed cpu_set_t size work all the time the following changes need to be made to glibc:
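
(For concreteness, a sketch of method (a) - parsing the sysfs range list, which looks like "0-23" or "0-3,8-11". The function name is illustrative; this is not code from glibc or the JDK:)

#include <cstdio>
#include <cstring>

// Count the CPUs listed in /sys/devices/system/cpu/online.
// Returns -1 if the file is missing or unparseable.
static int count_online_from_sysfs() {
  FILE* f = fopen("/sys/devices/system/cpu/online", "r");
  if (f == NULL) return -1;
  char buf[256];
  bool ok = fgets(buf, sizeof(buf), f) != NULL;
  fclose(f);
  if (!ok) return -1;
  int count = 0;
  for (char* tok = strtok(buf, ","); tok != NULL; tok = strtok(NULL, ",")) {
    int lo, hi;
    if (sscanf(tok, "%d-%d", &lo, &hi) == 2) count += hi - lo + 1;  // a range
    else if (sscanf(tok, "%d", &lo) == 1)    count += 1;            // a single cpu
  }
  return count > 0 ? count : -1;
}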
17-12-2015

For JDK 9 we know sched_getaffinity is supported on our supported build and runtime platforms so we should not need the dynamic lookups. I'm not aware of any currently supported non-glibc linuxes. Don't know what to say about systems with more than 1024 logical processors - does linux/glibc support such systems? If so surely they have considered what sched_getaffinity should do in such cases. It would be unfortunate to have to check both ways on every call to this just to account for a broken OS/glibc. We normally assume the OS works unless we discover otherwise, or can find a bug that indicates otherwise. I outlined the code we used in 2010 above. A CPU_COUNT macro would be nice if it is reliably available.
17-12-2015

I just learned about Linux taskset, which makes ad hoc testing of this very easy. So if you run taskset 0x1 java ... then Runtime.availableProcessors should definitely return 1. It is an incompatible change, but the kind that seems quite acceptable.
16-12-2015

I just discovered this bug. Elsewhere on build-dev I wrote:

My current mental model is: configured cpus >= online cpus >= allowed cpus. In a traditional system they are all the same. I experimented and saw that cpusets are indeed turned on in some systems used for testing at Google, i.e. allowed cpus is a strict subset of online cpus. It seems likely that the following would be a better implementation of availableProcessors on Linux:

cpu_set_t s;
return (sched_getaffinity(0, sizeof(s), &s) == 0) ? CPU_COUNT(&s) : fallback_to_old_way();

with all the pain in configury. I think the configury pain is the usual: detecting sched.h, sched_getaffinity, CPU_COUNT; don't forget _GNU_SOURCE; check you're on a glibc system; probably check at runtime too, so use dlsym to access sched_getaffinity; look for similar hacks on non-glibc systems. Worry about systems with more than 1024 cpus. Worry about sched_getaffinity returning a higher number than the old way. Check both ways; never return a higher number than _NPROCESSORS_ONLN. Is that enough things to worry about?
16-12-2015

With the increased use of resource managed environments (such as Docker containers) it is important that the VM report the actual available number of processors that can be used - otherwise resources will be overallocated and performance will suffer.
15-12-2015

We already had to deal with this in the real-time VM. Here is our active_processor_count implementation:

// Note: JVMTI can call this early in the VM init phase before
// the main thread has attached and TLS is initialized
int os::active_processor_count() {
  // We return the processor count for the processor set of the
  // current thread - which represents the available processors for
  // a given type of thread: RTT, NHRT or JLT
  cpu_set_t s;
  pid_t tid = os::Linux::gettid();
  int result = sched_getaffinity(tid, sizeof(cpu_set_t), &s);
  int online_cpus = 0;
  if (result == 0) {
    for (int i = 0; i < Linux::processor_count(); i++) {
      if (CPU_ISSET(i, &s)) {
        online_cpus++;
      }
    }
  } else {
    warning("sched_getaffinity failed: %s", strerror(errno));
    online_cpus = ::sysconf(_SC_NPROCESSORS_ONLN);
  }
  assert(online_cpus > 0 && online_cpus <= Linux::processor_count(), "sanity check");
  return online_cpus;
}

In regular SE Hotspot of course all threads use the same taskset (unless the application changes it directly via native code).
15-12-2015

<deleted broken link> No idea what it was once linking to.
29-03-2014

EVALUATION Should fix.
07-01-2011

SUGGESTED FIX

Updated patch from Michael Spiegel:

--- /usr/users/0/mspiegel/openjdk7-clean/openjdk/hotspot/src/os/linux/vm/os_linux.cpp 2010-11-11 15:43:13.000000000 -0500
+++ /usr/users/0/mspiegel/test/os_linux.cpp 2010-11-30 16:14:07.437983600 -0500
@@ -4040,9 +4040,15 @@
 };
 
 int os::active_processor_count() {
-  // Linux doesn't yet have a (official) notion of processor sets,
-  // so just return the number of online processors.
-  int online_cpus = ::sysconf(_SC_NPROCESSORS_ONLN);
+  cpu_set_t mask;
+  int online_cpus;
+
+  if (sched_getaffinity(0, sizeof(cpu_set_t), &mask) == 0) {
+    online_cpus = CPU_COUNT(&mask);
+  } else {
+    online_cpus = ::sysconf(_SC_NPROCESSORS_ONLN);
+  }
+
   assert(online_cpus > 0 && online_cpus <= processor_count(), "sanity check");
   return online_cpus;
 }

But note CPU_COUNT is only available in glibc 2.6+ so may not necessarily be available on all our build platforms for Linux.
30-11-2010

SUGGESTED FIX Get the sched_affinity of the current thread and count the number of enabled processors in the affinity set. As long as native code is not messing with individual affinities this will give you the available processors for the process as a whole.
30-11-2010