JDK-8042127 : Performance issues with java.util.Objects.requireNonNull
  • Type: Enhancement
  • Component: hotspot
  • Sub-Component: compiler
  • Affected Version: 8,9,10
  • Priority: P4
  • Status: Open
  • Resolution: Unresolved
  • Submitted: 2014-04-29
  • Updated: 2022-12-02
Description
The implicit null check performed when calling string.toString() is handled reasonably gracefully, but when an explicit null check is performed and an NPE is thrown in java.util.Objects.requireNonNull, performance degrades far more drastically. Since Objects.requireNonNull is used extensively in the JDK, we should try to achieve similar performance characteristics.
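
For reference, a minimal sketch of the two idioms being compared (illustrative only, not the attached benchmark):

import java.util.Objects;

class NullCheckIdioms {
    // Implicit variant: the JIT can fold the null check into the virtual
    // dispatch of toString(), so a null is detected by a hardware trap
    // rather than an explicit test-and-branch.
    static String viaToString(String s) {
        return s.toString();
    }

    // Explicit variant: requireNonNull tests for null and, on failure,
    // constructs and throws a fresh NullPointerException with a backtrace.
    static String viaRequireNonNull(String s) {
        return Objects.requireNonNull(s);
    }

    public static void main(String[] args) {
        System.out.println(viaToString("a") + " " + viaRequireNonNull("b"));
    }
}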

Results using attached JMH[1] microbenchmark:
0% null values: performance on par
1% null values: implicit -17%, requireNonNull -82% 

java -jar target/microbenchmark.jar -f 3 -wi 8 -i 2 -t 2 ".*NullCheckBench.*"

Benchmark                                         Mode   Samples         Mean   Mean error    Units
s.m.NullCheckBench.nullToString0p                thrpt         6   656137.726    38047.328   ops/ms
s.m.NullCheckBench.nullToString1p                thrpt         6   543812.134    18287.248   ops/ms
s.m.NullCheckBench.nullToString1pExtraStack      thrpt         6   541348.560    10447.750   ops/ms
s.m.NullCheckBench.nullToString10p               thrpt         6   214927.628    11711.270   ops/ms
s.m.NullCheckBench.nullToString50p               thrpt         6    55391.524     2224.665   ops/ms
s.m.NullCheckBench.nullToString100p              thrpt         6    31790.624     3407.311   ops/ms

s.m.NullCheckBench.requireNonNull0p              thrpt         6   667901.165    17983.635   ops/ms
s.m.NullCheckBench.requireNonNull1p              thrpt         6   116286.386     8363.924   ops/ms
s.m.NullCheckBench.requireNonNull1pExtraStack    thrpt         6   114705.760     3837.641   ops/ms
s.m.NullCheckBench.requireNonNull10p             thrpt         6    14871.953     1891.190   ops/ms
s.m.NullCheckBench.requireNonNull50p             thrpt         6     3027.838      211.838   ops/ms
s.m.NullCheckBench.requireNonNull100p            thrpt         6     1494.453      220.459   ops/ms

Explanation: ..0p = 0% null values in the test data, ..1p = 1%, and so forth. For the ..ExtraStack variants I added 3 layers of method calls to increase the stack depth, to rule out that the extra stack trace entry in the requireNonNull case was adding significant overhead.
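
A rough sketch of how such a JMH harness could look, including the null-percentage setup and the deeper-stack variant (class, field, and parameter names are assumptions; the attached benchmark itself is not reproduced here):

import java.util.Objects;
import java.util.Random;
import org.openjdk.jmh.annotations.*;

@State(Scope.Thread)
public class NullCheckBenchSketch {

    @Param({"0.0", "0.01", "0.1", "0.5", "1.0"})   // 0p, 1p, 10p, 50p, 100p
    public double nullFraction;

    String[] data;
    int i;

    @Setup
    public void setup() {
        Random r = new Random(42);
        data = new String[10_000];
        for (int j = 0; j < data.length; j++) {
            data[j] = r.nextDouble() < nullFraction ? null : "x";
        }
    }

    @Benchmark
    public String requireNonNull() {
        try {
            return Objects.requireNonNull(data[i = (i + 1) % data.length]);
        } catch (NullPointerException e) {
            return null;
        }
    }

    // Three trivial wrapper frames to deepen the stack, mirroring the
    // "...ExtraStack" variants described above.
    @Benchmark
    public String requireNonNullExtraStack() {
        return level1(data[i = (i + 1) % data.length]);
    }

    private String level1(String s) { return level2(s); }
    private String level2(String s) { return level3(s); }

    private String level3(String s) {
        try {
            return Objects.requireNonNull(s);
        } catch (NullPointerException e) {
            return null;
        }
    }
}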

[1] http://openjdk.java.net/projects/code-tools/jmh/

Comments
Was this issue perhaps fixed by the change made in https://bugs.openjdk.org/browse/JDK-8282143?
02-12-2022

Objects.requireNonNull (and its various alternatives, such as getClass) are used to ensure that an API which advertises an NPE will in fact throw one promptly when required. It is not the case that normally operating programs will ever generate an NPE from an API-mandated null check. Failure to throw a correct NPE must usually be determined by negative API testing. Thus, most programs will not be affected either way by the costs associated with throwing an NPE, stackless or not.

The most important cost relating to Objects.requireNonNull (and its alternatives) is the cost of doing nothing, which is to say the cost of successfully passing a null check. In properly optimized programs, this cost is often precisely zero, because nearly all object operations (header or field reads, or field writes) can be used to perform a null check in parallel, at no extra cost. This is called an "implicit null check"; see the ImplicitNullChecks flag in HotSpot. If a null is encountered, the operating system will deliver a trap (such as SIGBUS or SIGSEGV) to the JVM, and the JVM will decode it to determine that in fact a null pointer was encountered. This is rare, but such traps are expensive to receive, decode, and respond to. To reduce this cost, the JVM uses trap history and branch profiling to convert implicit null checks to explicit ones. An explicit null check costs a couple of cycles, a test and a branch, and it expands code size, so it is not the default. In the common case, the implicit null checks are appropriate. Imagine a sign printed on them that says "Break glass in case of emergency".

One other point should be remembered about implicit null checks: since their cost is zero (i.e., already a sunk hardware cost for another operation), there may be very small marginal costs associated with features attached to the null check, and unusual tactics may be appropriate for reducing those marginal costs if they apply to every single JVM-mandated null check. This is the main reason HotSpot uses preallocated null pointer exceptions with no backtrace. Creating a new exception with a backtrace requires a full JVM frame state at each point that might raise an NPE. Frame states are not free, because they tend to stretch value live ranges and can therefore cause spills.

What kind of program benefits from explicit null checks? They are rare, especially since the null being checked is usually a true error condition. But occasionally an unusual program will use NPE throws for control flow. More innocently, a program might expect nulls and branch to special null-handling code. If the C2 JIT notices that such branches are rare (one out of thousands or less), then it may try an implicit null check for the branch, and back off only if there are too many traps.

Since branch profiling and trap history are used to gate the implicit null check optimization, it is possible for profile pollution to cause programs which would benefit from implicit null checks to fail to use them. As the attached benchmark "NullCheckProf" demonstrates, this can happen when one caller of Objects.requireNonNull is feeding it nulls (for whatever perverse reason); in that case all other callers are penalized, since the profile seems to show that nulls are frequent enough to suppress the implicit null checks. In short, a program may suffer an irreversible, global performance tax if an unrelated module emits a spate of NPE exceptions via requireNonNull, at any time, for any reason at all. Fixing this problem is highly desirable.
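
The attached NullCheckProf benchmark is not reproduced here, but the pollution scenario it demonstrates can be sketched roughly as follows (all names are hypothetical; actual results depend on inlining and profiling details):

import java.util.Objects;

class ProfilePollutionSketch {

    // "Perverse" caller: feeds nulls into requireNonNull and swallows the
    // resulting NPEs, so the shared profile of Objects.requireNonNull
    // records a high null frequency.
    static int abusiveCaller(String maybeNull) {
        try {
            return Objects.requireNonNull(maybeNull).length();
        } catch (NullPointerException e) {
            return -1;
        }
    }

    // Well-behaved caller: never passes null, yet it can be penalized,
    // because the polluted profile suggests that nulls are common enough
    // to suppress the free implicit null check.
    static int wellBehavedCaller(String neverNull) {
        return Objects.requireNonNull(neverNull).length();
    }

    public static void main(String[] args) {
        for (int k = 0; k < 1_000_000; k++) {
            abusiveCaller((k % 2 == 0) ? null : "x");   // ~50% nulls
            wellBehavedCaller("x");                     // 0% nulls, still taxed
        }
    }
}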
Any fix to the quality of the profile for methods like requireNonNull may also help with other, far more important, cases of profile pollution, such as those routinely found in highly functional APIs like Java 8 Streams. Here are some ideas about reducing the problem of profile pollution: http://mail.openjdk.java.net/pipermail/core-libs-dev/2015-February/031708.html
24-02-2015

Thanks for taking a deep dive into this, Aleksey! I agree that this could be closed as not an issue, unless someone insists that Objects.requireNonNull should be intrinsified and made able to throw a stackless cached exception via VM flags (I would insist that the defaults be changed to disable this by default first, though).
21-02-2015

See the analysis here: http://cr.openjdk.java.net/~shade/8042127/NullCheckMix.java. I think the dominant cost is raising the actual exception. Both getClass and toString seem to cheat by using the stackless cached exception (see JDK-8073432). Therefore, I don't think there is a performance problem with Objects.requireNonNull.
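
As an illustration of the stackless cached exception (not part of the original analysis), a small program along these lines can show it once the hot path is compiled; the exact behavior depends on JIT thresholds and on -XX:+OmitStackTraceInFastThrow, which is on by default:

class StacklessNpeDemo {

    static int length(Object o) {
        return o.toString().length();   // null check folded into toString()
    }

    public static void main(String[] args) {
        int stackless = 0, withTrace = 0;
        for (int k = 0; k < 1_000_000; k++) {
            try {
                length(null);
            } catch (NullPointerException e) {
                if (e.getStackTrace().length == 0) stackless++; else withTrace++;
            }
        }
        // Early (interpreted) iterations throw NPEs with a backtrace; once the
        // method is compiled, the JVM typically switches to the preallocated,
        // stackless instance.
        System.out.println("stackless=" + stackless + " withTrace=" + withTrace);
    }
}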
19-02-2015