JDK-8166848 : Performance bug: SystemDictionary - optimization
  • Type: Enhancement
  • Component: hotspot
  • Sub-Component: runtime
  • Affected Version: 9
  • Priority: P3
  • Status: Resolved
  • Resolution: Fixed
  • Submitted: 2016-09-28
  • Updated: 2017-08-25
  • Resolved: 2017-05-18
The Version table provides details related to the release in which this issue/RFE will be addressed.

Unresolved: Release in which this issue/RFE will be addressed.
Resolved: Release in which this issue/RFE has been resolved.
Fixed: Release in which this issue/RFE has been fixed. The release containing this fix may be available for download as an Early Access Release or a General Availability Release.

JDK 10
10 b21    Fixed
Description
We need to research and possibly implement an optimization for the "Performance bug: SystemDictionary" warning.

This is about an "unbalanced" hash table, where certain entries get stuck at the end of a bucket's linked list even though they have a much higher lookup count than the other entries in the list.

One way to fix this would be to make sure the high-lookup entries remain at the head of the list, either by adding new entries to the tail, or by marking the special entries with a bit in their memory pointer and checking that bit when inserting a new element; if the bit is set, the new element is added after the special one.
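One simple realization of "keep high-lookup entries near the head" is move-to-front on a successful lookup. The sketch below uses a hypothetical bucket node type, not HotSpot's actual DictionaryEntry layout, and omits all locking:

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical singly linked bucket node; not HotSpot's actual
// DictionaryEntry layout.
struct Node {
  int key;
  int lookup_count;
  Node* next;
};

// On a hit, bump the entry's counter and splice it to the head of the
// bucket so that future probes find it first.
Node* lookup_move_to_front(Node** bucket, int key) {
  Node* prev = nullptr;
  for (Node* cur = *bucket; cur != nullptr; prev = cur, cur = cur->next) {
    if (cur->key == key) {
      cur->lookup_count++;
      if (prev != nullptr) {   // unlink cur and splice it to the head
        prev->next = cur->next;
        cur->next = *bucket;
        *bucket = cur;
      }
      return cur;
    }
  }
  return nullptr;             // not found
}
```

A real SystemDictionary change would also need to consider concurrent readers, which is why the pointer-tagging variant described above (which never reorders existing nodes) may be safer.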

We should also add the "verify_lookup_length" warning to all tables and enable it for product builds.

Lastly, as part of this feature we should determine how to test and validate this performance issue and any solution. Currently the performance warning can pop up during any debug test and possibly interfere with it; a dedicated test would be nice.
Comments
So I really don't know how this check accomplishes anything meaningful. If the hashtable is mostly empty but has a lot of lookups for items that aren't there, you'll get this warning even though no bucket is too large. I think a better heuristic is to keep a running bucket list count and check that it is some percentage greater than the average, as we do for the StringTable:

template <class T, MEMFLAGS F>
bool RehashableHashtable<T, F>::check_rehash_table(int count) {
  assert(this->table_size() != 0, "underflow");
  if (count > (((double)this->number_of_entries() / (double)this->table_size()) * rehash_multiple)) {
    // Set a flag for the next safepoint, which should be at some guaranteed
    // safepoint interval.
    return true;
  }
  return false;
}

SystemDictionary is a bit unusual because an entry is looked up multiple times before it is eventually added, which increases the lookup_count. Generally this warning indicates a problem with the hashing function, but the hash for SystemDictionary is essentially os::random for the class name Symbol ^ os::random for the class_loader object.
11-05-2017
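The check_rehash_table heuristic quoted in the comment above can be sketched as a standalone function. This is a simplified illustration, not the HotSpot class itself, and it assumes a rehash_multiple of 60 (the value used by HotSpot's RehashableHashtable at the time):

```cpp
#include <cassert>

// Assumed constant; HotSpot's RehashableHashtable used 60.
const int rehash_multiple = 60;

// Request a rehash when one bucket's chain length (count) exceeds
// rehash_multiple times the table's average load, where the average
// load is number_of_entries / table_size.
bool check_rehash_table(int count, int number_of_entries, int table_size) {
  assert(table_size != 0 && "underflow");
  double average_load = (double)number_of_entries / (double)table_size;
  // A chain far longer than the average suggests a bad hash function;
  // in HotSpot this sets a flag so the table is rehashed at the next
  // safepoint.
  return count > average_load * rehash_multiple;
}
```

Note that this triggers only on a genuinely long chain, so a mostly empty table with many failed lookups does not fire it, which is the improvement over verify_lookup_length argued above.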

Most of the tables don't get this warning because they don't set _lookup_length in the lookup function. Here, load is #entries / table_size:

template <MEMFLAGS F>
bool BasicHashtable<F>::verify_lookup_length(double load, const char* table_name) {
  if ((!_lookup_warning) && (_lookup_count != 0) &&
      ((double)_lookup_length / (double)_lookup_count > load * 2.0)) {
    warning("Performance bug: %s lookup_count=%d "
            "lookup_length=%d average=%lf load=%f",
            table_name, _lookup_count, _lookup_length,
            (double)_lookup_length / _lookup_count, load);
    _lookup_warning = true;
    return false;
  }
  return true;
}
09-05-2017
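The condition inside verify_lookup_length can be isolated into a small standalone predicate. This is an illustrative sketch of the check quoted above, with hypothetical parameter passing in place of the class's _lookup_count/_lookup_length fields:

```cpp
// Returns true while the table is within its probe-length budget:
// the average probe length per lookup (lookup_length / lookup_count)
// must not exceed twice the load factor (number_of_entries / table_size).
bool lookup_length_ok(int lookup_count, int lookup_length, double load) {
  if (lookup_count == 0) {
    return true;                       // nothing measured yet
  }
  double average = (double)lookup_length / (double)lookup_count;
  return average <= load * 2.0;        // false would trigger the warning
}
```

For example, with load 1.0 the warning fires once lookups walk more than two entries on average, which is exactly the situation the comment above criticizes: many misses against a sparse table can push the average up without any single bucket being long.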