JDK-8133300 : Ensure symbol table immutability in Nashorn AST
  • Type: Enhancement
  • Component: core-libs
  • Sub-Component: jdk.nashorn
  • Priority: P4
  • Status: Resolved
  • Resolution: Fixed
  • OS: generic
  • CPU: generic
  • Submitted: 2015-08-11
  • Updated: 2016-01-14
  • Resolved: 2015-08-31
The Version table provides details related to the release in which this issue/RFE will be addressed.

Unresolved: Release in which this issue/RFE will be addressed.
Resolved: Release in which this issue/RFE has been resolved.
Fixed: Release in which this issue/RFE has been fixed. The release containing this fix may be available for download as an Early Access Release or a General Availability Release.

JDK 8: 8u72 (Fixed)
JDK 9: 9 b81 (Fixed)
Related Reports
Duplicate :  
Relates :  
Description
Symbol tables (and symbols themselves) are currently mutable in Nashorn; they are the last mutable piece of the AST. We should make them immutable. This would allow moving SYMBOL_ASSIGNMENT_PHASE and the related compilation phases into the COMPILE_UPTO_SERIALIZABLE phase group, before SERIALIZE_SPLIT_PHASE. That in turn would allow us to keep ASTs weakly reachable and reuse them in optimistic recompilations, as only OPTIMISTIC_TYPE_ASSIGNMENT_PHASE and LOCAL_VARIABLE_TYPE_CALCULATION_PHASE would need to be re-run. That could cut compilation cost significantly, since we would need to reparse much less frequently while doing deoptimizing recompilation.
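For illustration, a minimal sketch of the caching idea described above: the AST is held through a SoftReference once the serializable phase group has run, so a deoptimizing recompilation can reuse it and only falls back to a full reparse when the reference has been cleared. The AstCache class, its method names, and the Supplier-based reparse hook are hypothetical stand-ins, not Nashorn's actual API.

import java.lang.ref.SoftReference;
import java.util.function.Supplier;

// Illustrative stand-in for the caching idea; the type parameter A plays the
// role of an immutable FunctionNode that already passed COMPILE_UPTO_SERIALIZABLE.
final class AstCache<A> {
    private final Supplier<A> reparse;      // slow path: parse + eager phases
    private SoftReference<A> cachedAst;     // cleared by the GC under memory pressure

    AstCache(final Supplier<A> reparse) {
        this.reparse = reparse;
    }

    // On a deoptimizing recompilation, reuse the cached AST if it is still
    // softly reachable; only OPTIMISTIC_TYPE_ASSIGNMENT_PHASE and
    // LOCAL_VARIABLE_TYPE_CALCULATION_PHASE then need to be re-run on it.
    synchronized A getAstForRecompilation() {
        A ast = cachedAst == null ? null : cachedAst.get();
        if (ast == null) {
            ast = reparse.get();            // reference was cleared: reparse once
            cachedAst = new SoftReference<>(ast);
        }
        return ast;
    }
}

The point of making symbol tables immutable is precisely that the cached node can be handed to multiple recompilations without cloning or re-running symbol assignment.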
Comments
Implementation notes for reviewers:
- Logic in AssignSymbols.leaveBlock() was moved into its own compiler phase, DECLARE_LOCAL_SYMBOLS_TO_COMPILER. This is needed because AssignSymbols no longer has to be re-run for cached ASTs, but that logic actually altered compiler state, so it still needs to be re-run for cached ASTs; hence it was moved into a separate phase that runs for cached ASTs too.
- The ".accept(this)" call on various newly created literal nodes was removed from AssignSymbols, as it resulted in unwanted duplicate invocations without serving any useful purpose.
- We don't cache the AST from the eager pre-pass, except for split functions; see the previous JIRA comment.
- FindScopeDepth.java caches ASTs when they are compiled on demand, except for those that underwent apply-to-call specialization, which are as of this moment uncacheable.
- AstSerializer.java no longer needs to remove nested function bodies; this responsibility now lies with CacheAst.java.
- RecompilableScriptFunctionData.java probably has the most changes:
  - It now hosts an ExecutorService for asynchronously serializing the ASTs of split functions.
  - Instead of "byte[] serializedAst" it now has "Object cachedAst", which can hold several types: either null, or a SoftReference to a FunctionNode, or an instance of a new inner class SerializedAst, which is really a tuple of byte[] and SoftReference (so that even with serialized ASTs we still get the benefit of a softly-cached AST and only resort to deserialization when needed).
  - Note that setCachedAst() will initially set a SoftReference into the cachedAst field even for split functions. It will also submit a task to the executor to eventually serialize the AST. Since the task strongly references the FunctionNode, the soft reference is prevented from being prematurely cleared.
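As a rough sketch of the cachedAst handling described in these notes (not the actual RecompilableScriptFunctionData code; the Serializer/Deserializer interfaces and the single-threaded executor are simplifying assumptions introduced here for illustration):

import java.lang.ref.SoftReference;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Simplified model of the cachedAst field: it is either null, a SoftReference
// to the AST, or a SerializedAst pairing the serialized bytes with a soft
// reference to the in-memory form.
final class CachedAstHolder {
    interface Serializer   { byte[] serialize(Object ast); }
    interface Deserializer { Object deserialize(byte[] bytes); }

    static final class SerializedAst {
        final byte[] serializedAst;
        final SoftReference<Object> cachedAst;   // soft cache on top of the serialized form
        SerializedAst(final byte[] bytes, final SoftReference<Object> ref) {
            this.serializedAst = bytes;
            this.cachedAst = ref;
        }
    }

    private static final ExecutorService SERIALIZER = Executors.newSingleThreadExecutor();

    private volatile Object cachedAst;           // null | SoftReference | SerializedAst

    void setCachedAst(final Object ast, final boolean isSplit, final Serializer serializer) {
        final SoftReference<Object> ref = new SoftReference<>(ast);
        cachedAst = ref;                         // initially a soft reference, even for split functions
        if (isSplit) {
            // The task strongly references "ast", which keeps the soft reference
            // from being cleared before serialization completes.
            SERIALIZER.execute(() -> cachedAst = new SerializedAst(serializer.serialize(ast), ref));
        }
    }

    Object getCachedAst(final Deserializer deserializer) {
        final Object c = cachedAst;
        if (c instanceof SoftReference) {
            return ((SoftReference<?>) c).get();             // may be null if the GC cleared it
        }
        if (c instanceof SerializedAst) {
            final SerializedAst s = (SerializedAst) c;
            final Object ast = s.cachedAst.get();
            return ast != null ? ast : deserializer.deserialize(s.serializedAst);
        }
        return null;                                         // nothing cached yet
    }
}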
26-08-2015

One surprising hurdle: we can't cache and reuse the AST for functions that undergo apply-to-call specialization. It seems the current apply-to-call logic would need to be rewritten so that all non-cacheable functions can be discovered in the eager pre-process pass. That points somewhat beyond the scope of this issue, so for now I'll disable caching of the eager pre-pass AST, except for split functions (which are serialized regardless, so that will be their only representation). I separately raised JDK-8134292 to enable caching of pre-parsed non-split ASTs in the future.
24-08-2015

I can measure the following with mandreel.js (in both cases, also with JDK-8133785 already implemented):
- reduction of 3 seconds on load time
- overall 16 seconds reduction of run time for 2 iterations; of those:
  - 5 seconds saved by making AST serialization asynchronous (but 1.3 lost with AST caching, mostly because of symbol cloning), and
  - 8 seconds saved by avoiding AST deserialization while the AST is still softly reachable
  - 1 second saved by moving ASSIGN_SYMBOLS into the pre-cached compilation phases (so it's not repeated)

Detailed before:

[mandreel] finished loading 'mandreel' [mandreel.js]... /Users/attila/Documents/projects/jdk9/nashorn/test/script/basic/../external/octane/mandreel.js in 16861 ms
[mandreel] running 'mandreel' for 2 iterations of no less than 5 seconds
[mandreel] warmup finished 2 ops/minute
[mandreel] iteration 1 finished 3 ops/minute
[mandreel] iteration 2 finished 7 ops/minute
[mandreel] 5 ops/minute (3-7), warmup=2
[time] Accumulated compilation phase timings:
[time]
[time] 'JavaScript Parsing' 1457 ms
[time] 'Constant Folding' 160 ms
[time] 'Control Flow Lowering' 341 ms
[time] 'Builtin Replacement' 118 ms
[time] 'Code Splitting' 899 ms
[time] 'Program Point Calculation' 278 ms
[time] 'Serialize Split Functions' 4886 ms
[time] 'Symbol Assignment' 1736 ms
[time] 'Scope Depth Computation' 432 ms
[time] 'Optimistic Type Assignment' 218 ms
[time] 'Local Variable Type Calculation' 1115 ms
[time] 'Bytecode Generation' 6812 ms
[time] 'Class Installation' 4402 ms
[time] 'Reuse Compile Units' 95 ms
[time] 'Deserialize' 8182 ms
[time]
[time] Total runtime: 91356 ms (Non-runtime: 31140 ms [34%])
[time]
[time] Emitted compile units: 729

after:

[mandreel] finished loading 'mandreel' [mandreel.js]... /Users/attila/Documents/projects/jdk9/nashorn/test/script/basic/../external/octane/mandreel.js in 13607 ms
[mandreel] running 'mandreel' for 2 iterations of no less than 5 seconds
[mandreel] warmup finished 3 ops/minute
[mandreel] iteration 1 finished 3 ops/minute
[mandreel] iteration 2 finished 6 ops/minute
[mandreel] 5 ops/minute (3-6), warmup=3
[time] Accumulated compilation phase timings:
[time]
[time] 'JavaScript Parsing' 1419 ms
[time] 'Constant Folding' 123 ms
[time] 'Control Flow Lowering' 365 ms
[time] 'Builtin Replacement' 321 ms
[time] 'Code Splitting' 914 ms
[time] 'Program Point Calculation' 218 ms
[time] 'Symbol Assignment' 657 ms
[time] 'Scope Depth Computation' 408 ms
[time] 'Cache ASTs' 1279 ms
[time] 'Local Symbols Declaration' 112 ms
[time] 'Optimistic Type Assignment' 206 ms
[time] 'Local Variable Type Calculation' 1394 ms
[time] 'Bytecode Generation' 7785 ms
[time] 'Class Installation' 4027 ms
[time] 'Reinitialize cached' 111 ms
[time] 'Reuse Compile Units' 32 ms
[time]
[time] Total runtime: 75238 ms (Non-runtime: 19378 ms [25%])
[time]
[time] Emitted compile units: 729
21-08-2015