libjava imports the symbol sysThreadAvailableStackWithSlack from libjvm.
This symbol should be a private symbol of libjvm. If this information is
needed, an interface in jvm.h should be created for it.
This bug affects the symbols exported by libjvm.
Based on Tim Lindholm's summary of a discussion,
JVM_AvailableThreadStack should be implemented in hotspot on all
platforms (at least trivially). Probably three steps:
1. Add the new symbol to the VMs.
2. Change references in the libraries from the old symbol to the new one (via jvm.h).
3. Remove the old symbol from hotspot/solaris.
We will add JVM_AvailableThreadStack to the jvm.h interface, but we prefer to change the jvm.h interface for all platforms at the same time, since jvm.h has significant licensee impact.
Since Kestrel win32 has already reached FCS, this will not be possible for Kestrel.
We will fix this in Ladybird and, in the meantime, treat the export of sysThreadAvailableStackWithSlack from libjvm.so as a "bug".
This question relates to the JDK Ladybird merge: EVM had a sysThreadAvailableStackWithSlack (later renamed to JVM_AvailableThreadStack) entry point, which was used in three places.
All three stack-allocate a small buffer if stack space is available, and otherwise malloc a larger one.
There are similar pieces of code in zip_util.c and ZipFile.c. For the zip cases, the reference JDK always stack-allocates a 4KB buffer, while the production JDK stack-allocates up to 1KB and otherwise mallocs. For the io case, the reference JDK always stack-allocates an 8KB buffer. The production JDK used to do the same, then changed to 2KB, and then to the current 1KB scheme described above.
Was this change done purely to circumvent a potential stack overflow in native code? Or was there a performance improvement from malloc'ing a larger buffer?
We don't want to put JVM_AvailableThreadStack into jvm.h if it isn't really needed. If it goes into jvm.h, we (and all licensees) will have to implement it and keep it for the foreseeable future. For Merlin the new IO package doesn't need it, so this would mostly be a Ladybird-relevant change.
In HotSpot the default stack size on Sparc is 512KB, while it was 128KB for EVM. As you may recall, a larger stack size is needed to run the new javac (GJC). HotSpot's frame size is a bit bigger than EVM's (not for C1 but for C2), but we should still be able to fit at least 2X the number of frames on the stack (unless, of course, people specify -Xss on the command line).
Anyway, to answer your question, I don't recall the details either, but it appears this was to avoid stack overflow in native code. If my recollection is right, this would not appear in EVM as a java.lang.StackOverflowError (which it can be argued it should); rather, the application would die with a SEGV_ACCERR crash.
No, there was no performance improvement. Some had suggested there might be a performance decline from having to malloc, which is single-threaded; measurements on a set of socket-heavy benchmarks didn't reveal any performance degradation on account of malloc-ing, so the change was considered acceptable. This was merely to avoid a stack overflow in native code, which would "crash" EVM (and would crash HotSpot too, as far as I can tell).
Looking more closely at the problem, sysThreadAvailableStackWithSlack was not added as a performance improvement, but rather as an attempt to prevent stack overflow in native code. Stack overflow in native code is a hard problem that is not dealt with in general (in Classic, EVM, or HotSpot); handling it would require excessive stack banging on native method entry, which would have a performance impact. The sysThreadAvailableStackWithSlack workaround only helps when the application is running barely under the stack limit. On EVM the default stack size was 128KB, while on HotSpot it is 512KB, so removing this check should be safe. See the discussion above. Closing as "will not fix". The symbol sysThreadAvailableStackWithSlack will be removed from os_solaris.cpp, but we will leave it in place for a while so that the 1.2.2 Solaris production libraries can still run.