JDK-8312174 : missing JVMTI events from vthreads parked during JVMTI attach
  • Type: Bug
  • Component: hotspot
  • Sub-Component: jvmti
  • Affected Version: 21
  • Priority: P4
  • Status: Resolved
  • Resolution: Fixed
  • OS: generic
  • CPU: generic
  • Submitted: 2023-07-17
  • Updated: 2024-02-02
  • Resolved: 2023-09-12
Fix Versions
  • JDK 21: 21.0.2 (Fixed)
  • JDK 22: 22 b15 (Fixed)
Description
ADDITIONAL SYSTEM INFORMATION :
Ubuntu 22.10 x86_64

openjdk version "22-ea" 2024-03-19
OpenJDK Runtime Environment (build 22-ea+6-393)
OpenJDK 64-Bit Server VM (build 22-ea+6-393, mixed mode, sharing)

openjdk version "21-ea" 2023-09-19
OpenJDK Runtime Environment (build 21-ea+31-2444)
OpenJDK 64-Bit Server VM (build 21-ea+31-2444, mixed mode, sharing)

commit 81c4e8f916a04582698907291b6505d4484cf9c2
from https://github.com/openjdk/jdk.git

A DESCRIPTION OF THE PROBLEM :
VirtualThreadEnd events are not posted for virtual threads that were parked while an agent was loaded into a running JVM. This also applies to the mount/unmount extension events.

These events are posted for virtual threads that were mounted during attach. In the builds mentioned above, events for mounted vthreads were incomplete, but with the fix for JDK-8311556 all events seem to be posted for vthreads that were mounted during attach.

STEPS TO FOLLOW TO REPRODUCE THE PROBLEM :
With the attached sample and JAVA_HOME set to a JDK 21 or 22: 

g++ -std=c++11 -shared -I$JAVA_HOME/include -I$JAVA_HOME/include/linux -fPIC VThreadEventTest.cpp -o libVThreadEventTest.so

$JAVA_HOME/bin/javac VThreadEventTest.java

LD_LIBRARY_PATH=. $JAVA_HOME/bin/java -Djdk.attach.allowAttachSelf=true -XX:+EnableDynamicAgentLoading VThreadEventTest

EXPECTED VERSUS ACTUAL BEHAVIOR :
EXPECTED -
Process exits with 0. 
ACTUAL -
Process exits with 1 and prints  

end: 4 (exp: 17), unmount: 7 (exp: 20), mount: 0 (exp: 13)
unexpected count

for the builds mentioned above and 

end: 7 (exp: 17), unmount: 10 (exp: 20), mount: 3 (exp: 13)
unexpected count

with a build from the repo at the commit mentioned above.

---------- BEGIN SOURCE ----------
-- VThreadEventTest.java ----------------------------------------------------------- 
import com.sun.tools.attach.VirtualMachine;

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.locks.LockSupport;

public class VThreadEventTest {
    private static native int getEndCount();
    private static native int getMountCount();
    private static native int getUnmountCount();

    private static volatile boolean attached;

    public static void main(String[] args) throws Exception {
        if (Runtime.getRuntime().availableProcessors() < 8) {
            System.out.println("WARNING: test expects at least 8 processors.");
        }
        try (ExecutorService executorService = Executors.newVirtualThreadPerTaskExecutor()) {
            // 10 vthreads that park for ~7s: parked while the agent attaches,
            // then remounted when the park expires.
            for (int threadCount = 0; threadCount < 10; threadCount++) {
                executorService.execute(() -> {
                    LockSupport.parkNanos(1_000_000L * 7_000);
                });
            }
            // 4 vthreads that stay mounted (busy-spinning) across the attach.
            for (int threadCount = 0; threadCount < 4; threadCount++) {
                executorService.execute(() -> {
                    while (!attached) {
                        // keep mounted
                    }
                });
            }
            // 3 vthreads mounted across the attach that afterwards park briefly
            // (unmount) and then remount before terminating.
            for (int threadCount = 0; threadCount < 3; threadCount++) {
                executorService.execute(() -> {
                    while (!attached) {
                        // keep mounted
                    }
                    LockSupport.parkNanos(1_000_000L * 100);
                });
            }
            Thread.sleep(2_000);
            VirtualMachine vm = VirtualMachine.attach(String.valueOf(ProcessHandle.current().pid()));
            vm.loadAgentLibrary("VThreadEventTest");
            Thread.sleep(500);
            attached = true;
        }
        int endCount = getEndCount();
        int unmountCount = getUnmountCount();
        int mountCount = getMountCount();
        int endExpected = 10 + 4 + 3;          // every vthread posts one VirtualThreadEnd after attach
        int unmountExpected = 10 + 4 + 3 * 2;  // the last group unmounts twice: at park and at exit
        int mountExpected = 10 + 3;            // only vthreads that park after attach remount
        System.out.println("end: " + endCount + " (exp: " + endExpected + "), unmount: " + unmountCount +
                " (exp: " + unmountExpected + "), mount: " + mountCount + " (exp: " + mountExpected + ")");
        if (endCount != endExpected || unmountCount != unmountExpected || mountCount != mountExpected) {
            System.out.println("unexpected count");
            System.exit(1);
        }
    }
}
------------------------------------------------------------------------------------
-- VThreadEventTest.cpp ------------------------------------------------------------
#include <jvmti.h>
#include <cstdio>   // printf
#include <cstdlib>  // abort
#include <cstring>
#include <mutex>

#ifdef _WIN32
#define VARIADICJNI __cdecl
#else
#define VARIADICJNI JNICALL
#endif

namespace {
    jvmtiEnv *jvmti = nullptr;
    std::mutex lock;
    int endCount = 0;
    int unmountCount = 0;
    int mountCount = 0;

    void checkJvmti(int code, const char* message) {
        if (code != JVMTI_ERROR_NONE) {
            printf("Error %s: %d\n", message, code);
            abort();
        }
    }

    void JNICALL vthreadEnd(jvmtiEnv *jvmti_env, JNIEnv* jni_env, jthread virtual_thread) {
        std::lock_guard<std::mutex> lockGuard(lock);
        endCount++;
    }

    void VARIADICJNI vthreadUnmount(jvmtiEnv* jvmti_env, ...) {
        std::lock_guard<std::mutex> lockGuard(lock);
        unmountCount++;
    }

    void VARIADICJNI vthreadMount(jvmtiEnv* jvmti_env, ...) {
        std::lock_guard<std::mutex> lockGuard(lock);
        mountCount++;
    }
}

extern "C" JNIEXPORT jint JNICALL Java_VThreadEventTest_getEndCount(JNIEnv* jni_env, jclass clazz) {
    std::lock_guard<std::mutex> lockGuard(lock);
    return endCount;
}

extern "C" JNIEXPORT jint JNICALL Java_VThreadEventTest_getMountCount(JNIEnv* jni_env, jclass clazz) {
    std::lock_guard<std::mutex> lockGuard(lock);
    return mountCount;
}

extern "C" JNIEXPORT jint JNICALL Java_VThreadEventTest_getUnmountCount(JNIEnv* jni_env, jclass clazz) {
    std::lock_guard<std::mutex> lockGuard(lock);
    return unmountCount;
}

extern "C" JNIEXPORT jint JNICALL Agent_OnAttach(JavaVM *vm, char *options, void *reserved) {
    printf("attached\n");
    if (vm->GetEnv(reinterpret_cast<void **>(&jvmti), JVMTI_VERSION) != JNI_OK || !jvmti) {
        printf("Could not initialize JVMTI\n");
        abort();
    }
    jvmtiCapabilities capabilities;
    memset(&capabilities, 0, sizeof(capabilities));
    capabilities.can_support_virtual_threads = 1;
    checkJvmti(jvmti->AddCapabilities(&capabilities), "adding capabilities");

    jvmtiEventCallbacks callbacks;
    memset(&callbacks, 0, sizeof(callbacks));
    callbacks.VirtualThreadEnd = &vthreadEnd;
    checkJvmti(jvmti->SetEventCallbacks(&callbacks, (jint)sizeof(callbacks)), "setting callbacks");
    checkJvmti(jvmti->SetEventNotificationMode(JVMTI_ENABLE, JVMTI_EVENT_VIRTUAL_THREAD_END, nullptr), "enabling vthread end event");

    // Look up the mount/unmount extension events by id.
    jint extensionCount = 0;
    jvmtiExtensionEventInfo* extensions;
    checkJvmti(jvmti->GetExtensionEvents(&extensionCount, &extensions), "getting extension events");
    jint unmountIndex = -1;
    jint mountIndex = -1;
    for (int exIndex = 0; exIndex < extensionCount; exIndex++) {
        jvmtiExtensionEventInfo &eventInfo = extensions[exIndex];
        if (strcmp(eventInfo.id, "com.sun.hotspot.events.VirtualThreadUnmount") == 0) {
            unmountIndex = eventInfo.extension_event_index;
        } else if (strcmp(eventInfo.id, "com.sun.hotspot.events.VirtualThreadMount") == 0) {
            mountIndex = eventInfo.extension_event_index;
        }
    }
    if (unmountIndex == -1 || mountIndex == -1) {
        printf("extension events not found.");
        abort();
    }
    checkJvmti(jvmti->SetExtensionEventCallback(unmountIndex, vthreadUnmount), "setting extension callback");
    checkJvmti(jvmti->SetEventNotificationMode(JVMTI_ENABLE, static_cast<jvmtiEvent>(unmountIndex), nullptr), "enabling extension event");
    checkJvmti(jvmti->SetExtensionEventCallback(mountIndex, vthreadMount), "setting extension callback");
    checkJvmti(jvmti->SetEventNotificationMode(JVMTI_ENABLE, static_cast<jvmtiEvent>(mountIndex), nullptr), "enabling extension event");

    printf("vthread events enabled\n");
    return JVMTI_ERROR_NONE;
}
------------------------------------------------------------------------------------

---------- END SOURCE ----------

FREQUENCY : always



Comments
Okay, thanks.
22-11-2023

So far we have only observed the problem when an application attaches native threads. JDK-8319935 also targets the attached-native-thread case. In my opinion, even after the proposed fix in https://github.com/openjdk/jdk/pull/16642, JvmtiThreadState::state_for_while_locked() can still create multiple JvmtiThreadState instances pointing to the same platform thread. However, I have not seen it happen in practice.
22-11-2023

Also, thank you for filing the follow-up bug JDK-8319935. I'm trying to catch up now. Is this issue observable only for attached native threads, or is it more general?
22-11-2023

No worries. Yes, it has already been backported to 21u.
22-11-2023

> ~sspitsyn, any concern for backporting this to 21u?

[~manc] Sorry for being late. It seems you have already backported this to 21u, right? Thank you a lot for this! Otherwise, I would have had to do it myself.
22-11-2023

Filed JDK-8319935.
11-11-2023

I looked further into this today. David, as you commented above, the 'fix' is indeed a lucky coincidence. The two-JvmtiThreadStates-for-one-JavaThread issue that we have observed happens particularly for attached native threads. The first JvmtiThreadState is created while a thread is being attached, when allocation of the Java instance of the Thread triggers JvmtiEventCollector::setup_jvmti_thread_state. The created JvmtiThreadState has a null _thread_oop_h.

  * frame #0: 0x00007f88791948aa libjvm.so`JvmtiThreadState::JvmtiThreadState(JavaThread*, oopDesc*) [inlined] OopHandle::OopHandle(this=0x00005602b76eb810) at oopHandle.hpp:46:17
    frame #1: 0x00007f88791948aa libjvm.so`JvmtiThreadState::JvmtiThreadState(this=0x00005602b76eb800, thread=0x00005602b7769c10, thread_oop=0x0000000000000000) at jvmtiThreadState.cpp:56:19
    frame #2: 0x00007f8879176d83 libjvm.so`JvmtiEventCollector::setup_jvmti_thread_state() [inlined] JvmtiThreadState::state_for_while_locked(thread=<unavailable>, thread_oop=<unavailable>) at jvmtiThreadState.inline.hpp:98:19
    frame #3: 0x00007f8879176cec libjvm.so`JvmtiEventCollector::setup_jvmti_thread_state() at jvmtiThreadState.inline.hpp:111:13
    frame #4: 0x00007f8879176cc4 libjvm.so`JvmtiEventCollector::setup_jvmti_thread_state(this=0x00007ffcbf7bee98) at jvmtiExport.cpp:2953:29
    frame #5: 0x00007f88791773ee libjvm.so`JvmtiSampledObjectAllocEventCollector::start(this=0x00007ffcbf7bee98) at jvmtiExport.cpp:3146:5
    frame #6: 0x00007f887928a549 libjvm.so`MemAllocator::Allocation::notify_allocation_jvmti_sampler() [inlined] JvmtiSampledObjectAllocEventCollector::JvmtiSampledObjectAllocEventCollector(this=0x00007ffcbf7bee98, should_start=true) at jvmtiExport.hpp:560:5
    frame #7: 0x00007f887928a52b libjvm.so`MemAllocator::Allocation::notify_allocation_jvmti_sampler(this=0x00007ffcbf7bef40) at memAllocator.cpp:185:43
    frame #8: 0x00007f887928a94a libjvm.so`MemAllocator::Allocation::notify_allocation(this=<unavailable>, thread=<unavailable>) at memAllocator.cpp:235:3 [artificial]
    frame #9: 0x00007f887928aec9 libjvm.so`MemAllocator::allocate() const [inlined] MemAllocator::Allocation::~Allocation(this=0x00007ffcbf7bef40) at memAllocator.cpp:85:7
    frame #10: 0x00007f887928aeb3 libjvm.so`MemAllocator::allocate(this=0x00007ffcbf7bef90) const at memAllocator.cpp:375:3
    frame #11: 0x00007f8878f2f221 libjvm.so`InstanceKlass::allocate_instance_handle(JavaThread*) [inlined] CollectedHeap::obj_allocate(this=<unavailable>, klass=<unavailable>, size=<unavailable>, __the_thread__=0x00005602b7769c10) at collectedHeap.inline.hpp:36:20
    frame #12: 0x00007f8878f2f1fd libjvm.so`InstanceKlass::allocate_instance_handle(JavaThread*) [inlined] InstanceKlass::allocate_instance(this=<unavailable>, __the_thread__=0x00005602b7769c10) at instanceKlass.cpp:1442:38
    frame #13: 0x00007f8878f2f1ee libjvm.so`InstanceKlass::allocate_instance_handle(this=<unavailable>, __the_thread__=<unavailable>) at instanceKlass.cpp:1462:33
    frame #14: 0x00007f8878f6bff9 libjvm.so`JavaThread::allocate_threadObj(this=0x00005602b7769c10, thread_group=Handle @ r13, thread_name=<unavailable>, daemon=false, __the_thread__=<unavailable>) at javaThread.cpp:222:35
    frame #15: 0x00007f8879000020 libjvm.so`attach_current_thread(vm=<unavailable>, penv=0x00007ffcbf7bf1c0, _args=<unavailable>, daemon=<unavailable>) at jni.cpp:3819:13
    ...
The second JvmtiThreadState for the same JavaThread (the attaching native thread) is created during JvmtiExport::post_thread_start:

  * frame #0: 0x00007f88791948aa libjvm.so`JvmtiThreadState::JvmtiThreadState(JavaThread*, oopDesc*) [inlined] OopHandle::OopHandle(this=0x00005602b76eb910) at oopHandle.hpp:46:17
    frame #1: 0x00007f88791948aa libjvm.so`JvmtiThreadState::JvmtiThreadState(this=0x00005602b76eb900, thread=0x00005602b7769c10, thread_oop=0x0000000116000000) at jvmtiThreadState.cpp:56:19
    frame #2: 0x00007f8879166936 libjvm.so`JvmtiEventControllerPrivate::thread_started(JavaThread*) [inlined] JvmtiThreadState::state_for_while_locked(thread=0x00005602b7769c10, thread_oop=<unavailable>) at jvmtiThreadState.inline.hpp:98:19
    frame #3: 0x00007f88791668b3 libjvm.so`JvmtiEventControllerPrivate::thread_started(thread=0x00005602b7769c10) at jvmtiEventController.cpp:744:31
    frame #4: 0x00007f887916c1f4 libjvm.so`JvmtiExport::post_thread_start(thread=0x00005602b7769c10) at jvmtiExport.cpp:1476:3
    frame #5: 0x00007f88790000af libjvm.so`attach_current_thread(vm=<unavailable>, penv=0x00007ffcbf7bf1c0, _args=<unavailable>, daemon=<unavailable>) at jni.cpp:3849:5
    ...

Before the https://git.openjdk.org/jdk/commit/fda142ff6cfefa12ec1ea4d4eb48b3c1b285bc04 change, JvmtiEventControllerPrivate::thread_started called JvmtiThreadState::state_for_while_locked directly, which does find the already-created JvmtiThreadState on the JavaThread. However, the 'if (state == nullptr || state->get_thread_oop() != thread_oop)' check fails, because the thread_oop in that state is null and therefore not the same as the actual thread_oop. Since the check fails, a new JvmtiThreadState is created for the JavaThread. The commit changed JvmtiEventControllerPrivate::thread_started to call JvmtiThreadState::state_for(thread) instead, which gets the JvmtiThreadState from the JavaThread and just returns it, so no additional state is created there. So the non-1:1 mapping between JvmtiThreadState and JavaThread turns out to be relatively simple. Although the commit resolved the issue by luck, I think we should add further reinforcement. I'll create a follow-up bug.
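To make the pre-fix control flow concrete, here is a minimal, self-contained C++ sketch reconstructed from the traces and the quoted check. The stand-in types (oop as void*, the simplified constructor and lookups) are illustrative assumptions, not the actual HotSpot source:

-- sketch (not HotSpot source) -----------------------------------------------------
// Stand-in types approximating the HotSpot classes involved; assumptions only.
using oop = void*;  // illustrative substitute for HotSpot's oop

struct JvmtiThreadState;

struct JavaThread {
    JvmtiThreadState* _jvmti_thread_state = nullptr;
    JvmtiThreadState* jvmti_thread_state() { return _jvmti_thread_state; }
};

struct JvmtiThreadState {
    JavaThread* _thread;
    oop _thread_oop;  // null when created during java.lang.Thread allocation
    JvmtiThreadState(JavaThread* t, oop o) : _thread(t), _thread_oop(o) {
        t->_jvmti_thread_state = this;  // older duplicates are forgotten by t
    }
    oop get_thread_oop() const { return _thread_oop; }
};

// Pre-fix lookup, reconstructed from the check quoted above. The state created
// on the allocation path has a null thread oop, so when thread_started() later
// passes the real oop, the check fails and a second state is created for the
// same JavaThread.
JvmtiThreadState* state_for_while_locked(JavaThread* thread, oop thread_oop) {
    JvmtiThreadState* state = thread->jvmti_thread_state();
    if (state == nullptr || state->get_thread_oop() != thread_oop) {
        state = new JvmtiThreadState(thread, thread_oop);  // the duplicate
    }
    return state;
}

// Post-fix: thread_started() uses a state_for(thread)-style lookup that simply
// returns the existing state (the real method also creates one when none
// exists), so no duplicate is created on this path.
JvmtiThreadState* state_for(JavaThread* thread) {
    return thread->jvmti_thread_state();
}
------------------------------------------------------------------------------------

With the pre-fix lookup, both states end up on the global state list while the JavaThread remembers only the newer one; that asymmetry is what the cleanup-at-exit code trips over in the use-after-free comments below.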
10-11-2023

Given that this change adds missing thread-state objects, the fact that it cleaned up "duplicate" ones seems a somewhat lucky coincidence, and I think the original issue of having multiple thread-states per thread needs to be investigated further to see whether the problem is still present in some form.
09-11-2023

Here are some details and clarifications on the various crashes (as described by Man Cao above) that we encountered:

The crashes are always due to a bad JavaThread pointer in the current stable ThreadsList::_threads array. Depending on when the corrupted pointer is visited by the VM, this shows up as various different crashes, during SafepointSynchronize::synchronize_threads, ThreadsSMRSupport::free_list, etc. The crashes are latent symptoms of random (but not completely random) memory corruption originating from JvmtiEventControllerPrivate::recompute_thread_enabled, caused by a stale JavaThread (one that has already exited) referenced by a JvmtiThreadState.

For one particular test that I debugged, the memory corruption always occurred at the 193rd element (a 64-bit word) of ThreadsList::_threads; all other JavaThread pointers in the array were intact. In one failed instance that I investigated, the memory address being trashed was 0x00007f9dbc0018b8, and the bogus value it contained was 0x00000000e00f4df0. The debugging information showed that a JavaThread had been allocated at 0x00007f9dbc0012b0 earlier in the test execution; the thread later exited and was destroyed. Two different JvmtiThreadStates had been created for this JavaThread, and only one of them was destroyed during thread exit, so the remaining JvmtiThreadState contained an invalid pointer (to 0x00007f9dbc0012b0) after the thread exited. At a later point, an array of JavaThread* (ThreadsList::_threads) was allocated at 0x00007f9dbc0012b0, and 0x00007f9dbc0018b8 was a location within that array. It appears that a later JvmtiEventControllerPrivate::recompute_thread_enabled operation trashed the memory at 0x00007f9dbc0018b8 when it encountered the JvmtiThreadState containing the stale JavaThread* 0x00007f9dbc0012b0.

The fix for this bug appears to make the problem go away. I verified that with the fix applied to JDK 21, only one JvmtiThreadState is created per JavaThread. I haven't yet filled the gap on how the fix ensures the 1:1 binding; others more familiar with this area may make the connection more quickly.
09-11-2023

A pull request was submitted for review.
URL: https://git.openjdk.org/jdk21u/pull/337
Date: 2023-11-08 01:42:27 +0000
08-11-2023

Fix request (21u): This change mitigates JVM crashes and use-after-free bugs in JDK 21; see my comments above. The change applies cleanly to JDK 21u and passes pre-submit tests.
08-11-2023

We have observed several different JVM crashes with 21 even without the variant of AddressSanitizer, and this change mitigates those crashes. Example crashing stack traces:

Stack: [0x00007f3661efc000,0x00007f3662000000], sp=0x00007f3661ffaf40, free space=1019k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
C [libpthread.so.0+0x11a24] pthread_sigmask+0x84
V [libjvm.so+0x1059af1] JVM_handle_linux_signal+0x1c1
C [libpthread.so.0+0x151c0]
V [libjvm.so+0x114fd23] ThreadsSMRSupport::free_list(ThreadsList*)+0xf3
V [libjvm.so+0x11563a8] Threads::remove(JavaThread*, bool)+0x78
V [libjvm.so+0xb6cf6a] JavaThread::exit(bool, JavaThread::ExitType)+0x68a
V [libjvm.so+0xb6c8a2] JavaThread::post_run()+0x12
V [libjvm.so+0x114b56e] HotspotBaseThread::call_run()+0xae
V [libjvm.so+0xf15f59] thread_native_entry(HotspotBaseThread*)+0x119

Or:

Stack: [0x00007f651cb28000,0x00007f651cc2c000], sp=0x00007f651cc26d00, free space=1019k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
C [libpthread.so.0+0x11a24] pthread_sigmask+0x84
V [libjvm.so+0x1059af1] JVM_handle_linux_signal+0x1c1
C [libpthread.so.0+0x151c0]
V [libjvm.so+0xfb268a] SafepointSynchronize::begin()+0x17a
V [libjvm.so+0x11d65ee] VMThread::inner_execute(VM_Operation*)+0x19e
V [libjvm.so+0x11d5d0a] VMThread::run()+0xba
V [libjvm.so+0x114b565] HotspotBaseThread::call_run()+0xa5
V [libjvm.so+0xf15f59] thread_native_entry(HotspotBaseThread*)+0x119
08-11-2023

I'd like to backport this change to 21u, to avoid the bug mentioned above. I'm not really sure if state_for_while_locked() still needs a fix to avoid creating JvmtiThreadState instances pointing to the same JavaThread*. ~sspitsyn, any concern for backporting this to 21u?
02-11-2023

We found that this change also fixed or avoided a bug in JDK 21 that manifests even when virtual threads are not used. We run a variant of AddressSanitizer for malloc/free, and it detects a use-after-free during JVM shutdown: JvmtiThreadState::set_should_post_on_exceptions() can access an already-freed JavaThread object through JvmtiThreadState::_thread. The stack trace looks like:

#0 (inlined) JavaThread::set_should_post_on_exceptions_flag make/hotspot/src/hotspot/share/runtime/javaThread.hpp:1094
#1 (inlined) JvmtiThreadState::set_should_post_on_exceptions make/hotspot/src/hotspot/share/prims/jvmtiThreadState.inline.hpp:126
#2 0x7f2de8a81207 JvmtiEventControllerPrivate::recompute_thread_enabled make/hotspot/src/hotspot/share/prims/jvmtiEventController.cpp:588
#3 0x557cd03e9d2f FailureSignalHandler
#4 (inlined) call_chained_handler make/hotspot/src/hotspot/os/posix/signals_posix.cpp:454
#5 0x7f2de8d756d4 PosixSignals::chained_handler make/hotspot/src/hotspot/os/posix/signals_posix.cpp:472
#6 0x7f2de8d75940 JVM_handle_linux_signal make/hotspot/src/hotspot/os/posix/signals_posix.cpp:674
#7 0x7f2dec9e91c0 __restore_rt
#8 (inlined) JavaThread::set_should_post_on_exceptions_flag make/hotspot/src/hotspot/share/runtime/javaThread.hpp:1094
#9 (inlined) JvmtiThreadState::set_should_post_on_exceptions make/hotspot/src/hotspot/share/prims/jvmtiThreadState.inline.hpp:126
#10 0x7f2de8a81207 JvmtiEventControllerPrivate::recompute_thread_enabled make/hotspot/src/hotspot/share/prims/jvmtiEventController.cpp:588
#11 0x7f2de8a81698 JvmtiEventControllerPrivate::recompute_enabled make/hotspot/src/hotspot/share/prims/jvmtiEventController.cpp:668
#12 0x7f2de8a82b43 JvmtiEventController::set_user_enabled make/hotspot/src/hotspot/share/prims/jvmtiEventController.cpp:1060
#13 0x7f2de8a6c70b JvmtiEnv::SetEventNotificationMode make/hotspot/src/hotspot/share/prims/jvmtiEnv.cpp:586
#14 0x7f2de8a27642 jvmti_SetEventNotificationMode /tmp/jdkbuild/build/hotspot/variant-server/gensrc/jvmtifiles/jvmtiEnter.cpp:5321
#15 0x557cd0ebd366 google::javaprofiler::HeapMonitor::Disable
#16 0x557cd0e691f8 OnVMDeath
#17 0x7f2de8a84e5d JvmtiExport::post_vm_death make/hotspot/src/hotspot/share/prims/jvmtiExport.cpp:762
#18 0x7f2de886e9dd before_exit make/hotspot/src/hotspot/share/runtime/java.cpp:515
#19 0x7f2de8e72301 Threads::destroy_vm make/hotspot/src/hotspot/share/runtime/threads.cpp:890

My investigation shows that the cause is `JvmtiThreadState::state_for_while_locked()` creating multiple JvmtiThreadState instances pointing to the same JavaThread*. All of these instances appear on the JvmtiThreadState::_head linked list. When a JavaThread T exits and calls JvmtiExport::cleanup_thread(), only one JvmtiThreadState instance is deleted from the linked list, leaving the other instances pointing to T with a dangling pointer. The change in JvmtiEventControllerPrivate::thread_started() from JvmtiThreadState::state_for_while_locked() to JvmtiThreadState::state_for() appears to be effective in avoiding this bug.
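The lifecycle described here can be condensed into a small, self-contained sketch. The list handling and cleanup below are simplified assumptions that mirror the description above (one global list, thread exit deleting only the state the thread currently points to); they are not HotSpot's actual code:

-- sketch (not HotSpot source) -----------------------------------------------------
#include <cstdio>

struct JavaThread;

struct JvmtiThreadState {
    JavaThread* _thread;      // dangles once the owning thread is destroyed
    JvmtiThreadState* _next;  // global list, stand-in for JvmtiThreadState::_head
};

JvmtiThreadState* g_head = nullptr;

struct JavaThread {
    JvmtiThreadState* _state = nullptr;  // tracks only the newest duplicate
};

JvmtiThreadState* create_state(JavaThread* t) {
    JvmtiThreadState* s = new JvmtiThreadState{t, g_head};
    g_head = s;
    t->_state = s;  // an earlier duplicate stays on g_head, forgotten by t
    return s;
}

// Mirrors the described behavior of JvmtiExport::cleanup_thread(): only the
// state the exiting JavaThread currently points to is unlinked and deleted.
void cleanup_thread(JavaThread* t) {
    for (JvmtiThreadState** p = &g_head; *p != nullptr; p = &(*p)->_next) {
        if (*p == t->_state) {
            JvmtiThreadState* dead = *p;
            *p = dead->_next;
            delete dead;
            break;
        }
    }
    t->_state = nullptr;
}

int main() {
    JavaThread* t = new JavaThread();
    create_state(t);    // first state, e.g. during Thread-object allocation
    create_state(t);    // duplicate, e.g. from post_thread_start
    cleanup_thread(t);  // unlinks and deletes only the newest state
    delete t;           // the thread exits and is destroyed
    // g_head still holds a state whose _thread points at the freed JavaThread;
    // walking the list, as recompute_thread_enabled does, is a use-after-free.
    printf("stale states left on list: %d\n", g_head != nullptr);
    return 0;
}
------------------------------------------------------------------------------------

Running the sketch leaves exactly one state on the list with a dangling _thread, the same shape as the recompute_thread_enabled access reported by the sanitizer.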
02-11-2023

Changeset: fda142ff
Author: Serguei Spitsyn <sspitsyn@openjdk.org>
Date: 2023-09-12 02:46:47 +0000
URL: https://git.openjdk.org/jdk/commit/fda142ff6cfefa12ec1ea4d4eb48b3c1b285bc04
12-09-2023

A pull request was submitted for review.
URL: https://git.openjdk.org/jdk/pull/15467
Date: 2023-08-29 10:09:21 +0000
29-08-2023