When deallocating memory back to metaspace, we wrongly adjust the returned block's size before adding it to the freeblocks:
```cpp
// Prematurely returns a metaspace allocation to the _block_freelists
// because it is not needed anymore (requires CLD lock to be active).
void MetaspaceArena::deallocate_locked(MetaWord* p, size_t word_size) {
  assert_lock_strong(lock());
  // At this point a current chunk must exist since we only deallocate if we did allocate before.
  assert(current_chunk() != nullptr, "stray deallocation?");
  assert(is_valid_area(p, word_size),
         "Pointer range not part of this Arena and cannot be deallocated: (" PTR_FORMAT ".." PTR_FORMAT ").",
         p2i(p), p2i(p + word_size));
  UL2(trace, "deallocating " PTR_FORMAT ", word size: " SIZE_FORMAT ".",
      p2i(p), word_size);
  size_t raw_word_size = get_raw_word_size_for_requested_word_size(word_size); // <<<<<<
  add_allocation_to_fbl(p, raw_word_size);
  DEBUG_ONLY(verify_locked();)
}
```
get_raw_word_size_for_requested_word_size adjusts the block size upward to the minimum size of a freeblocks block, which is 2 words. Therefore, if the caller wanted to return a single word, the freeblocks would claim 2 words and could overwrite the word following the returned block, or the next slot in the freeblocks binlist.
This is purely theoretical as long as callers only deallocate blocks exactly as they retrieved them from metaspace, since the allocation path performs the same adjustment. The original intent was to mirror the allocation path: if a caller requests 1 word, it silently receives 2, and if it returns that 1 word via deallocate, we correct the size back to 2 because we know what was actually allocated internally. It becomes dangerous when one tries to deallocate part of another block, as I attempt with Lilliput.