JDK-6521677 : (fc) "Cleaner terminated abnormally" error in simple mapping test
  • Type: Bug
  • Component: core-libs
  • Sub-Component: java.nio
  • Affected Version: 6
  • Priority: P3
  • Status: Closed
  • Resolution: Duplicate
  • OS: windows_xp
  • CPU: x86
  • Submitted: 2007-02-06
  • Updated: 2011-02-16
  • Resolved: 2007-02-06
Related Reports
Duplicate : JDK-4938372
Description
FULL PRODUCT VERSION :
java version "1.6.0"
Java(TM) SE Runtime Environment (build 1.6.0-b105)
Java HotSpot(TM) Client VM (build 1.6.0-b105, mixed mode, sharing)

ADDITIONAL OS VERSION INFORMATION :
Microsoft Windows XP [Version 5.1.2600]

EXTRA RELEVANT SYSTEM CONFIGURATION :
1.25 GB RAM, Pentium 4 1.8 GHz

A DESCRIPTION OF THE PROBLEM :
Attempting to use NIO file mapping with a large number of mapped blocks leads to an error:
java.lang.Error: Cleaner terminated abnormally

My test is very simple.
1) I create a new file.
2) I increase its size to 32 MB (RandomAccessFile.setLength method, closing the file after resizing).
3) Perform a loop of the following tests:
3.1) Open the file: raf = new RandomAccessFile(f, "rw");
3.2) Map 4096 non-overlapping regions of the file, each 8192 bytes long, and fill every region with some value:
    for (int i = 0; i < 4096; i++) {
        ByteBuffer bb = raf.getChannel().map(FileChannel.MapMode.READ_WRITE, (long)i * 8192, 8192);
        for (int j = 0; j < bb.limit(); j++)
            bb.put(j, (byte)i);
    }
3.3) Close the file: raf.close();

As a result, the internal error occurred very soon: usually at the second or third iteration of loop 3.

The concrete block size (8192) and number of blocks (4096) are not important; the error occurs reliably if the number of mapped blocks is large enough.

If I place System.gc() / System.runFinalization() in every iteration, the error occurs later, but it still occurs.

I found that this bug occurs only if the file is new. When I used an existing 32 MB file (skipping steps 1 and 2), I could not reproduce the error no matter how long I waited. BUT: in this case System.runFinalization() produces another strange error at iteration #30-40:
java.lang.OutOfMemoryError: unable to create new native thread
I'll file a separate bug report for this.

This bug is similar to http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4938372, but my test is simpler, almost trivial. I really need to create many mappings. In the real application, I need to process files of unlimited size (perhaps tens of GB), and I must use blocks that are not too large (8 MB, maybe up to 64 MB).

At the same time, I don't need to support many SIMULTANEOUS mappings: I need only 4-8 mapped banks that map different portions of the file, depending on application requests (see the sketch below). But I cannot unmap unused banks! When the number of simultaneous MappedByteBuffer instances not yet cleaned by the garbage collector becomes too large, the JVM exits abnormally with an error message.
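
To make this access pattern concrete, here is a minimal, hypothetical sketch of such a bank cache (the names BankCache, MAX_BANKS, etc. are mine, not from my real library). Note that evicting a bank can only drop the Java reference to the MappedByteBuffer; there is no public API to release the underlying OS mapping:

import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical illustration: keeps at most MAX_BANKS mapped regions and
// evicts the least recently used one. Eviction only removes the Java
// reference; the OS-level mapping survives until the garbage collector
// eventually processes the orphaned MappedByteBuffer.
class BankCache {
    private static final int MAX_BANKS = 8;
    private static final int BANK_SIZE = 8 * 1024 * 1024; // 8 MB banks

    private final FileChannel channel;
    private final Map<Long, MappedByteBuffer> banks =
        new LinkedHashMap<Long, MappedByteBuffer>(16, 0.75f, true) {
            protected boolean removeEldestEntry(
                Map.Entry<Long, MappedByteBuffer> eldest) {
                return size() > MAX_BANKS; // evict, but cannot unmap!
            }
        };

    BankCache(RandomAccessFile raf) {
        this.channel = raf.getChannel();
    }

    // Returns the mapped bank with the given index, mapping it on demand.
    MappedByteBuffer bank(long bankIndex) throws IOException {
        Long key = Long.valueOf(bankIndex);
        MappedByteBuffer bb = banks.get(key);
        if (bb == null) {
            bb = channel.map(FileChannel.MapMode.READ_WRITE,
                bankIndex * BANK_SIZE, BANK_SIZE);
            banks.put(key, bb);
        }
        return bb;
    }
}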


REGRESSION.  Last worked in version mustang

STEPS TO FOLLOW TO REPRODUCE THE PROBLEM :
Please compile the attached source Java file and call it:

"\Program Files\Java\jdk1.6.0\jre\bin\java" net.algart.finalizing.demo.IntensiveMapping \TMP\test.dat 8192 4096 1 50

"\TMP\temp.dat" is a position of the new temporary file in the file-system; you may use any appropriate posision. This file must NOT exist before the call.

8192 is the size of each mapped block; 4096 is the number of blocks. (So the test will create a 32 MB file.)
1 is the number of passes in every test; I did not detect any change in the test behaviour for a larger number of passes.
50 is the total number of iterations. The error usually occurred in one of the first iterations.


EXPECTED VERSUS ACTUAL BEHAVIOR :
EXPECTED -
Normal execution without JVM crash.
ACTUAL -
The results of execution (on the console):
 
Mapping the file D:\TMP\test.dat
It is a new file
Block size: 8192 bytes
File size: 4096 blocks, 33554432 bytes
Number of passes: 1
Number of tests: 50
System.gc() will not be called
System.runFinalization() will not be called

Test #1 / 50 (4.974368MB free, 5.177344MB total, 66.650112MB max):
Pass #1... done
Test #2 / 50 (4.529136MB free, 5.177344MB total, 66.650112MB max):
Pass #1...java.lang.Error: Cleaner terminated abnormally
     at sun.misc.Cleaner$1.run(Cleaner.java:130)
     at java.security.AccessController.doPrivileged(Native Method)
     at sun.misc.Cleaner.clean(Cleaner.java:127)
     at java.lang.ref.Reference$ReferenceHandler.run(Reference.java:124)
Caused by: java.io.IOException: The process cannot access the file because another process has locked a portion of the file
     at sun.nio.ch.FileChannelImpl.unmap0(Native Method)
     at sun.nio.ch.FileChannelImpl.access$100(FileChannelImpl.java:32)
     at sun.nio.ch.FileChannelImpl$Unmapper.run(FileChannelImpl.java:680)
     at sun.misc.Cleaner.clean(Cleaner.java:125)
     ... 1 more


ERROR MESSAGES/STACK TRACES THAT OCCUR :
java.lang.Error: Cleaner terminated abnormally
     at sun.misc.Cleaner$1.run(Cleaner.java:130)
     at java.security.AccessController.doPrivileged(Native Method)
     at sun.misc.Cleaner.clean(Cleaner.java:127)
     at java.lang.ref.Reference$ReferenceHandler.run(Reference.java:124)
Caused by: java.io.IOException: The process cannot access the file because another process has locked a portion of the file
     at sun.nio.ch.FileChannelImpl.unmap0(Native Method)
     at sun.nio.ch.FileChannelImpl.access$100(FileChannelImpl.java:32)
     at sun.nio.ch.FileChannelImpl$Unmapper.run(FileChannelImpl.java:680)
     at sun.misc.Cleaner.clean(Cleaner.java:125)
     ... 1 more


REPRODUCIBILITY :
This bug can be reproduced always.

---------- BEGIN SOURCE ----------
package net.algart.finalizing.demo;

import java.io.*;
import java.nio.*;
import java.nio.channels.*;

/**
 * <p>Test for very intensive file mappings.</p>
 *
 * <p>Company: Smart Imaging Technologies</p>
 *
 * @author Daniel Alievsky
 * @version 1.0
 * @since JDK 1.4
 */
public class IntensiveMapping {
    public static void main(String[] args) throws IOException {
        if (args.length < 5) {
            System.out.println("Usage: " + IntensiveMapping.class.getName()
                + " someTempFile blockSize numberOfBlocks numberOfPasses numberOfTests [-gc] [-runFinalization]");
            return;
        }

        File f = new File(args[0]);
        int blockSize = Integer.parseInt(args[1]);
        int numberOfBlocks = Integer.parseInt(args[2]);
        int numberOfPasses = Integer.parseInt(args[3]);
        int numberOfTests = Integer.parseInt(args[4]);
        boolean doGc = args.length >= 6 && args[5].equalsIgnoreCase("-gc");
        boolean doRunFinalization = args[args.length - 1].equalsIgnoreCase("-runFinalization");
        long fileLength = (long)numberOfBlocks * (long)blockSize;
        System.out.println("Mapping the file " + f.getAbsolutePath());
        System.out.println("It is " + (f.exists() ? "an existing file" : "a new file"));
        System.out.println("Block size: " + blockSize + " bytes");
        System.out.println("File size: " + numberOfBlocks + " blocks, " + fileLength + " bytes");
        System.out.println("Number of passes: " + numberOfPasses);
        System.out.println("Number of tests: " + numberOfTests);
        System.out.println("System.gc() will "
            + (doGc ? "be called before each test" : "not be called"));
        System.out.println("System.runFinalization() will "
            + (doRunFinalization ? "be called before each test" : "not be called"));
        System.out.println();
        // Create the file, extend it to its full length, and close it
        // before the mapping tests begin.
        if (!f.exists())
            f.createNewFile();
        RandomAccessFile raf = new RandomAccessFile(f, "rw");
        raf.setLength(fileLength);
        raf.close();

        for (int testCount = 1; testCount <= numberOfTests; testCount++) {
            System.out.println("Test #" + testCount + " / " + numberOfTests
                + " (" + Runtime.getRuntime().freeMemory() / 1000000.0 + "MB free, "
                + Runtime.getRuntime().totalMemory() / 1000000.0 + "MB total, "
                + Runtime.getRuntime().maxMemory() / 1000000.0 + "MB max):");

            if (doGc) {
                System.out.print("System.gc()...");
                System.gc();
                System.out.println(" done");
            }
            if (doRunFinalization) {
                System.out.print("System.runFinalization()...");
                System.runFinalization();
                System.out.println(" done");
            }
            raf = new RandomAccessFile(f, "rw");
            for (int pass = 1; pass <= numberOfPasses; pass++) {
                System.out.print("Pass #" + pass + "...");
                // Map the file as numberOfBlocks consecutive READ_WRITE regions
                // and touch every byte; each map() call creates a new
                // MappedByteBuffer that is never explicitly released.
                for (int i = 0; i < numberOfBlocks; i++) {
                    ByteBuffer bb = raf.getChannel().map(FileChannel.MapMode.READ_WRITE,
                        (long)i * (long)blockSize, blockSize);
                    for (int j = 0; j < bb.limit(); j++)
                        bb.put(j, (byte)i);
                }
                System.out.println(" done");
            }
            raf.close();
        }
    }
}

---------- END SOURCE ----------

CUSTOMER SUBMITTED WORKAROUND :
In the current version of my Java library, I limit the number of MappedByteBuffer instances that have not yet been cleaned by the garbage collector. This is possible using a PhantomReference queue (containing phantom references to all byte buffers created by "map" calls) and a separate thread that performs an infinite loop of "refQueue.remove();" calls and counts the number of non-finalized instances. If I try to map a new block while the current number of non-finalized MappedByteBuffers exceeds the limit (50 in my tests), I loop on System.gc() until the system finalizes all the extra buffers:

long tFix = System.currentTimeMillis();
boolean ok = false; // checked after the loop (timeout handling omitted here)
int mapCount;
do {
    System.gc();
    // do not call System.runFinalization(): sometimes it produces
    // "OutOfMemoryError: unable to create new native thread"
    try {
        Thread.sleep(5);
    } catch (InterruptedException ex) {
        // nothing to do here
    }
    synchronized (lock) {
        mapCount = this.mappingCounter;
        // this counter is incremented after every "map" call and decremented
        // after every "refQueue.remove()" call in the separate thread
    }
    if (mapCount < desiredLimit) {
        ok = true;
        break;
    }
} while (System.currentTimeMillis() - tFix < timeout);

As you understand, it is a hack, not a good and stable solution.
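
To illustrate the counting mechanism described above, here is a minimal, self-contained sketch (the names MappingTracker, register, liveMappings are mine; my real library is more elaborate):

import java.lang.ref.PhantomReference;
import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;
import java.nio.MappedByteBuffer;
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch of the phantom-reference counter described above.
final class MappingTracker {
    private final ReferenceQueue<MappedByteBuffer> refQueue =
        new ReferenceQueue<MappedByteBuffer>();
    // Keep strong references to the phantoms themselves, so they are not
    // collected before the GC enqueues them.
    private final Set<Reference<MappedByteBuffer>> phantoms =
        new HashSet<Reference<MappedByteBuffer>>();
    private final Object lock = new Object();
    private int mappingCounter = 0;

    MappingTracker() {
        Thread t = new Thread(new Runnable() {
            public void run() {
                while (true) {
                    try {
                        // Blocks until a MappedByteBuffer becomes unreachable
                        // and the GC enqueues its phantom reference.
                        Reference<? extends MappedByteBuffer> ref = refQueue.remove();
                        synchronized (lock) {
                            phantoms.remove(ref);
                            mappingCounter--; // one mapping fewer is alive
                        }
                    } catch (InterruptedException ex) {
                        return;
                    }
                }
            }
        }, "mapping-tracker");
        t.setDaemon(true);
        t.start();
    }

    // Call after every successful FileChannel.map(...).
    void register(MappedByteBuffer bb) {
        synchronized (lock) {
            phantoms.add(new PhantomReference<MappedByteBuffer>(bb, refQueue));
            mappingCounter++;
        }
    }

    // Number of MappedByteBuffers mapped but not yet processed by the GC.
    int liveMappings() {
        synchronized (lock) {
            return mappingCounter;
        }
    }
}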

I would suggest adding an "unmap()" method to MappedByteBuffer, or to some special version (a subclass?) of MappedByteBuffer. This method would perform the low-level unmapping, and you could recommend calling it whenever the mapped buffer becomes garbage. After an "unmap()" call, any access to the ByteBuffer should produce a corresponding runtime exception.

I understand that such an "unmap()" method would probably reduce the performance of ByteBuffer because of the necessary synchronization (all threads should "see" the internal flag that allows an exception to be thrown in case of access after an unmap() call). But that is much better than the internal error and JVM crash that happen now. It could be an optional solution: if the user does not need many mappings, he uses the standard, fast MappedByteBuffer; if he really needs thousands of mappings, he uses a modified MappedByteBuffer version with the unmap() method and calls it when necessary, to avoid too many simultaneous mappings.
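
For reference, the only way I know to force unmapping on the current Sun JDK is the unsupported internal Cleaner mechanism (the same sun.misc.Cleaner that appears in the stack traces above). This is a sketch only: it relies on non-public classes, and any access to the buffer after cleaning will crash the VM:

import java.nio.MappedByteBuffer;

// Unsupported, Sun-JDK-specific forced unmap (illustrative sketch only).
// The caller must guarantee that nobody touches the buffer afterwards:
// accessing an unmapped buffer crashes the VM.
class ForcedUnmap {
    static void unmap(MappedByteBuffer bb) {
        sun.misc.Cleaner cleaner = ((sun.nio.ch.DirectBuffer) bb).cleaner();
        if (cleaner != null) {
            cleaner.clean(); // runs the same Unmapper seen in the stack trace
        }
    }
}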

Comments
EVALUATION We will address this issue in jdk7 as 4938372.
06-02-2007

EVALUATION The unmapping attempts to flush dirty pages prior to unmapping the region. This is failing with a locking violation, which causes the Cleaner to terminate. The locking violation is also the reason that the test in 4938372 is failing. When the Cleaner encounters an error it does terminate the VM (this behavior was added in 4954921).
06-02-2007