JDK-8230531 : API Doc for CharsetEncoder.maxBytesPerChar() should be clearer about BOMs
  • Type: Bug
  • Component: core-libs
  • Sub-Component: java.nio.charsets
  • Affected Version: 11, 13, 14
  • Priority: P4
  • Status: Closed
  • Resolution: Fixed
  • Submitted: 2019-08-30
  • Updated: 2021-03-01
  • Resolved: 2019-09-24
The Version table provides details related to the release in which this issue/RFE will be addressed.

Unresolved: Release in which this issue/RFE will be addressed.
Resolved: Release in which this issue/RFE has been resolved.
Fixed: Release in which this issue/RFE has been fixed. The release containing this fix may be available for download as an Early Access Release or a General Availability Release.

JDK 14: Fixed in 14 b16
Description
A DESCRIPTION OF THE PROBLEM :
The documentation for the `maxBytesPerChar` parameter of the constructors of the CharsetEncoder class is currently:
> A positive float value indicating the maximum number of bytes that will be produced for each input character

It is not clear whether, or how, bytes that are added independently of the character count (e.g. a byte-order mark, BOM) should be considered.
String.getBytes(Charset) requires that the value returned by maxBytesPerChar() cover all character-count-independent bytes; if it does not, a BufferOverflowException is thrown. The maxBytesPerChar documentation should therefore be clearer about this.
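For comparison (an observation about the JDK's built-in charsets, not part of the original report): UTF-16 already follows the interpretation that String.getBytes(Charset) relies on. Each char encodes to 2 bytes, yet its encoder reports a maxBytesPerChar() of 4.0 so that the 2-byte BOM is covered even for a single-character input:

```java
import java.nio.charset.StandardCharsets;

public class Utf16BomDemo {
    public static void main(String[] args) {
        // UTF-16 encodes each char in 2 bytes, but reports 4.0 so the
        // 2-byte byte-order mark is accounted for even for 1-char input.
        System.out.println(StandardCharsets.UTF_16.newEncoder().maxBytesPerChar()); // 4.0

        // 2 bytes of BOM (0xFE 0xFF) + 2 bytes for 'A'
        byte[] encoded = "A".getBytes(StandardCharsets.UTF_16);
        System.out.println(encoded.length); // 4
    }
}
```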

STEPS TO FOLLOW TO REPRODUCE THE PROBLEM :
Run the code provided below. It implements a custom charset which prefixes the encoded output with marker bytes (imagine they carry meaning, e.g. a BOM). The encoding itself casts each char to a byte, so the encoder reports 1.0f as its maxBytesPerChar().

EXPECTED VERSUS ACTUAL BEHAVIOR :
EXPECTED -
No exception is thrown. The encoder encodes 1 char to 1 byte, so a maxBytesPerChar value of 1.0f seems reasonable.
ACTUAL -
A BufferOverflowException is thrown. Apparently the encoder should have included the length of the marker prefix in its maxBytesPerChar value.

---------- BEGIN SOURCE ----------
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.Charset;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.CharsetEncoder;
import java.nio.charset.CoderResult;

public class CharsetEncoderTest {
    public static void main(String[] args) {
        Charset customCharset = new Charset("debug-custom", new String[0]) {
            @Override
            public CharsetEncoder newEncoder() {
                /*
                 * maxBytesPerChar:
                 *  A positive float value indicating the maximum number of bytes that will be produced
                 *  for each input character
                 *
                 * It is not clear whether this includes bytes which are written independently of the
                 * number of actually encoded chars, e.g. a BOM or similar
                 */
                return new CharsetEncoder(this, 1.0f, 1.0f) {
                    final byte[] PREFIX = new byte[] {'d', 'e', 'b', 'u', 'g'};
                    ByteBuffer prefixBuf = ByteBuffer.wrap(PREFIX);
                    
                    @Override
                    protected CoderResult encodeLoop(CharBuffer in, ByteBuffer out) {
                        // Write a prefix once when starting to encode
                        if (prefixBuf.hasRemaining()) {
                            // ByteBuffer does not provide a method to put only as many bytes as fit,
                            // so temporarily reduce the limit (and hence remaining) to the amount of
                            // data `out` can accept
                            prefixBuf.limit(Math.min(prefixBuf.capacity(), prefixBuf.position() + out.remaining()));
                            out.put(prefixBuf);
                            prefixBuf.limit(prefixBuf.capacity());
                        }
                        
                        int maxEncode = Math.min(in.remaining(), out.remaining());
                        for (int i = 0; i < maxEncode; i++) {
                            // Very simple encoding by casting char -> byte
                            out.put((byte) in.get());
                        }
                        
                        if (!in.hasRemaining() && !prefixBuf.hasRemaining()) {
                            return CoderResult.UNDERFLOW;
                        }
                        else {
                            return CoderResult.OVERFLOW;
                        }
                    }
                };
            }
            
            @Override
            public CharsetDecoder newDecoder() {
                return new CharsetDecoder(this, 1.0f, 1.0f) {
                    @Override
                    protected CoderResult decodeLoop(ByteBuffer in, CharBuffer out) {
                        // Not relevant for this demo
                        // ...
                        in.position(in.limit());
                        return CoderResult.UNDERFLOW;
                    }
                };
            }
            
            @Override
            public boolean contains(Charset cs) {
                return cs == this;
            }
        };
        
        customCharset.encode("test"); // Works fine
        "test".getBytes(customCharset); // Throws BufferOverflowException
    }
}

---------- END SOURCE ----------
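Under the clarified reading, the reproducer can be made to work by folding the prefix into maxBytesPerChar: with a fixed 5-byte prefix and 1 byte per char, the worst case is a 1-char input needing 5 + 1 = 6 bytes, so the encoder must report at least 6.0f. A minimal sketch of that change (the charset name and class name are hypothetical, and the reproducer's encodeLoop is condensed but behaves the same):

```java
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.*;

public class FixedCharsetEncoderTest {
    static Charset fixedCharset() {
        return new Charset("debug-custom-fixed", new String[0]) {
            @Override
            public CharsetEncoder newEncoder() {
                // Worst case: 5 prefix bytes + 1 byte for a 1-char input,
                // so maxBytesPerChar must be at least 6.0f.
                return new CharsetEncoder(this, 1.0f, 6.0f) {
                    final ByteBuffer prefixBuf =
                            ByteBuffer.wrap(new byte[] {'d', 'e', 'b', 'u', 'g'});

                    @Override
                    protected CoderResult encodeLoop(CharBuffer in, ByteBuffer out) {
                        // Emit the prefix once, as far as `out` has room
                        while (prefixBuf.hasRemaining() && out.hasRemaining()) {
                            out.put(prefixBuf.get());
                        }
                        // Then encode by casting char -> byte, as in the reproducer
                        while (!prefixBuf.hasRemaining()
                                && in.hasRemaining() && out.hasRemaining()) {
                            out.put((byte) in.get());
                        }
                        return (in.hasRemaining() || prefixBuf.hasRemaining())
                                ? CoderResult.OVERFLOW : CoderResult.UNDERFLOW;
                    }
                };
            }

            @Override
            public CharsetDecoder newDecoder() {
                return new CharsetDecoder(this, 1.0f, 1.0f) {
                    @Override
                    protected CoderResult decodeLoop(ByteBuffer in, CharBuffer out) {
                        in.position(in.limit()); // decoding not needed for this demo
                        return CoderResult.UNDERFLOW;
                    }
                };
            }

            @Override
            public boolean contains(Charset cs) {
                return cs == this;
            }
        };
    }

    public static void main(String[] args) {
        // With maxBytesPerChar = 6.0f the buffer allocated by
        // String.getBytes(Charset) is large enough; no exception is thrown.
        byte[] bytes = "test".getBytes(fixedCharset());
        System.out.println(new String(bytes, StandardCharsets.ISO_8859_1)); // debugtest
    }
}
```

Note that 6.0f is deliberately the worst case for the shortest possible input; for an n-char input the actual output is 5 + n bytes, well under n * 6.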


Comments
2019-09-24, naoto: Changeset pushed: https://hg.openjdk.java.net/jdk/jdk/rev/01f7ba3a4905

2019-09-20: As the intention of the method is now clear (https://mail.openjdk.java.net/pipermail/core-libs-dev/2019-September/062521.html), I will clarify it in the method description.

2019-09-04: URL of affected documentation: https://docs.oracle.com/en/java/javase/11/docs/api/java.base/java/nio/charset/CharsetEncoder.html#maxBytesPerChar() https://download.java.net/java/early_access/jdk13/docs/api/java.base/java/nio/charset/CharsetEncoder.html#maxBytesPerChar()