JDK-8068721 : RMI-IIOP communication fails when ConcurrentHashMap is passed to remote method
  • Type: Bug
  • Component: other-libs
  • Sub-Component: corba
  • Affected Version: 8u25
  • Priority: P3
  • Status: Closed
  • Resolution: Fixed
  • Submitted: 2015-01-09
  • Updated: 2016-08-24
  • Resolved: 2015-04-13
Fix Versions: JDK 8: 8u60 (Fixed); JDK 9: 9 b61 (Fixed)
Description
RMI-IIOP communication fails on JDK 8 when a ConcurrentHashMap instance is
passed as a parameter to a remote method.
The same test case works on JDK 7 as expected.

*** Quote from ERROR message on server side ***
C:\jdk1.8.0_25\bin\java -Duser.language=en -Duser.country=us -Xms128m -Xmx128m -classpath . -Djava.naming.factory.initial=com.sun.jndi.cosnaming.CNCtxFactory -Djava.naming.provider.url=iiop://localhost:1060 HelloServer
Hello Server: Ready...
Jan 09, 2015 8:05:12 PM com.sun.corba.se.impl.encoding.CDRInputStream_1_0 read_value
WARNING: "IOP00810210: (MARSHAL) Error from readValue on ValueHandler in CDRInputStream"
org.omg.CORBA.MARSHAL:   vmcid: SUN  minor code: 210 completed: Maybe
        at com.sun.corba.se.impl.logging.ORBUtilSystemException.valuehandlerReadError(ORBUtilSystemException.java:6976)
        at com.sun.corba.se.impl.encoding.CDRInputStream_1_0.read_value(CDRInputStream_1_0.java:1013)
        at com.sun.corba.se.impl.encoding.CDRInputStream.read_value(CDRInputStream.java:271)
        at _HelloImpl_Tie._invoke(Unknown Source)
<snip>
******************************************



Comments
Changes are required in:
  OutputStreamHook: putFields and writeFields methods
  IIOPInputStream: inputClassFields and inputPrimitiveField methods
  ConcurrentHashMap: writeObject method

This bug has highlighted a gross ambiguity in the RMI-IIOP specification in relation to custom marshalling of ValueType objects, and in the interpretation of this in the implementation.

It also highlights an ambiguity in the putFields method spec. The OutputStreamHook returns a new instance of a subclass of PutField on each call:

"Retrieve the object used to buffer persistent fields to be written to the stream. The fields will be written to the stream when writeFields method is called."

The use of a definite article could be construed to imply that there is only one such object per ObjectOutputStream, but the spec lacks rigour in this regard, allowing for a more liberal interpretation.

The RMI-IIOP spec, a.k.a. the Java Language to IDL mapping (ptc/03-01-17.pdf), is weak in its description of custom marshalling:

"1.4.10 Custom Marshaling Format
When an RMI/IDL value type is custom marshaled over GIOP, the following data is transmitted:
a. octet - Format version. 1 or 2.
For serializable objects with a writeObject method:
b. boolean - True if defaultWriteObject was called, false otherwise.
c. (optional) Data written by defaultWriteObject. The ordering of the fields is the same as the order in which they appear in the mapped IDL valuetype, and these fields are encoded exactly as they would be if the class did not have a writeObject method.
d. Additional data written by writeObject, encoded as specified below. For format version 1, this data is optional and if present must be written "as is". For format version 2, if optional data is present then it must be enclosed within a CDR custom valuetype with no codebase and repid "RMI:org.omg.custom.<class>" where <class> is the fully-qualified name of the class whose writeObject method is being invoked.
For format version 2, if optional data is not present then a null valuetype (0x00000000) must be written to indicate the absence of optional data."

For step b, the current implementation takes the liberal view that when writeObject uses persistent serial fields, this also equates to defaultWriteObject having been called. As the spec is over 10 years old, it would be difficult to debate this interpretation and investigate an alternative one. For now we have a solution to the immediate problem, which is being tested.
26-02-2015
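The quoted 1.4.10 header can be sketched as a tiny decoder. This is an illustrative reading of the spec text only, assuming the header is available as a raw byte stream; the class and field names are mine, and the real decoding lives inside CDRInputStream/ValueHandlerImpl:

```java
import java.io.*;

// Minimal sketch of the RMI-IIOP custom-marshaling header from section 1.4.10:
// a format octet, then (for serializables with writeObject) a boolean saying
// whether defaultWriteObject was called. Names here are hypothetical.
public class CustomMarshalHeader {
    final int formatVersion;          // a. octet: 1 or 2
    final boolean calledDefaultWrite; // b. boolean: true if defaultWriteObject ran

    CustomMarshalHeader(DataInputStream in) throws IOException {
        formatVersion = in.readUnsignedByte();
        if (formatVersion != 1 && formatVersion != 2)
            throw new StreamCorruptedException("bad format version: " + formatVersion);
        calledDefaultWrite = in.readBoolean();
        // c./d. would follow here: the default field data, then writeObject's
        // extra data (for format 2, wrapped in an "RMI:org.omg.custom.<class>"
        // valuetype, or a null valuetype 0x00000000 when absent).
    }

    public static void main(String[] args) throws IOException {
        byte[] header = {2, 1}; // format 2, "defaultWriteObject was called"
        CustomMarshalHeader h = new CustomMarshalHeader(
                new DataInputStream(new ByteArrayInputStream(header)));
        System.out.println(h.formatVersion + " " + h.calledDefaultWrite); // 2 true
    }
}
```

The step-b ambiguity is exactly the second byte above: whether writing through persistent serial fields justifies encoding that boolean as true.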

The ConcurrentHashMap readObject method invokes the ObjectInputStream (IIOPInputStream) defaultReadObject method. The update to ConcurrentHashMap means that the former serial fields segmentMask, segmentShift and segments (an array of Segment) are no longer relevant and, as such, are not set during deserialization; their synthesized forms are present only for backward compatibility. The IIOPInputStream processing of the serial fields in defaultReadObject expects each primitive serial field to have an equivalent reflected field. If that is not the case, the reader ignores the field and leaves it in the stream, whereas it should read it from the stream and discard it. So the two primitive serial fields segmentMask and segmentShift are left in the input stream and are subsequently read as the valueTag and the length of the segments object, which causes problems. In JDK 7, readObject used GetField. So a couple of corrections on both the writeObject and readObject sides provide the genesis of a solution. This needs to be refined and thoroughly tested.
24-02-2015
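The "read it and discard it" behaviour described above can be sketched with plain java.io serialization, where readFields() consumes declared-but-obsolete primitives so the stream stays aligned for whatever follows. The field names mask/shift are stand-ins for segmentMask/segmentShift; this is not the actual ConcurrentHashMap or corba code:

```java
import java.io.*;

// Legacy primitive serial fields are written for old readers, then consumed
// (and discarded) on read so they never linger ahead of the next value.
public class LegacyFields implements Serializable {
    private static final long serialVersionUID = 1L;
    private static final ObjectStreamField[] serialPersistentFields = {
        new ObjectStreamField("mask", Integer.TYPE),   // hypothetical legacy field
        new ObjectStreamField("shift", Integer.TYPE)   // hypothetical legacy field
    };
    transient int payload = 42; // the real state travels after the legacy fields

    private void writeObject(ObjectOutputStream s) throws IOException {
        ObjectOutputStream.PutField f = s.putFields();
        f.put("mask", 63);
        f.put("shift", 26);
        s.writeFields();
        s.writeInt(payload); // data following the synthesized legacy fields
    }

    private void readObject(ObjectInputStream s) throws IOException, ClassNotFoundException {
        s.readFields();        // consume and discard the legacy primitives
        payload = s.readInt(); // only now is the stream aligned on our data
    }

    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
            out.writeObject(new LegacyFields());
        }
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()))) {
            System.out.println(((LegacyFields) in.readObject()).payload); // 42
        }
    }
}
```

Skipping the readFields() call here is the analogue of the IIOPInputStream bug: the two ints would then be misread as the next value's tag and length.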

ConcurrentHashMap changed its readObject method to use defaultReadObject rather than GetField. The defaultReadObject, or more precisely defaultReadObjectDelegate, in the IIOPInputStream doesn't appear to be handling the incoming stream reads as per the spec, i.e. read primitives first and then objects. This may be due to how the various "class descriptors" have been set up.
24-02-2015

On the marshalling side there is a problem in ConcurrentHashMap's use of the PutField returned by the IIOPOutputStream/OutputStreamHook. The putFields() method in the OutputStreamHook returns a new PutField on each call, unlike its superclass java.io.ObjectOutputStream, which returns the same instance over multiple calls. So, for now, declaring a local PutField variable in CHM.writeObject() means that the send side appears to be working as per JDK 7, and this works against a JDK 7 server. Now to investigate the receive and unmarshalling side: we no longer get the out-of-memory error, but do get a marshalling overflow error.
23-02-2015
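The contract difference can be demonstrated against plain java.io.ObjectOutputStream, where repeated putFields() calls hand back the same buffer (at least in OpenJDK), so values written through separate calls all land in one field record. The class below is a made-up example, not ConcurrentHashMap itself:

```java
import java.io.*;

public class PutFieldContract implements Serializable {
    private static final long serialVersionUID = 1L;
    // Hypothetical fields, declared only in serialized form.
    private static final ObjectStreamField[] serialPersistentFields = {
        new ObjectStreamField("a", Integer.TYPE),
        new ObjectStreamField("b", Integer.TYPE)
    };
    transient int a, b;

    private void writeObject(ObjectOutputStream s) throws IOException {
        // Two separate putFields() calls, as CHM.writeObject effectively does.
        // java.io.ObjectOutputStream returns the same PutField both times; the
        // CORBA OutputStreamHook returned a fresh one per call, losing data.
        s.putFields().put("a", 1);
        s.putFields().put("b", 2);
        s.writeFields();
    }

    private void readObject(ObjectInputStream s) throws IOException, ClassNotFoundException {
        ObjectInputStream.GetField g = s.readFields();
        a = g.get("a", -1);
        b = g.get("b", -1);
    }

    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
            out.writeObject(new PutFieldContract());
        }
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()))) {
            PutFieldContract back = (PutFieldContract) in.readObject();
            System.out.println(back.a + " " + back.b); // 1 2
        }
    }
}
```

The interim workaround noted above, a local PutField variable in CHM.writeObject(), makes the caller immune to either interpretation, since putFields() is then invoked exactly once.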

It appears that there may be some issue with the way ConcurrentHashMap serializable data is now written within the context of RMI-IIOP marshalling. The defaultWriteObject method is not called, yet a boolean value of true, indicating that it was called, is written in the CDR stream. The writeObject now uses putFields and writeFields, thus synthesizing the default write, which could warrant a true value being written in the CDR stream(?). In the unmarshalling flow this boolean is read, triggering a particular processing flow which sees the synthesized serial data being read, but it seems to be read in the wrong order, which leads to the value tag being read as an int for an array size and too big a memory allocation being requested. Focusing analysis on this writing/reading of serial data and its conformance with the RMI-IIOP spec.
23-02-2015

In JDK 7 the serialization of ConcurrentHashMap has a writeObject which in turn calls defaultWriteObject to serialize the serializable fields. The main thing is that it serializes an array of Segment objects, which are essentially hashtables with inbuilt locking through inheritance; the latter introduces its own complexity. In JDK 8 the serialization of ConcurrentHashMap has changed: the Segments are no longer serializable fields, but are serialized in a synthesized manner to support backward compatibility between 8 and 7. The serialization process in 8 has a readObject which invokes defaultReadObject, while 7 uses GetField. During marshaling the ValueType tags appear to be written, but when unmarshaling the ValueType tag is not being read properly. The marshaling and unmarshaling of RMI-IIOP ValueTypes is intrinsically linked to the serialization process of the class, and the subtle changes in ConcurrentHashMap are central to the problem encountered on the receive side. The flow analysis continues.
11-02-2015

The modifications to ConcurrentHashMap made in JDK 8 are not being handled correctly for the writing or reading of the serial data. Analyzing the flows in JDK 7 for comparison: as Segment is a subclass of ReentrantLock, a test using a ReentrantLock as the parameter sees it serialized OK in JDK 8, and its handling is similar to the handling of ConcurrentHashMap's Segment in JDK 7. The modified ConcurrentHashMap write is not being handled correctly, it would seem: the Segment objects are not being handled as value objects. There are also interoperability issues between JDK 7 and JDK 8 here. If we take it that the serialized form of the ConcurrentHashMap, in the RMI-IIOP context, is a correct one, then JDK 8 is neither producing a correct object value representation nor able to deserialize the correct form. This will take a while to sort out.
21-01-2015

Continuing the call flow analysis: it appears that the receiving side recognizes that the ConcurrentHashMap is in chunked format and, when processing the chunks representing the object value, attempts to read the codebase, which it seems it shouldn't, based on an earlier check. This leaves the received stream misaligned, and the next long retrieves a very large value, which causes an attempt to read an excessive amount of data from the stream and the out-of-memory error. From the RMI-IIOP spec:

"The start_value method ends any currently open chunk, writes a valuetype header for a nested custom valuetype (with a null codebase and the specified repository ID), and increments the valuetype nesting depth. The end_value method ends any currently open chunk, writes the end tag for the nested custom valuetype, and decrements the valuetype nesting depth."

The null codebase is causing a problem on the receive side. When compared with HashMap, there is no attempt to read a codebase:

IIOPInputStream.defaultReadObjectDelegate: ENTER
IIOPInputStream.defaultReadObjectDelegate: use local fields
IIOPInputStream.inputClassFields: ENTER o == java.util.HashMap cl == java.util.HashMap
IIOPInputStream.invokeObjectReader: osc.readObjectMethod for class java.util.concurrent.ConcurrentHashMap
IIOPInputStream.defaultReadObjectDelegate: ENTER
IIOPInputStream.defaultReadObjectDelegate: use local fields
IIOPInputStream.inputClassFields: ENTER o == java.util.concurrent.ConcurrentHashMap cl == java.util.concurrent.ConcurrentHashMap

The problem appears to be in the unmarshalling of the inner Segment class in ConcurrentHashMap. Need to check the send flows again to ensure the chunks are written correctly.
20-01-2015

Using a HashMap appears to work OK; relevant from the perspective that HashMap may provide a workaround. Some extracts from the trace for this invocation:

----- Input Buffer -----
Current position: 0
Total length : 312
47 49 4f 50 01 02 00 00 00 00 01 2c 00 00 00 06 GIOP.......,....
03 00 00 00 00 00 00 00 00 00 00 19 af ab cb 00 ............???.
00 00 00 02 e4 55 8b 41 00 00 00 08 00 00 00 01 .....U?A........
00 00 00 00 14 00 00 08 00 00 00 15 73 61 79 48 ............sayH
65 6c 6c 6f 57 69 74 68 48 61 73 68 4d 61 70 32 elloWithHashMap2
00 00 00 00 00 00 00 03 00 00 00 01 00 00 00 0c ................
00 00 00 00 00 01 00 01 00 01 01 09 00 00 00 11 ................
00 00 00 02 00 02 00 06 4e 45 4f 00 00 00 00 02 ........NEO.....
00 14 00 28 49 44 4c 3a 7f ff ff 0a 00 00 00 38 ...(IDL:.......8
52 4d 49 3a 6a 61 76 61 2e 75 74 69 6c 2e 48 61 RMI:java.util.Ha
73 68 4d 61 70 3a 38 36 35 37 33 35 36 38 41 32 shMap:86573568A2
31 31 43 30 31 31 3a 30 35 30 37 44 41 43 31 43 11C011:0507DAC1C
33 31 36 36 30 44 31 00 00 00 00 0c 02 01 00 00 31660D1.........
3f 40 00 00 00 00 00 00 7f ff ff 0a 00 00 00 46 ?@.............F
52 4d 49 3a 6f 72 67 2e 6f 6d 67 2e 63 75 73 74 RMI:org.omg.cust
6f 6d 2e 6a 61 76 61 2e 75 74 69 6c 2e 48 61 73 om.java.util.Has
68 4d 61 70 3a 38 36 35 37 33 35 36 38 41 32 31 hMap:86573568A21
31 43 30 31 31 3a 35 30 37 44 41 43 31 43 33 31 1C011:507DAC1C31
36 36 30 44 31 00 00 02 00 00 00 08 00 00 00 10 660D1...........
00 00 00 00 ff ff ff ff ........
------------------------
CDRInputStream.read_string: ENTER
CDRInputStream_1_0.readStringOrIndirection: ENTER allowIndirection == false
CDRInputStream_1_0.internalReadString: ENTER len == 21
CDRINputStream_1_0.getConvertedChars: ENTER with number of bytes == 20
SelectorImpl(SelectorThread): .enableInterestOps: com.sun.corba.se.impl.transport.SelectorImpl$SelectionKeyAndOp@2900d228
SelectorImpl(SelectorThread): .enableInterestOps:<-
CDRInputObject(p: default-threadpool; w: Idle): .unmarshalHeader<-: com.sun.corba.se.impl.protocol.giopmsgheaders.RequestMessage_1_2@f533689
org.omg.CORBA.InputStream.ctor
org.omg.CORBA.InputStream.ctor
CDRinputStream.read_value: ENTER with class == java.util.HashMap
CDRInputStream.read_value: impl == com.sun.corba.se.impl.encoding.CDRInputStream_1_2
CDRInputStream_1_0.read_value: ENTER with class == class java.util.HashMap
CDRInputStream_1_0.readStringOrIndirection: ENTER allowIndirection == true
CDRInputStream_1_0.internalReadString: ENTER len == 56
CDRINputStream_1_0.getConvertedChars: ENTER with number of bytes == 55
CDRInputStream.read_value: repositoryIDString == RMI:java.util.HashMap:86573568A211C011:0507DAC1C31660D1
CDRInputStream.start_block: ENTER
CDRInputStream.read_value: getclass information RMI:java.util.HashMap:86573568A211C011:0507DAC1C31660D1
CDRInputStream.read_value: VALUE Type
ValueHandlerImpl.readValue: ENTER with class == java.util.HashMap
ValueHandletImpl.readValueInternal: ENTER class java.util.HashMap and repId == RMI:java.util.HashMap:86573568A211C011:0507DAC1C31660D1
IIOPInputStream.simpleReadObject: ENTER
ValueHandlerImpl.useFullValueDescription: ENTER with class == java.util.HashMap repositoryID == RMI:java.util.HashMap:86573568A211C011:0507DAC1C31660D1
IIOPInputStream.simpleReadObject: inputObject
IIOPInputStream.inputObject: ENTER
IIOPInputStream.invokeObjectReader: ENTER with class == java.util.HashMap
CDRInputStream.start_block: ENTER
CDRInputStream_1_0.readStringOrIndirection: ENTER allowIndirection == true
CDRInputStream_1_0.internalReadString: ENTER len == 70
CDRINputStream_1_0.getConvertedChars: ENTER with number of bytes == 69
CDRInputStream.start_block: ENTER
CDRInputStream.start_block: ENTER
CDRInputStream.start_block: ENTER
CDRInputStream.start_block: ENTER
13-01-2015

I ran this on JDK 7 and the same problem exists: the deserialization of a ConcurrentHashMap in the Object-by-Value flow runs into a size calculation problem.

IIOPInputStream.simpleReadObject: inputObject
IIOPInputStream.inputObject: ENTER
CDRinputStream.read_value: ENTER with class == [Ljava.util.concurrent.ConcurrentHashMap$Segment;
CDRInputStream.read_value: impl == com.sun.corba.se.impl.encoding.CDRInputStream_1_2
CDRInputStream_1_0.read_value: ENTER with class == class [Ljava.util.concurrent.ConcurrentHashMap$Segment;
CDRInputStream_1_0.readStringOrIndirection: ENTER allowIndirection == true
CDRInputStream_1_0.internalReadString: ENTER len == 0
CDRInputStream_1_0.readStringOrIndirection: ENTER allowIndirection == true
CDRInputStream.start_block: ENTER
CDRInputStream_1_0.internalReadString: ENTER len == 2147483402
CDRINputStream_1_0.getConvertedChars: ENTER with number of bytes == 2147483401

Need to look at the output side of the interaction to see what is being sent on the wire. The map is empty, so one would assume it should be a couple of kilobytes at most. ConcurrentHashMap appears to have quite a complex serialized structure, which could be the subject of some debate! A bit more panel beating to be done.
13-01-2015
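The absurd length in the trace is itself telling: 2147483402 is 0x7fffff0a, which falls in the GIOP valuetype tag range (0x7fffff00 through 0x7fffffff), i.e. the misaligned reader is consuming a valuetype header as a string length. A quick check with a helper of my own (not corba code):

```java
public class ValueTagCheck {
    // GIOP valuetype headers begin with a tag in 0x7fffff00..0x7fffffff; the
    // low octet carries flags (e.g. 0x08 indicates chunked encoding).
    static boolean isValueTag(int word) {
        // All ints >= 0x7fffff00 are <= Integer.MAX_VALUE, so one bound suffices.
        return word >= 0x7fffff00;
    }

    public static void main(String[] args) {
        int bogusLen = 2147483402;               // the "len" from the trace
        System.out.printf("0x%08x%n", bogusLen); // 0x7fffff0a
        System.out.println(isValueTag(bogusLen)); // true
        System.out.println(isValueTag(312));      // false (a plausible real length)
    }
}
```

So the reader is not off by random garbage: it is sitting exactly on the next value tag when it expects a length, consistent with the two unconsumed primitive serial fields described in the later comments.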

The root cause here is a memory failure:

        at com.sun.corba.se.impl.orbutil.threadpool.ThreadPoolImpl$WorkerThread.run(ThreadPoolImpl.java:519)
Caused by: java.lang.OutOfMemoryError: Java heap space
        at com.sun.corba.se.impl.encoding.CDRInputStream_1_0.getConvertedChars(CDRInputStream_1_0.java:2230)

What is the working heap set for the JDK 7 test run? Could you try running JDK 7 at 96MB max heap and see if the same error is seen? We should probably take a heap dump at OOM time and see if there's any leak of objects.
12-01-2015