The problem description is simple. Strictly speaking, an unsynchronized HashMap should never be used concurrently; however, we have a situation where a customer is doing exactly that and running into OptionalDataException.

Under load, they hit OptionalDataException while unmarshalling the arguments passed to a remote object, whenever an argument is a HashMap or an object that contains a HashMap in its serialization graph. The failure mode is that the serialized form of the HashMap records a size of, say, 5 entries, but fewer than 5 entries were actually written. The reader then encounters TC_ENDBLOCKDATA (0x78) while still trying to read entries, and throws OptionalDataException.

Reaching even this diagnosis took a great deal of investigation: nine instrumented patches, plus runs under different JVM configurations (-Xint, HotSpot, server, and client). So much time was consumed because the bug manifests in many different ways.

The reason the serialized data can be out of sync is a small window in HashMap.writeObject(): between writing the size and iterating over the entries to write each key/value pair, another thread can remove an entry from the map. The size written then no longer matches the number of elements actually written to the stream. If the number of elements actually written is greater than the recorded size, there is no problem: the data is optional, and the extra entries after the declared count are skipped by the reader. The failure occurs only when fewer entries are written than the recorded size, because the reader attempts to read the full count and encounters TC_ENDBLOCKDATA after consuming the entries that are actually present.
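The stream-level mechanism can be reproduced deterministically without any threads. The sketch below (class and method names are illustrative, not the customer's code) writes a declared count of 5 but only 3 elements, mimicking what a racing removal does to HashMap's serialized form; reading it back fails with OptionalDataException exactly as described above.

```java
import java.io.*;

public class Repro {
    // Serializable class whose writeObject under-counts on purpose:
    // it claims 5 entries but writes only 3, so the reader runs into
    // TC_ENDBLOCKDATA (0x78) on the 4th read.
    static class UndercountingContainer implements Serializable {
        private static final long serialVersionUID = 1L;

        private void writeObject(ObjectOutputStream s) throws IOException {
            s.defaultWriteObject();
            s.writeInt(5);                  // declared size: 5
            for (int i = 0; i < 3; i++) {   // but only 3 elements written
                s.writeObject("entry-" + i);
            }
        }

        private void readObject(ObjectInputStream s)
                throws IOException, ClassNotFoundException {
            s.defaultReadObject();
            int size = s.readInt();
            for (int i = 0; i < size; i++) {
                s.readObject();             // 4th iteration hits end of block data
            }
        }
    }

    // Serialize and deserialize; return the OptionalDataException if one
    // is thrown, or null if the round trip unexpectedly succeeds.
    static Exception roundTrip() throws IOException, ClassNotFoundException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(new UndercountingContainer());
        }
        try (ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()))) {
            ois.readObject();
            return null;
        } catch (OptionalDataException e) {
            return e;
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("caught: " + roundTrip());
    }
}
```

This is the same sequence of stream tokens the customer sees; the only difference is that in the real case the mismatch between the declared and actual counts is created by a concurrent remove() during HashMap.writeObject().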
HashMap.writeObject() is (abridged) as follows:

    private void writeObject(java.io.ObjectOutputStream s)
        throws IOException
    {
        // Write out the threshold, loadfactor, and any hidden stuff
        ...
        // Write out number of buckets
        ...
        // Write out size (number of Mappings)
        s.writeInt(size);
        // This is where the window is: between writing the size and
        // iterating over the entries of the HashMap, another thread
        // can remove an entry.

        // Write out keys and values (alternating)
        for (...) {
            ...
        }
    }

###@###.### 10/16/04 00:32 GMT

Customer just clarified: please note that this is not limited to HashMap; the same applies to many of the collection classes (e.g. HashSet, ArrayList, TreeMap, TreeSet, LinkedList). Changing the synopsis accordingly.

###@###.### 10/18/04 21:45 GMT
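On the customer side, the usual way to close this window is to never hand a live, concurrently mutated collection to ObjectOutputStream: either serialize while holding the same lock the writers use, or take a defensive snapshot under that lock and serialize the snapshot. A minimal sketch of the snapshot approach (method and parameter names here are illustrative):

```java
import java.io.*;
import java.util.*;

public class SafeSerialize {
    // Copy the live map under the lock that guards its mutation, then
    // serialize the private snapshot, which no other thread can touch.
    static byte[] snapshotAndSerialize(Map<String, String> live, Object lock)
            throws IOException {
        Map<String, String> snapshot;
        synchronized (lock) {
            // All threads mutating 'live' must do so under the same lock,
            // otherwise this copy can still observe a mid-update state.
            snapshot = new HashMap<>(live);
        }
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(snapshot);  // size and entries cannot diverge here
        }
        return bos.toByteArray();
    }
}
```

The same pattern applies to the other collection classes the customer mentioned (HashSet, ArrayList, TreeMap, TreeSet, LinkedList), since their writeObject() methods have the same size-then-elements structure.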