When allocating an array of statically-known size, our current hot-path zeroing strategy seems to split the zeroing into individual stores when the size is small. However, this does not happen at all for arrays of non-constant size, which sets us up for a significant penalty when allocating small arrays.

Benchmark:
  http://cr.openjdk.java.net/~shade/8146801/EmptyArrayBench.java
  http://cr.openjdk.java.net/~shade/8146801/benchmarks.jar

Performance data:
  http://cr.openjdk.java.net/~shade/8146801/notes.txt

The crux of the issue seems to be the large "rep stos" setup cost (see [1]). Note that Agner argues [2] that rep instructions are still future-proof, because they allow CPUs to select the appropriate implementation; it seems we only need to cater for the setup costs here. In C2, we avoid "rep stos" on small arrays when the size is known statically. In C1, we always do the looped mov, which is amusingly faster than C2's attempt at "rep stos"-ing small arrays. It might be worthwhile to check the array size in the zeroing path and do the looped initialization for small sizes.

  C2, non-constant size = 8: 12.610 ± 0.193 ns/op
    http://cr.openjdk.java.net/~shade/8146801/c2-field-8.perfasm
  C2, constant size = 8:      4.681 ± 0.135 ns/op
    http://cr.openjdk.java.net/~shade/8146801/c2-const-8.perfasm
  C1, non-constant size = 8:  6.839 ± 0.103 ns/op
    http://cr.openjdk.java.net/~shade/8146801/c1-field-8.perfasm
  C1, constant size = 8:      6.843 ± 0.079 ns/op
    http://cr.openjdk.java.net/~shade/8146801/c1-const-8.perfasm

[1] http://www.agner.org/optimize/optimizing_assembly.pdf, 17.9, "Moving blocks of data (All processors)"
[2] http://www.agner.org/optimize/optimizing_assembly.pdf, 17.9, "Moving data on future processors"
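
For reference, a minimal JMH sketch along the lines of the linked EmptyArrayBench.java (a hypothetical reconstruction; the actual benchmark may differ): the "field" case allocates with a length loaded from an instance field, which the JIT treats as non-constant, while "constant" uses a compile-time constant length that lets C2 emit individual zeroing stores.

// Hypothetical reconstruction of the linked benchmark, not the actual file.
import org.openjdk.jmh.annotations.*;

import java.util.concurrent.TimeUnit;

@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
@State(Scope.Thread)
public class EmptyArrayBench {

    // Length comes from an instance field, so the JIT sees a non-constant
    // size and takes the generic ("rep stos"-style) zeroing path.
    int size = 8;

    @Benchmark
    public int[] field() {
        return new int[size];
    }

    @Benchmark
    public int[] constant() {
        // Compile-time constant length: C2 can split the zeroing into
        // individual stores and avoid "rep stos" entirely.
        return new int[8];
    }
}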