Most of our performance work is focused on making C2 perform up to our expectations, and many key performance optimizations for Compact Strings are implemented in C2. However, some users may expect C1 to perform reasonably well with Compact Strings too. While we cannot reach the same performance levels with a simpler compiler like C1, we would still like to give users something that gets performance back to pre-Compact-Strings levels.
Case in point: running SPECjbb2005 with -XX:TieredStopAtLevel=1 on my dev desktop, we see a degradation from 190K to 160K, or around 15%. The profile with Compact Strings shows the time is mostly spent compressing and decompressing stuff:
1882.33 <Total>
183.43 java.lang.StringUTF16.compress(char[], int, byte[], int, int)
39.64 java.lang.StringLatin1.inflate(byte[], int, char[], int, int)
20.79 java.lang.String.length()
15.15 java.lang.String.<init>(java.lang.String)
12.76 java.lang.String.coder()
11.83 java.lang.StringUTF16.compress(char[], int, int)
11.00 java.lang.StringLatin1.compareTo(byte[], byte[])
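For reference, the hot compress path is conceptually simple. Here is a standalone sketch (my simplified reading, not the actual JDK source) of what StringUTF16.compress does: scan the char[] and, if every char fits into a single byte, copy it into the byte[]; bail out as soon as one char does not fit, so the caller keeps the UTF-16 representation.

```java
// Simplified sketch of the Compact Strings compression step
// (class and method shapes are illustrative, not the JDK internals).
public class CompressSketch {
    static int compress(char[] src, int srcOff, byte[] dst, int dstOff, int len) {
        for (int i = 0; i < len; i++) {
            char c = src[srcOff + i];
            if (c > 0xFF) {
                return 0;  // not Latin-1-compressible, caller keeps UTF-16 bytes
            }
            dst[dstOff + i] = (byte) c;
        }
        return len;  // every char fit into one byte
    }

    public static void main(String[] args) {
        byte[] dst = new byte[5];
        System.out.println(compress("hello".toCharArray(), 0, dst, 0, 5)); // 5
        System.out.println(compress("h\u20ACllo".toCharArray(), 0, dst, 0, 5)); // 0
    }
}
```

C2 vectorizes this scan-and-copy loop; C1 runs it byte by byte, which is consistent with compress dominating the C1 profile above.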
Disabling Compact Strings drops the score further, to 150K; compression and decompression go away, only to let the UTF16 accessors dominate:
1890.32 <Total>
100.05 java.lang.StringUTF16.putChar(byte[], int, int)
93.21 java.lang.StringUTF16.toBytes(char[], int, int)
56.73 java.lang.StringUTF16.getChars(byte[], int, int, char[], int)
51.78 java.lang.StringUTF16.getChar(byte[], int)
21.09 java.lang.String.length()
Note that getChars/toBytes are actually calling (get|put)Char.
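To illustrate why those accessors are hot: here is a standalone sketch (simplified, not the actual JDK internals, which use Unsafe and native byte order) of how StringUTF16 stores chars in a byte[]. Every access pays for index scaling plus byte shuffling, which is cheap when intrinsified but costly as a plain C1-compiled call:

```java
// Sketch of UTF-16 char storage in a byte[]: two bytes per char,
// assembled/disassembled on every access (illustrative layout only).
public class Utf16Sketch {
    static char getChar(byte[] val, int index) {
        index <<= 1;  // scale char index to byte index
        return (char) (((val[index] & 0xFF) << 8) | (val[index + 1] & 0xFF));
    }

    static void putChar(byte[] val, int index, int c) {
        index <<= 1;
        val[index]     = (byte) (c >> 8);  // high byte
        val[index + 1] = (byte) c;         // low byte
    }

    public static void main(String[] args) {
        byte[] val = new byte[2 * 3];
        putChar(val, 0, 'J'); putChar(val, 1, 'D'); putChar(val, 2, 'K');
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 3; i++) sb.append(getChar(val, i));
        System.out.println(sb);  // prints "JDK"
    }
}
```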
Therefore, I would guess that intrinsifying StringUTF16.(get|put)Char in C1 would help get the -XX:-CompactStrings escape hatch working.