The thresholds for selecting plain versus scientific notation in Float16 are currently the same as for double and float, namely 1e-3 and 1e7.
That is, once the decimal to represent the Float16 value is selected, if it lies in [1e-3, 1e7) then plain notation is used.
Given the narrow range of Float16, all decimals greater than or equal to 1e-3 are formatted with plain notation.
Starting at 1024.0, all Float16 values are integers and are thus formatted in plain notation.
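This can be checked without the Java API: Python's struct module supports the IEEE 754 binary16 format via the 'e' format code, so a round-trip through it performs the same rounding as Float16.valueOf (a sketch; the helper name to_float16 is mine, not part of any API):

```python
import struct

def to_float16(x: float) -> float:
    # Round x to the nearest binary16 value and widen it back to float.
    return struct.unpack('<e', struct.pack('<e', x))[0]

# binary16 has an 11-bit significand (10 stored bits), so in
# [2**10, 2**11) = [1024, 2048) the spacing between consecutive
# values is exactly 1, and it only grows from there: every
# representable value >= 1024 is an integer.
for x in (1024.25, 1500.7, 2000.5, 4321.9):
    assert to_float16(x).is_integer()
```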
As a consequence of the limited precision, Float16.toString(Float16.valueOf("8293")) generates "8296.0".
While this is correct according to the spec, it is probably confusing to most users.
They parse a small integer that is converted back to a visually different integer.
It might be less confusing if the output would instead be formatted in scientific notation, like "8.296E3".
Most users would accept this, because scientific notation already conveys some sense of imprecision happening behind the scenes.
The results are numerically equal, but the "psychological" effect is arguably different.
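The round-trip in question can be reproduced with Python's struct module, whose 'e' format code performs the same binary16 rounding (a sketch, not the Java implementation):

```python
import struct

def to_float16(x: float) -> float:
    # Round x to the nearest binary16 value and widen it back to float.
    return struct.unpack('<e', struct.pack('<e', x))[0]

# In [8192, 16384) the binary16 spacing is 8, so 8293 rounds to the
# nearest multiple of 8, which is 8296.
print(to_float16(8293.0))   # 8296.0
```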
The issue is observable starting with 2048.0, whose Float16 successor is 2050.0 instead of the naïvely expected 2049.0.
Thus, parsing "2051" and converting the result back to a string gives "2052.0", and "65519" gives "65500.0".
Using scientific notation would give "2.052E3" and "6.55E4", and less confused users.
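Both round-trips can be verified the same way (again a sketch using Python's struct 'e' half-precision format; note that the exact binary16 value behind "65500.0" is 65504, "65500" being merely its shortest decimal representation):

```python
import struct

def to_float16(x: float) -> float:
    # Round x to the nearest binary16 value and widen it back to float.
    return struct.unpack('<e', struct.pack('<e', x))[0]

# In [2048, 4096) the spacing is 2; 2051 is a tie between 2050 and
# 2052, and round-half-to-even picks 2052 (even last significand bit).
assert to_float16(2051.0) == 2052.0

# 65519 rounds down to 65504, the largest finite binary16 value.
# Float16.toString renders it as "65500.0", the shortest decimal
# that rounds back to that same binary16 value.
assert to_float16(65519.0) == 65504.0
```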
All this leads to proposing the range [1e-3, 1e3) for plain formatting.
Nothing else changes in the spec.
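Under the proposed rule, the notation choice and the resulting strings can be sketched as follows (toy code; sci mimics Java-style computerized scientific notation only for the values discussed here and is not the JDK's formatting algorithm):

```python
from decimal import Decimal

def use_plain(d: float) -> bool:
    # Proposed rule: plain notation only when the selected decimal
    # lies in [1e-3, 1e3); everything else uses scientific notation.
    return 1e-3 <= abs(d) < 1e3

def sci(d: float) -> str:
    # Render d with a single nonzero digit before the point,
    # e.g. 2052 -> "2.052E3" (a toy, not the JDK's formatter).
    dec = Decimal(repr(d))
    e = dec.adjusted()              # floor(log10(|d|))
    m = dec.scaleb(-e).normalize()  # mantissa in [1, 10)
    return f"{m}E{e}"

print(use_plain(512.0))   # True  -> stays "512.0"
print(sci(2052.0))        # "2.052E3"
print(sci(65500.0))       # "6.55E4"
```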