JDK-6366359 : Fullscreen suite DisplayModeTest failing when switching from some 8-bit to 32-bit modes on WinXP
  • Type: Bug
  • Component: client-libs
  • Sub-Component: 2d
  • Affected Version: 6
  • Priority: P2
  • Status: Closed
  • Resolution: Fixed
  • OS: windows_xp
  • CPU: x86
  • Submitted: 2005-12-22
  • Updated: 2008-02-06
  • Resolved: 2006-02-22
JDK 6 : Fixed in build b73
Description
Fullscreen suite DisplayModeTest fails when switching from some 8-bit to 32-bit display modes on WinXP. The testcase fails with a Hotspot error.

Not every mode switch from 8-bit to 32-bit results in a testcase failure, but certain specific mode switches always reproduce the failure (Hotspot error).

- This failure was found during Mustang beta milestone testing
- The bug was introduced in Mustang build B62
- Disabling D3D with -Dsun.java2d.d3d=false has no effect on the failure
- Attaching testcase to this bug report (with Hotspot error log)

- Filing the bug as high priority since it is a Mustang regression (from the 1.5 release)

Test Configuration
==================
- WinXP with Nvidia GeForce4 Ti 4800 SE
- Default resolution: 1280 x 1024, 32-bit, 60 Hertz (LCD Display with DVI)

Steps to reproduce (for always reproducible failure)
==================
- On WinXP, set JAVA_HOME to Mustang build B62 or later
- Compile and run the Java sources:
%JAVA_HOME%\bin\java DisplayModeTest

1) After the testcase launches, switch to display mode:  1024 x 768, 8-bit, 60 Hertz
2) Then switch back to default mode:  1280 x 1024, 32-bit, 60 Hertz

- The Hotspot error should occur at this time.
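The mode switch performed in the steps above can be sketched as follows. This is a minimal illustration, not the DisplayModeTest source attached to the bug; the helper name findMode and the hard-coded mode list are hypothetical. In a real run the array would come from GraphicsDevice.getDisplayModes() and the result would be passed to GraphicsDevice.setDisplayMode() while in fullscreen mode.

```java
import java.awt.DisplayMode;

class ModePicker {
    // Hypothetical helper: pick the mode matching the requested
    // width/height/bit depth, as the reproduction steps require.
    static DisplayMode findMode(DisplayMode[] modes, int w, int h, int depth) {
        for (DisplayMode m : modes) {
            if (m.getWidth() == w && m.getHeight() == h
                    && m.getBitDepth() == depth) {
                return m;
            }
        }
        return null; // no matching mode on this device
    }

    public static void main(String[] args) {
        // Mode list mirroring the test configuration above.
        DisplayMode[] modes = {
            new DisplayMode(1024, 768, 8, 60),    // step 1: 8-bit mode
            new DisplayMode(1280, 1024, 32, 60),  // step 2: default 32-bit mode
        };
        DisplayMode eightBit = findMode(modes, 1024, 768, 8);
        DisplayMode thirtyTwoBit = findMode(modes, 1280, 1024, 32);
        System.out.println(eightBit.getBitDepth() + " -> "
                + thirtyTwoBit.getBitDepth());
    }
}
```

The crash described below is triggered by the second switch, back to the 32-bit default mode.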

Comments
EVALUATION The crash happens because of a mismatch between the device bit depth and the screen SurfaceData depth. In particular, when switching from an 8-bit display mode to 32-bit, the sequence of events is as follows:

Toolkit thread: WM_DISPLAYCHANGE: initScreens() reinitializes the AwtGraphicsDevices, directdraw, etc. [0], then calls WToolkit.displayChanged(), which posts a display change event to the EDT.

EDT: Win32GraphicsEnvironment.displayChanged():
- Win32GraphicsConfigurations.displayChanged() [1]
- other displayChangeListeners.displayChanged() [2]

[1], in particular, resets the dynamicColorModel, which Win32SurfaceData uses to get the depth when creating Win32SurfaceData instances. Unfortunately, a Win32SurfaceData can be created after the display change has already happened [0] but before [1], when the dynamicColorModel is reset. This happens, for example, when we detect a surface loss immediately after a display change event: we immediately recreate the surface data, and it thinks the device is 8-bit (since the dynamicColorModel hasn't been updated yet), while on the native level the AwtGraphicsDevice has already been updated to 32-bit.

When we then attempt to render to this surface, we end up in Win32SD_GetRasInfo trying to lock via a DIB, and we die copying from the no-longer-existing device palette at line 896 in SurfaceData.c.

Since there is no easy way to detect every situation where we could crash because of a surface/device depth mismatch, I suggest that we check whether the device and surface depths are compatible at surface creation time in Win32SurfaceData.initOps() and mark the surface as invalid if there is a mismatch. Note that the ddraw offscreen surfaces already handle this situation in DDCreateOffScreenSurface.
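The proposed depth check at surface creation time can be modeled in plain Java as below. This is an illustrative sketch only: the real check lives in native code in Win32SurfaceData.initOps(), and the class and field names here (DepthCheckedSurface, surfaceDepth, valid) are hypothetical.

```java
/**
 * Model of the proposed fix: compare the depth the surface was created
 * with (taken from the possibly stale dynamicColorModel) against the
 * device's current depth, and mark the surface invalid on a mismatch
 * instead of letting rendering crash later in Win32SD_GetRasInfo.
 */
class DepthCheckedSurface {
    final int surfaceDepth;   // depth the surface believes it has
    private boolean valid;

    DepthCheckedSurface(int surfaceDepth, int currentDeviceDepth) {
        this.surfaceDepth = surfaceDepth;
        // The actual native check may be more involved; equality
        // stands in for "compatible depths" here.
        this.valid = (surfaceDepth == currentDeviceDepth);
    }

    boolean isValid() {
        return valid;
    }

    public static void main(String[] args) {
        // Stale 8-bit surface created while the device is already 32-bit:
        // marked invalid at creation, so rendering skips it and the
        // surface gets recreated instead of crashing.
        DepthCheckedSurface stale = new DepthCheckedSurface(8, 32);
        System.out.println("stale surface valid: " + stale.isValid());
    }
}
```

An invalid surface is simply skipped by the rendering code and recreated later, which is why this check avoids the crash without needing to catch every racy code path.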
Normally we'd just throw an InvalidPipeException in such a case, but since the onscreen surface can be created from places where there's no easy way to handle the exception (like WComponentPeer.replaceSurfaceData()), I suggest we mark the surface as invalid so that it is recreated the next time we attempt to render to it.

After fixing the crash I ran into another issue: weird things were happening with BufferStrategy. After [2] - when the rest of the display listeners are notified, among them the BackBufferSurfaceManager from the FlipBufferStrategy - the BBSM recreates the back-buffer surface, and it is ready to be used. (A side note: a back-buffer is also recreated when a screen surface data is replaced, which also happens in response to a display change because peers are registered as display change listeners, so it can potentially be recreated multiple times.) The problem is that the FlipBufferStrategy in the Component class does not notice that the back-buffer was updated, and continues to use the old, invalidated back-buffer (which belongs to the invalidated SurfaceData), because the cached copies of the buffers are only replaced if the size of the window has changed since the last rendering - see FlipBufferStrategy.revalidate(boolean).

One solution is to request the back-buffer each time revalidate() is called (and it is called from getDrawGraphics()). Then, if the back-buffer was updated in the peer, the up-to-date back-buffer will be used.
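The stale-cache problem and the suggested revalidate() fix can be sketched with simplified stand-in classes. These are not the real java.awt.Component.FlipBufferStrategy or peer classes; Peer, FlipStrategy, and their methods are hypothetical names used only to show the idea of re-fetching the back-buffer on every revalidation instead of only on resize.

```java
/** Stand-in for the peer, which recreates its back-buffer on display change. */
class Peer {
    private Object backBuffer = new Object();

    Object getBackBuffer() {
        return backBuffer;
    }

    void displayChanged() {
        // Display-change listener path: old buffer is invalidated and replaced.
        backBuffer = new Object();
    }
}

/** Stand-in for the strategy that previously cached the buffer until resize. */
class FlipStrategy {
    private final Peer peer;
    private Object cachedBackBuffer;

    FlipStrategy(Peer peer) {
        this.peer = peer;
        this.cachedBackBuffer = peer.getBackBuffer();
    }

    // The fix: always re-ask the peer for the current back-buffer,
    // not only when the window size changed.
    void revalidate() {
        Object current = peer.getBackBuffer();
        if (cachedBackBuffer != current) {
            cachedBackBuffer = current; // drop the invalidated buffer
        }
    }

    Object getDrawBuffer() {
        revalidate(); // in the real code, revalidate() runs in getDrawGraphics()
        return cachedBackBuffer;
    }
}
```

With this change, a back-buffer recreated by a display-change listener is picked up on the very next getDrawGraphics() call, instead of the strategy rendering into a buffer that belongs to an invalidated SurfaceData.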
02-02-2006

EVALUATION This bug was introduced by one of the fullscreen fixes that went into b33 (or possibly b35). A simplified test would just enter fullscreen mode, change the display mode to one with a different bit depth, and attempt to blit a volatile image to the screen (or a back-buffer). Reproducible both with and without D3D enabled.
22-12-2005