JDK-8089846 : Blur effects no longer provide a side effect of using IDENTITY transform for inputs
  • Type: Bug
  • Component: javafx
  • Sub-Component: graphics
  • Affected Version: 8u20
  • Priority: P3
  • Status: Closed
  • Resolution: Not an Issue
  • Submitted: 2014-07-30
  • Updated: 2015-07-24
  • Resolved: 2015-07-24
Fix Version: JDK 9 (Resolved)
Related Reports
Description
I've been testing out Java 8u20 Early Access, and I've found a serious issue in the way that Effects and Transforms are applied, which can hopefully be fixed before general availability, since it directly affects my company's product.

The bug is that when you have a Node with both a Scale transformation and a blur Effect applied to it, the Scale is ALWAYS applied FIRST now, and the blur Effect is then applied to the scaled-down result.

This is happening even if you apply the Blur Effect to the node itself, and then apply the Scaling transform to a Group that contains the Node.

This is a huge problem for my company, because our JavaFX product uses a BoxBlur effect as a form of anti-aliasing on images, prior to scaling them down.

We are developing a tool that can view and edit pages in PDF files. This involves taking large image files containing text and placing them on an ImageView. We want to apply the BoxBlur effect to the ImageView and THEN scale the blurred image down. The user is able to scale the image in and out, so we adjust the Blur distance on the fly accordingly. Until Java 8u20 this worked beautifully, and made the text readable even when zoomed far out. In Java 8u20, however, it's impossible to do what we need, and the text is an unreadable mess no matter what we do.

-------------------------------------------

Here's a simple sample program that illustrates the problem:

import javafx.application.Application;
import javafx.scene.Group;
import javafx.scene.Scene;
import javafx.scene.control.Label;
import javafx.scene.effect.BoxBlur;
import javafx.scene.image.Image;
import javafx.scene.image.ImageView;
import javafx.scene.layout.VBox;
import javafx.scene.text.Font;
import javafx.scene.text.Text;
import javafx.scene.transform.Scale;
import javafx.stage.Stage;

public class BlurOrderBugDemo extends Application {
    @Override
    public void start(Stage primaryStage) {
        Text text = new Text("Here Is Some Sample Text. Here Is Some Sample Text\nHere Is Some Sample Text. Here Is Some Sample Text");
        text.setFont(Font.font("Courier New", 72));

        Image unblurredTextImage = text.snapshot(null, null);
        text.setEffect(new BoxBlur(5, 5, 1));
        Image preBlurredTextImage = text.snapshot(null, null);

        ImageView scaledOnlyImageView = new ImageView(preBlurredTextImage);
        scaledOnlyImageView.getTransforms().add(new Scale(.2, .2));

        ImageView scaledAndBlurredImageView = new ImageView(unblurredTextImage);
        scaledAndBlurredImageView.setEffect(new BoxBlur(5, 5, 1));
        scaledAndBlurredImageView.getTransforms().add(new Scale(.2, .2));

        VBox root = new VBox();
        root.getChildren().addAll(new Label("What we should see:"), new Group(scaledOnlyImageView),
                new Label("What we actually see:"), new Group(scaledAndBlurredImageView));

        Scene scene = new Scene(root);
        primaryStage.setTitle("Transform and Effect order bug demo");
        primaryStage.setScene(scene);
        primaryStage.show();
    }

    public static void main(String[] args) {
        launch(args);
    }
}
Comments
The original report represents a developer relying on a side effect that reduced quality for the Blur effects. We have no plans to revert that fix and the only discussion here was on how to better implement the code they were using that relied on an undocumented side effect. The change in behavior of the Blur effects being reported on here is intentional and so this issue is "not an issue" in the end. Closing as such.
24-07-2015

We need to defer this out of 8u40 since the solution will be somewhat complex, and we haven't yet sorted out all of the trade-offs. I proposed to defer it to 9, but it seems a good candidate to port to 8u60.
26-11-2014

SQE is ok to defer from 8u40.
26-11-2014

Adding another test program (ManualMipmapTest4.java) which adds an "area averaging" type of algorithm that was adapted from Java2D and JAI. In my opinion the trilinear algorithm produces much more stable and smooth images than this algorithm which can look a bit like plaid output from the synthetic grid image created for testing. The new area averaging algorithm is shown in the lower right corner and is quite compute intensive so it slows down the operation of the test program quite a bit. (Ignore the "custom" code in the test file, that was a testbed for trying out different kinds of filtering kernels and comparing them to mipmapping and the boxblur algorithm, but nothing has panned out so far from those tests.)
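For reference, the core of an "area averaging" downscale in the spirit the comment describes reduces, for integer factors, to averaging each factor x factor block of source pixels. This is an illustrative sketch on a plain 2-D array, not the code in the attached ManualMipmapTest4.java:

```java
// Sketch of integer-factor area averaging: each destination pixel is
// the mean of the factor x factor source block that maps onto it.
public class AreaAverage {
    static double[][] downscaleByAveraging(double[][] src, int factor) {
        int h = src.length / factor, w = src[0].length / factor;
        double[][] dst = new double[h][w];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                double sum = 0;
                for (int dy = 0; dy < factor; dy++)
                    for (int dx = 0; dx < factor; dx++)
                        sum += src[y * factor + dy][x * factor + dx];
                dst[y][x] = sum / (factor * factor);
            }
        }
        return dst;
    }
}
```

The nested per-pixel summation is why the comment notes the algorithm is compute intensive compared to trilinear filtering.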
02-10-2014

Adding another version of the MipMap test (ManualMipmapTest3.java) that shows the benefits of the standard LINEAR_MIPMAP_LINEAR mode that most graphics cards provide. The 4 panes each describe the technique they are using:
- Normal - just an ImageView with a scale transform (using bilinear interpolation from the base image by default).
- BoxBlur - the technique from the example in the description and comments.
- Simple Mipmap - bilinear interpolation with mip-mapping, but only using a single mipmap image (the next smallest image based on scale).
- Trilinear Mipmap - simulating bilinear interpolation with mipmaps, but also blending the results of bilinear interpolation from the next smaller and next larger maps, otherwise known as "LINEAR_MIPMAP_LINEAR" in OpenGL terms. (This is simulated by using two of the above simulated mipmap imageviews, but changing the opacity of the upper view based on the relative interpolation factor between levels.)

The Trilinear pane provides the smoothest transitions across all of the scales and is often the clearest of the 4, though there are some scale values where one of the other panes is slightly clearer for a small range. Still, the fact that the trilinear pane never looks more than very slightly off, often looks much clearer than the other solutions, and is supported for free in hardware makes it an excellent solution for dynamic scaling.
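The opacity blend that simulates trilinear filtering comes down to the fractional part of the continuous mipmap level. A sketch of just the weights (hypothetical helper names, not the test's actual code; the upper view's opacity would be set to the second weight):

```java
// Blend weights for simulated LINEAR_MIPMAP_LINEAR between two
// adjacent mipmap levels, for a downscale factor 0 < scale <= 1.
public class TrilinearBlend {
    static double[] trilinearWeights(double scale) {
        double level = -Math.log(scale) / Math.log(2.0); // continuous mipmap level
        double frac = level - Math.floor(level);         // interpolation factor
        // weight of the floor(level) map, weight of the next smaller map
        return new double[]{1.0 - frac, frac};
    }
}
```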
27-09-2014

I should add that Chien has recently implemented LINEAR_MIPMAP_LINEAR support for 3D mesh textures and it would be fairly straightforward to adopt the same code into the NGImageView implementation, though there are some complications:
- The support may make assumptions that L_M_L is used if and only if the image is a mesh image (in particular, it hard-codes a REPEAT wrap type).
- Tiled images - NGImageView has to deal with images that are larger than the largest supported hardware texture by breaking them up into multiple texture "tiles". Unfortunately, the code that breaks up such large images into a bunch of textures does so by padding the sub-images with 1 or 2 shared pixel rows/columns - only enough to account for simple bilinear filtering that spans between tiles. If we are then mipmapping those textures we would need to increase the padding enough to support the maximum number of mipmap levels we want to support.
- Alternatively, NGImageView could provide hardware mipmapping only for images under the maximum texture size (4k x 4k in most cases), but I'm guessing that extremely small scales are often correlated with very large source image sizes. The submitter didn't indicate the size of the images that were used in the application in question...?
- Tiled images could also be supported in a hybrid mode - enough padding added for just a few levels of mipmap, and then either decide that we've provided enough mipmap levels for the quality we want to provide, or switch to a single mipmap tile when the scale gets small enough that the mipmap image for that scale factor fits into a single texture.
The amount of padding needed increases for each level of scaling as such:
- level 0 - no scale - used down to 0.5 scale - 1 shared row/column
- level 1 - 0.5 scale - used down to 0.25 scale - 2 shared rows/columns
- level 2 - 0.25 scale - used down to 0.125 scale - 4 shared rows/columns
- level 3 - 0.125 scale - used down to 0.0625 scale - 8 shared rows/columns
and so forth. Note that trilinear filtering for a scale of 0.6 would use both level 0 and level 1 (simple bilinear mipmapping would only use level 0), so providing only these 4 levels would only allow us to use trilinear filtering down to a scale of 0.125. If the image is less than 8 times the maximum texture size (i.e. 32k x 32k) then we could simply pad it by 8 (or fewer) rows/columns, generate 4 (or fewer) levels of mipmap for it, use trilinear filtering on the tiles for scales larger than 0.125, and then create a 4k x 4k version of it with trilinear filtering on that for scales below 0.125. This would, of course, require a huge amount of texture space, hopefully mitigated by the fact that not all tiles would be visible for such a large image even with minute scaling. Also note that a 32k x 32k image would use 4gb of vram just for the main image itself, so such an application would already be restricted to desktops with lots of vram, and they would already be required to override our default vram limits (currently set at 256mb) just to display the image without any mipmapping.
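The progression above can be restated as a pair of tiny helpers (hypothetical names; this just restates the arithmetic - the level doubles the shared padding each step, and the level for a given scale is the floor of -log2(scale)):

```java
// Arithmetic behind the mipmap padding table: which level serves a
// given downscale factor, and how many shared rows/columns that
// level's tiles need for filtering across tile boundaries.
public class MipmapPadding {
    // Mipmap level for a downscale factor (0 < scale <= 1):
    // level 0 covers (0.5, 1.0], level 1 covers (0.25, 0.5], etc.
    static int levelFor(double scale) {
        return (int) Math.floor(-Math.log(scale) / Math.log(2.0));
    }

    // Shared padding rows/columns needed at a level: 1, 2, 4, 8, ...
    static int sharedRowsAt(int level) {
        return 1 << level;
    }
}
```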
27-09-2014

Integer-dimensioned BoxBlurs are faster than Gaussians in software because they can use an iterative algorithm (add in pixel+N/2, subtract out pixel-N/2, instead of summing N pixels). But in a shader both are the same cost, because GPUs cannot do iterative procedures, so they really do add up all N pixels at a time. In fact, we use the same shader for both in hw; we just use a kernel of all 1/N's for the BoxBlur. The simplistic iterative software algorithm is one of the reasons why BoxBlur floors its dimensions, but it wouldn't be too hard to update the software algorithms to process a non-integer sized box kernel while still doing so iteratively (it gets slightly more complicated in that you have to process the +/-N/2 pixels with a fractional value first, then accumulate them as whole values after). We just haven't done that yet. Until then, Gaussian does process fractional dimensions right now.
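The iterative algorithm described above can be sketched as a 1-D sliding window: prime the sum once, then add the pixel entering the window and subtract the one leaving it for each output pixel. This is a simplified, edge-clamped illustration with hypothetical names, not the actual Prism implementation:

```java
// Sliding-window box blur: O(1) work per pixel regardless of the
// kernel size n, instead of summing n taps per pixel.
public class IterativeBoxBlur {
    // Blur a single row with an odd box size n, clamping at the edges.
    static double[] boxBlur1D(double[] src, int n) {
        int half = n / 2;
        double[] dst = new double[src.length];
        double sum = 0;
        // Prime the window for pixel 0.
        for (int i = -half; i <= half; i++) {
            sum += src[clamp(i, src.length)];
        }
        for (int x = 0; x < src.length; x++) {
            dst[x] = sum / n;
            // Slide: add the entering pixel, drop the leaving one.
            sum += src[clamp(x + half + 1, src.length)];
            sum -= src[clamp(x - half, src.length)];
        }
        return dst;
    }

    static int clamp(int i, int len) {
        return Math.max(0, Math.min(len - 1, i));
    }
}
```

A full 2-D box blur would run this once over rows and once over columns, which is what makes the software path so much cheaper than a true 2-D convolution.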
13-08-2014

FWIW, I did try GaussianBlur way back when we first started development, and as I recall it produced very similar results to BoxBlur after scaling. I just went with BoxBlur because it was supposed to be the less costly of the two.
13-08-2014

I added Brian Burkhalter to the watch list as he has some experience in how JAI used to achieve small downscales with reasonable quality.
13-08-2014

In looking further at the progression of results in the two algorithms, there are areas where each is better than the other. The boxblur algorithm gets a shift somewhere around .70X where it suddenly gets extra murky/fuzzy, while the bilinear/mipmap image stays pretty sharp and consistent. When we hit .5, the bilinear/mipmap image suddenly shifts and has a jump in its murky/fuzzy-ness as well. Another jump happens for the boxblur image near .35, where it again has a jump in murky/fuzzy-ness. This is basically due to the sudden shifts in the mipmap choice or the box kernel size at those values. One thing to note is that the BoxBlur kernel currently floors its dimensions (it is possible to use non-integer box sizes by varying the factors on the edges of the box, but the code doesn't currently support that), so when the chosen blur sizes go past an integer there is a sudden change in the size and coefficients of the kernel. A Gaussian blur would grow more gradually as the dimensions change, but unfortunately might not have the desired AA effect for scaling the image. Hardware mipmapping has a technique where it can linearly blend between two stages of the mipmaps, which avoids such jumps as you change the scale. Manually computing a custom kernel for each scale size that shifts gradually rather than suddenly bumping up by a whole box value would similarly smooth out the jumps, but we don't have an FX API for supplying custom computed kernels at this time.
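The "non-integer box sizes by varying the factors on the edges" idea can be sketched as a 1-D kernel whose two edge taps carry the fractional remainder of the width. This is only an illustration of the idea (hypothetical helper; as noted, no FX API supports this today):

```java
import java.util.Arrays;

// Normalized 1-D box kernel of fractional width s (s >= 1): interior
// taps get full weight, the two edge taps split the fractional part.
public class FractionalBox {
    static double[] fractionalBoxKernel(double s) {
        int inner = (int) Math.floor(s);
        double frac = s - inner;
        if (frac == 0.0) {
            double[] k = new double[inner];
            Arrays.fill(k, 1.0 / s);           // ordinary integer box
            return k;
        }
        double[] k = new double[inner + 2];
        Arrays.fill(k, 1, inner + 1, 1.0 / s);  // full-weight interior taps
        k[0] = k[inner + 1] = frac / (2.0 * s); // fractional edge taps
        return k;                               // weights sum to 1
    }
}
```

Because the edge weights grow continuously with s, the kernel changes smoothly as the blur size crosses an integer, avoiding the sudden coefficient jump described above.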
13-08-2014

Thanks for the test case. Clearly there are better and worse kernels to use for filtering scaled images and my test was off the top of my head so I'm not entirely sure that it represents the full capabilities that modern hardware presents for mipmaps. The technique of box-blurring and then scaling using linear interpolation is roughly equivalent to scaling with a larger sampling kernel. Bilinear interpolation, which can be had for free on most modern GPUs, is simply the smallest smoothing kernel. Mipmapping also provides more sophisticated algorithms than "choose the nearest power of 2 and bilinearly sample it". Specifically your technique could be duplicated, mathematically, with a shader that simply used a box kernel of all 1/N coefficients in place of the standard "pass through" sampler normally used for rendering images. That would take less memory than exploding the representations and wouldn't require intermediate renderings as would using an effect. That might produce better results than either since my simplistic mipmap was using cascading filters of 2x2 (with the last one being the implicit 2x2 kernel in the BILINEAR hardware sampler) and yours was using an NxN filter in the BoxBlur stage that then fed into the implicit 2x2 BILINEAR sampler filter. A better technique might be to use a manual (larger) NxN box filter in the shader sampling from a texture unit that had its bilinear mode turned off. I say it "might produce" better results because it may be that the NxN + 2x2 filters involved in your technique might be synergistic in some way that I haven't discovered yet.
11-08-2014

Hi Jim, thanks for the test case! It definitely produces nicer results than plain scaling. I put together a modification of your program that uses the same blur-scaling algorithm that I'm using in my company's product, and compares it against your mipmap algorithm, so you guys can see exactly how we've been using it. It will only work in 8u11 and before, of course. Since I can't do attachments, I'll just post the text of the program at the bottom of this comment.

On images with lots of text with thin lines (such as the Courier New font used in my sample code above), the blur-scaling technique we use seems to produce text that is somewhat smoother and more readable when scaled far down than the test mipmapping technique, and the blurring and scaling still happens fast enough on the fly that it can be smoothly scaled up and down with the slider, even with large images (my company's program actually uses a slider for zooming in and out as well).

FWIW, I think that Windows Photo Viewer actually uses the very same blur-then-scale technique that we do, except that theirs is "integer-based" like your mipmap (i.e., when the scale reaches 50% they abruptly apply a BoxBlur of 3 pixels, at 25% it changes to 5 pixels, and so on), whereas we gradually increase the blur distance in an "analog" manner using a logarithm. You can tell that it's the same technique, or at least very similar, by opening the same file in both this sample program and Photo Viewer, scaling to 50% intervals (50%, 25%, 12.5%, etc.) or just below, then scaling the image in Photo Viewer to the same size (if you scale slowly in it you can visibly tell when the blur distance increase "kicks in") and comparing them side-by-side. To my eye, they look completely identical at the intervals.

I understand why blurring prior to scaling results in data loss when scaling up, but it seems to me that scaling first results in data loss when scaling down.
In any event, it would be useful, at least for us, if there were some means by which developers could control the order that effects and transforms are applied to a Node to get the desired results (assuming that's realistically possible; I don't know how your rendering works beneath the covers, so I don't know if that's a reasonable request or not). Anyhow, thanks for your time.

-----------------------------------------

public class ManualMipmapTest extends Application {
    static FileChooser.ExtensionFilter imgfilter = new FileChooser.ExtensionFilter("Img", ".jpg", ".gif", ".jpeg", ".png");
    List<Image> scaledImages = new ArrayList<>();
    ImageView blurScaledView;
    ImageView mipmapView;
    Label scaleLabel;
    double scale = 1.0;

    @Override
    public void start(Stage stage) {
        blurScaledView = new ImageView();
        mipmapView = new ImageView();
        Button loadButton = new Button("Load Image");
        loadButton.addEventHandler(ActionEvent.ACTION, e -> {
            FileChooser dlg = new FileChooser();
            dlg.setSelectedExtensionFilter(imgfilter);
            File file = dlg.showOpenDialog(stage.getScene().getWindow());
            Image img = new Image(file.toURI().toString());
            setImage(img);
        });
        Slider scaleSlider = new Slider(0.0, 1.0, 1.0);
        scaleSlider.valueProperty().addListener((ObservableValue<? extends Number> observable, Number oldValue, Number newValue) -> {
            setScale(newValue.doubleValue());
        });
        scaleLabel = new Label("Scale = 1.0");
        ScrollPane blurScalePane = new ScrollPane(new Group(blurScaledView));
        blurScalePane.setPrefSize(10000, 10000);
        ScrollPane mipmapPane = new ScrollPane(new Group(mipmapView));
        mipmapPane.setPrefSize(10000, 10000);
        mipmapPane.hvalueProperty().bindBidirectional(blurScalePane.hvalueProperty());
        mipmapPane.vvalueProperty().bindBidirectional(blurScalePane.vvalueProperty());
        Label blurScaleLabel = new Label("Blur Scaling");
        Label mipmapLabel = new Label("Mipmap Scaling");
        VBox blurredColumn = new VBox(blurScalePane, blurScaleLabel);
        VBox mipmappedColumn = new VBox(mipmapPane, mipmapLabel);
        HBox scaledPanesHolder = new HBox(blurredColumn, mipmappedColumn);
        HBox controlsHolder = new HBox(loadButton, scaleSlider, scaleLabel);
        VBox root = new VBox(scaledPanesHolder, controlsHolder);
        stage.setScene(new Scene(root));
        stage.setWidth(1000);
        stage.setHeight(600);
        stage.show();
    }

    void setImage(Image img) {
        blurScaledView.setImage(img);
        blurScaledView.setClip(new Rectangle(img.getWidth(), img.getHeight()));
        scaledImages.clear();
        scaledImages.add(img);
        ImageView tmpview = new ImageView();
        tmpview.getTransforms().setAll(new Scale(.5, .5, 0, 0));
        while (scaledImages.get(scaledImages.size() - 1).getWidth() > 100.0) {
            tmpview.setImage(scaledImages.get(scaledImages.size() - 1));
            tmpview.setClip(new Rectangle(tmpview.getImage().getWidth(), tmpview.getImage().getHeight()));
            scaledImages.add(tmpview.snapshot(null, null));
        }
        setScale(scale);
    }

    void setScale(double scale) {
        this.scale = scale;
        scaleLabel.setText("Scale = " + scale);
        blurScaledView.getTransforms().setAll(new Scale(scale, scale, 0, 0));
        double selectorNum = scale < 1.0 ? Math.abs(Math.log(scale) / Math.log(2.0)) : 0;
        double blurDistance = scale < 1.0 ? selectorNum * 2.0 + 1.0 : 0;
        blurScaledView.setEffect(new BoxBlur(blurDistance, blurDistance, 1));
        mipmapView.setImage(scaledImages.get((int) Math.floor(selectorNum)));
        double mipmapScale = blurScaledView.getBoundsInParent().getWidth() / mipmapView.getImage().getWidth();
        mipmapView.getTransforms().setAll(new Scale(mipmapScale, mipmapScale, 0, 0));
    }
}
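The "analog" blur-distance formula used in setScale above can be isolated for clarity (same arithmetic as the listing, just standalone; the class name is illustrative):

```java
// Blur distance grows with |log2(scale)|: approaching 1px at full
// scale, 3px at 0.5, 5px at 0.25, and so on, instead of jumping at
// power-of-two boundaries.
public class BlurScaling {
    static double blurDistanceFor(double scale) {
        if (scale >= 1.0) return 0.0;                 // no blur at or above identity
        double selectorNum = Math.abs(Math.log(scale) / Math.log(2.0));
        return selectorNum * 2.0 + 1.0;
    }
}
```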
10-08-2014

I'm attaching a proof of concept test case for doing manual mipmaps. Run ManualMipmapTest and it comes up with a default image that has a lot of detail. Use the scale slider to preview the results of a regular ImageView scale and a power-of-2 manual mipmap scale. There is also a button to load other images for further testing...
07-08-2014

There is already a bug submitted against image scaling quality: RT-13296. A suggestion for a new workaround:

int MAX_SCALED = 4;
Image scaledImages[] = new Image[MAX_SCALED + 1];

void loadImagesScaled(String url) {
    scaledImages[0] = new Image(url);
    ImageView tmpiv = new ImageView();
    tmpiv.setScale(0.5, 0.5);
    for (int i = 1; i <= MAX_SCALED; i++) {
        tmpiv.setImage(scaledImages[i - 1]);
        scaledImages[i] = tmpiv.snapshot(null, null);
    }
}

void setScale(double renderScale) {
    int i = 0;
    while (i < MAX_SCALED) {
        if (renderScale > 0.5) break;
        renderScale *= 2.0;
        i++;
    }
    theView.setImage(scaledImages[i]);
    theView.setScale(renderScale, renderScale);
}

All of the scaled variants are produced at load time, so the only continual or on-the-fly operation is to set the image and the modified scale onto the view. No box blurs are needed in this case and the frame-to-frame render time should be much faster, though memory usage may be higher (though some system of weak references and regenerating the images as needed might help that).
06-08-2014

It seems as if this is not a bug but an unfortunate side effect of our scaling algorithm. Hopefully we can settle on a workaround, but it is likely that we should close this bug as WontFix and open a new one to improve our scaling.
06-08-2014

There are no plans to change the way that the blur effects use transforms. Intuitively, it makes the best sense to perform the blur at the desired resolution that the node will appear on the output device so that we don't lose detail when it is scaled up. Also, in most cases, rendering the node at the lower resolution would improve it. Text, for example, renders better if it knows how big it will be rather than rendering at a larger resolution and then being down-scaled using image scaling techniques.

In some sense you have already discovered the issue with text "downscaling by image", which prompted you to spend some time to come up with the workaround you did. Ironically, the reason for your workaround is primarily why we don't want to do it the old way. It's best to move forward and fix image rendering so that it does the best job under small scale situations and remove the need for your workaround entirely. Basically, all rendering should be done at the resolution required for the screen real estate it will occupy, and all node types should do the best job possible at rendering under all possible transforms. The change to Blurs took us a fair way in that desirable direction; the side effect was that, even as it fixed rendering quality in a number of other areas, it created one way for the shortcomings of one of the few nodes that doesn't deal well with small scales - the ImageView node - to surface.

I could point out that there may be other effects that perform a similar "forced Identity rendering" technique, but any of those would be ripe for us "fixing" later and breaking any workarounds, so I hesitate to recommend one here. In particular, the real culprit here is that ImageView cannot scale down very well. Another issue to point out is that you are already specifying a lot of busy work for the system: performing a huge blur on a very large object just so that most of that work can be thrown away in a (poorly implemented) downscale.
There should be a fair amount of "on the fly" work that could be substituted before you exceed the time that was required for your former workaround. With respect to doing your own offline scaling, you wouldn't have to continually snapshot anything. All you need is a variety of power-of-2 scaled images. 1.0, 0.5, 0.25, 0.125, etc. Then use the next largest image needed for the current zoom factor (assuming your scale factor is uniform in X & Y). Having just the .5 and .25 variants available should provide a decent downscale of .2 by substituting the 0.25 image for that scale. This is essentially what Mipmap does - generates power-of-2 scaled versions of the original and then provides 2 or 3 variant algorithms for which is chosen under which scale factors. (Upscaling from identity should never need any Mipmap variants larger than 1.0, though.)
06-08-2014

Jim: ***I'm curious what the reason for the blur is - is it there to work around the poor output of Bilinear as I mentioned above***

Yes. As you can see from the sample images above, prior to 8u20 it allowed images that contained text to remain readable when scaled down.

***Your pre-blur technique can cover the problems a bit, and it worked due to a side effect of the short-cuts we originally took in the blur code that ignored the transform and applied it to the blurred output rather than to the input***

Would it be possible to provide an option that would allow us to specify that we want to apply blur effects prior to scaling transforms, as before? How about when the Node that has the blur effect is contained by a node with a scale transform, rather than both being applied to the same node? Intuitively, I would expect the blur effect to be applied first in that case, and the scale transform to be applied to the blurred output.

***If we Mipmap scaled the images, then you wouldn't even need the blur in the first place.***

That might work. Another option, for us, would be if the smoothing on ImageView produced better results. Previously, instead of scaling, I just used ImageView's built-in fit-to-size abilities and simply changed the size of the ImageView. I switched to the blur-then-scale method, however, because even with setSmooth() set to true, the quality of the image was too poor to read the text.

***About the only mechanism that promises to do such a "snapshot with specific rendering attributes unrelated to the hierarchy" is, as you point out in your example code, the "snapshot" mechanism.***

Unfortunately, while I used that in my example, the images we're working with in the actual application are much too large, and hence snapshotting much too slow, to continually snapshot them in a background ImageView and swap the results into the onscreen ImageView as the user is scaling.
05-08-2014

This is heavily related to, and perhaps a duplicate of, RT-13296 depending on why the technique of "blur then scale" was being exploited.
05-08-2014

The primary problem is that a single scale of .2, .2 on an image using Bilinear interpolation does not produce good results. Unfortunately, that is the only scaling technique we currently provide. Your pre-blur technique can cover the problems a bit, and it worked due to a side effect of the short-cuts we originally took in the blur code that ignored the transform and applied it to the blurred output rather than to the input, but a better down-scaling algorithm is really the right answer in both cases. If we Mipmap scaled the images, then you wouldn't even need the blur in the first place. The JDK used to provide a computationally expensive "average all pixels that map into each destination pixel" technique that was very slow for on-the-fly purposes, but reasonably OK for doing once to an image and then reusing the results.

I'm curious what the reason for the blur is - is it there to work around the poor output of Bilinear as I mentioned above? Or is there some other reason to blur the images?

We have a number of mechanisms that intercept rendering and invoke a temporary rendering of a subtree with different attributes so that they can perform some manipulation of the output, but I can't think of any off-hand that would reliably end up asking the subtree to render with an identity matrix. Also, any of those mechanisms are "implementation details" and are not meant to be a behavioral contract - as is the case here with the way that we rendered the inputs for blurs using IDENTITY and then fixed that issue. About the only mechanism that promises to do such a "snapshot with specific rendering attributes unrelated to the hierarchy" is, as you point out in your example code, the "snapshot" mechanism. All others have no specification for how they will affect rendering of their subtrees.
05-08-2014

This bug was introduced by the fix for RT-13275, and is the result of an intentional change to avoid cases where we were blurring unnecessarily. Jim can comment on whether there is a workaround and also evaluate whether this issue can be fixed without reintroducing RT-13275.
31-07-2014

Hopefully Jim can find a workaround for you. It's a shame that this problem was not discovered sooner, but 8u20 is supposed to ship in August and it is too late to fix this bug.
31-07-2014

Thanks for the quick reply, Kevin! Glad you guys are looking into this. I've just updated the issue with a test case that illustrates it. In my test case, I applied both the BoxBlur effect and the Scale transform to the same ImageView (which worked for us until 8u20), but even if I apply the BoxBlur to the ImageView, and the Scale to a Group containing the ImageView, I still get the same result.
31-07-2014

Thank you for reporting this. We're sorry, but it's too late for 8u20 (which is in final staging / testing and not accepting product bug fixes such as this), so we will target it for 8u40. Jim might be able to suggest a workaround for you. Can you provide a simple test case?
30-07-2014