Floyd-Steinberg dithering possible in GLSL?

Hi, I was just wondering how you would go about applying the Floyd-Steinberg dithering algorithm in GLSL. The main thing I am not sure about is how I would store the future pixels:

pixel[x+1][y]   := pixel[x+1][y]   + 7/16 * quant_error
pixel[x-1][y+1] := pixel[x-1][y+1] + 3/16 * quant_error
pixel[x][y+1]   := pixel[x][y+1]   + 5/16 * quant_error
pixel[x+1][y+1] := pixel[x+1][y+1] + 1/16 * quant_error

I only thought of this because I saw that there is a halftone shader in NVIDIA's shader library and thought it would be cool if I could get a Floyd-Steinberg version going. Any ideas would be great. Thanks.
Floyd-Steinberg dithering, aka error diffusion, is a serial algorithm -- it depends on processing all the pixels in a defined order. In GLSL, processing of fragments is scheduled by the GPU in an arbitrary order, and you can't share data between them. You may wish to investigate ordered dithering algorithms.
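For example, here is a rough sketch of what a 4x4 ordered (Bayer) dither could look like in a fragment shader. The uniform names are placeholders, and storing the normalized 4x4 Bayer thresholds in a small GL_NEAREST/GL_REPEAT texture is just one convenient setup -- not anything from a particular library:

// Ordered (Bayer) dithering in a fragment shader -- each fragment reads only
// its own threshold, so no neighbour writes are needed.
uniform sampler2D sceneTex;   // full-screen texture the scene was rendered into
uniform sampler2D bayerTex;   // assumed: 4x4 texture holding the normalized Bayer
                              // matrix, filtered with GL_NEAREST, wrapped with GL_REPEAT
uniform vec2 sceneSize;       // scene texture size in pixels

void main()
{
    vec2 uv = gl_FragCoord.xy / sceneSize;
    vec3 color = texture2D(sceneTex, uv).rgb;

    // Tile the 4x4 threshold pattern across the screen; dividing the pixel
    // coordinate by 4 lands on one texel of the 4x4 pattern per screen pixel.
    float threshold = texture2D(bayerTex, gl_FragCoord.xy / 4.0).r;

    // Quantize each channel to 0 or 1 against the local threshold.
    gl_FragColor = vec4(step(vec3(threshold), color), 1.0);
}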

(There has been some research on parallelizing error diffusion, for example the paper "Optimal Parallel Error-Diffusion Dithering". At first glance, though, it looks to be more suited to something like CUDA than GLSL.)

[Edited by - dwahler on June 24, 2008 12:38:41 AM]
Can't you just render your scene to a texture, then sample that texture and do the dither algorithm as you render again to the framebuffer?
Sig: http://glhlib.sourceforge.net
an open source GLU replacement library. Much more modern than GLU.
float matrix[16], inverse_matrix[16];
glhLoadIdentityf2(matrix);
glhTranslatef2(matrix, 0.0, 0.0, 5.0);
glhRotateAboutXf2(matrix, angleInRadians);
glhScalef2(matrix, 1.0, 1.0, -1.0);
glhQuickInvertMatrixf2(matrix, inverse_matrix);
glUniformMatrix4fv(uniformLocation1, 1, GL_FALSE, matrix);
glUniformMatrix4fv(uniformLocation2, 1, GL_FALSE, inverse_matrix);
That works for methods like thresholding and ordered dithering, but error diffusion requires continuously updating the input texture. In GLSL, all buffers are either read-only or write-only.
While you are looking at alternative dithering techniques, you might consider Atkinson dithering (here and here), which some consider to be the "prettiest" dithering around.
First of all, thanks for all your replies:
Quote:(There has been some research on parallelizing error diffusion, for example the paper "Optimal Parallel Error-Diffusion Dithering". At first glance, though, it looks to be more suited to something like CUDA than GLSL.)

Yeah it starts talking about using a linear array of processing elements:
Quote:The parallel algorithm we propose uses a linear array (Fig. 6) of processing elements (PE's). Each such PE has input/output capability and can communicate directly with its left and right neighbours. The first and last processors may have only one neighbour each or may be connected to each other.
Which I don't think is possible in GLSL.

Thanks, rgoer, for the suggestion of Atkinson dithering, but just like Floyd-Steinberg it requires processing pixels in a particular order and both reading and writing the same buffer. Still, I am sure this information will come in handy in the future.
Quote:... error diffusion requires continuously updating the input texture. In GLSL, all buffers are either read-only or write-only.

I suppose at the moment the best thing I can do is take your advice and focus on ordered dithering instead, since it doesn't touch neighbouring pixels and doesn't need to read and write the same buffer.

Thanks again.
Edit: Also, does anyone know how the halftone shader works, so I could maybe get a GLSL version going? The whitepaper link is just the generic one they give for all of the image-processing shaders. I might open a new thread for this, though.

[Edited by - 4fingers on June 23, 2008 5:48:52 PM]
It looks like they render the scene to a texture that has one pixel per halftone dot, and then re-render that with the halftone texture tiled across the image at the same spacing. The dots are precomputed in a 3D texture in which the x-y plane corresponds to the maximum bounding square of a dot, and the z-axis corresponds to the radius. The pixel shader just samples the intensity from the rendered scene, and uses that to find a texel in the halftone volume.

If you look closely at the sample image, you'll notice that the dots aren't quite circular, because the scene texture is sampled with bilinear interpolation. This allows the halftone radius to vary slightly within a grid square.
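Roughly, the fragment shader might look something like this. The uniform names, cell size, luminance weights, and the exact encoding of the halftone volume are my own guesses at the setup described above, not NVIDIA's actual shader:

// Screen-space halftoning sketch.
uniform sampler2D lowResScene;  // scene rendered at one pixel per halftone cell, bilinear filtering
uniform sampler3D halftoneVol;  // each z-slice holds a precomputed dot of a different size,
                                // x/y covering the bounding square of one cell
uniform vec2 screenSize;        // framebuffer size in pixels
uniform float cellSize;         // halftone cell size in pixels, e.g. 8.0

void main()
{
    // Position inside the current halftone cell, in [0,1) across the cell.
    vec2 cellUV = fract(gl_FragCoord.xy / cellSize);

    // Sample the low-res scene at this fragment; bilinear filtering makes the
    // intensity (and hence the dot radius) vary smoothly across the cell.
    vec2 sceneUV = gl_FragCoord.xy / screenSize;
    float intensity = dot(texture2D(lowResScene, sceneUV).rgb, vec3(0.299, 0.587, 0.114));

    // Use the intensity to pick a z-slice, i.e. a dot size
    // (assuming the volume is arranged so darker areas map to larger dots).
    float ink = texture3D(halftoneVol, vec3(cellUV, intensity)).r;

    gl_FragColor = vec4(vec3(ink), 1.0);
}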

Incidentally, if you implement a variation of this effect in object space rather than screen space, you can create a wide variety of artistic effects just by changing the lookup texture. This 2002 paper has the details.
Thanks for your time on this, dwahler -- that description helped my understanding of the process. Thanks also for the link to the paper; I hope to read more into it once I get some free time.

