It occurred to me that to reduce banding of per-pixel lighting, it should be possible to dither using only information local to each fragment, along with some noise. The key is to calculate the error between the desired, high-precision fragment value and the low-precision fragment shader output. A fragment which will be rounded up in quantization is "too bright" and has higher probability of being bumped down one quantum. Similarly, a fragment which will become rounded down is "too dark" and is more likely to be bumped up.
Specifically, the way I have it now, there's a straight 50% chance of a fragment passing through untouched. Otherwise, there's a 50% chance at the boundary with the darker band of the fragment being demoted, lerped down to 0% at the opposite boundary; symmetrically, a 50% chance at the boundary with the brighter band of it being promoted, lerped down to 0% at the other boundary. I originally tried only promoting fragments, but that always produced poor results, unlike this more symmetrical approach.
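To make the scheme concrete, here's a CPU-side Python sketch of the same math (not the shader itself; the 8-bit quantum and the round-to-nearest backbuffer behavior are my assumptions):

```python
import math

QUANTUM = 1.0 / 255.0  # assumes an 8-bit render target

def dither(value, rand):
    """Push one channel in [0, 1] through the symmetric promote/demote
    scheme, then simulate 8-bit quantization of the result."""
    col_max = value * 255.0
    # Error remapped to [0, 1): near 1 means the fragment will be rounded
    # down ("too dark"), near 0 means it will be rounded up ("too bright").
    error = (col_max - math.floor(col_max + 0.5)) + 0.5
    prob_promote = error * 0.5
    if rand < 0.5:
        noise = 0.0            # 50% chance: pass through untouched
    elif rand < 0.5 + prob_promote:
        noise = QUANTUM        # bump up one quantum
    else:
        noise = -QUANTUM       # bump down one quantum
    # Assumed framebuffer behavior: round to the nearest 8-bit level.
    return math.floor((value + noise) * 255.0 + 0.5) / 255.0
```

Averaged over uniform noise, the scheme is unbiased: the expected output equals the input, which is why it should kill the banding in principle.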
Here's the (still crude) Cg code:
float colMax = max(col.r, max(col.g, col.b)) * 255.0;
// Rounding error remapped to [0, 1): near 1 means this fragment will be
// rounded down ("too dark"), near 0 means it will be rounded up ("too bright").
float error = (colMax - floor(colMax + 0.5)) + 0.5;
float probPromote = error * 0.5;
float rand = tex2D(_Noise1, i.uvs.zw).a;
float noise;
if (rand < 0.5)
    noise = 0.0;                 // 50% chance: pass through untouched
else if (rand < 0.5 + probPromote)
    noise = (1.0 / 255.0);       // bump up one quantum
else
    noise = -(1.0 / 255.0);      // bump down one quantum
return fixed4(col + float3(noise, noise, noise), 1);
col is just the 'final', gamma-adjusted color coming in. colMax itself is a terrible kludge: I know from testing that I must perform this dithering separately for each channel, but that's unrelated to the problem I bring to you, which is present even for single-channel and white lights.
For the most part, nice enough results are produced. Compare:
Naturally, an organic graininess is forced onto the aesthetic, but that's serendipity to me. The real problem is these little discontinuities: I've effectively swapped big bands for little ones. They can be seen in the above if you look very closely. Zooming in on the scene:
These small bands, by the way, gradually become wider, then start over thin again. There's definitely something amiss with the calculated error. See a plot of the error alone:
It's the right basic shape, and it goes in the right direction (white means too dark, black means too bright), but it's spatially amiss. Here's a doozy: the contrast-enhanced bands without dithering, with the error function superimposed:
And just contrast-enhanced bands without dithering, if it helps:
So what gives? I tried a couple other ways of calculating error, such as
float colMax = max(col.r, max(col.g, col.b));
float error = fmod(colMax + (0.5 / 255.0), (1.0 / 255.0)) * 255.0;
which produced identical results. Surely there's some way to know exactly what value a fragment shader will spit out. I also tried the obvious approach of converting to fixed and back to float, but couldn't cajole that into doing anything at all (was it being optimized out? is the behavior slightly different when values are written to the backbuffer?). I don't even understand how there can be so much spatial error (as in, bad error) in the error calculation. Thanks for reading, and for any pointers.
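For what it's worth, both error formulas collapse to the same expression, frac(255·x + 0.5), so identical results are expected; and the explicit round trip I was aiming for would amount to something like this (a CPU-side Python sketch; whether the hardware's float-to-8-bit conversion really is floor(x·255 + 0.5)/255 is precisely the part I'm unsure about):

```python
import math

def error_v1(col_max_255):
    """First formula, on the value pre-scaled to [0, 255]."""
    return (col_max_255 - math.floor(col_max_255 + 0.5)) + 0.5

def error_v2(col_max):
    """fmod variant, on the unscaled [0, 1] value."""
    return math.fmod(col_max + (0.5 / 255.0), (1.0 / 255.0)) * 255.0

def quantize_8bit(x):
    """Round-to-nearest 8-bit quantization -- the backbuffer behavior I
    *assume*; if the real conversion differs, so does the error."""
    return math.floor(x * 255.0 + 0.5) / 255.0

def signed_error(x):
    """Positive when quantization undershoots (too dark),
    negative when it overshoots (too bright)."""
    return x - quantize_8bit(x)
```

If the round trip could be made to survive the compiler, signed_error is the quantity the dither probabilities should really be driven by.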