I've got a simple optimization to do later today (details below), but other than that I think my HDRI post processing is finished. It'll probably need some tweaking later on, but it's all functional now.
Quote:There shouldn't be any of those bright-white triangular regions outside the main geometry. Even if that's the edge of a "floor plane" (best guess I've got as to why they're there), the light shouldn't be highlighting those areas.
Good guess - they are indeed the corners of the floor plane. Due to the multi-pass approach, that floor plane is being rendered twice (might be 3x, I forget), such that it ends up with a value greater than the bright-pass threshold. It is a bug and I intend to fix it at some point [smile]
Quote:You mentioned that this sort of thing is a lens effect? What is it meant to model, or what sort of lens causes that sort of effect?
To keep it fairly brief, it is an approximation of the way that (bright) light reflects/refracts and generally distorts when going through a lens. More generally, it's the same physical effect as light energy passing through a medium of different density (e.g. heat haze or thick glass). In this context it adds slightly to the over-exposed areas where there is lots of energy travelling through the (glass) lens.
It's not particularly advisable, but you can often see it if you stare at car headlights, street lamps or even the sun [smile]
Quote:It looks like you are implementing this on the entire color range of the image - not just the bright parts. Try doing a bright-pass filter before you start the star filters.
There is a bright-pass involved. Currently a lot of the object is above the bright-pass threshold... but, as hinted at above, this is one of the things to be tweaked later on.
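For reference, the bright pass being discussed is just a per-pixel threshold on the HDR input. A minimal NumPy sketch — the threshold value and the Rec. 709 luminance weights are my assumptions, not values from the post:

```python
import numpy as np

def bright_pass(hdr, threshold=1.0):
    """Keep only texels whose luminance exceeds `threshold`; zero the rest.

    `hdr` is an (H, W, 3) float32 array of linear HDR colour values.
    The threshold of 1.0 is a placeholder -- as noted above, the real
    value still needs tweaking.
    """
    # Rec. 709 luminance weighting (an assumption; any luma approximation works).
    luminance = hdr @ np.array([0.2126, 0.7152, 0.0722], dtype=np.float32)
    mask = (luminance > threshold).astype(np.float32)[..., None]
    return hdr * mask

# Example: a 2x2 image where only one texel exceeds the threshold.
img = np.array([[[0.2, 0.2, 0.2], [3.0, 3.0, 3.0]],
                [[0.5, 0.5, 0.5], [0.9, 0.9, 0.9]]], dtype=np.float32)
out = bright_pass(img)
```

Running the star filters on `out` instead of the raw frame is what confines the streaks to genuinely over-exposed regions.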
Quote:Are you sure that the samples are approximately one pixel apart? Or are you trying a different width approach?
Yeah, the offset is computed as a float2 and then multiplied by the width/height of a pixel - so it should be correct. However, given that it's an FP32 render target, it's not going to filter the sampling points if they aren't pixel-perfect (quite likely in some cases).
I decided to stick with Kawase's method. I've tweaked a few parameters based on aesthetics, but my "fresh start" attempt didn't yield any better results so I dropped it:
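For anyone unfamiliar with it, Kawase's blur repeats a cheap four-tap pass with growing offsets, placing each tap half a texel beyond a texel centre so that hardware bilinear filtering averages a 2x2 block per fetch — which is also why an FP32 target that can't filter undermines the trick, as discussed above. A sketch of the offset arithmetic (the pass schedule `[0, 1, 2, 2, 3]` is a commonly quoted one, not necessarily the exact parameters used here):

```python
def kawase_offsets(kernel, width, height):
    """UV-space offsets for the four diagonal taps of one Kawase blur pass.

    Each tap lands (kernel + 0.5) texels away from the centre along both
    axes, i.e. half a texel past the kernel-th neighbour, so a bilinear
    fetch averages a 2x2 block of texels per tap.
    """
    d = kernel + 0.5
    return [(sx * d / width, sy * d / height)
            for sx in (-1.0, 1.0) for sy in (-1.0, 1.0)]

# A typical schedule grows the kernel each iteration, widening the blur:
for kernel in [0, 1, 2, 2, 3]:
    taps = kawase_offsets(kernel, 512, 384)
```

Each pass reads the previous pass's output, so five passes approximate a much larger Gaussian for only twenty texture fetches per pixel.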
When we put it all together:
Performance started to get pretty sucky with those star filters (1024x768 @ 16fps), so I tested out a simple optimization that I intend to implement properly later today.
Currently I'm doing all the post-processing on a 2x downsampled image, then up-sampling again for the final composition.
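The round trip can be sketched in NumPy — a 2x2 box filter for the downsample and nearest-neighbour for the upsample are stand-ins for whatever filtering the real pipeline uses:

```python
import numpy as np

def downsample_2x(img):
    """2x2 box-filter downsample of an (H, W, C) array (H and W even)."""
    h, w, c = img.shape
    # Group texels into 2x2 blocks and average each block.
    return img.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

def upsample_2x(img):
    """Nearest-neighbour upsample back to full resolution (a bilinear
    fetch at composition time would be the GPU equivalent)."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

src = np.arange(4, dtype=np.float32).reshape(2, 2, 1)
half = downsample_2x(src)   # one texel holding the mean of the 2x2 block
full = upsample_2x(half)    # back to 2x2 at the reduced level of detail
```

The point of the optimization below is that every post-processing pass in between then touches a quarter (or, at 4x, a sixteenth) as many pixels.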
Unlike "normal" shader usage, where it's difficult to know how many invocations of a particular shader there will be, a post-processing shader executes once per pixel - so the work is proportional to the dimensions of the screen.
So, I figured that I'd work out the texture read/write bandwidth (I think I'm bandwidth-limited rather than arithmetic-limited):
Bright Pass: (Width/2) * (Height/2) * 2x2 Downsample * 16 bytes ~= 4.7MB of texture data read
Star Filter: (Width/2) * (Height/2) * 9 samples * 3 passes * 5 blades * 16 bytes + (Width/2) * (Height/2) * 5 samples * 16 bytes ~= 164MB of texture data read
If I were to change it to use a 4x downsample:
Bright Pass: (Width/4) * (Height/4) * 4x4 Downsample * 16 bytes ~= 4.7MB of texture data read
Star Filter: (Width/4) * (Height/4) * 9 samples * 3 passes * 5 blades * 16 bytes + (Width/4) * (Height/4) * 20 samples * 16 bytes ~= 45.3MB of texture data read
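As a sanity check, the star-filter estimates above can be reproduced in a few lines of Python. Note the stated figures work out if the chain runs on a 640x480 frame and the units are mebibytes — both assumptions on my part; the ratio between the two configurations is resolution-independent either way:

```python
def star_filter_reads(width, height, scale, composite_taps,
                      blade_taps=9, passes=3, blades=5, bpp=16):
    """Estimated texture bytes read by the star filter at a given downsample.

    Mirrors the arithmetic above: the blade passes plus one composite pass.
    `composite_taps` differs between the two configurations (5 vs 20).
    """
    pixels = (width // scale) * (height // scale)
    return pixels * (blade_taps * passes * blades + composite_taps) * bpp

MIB = 1024 * 1024
half    = star_filter_reads(640, 480, 2, composite_taps=5)  / MIB  # ~164
quarter = star_filter_reads(640, 480, 4, composite_taps=20) / MIB  # ~45
```

Even with the extra composite taps at 4x, quartering the pixel count dominates, hence the roughly 3.6x bandwidth saving.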
The quality isn't that much worse, and going from 164MB/frame down to 45MB/frame is a very nice saving without doing much damage [grin]