jollyjeffers

Method/offsets for "Star Filter" post processing...


Hi all, I've been working through my HDR code, adding a couple of additional lens effects/filters. I've got the basic bloom working just fine, but I want to at least experiment with a few others [smile] In particular, I'm trying to get a star filter working - something along the lines of:

[screenshot of the star filter effect from the HDRLighting sample documentation]

The above is taken from the documentation for the HDRLighting sample in the DX SDK. Of particular interest is this:
Quote:
The star effect may require as many as three passes to render each of the star lines. The star patterns used in this sample require as many as eight lines, resulting in a potential cost of 24 passes to render fully.
Currently my simple implementation uses 5 passes for 5 lines - each pass just blurs in one direction, and the results are then composited together. From close analysis it seems to be doing almost the right thing, but it's nowhere near as impressive as the above screenshot. I've been staring at the SDK code for a few hours now, and extracting/analysing the right bits is proving more difficult than I'd hoped [oh]

So, my question to you guys: how is each of those blades/lines rendered? The documentation mentions three passes per line, but I can't quite figure out what that translates into when configuring my shader(s) [headshake]

Cheers, Jack

If I am reading it correctly, here is the part that creates the star textures:

// Declarations for the resources the shader uses (roughly what the SDK's
// post-processing shader source declares; there the arrays are sized by a
// shared MAX_SAMPLES constant):
sampler s0 : register(s0);
float2  g_avSampleOffsets[8];
float4  g_avSampleWeights[8];

float4 Star
(
    in float2 vScreenPosition : TEXCOORD0
) : COLOR
{
    float4 vSample = 0.0f;
    float4 vColor  = 0.0f;

    float2 vSamplePosition;

    // Sample from eight points along the star line
    for( int iSample = 0; iSample < 8; iSample++ )
    {
        vSamplePosition = vScreenPosition + g_avSampleOffsets[iSample];
        vSample         = tex2D( s0, vSamplePosition );
        vColor         += g_avSampleWeights[iSample] * vSample;
    }

    return vColor;
}

So the application performs this 'line' sampling by filling the g_avSampleOffsets array with a line of sampling offsets along the current line's direction (and g_avSampleWeights with matching attenuation weights). Each sample is taken from a bright-pass version of the previously post-processed scene, and each of the (up to) three passes for a line re-samples the previous pass's output, so the light sources get stretched by another factor of 8 samples each pass before the lines are composited. It seems to be based on Kawase's bloom filter, only applied along lines instead of with a Gaussian distribution.
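
To make that concrete, here's roughly what the application side could look like for a single line. This is a from-memory sketch rather than the SDK's actual code, so the function name, the pass/sample counts and the attenuation value are my own placeholders:

#include <d3dx9math.h>
#include <math.h>

// Sets up and runs the (up to) three passes for ONE star line.
// vDirection is assumed to already be scaled to one texel, i.e.
// ( cos(angle)/texWidth, sin(angle)/texHeight ).
void BuildStarLinePasses( D3DXVECTOR2 vDirection, float fAttenuation /* e.g. 0.92f */ )
{
    const int NUM_SAMPLES = 8;
    const int NUM_PASSES  = 3;

    D3DXVECTOR2 avSampleOffsets[NUM_SAMPLES];
    D3DXVECTOR4 avSampleWeights[NUM_SAMPLES];

    D3DXVECTOR2 vStep = vDirection;   // one-texel step for the first pass

    for( int iPass = 0; iPass < NUM_PASSES; iPass++ )
    {
        for( int i = 0; i < NUM_SAMPLES; i++ )
        {
            // Tap distance measured in texels of the original bright-pass
            // image - it grows each pass because pass N reads the
            // already-stretched output of pass N-1.
            float fDistance = (float)i * powf( (float)NUM_SAMPLES, (float)iPass );
            float fWeight   = powf( fAttenuation, fDistance );

            avSampleOffsets[i] = vStep * (float)i;
            avSampleWeights[i] = D3DXVECTOR4( fWeight, fWeight, fWeight, 1.0f );
        }

        // Here you would upload the two arrays as g_avSampleOffsets /
        // g_avSampleWeights, bind the previous pass's output as s0 (the
        // bright-pass texture on pass 0) and draw a full-screen quad with
        // the Star() shader into a fresh render target.

        // Widen the step so the next pass reaches NUM_SAMPLES times
        // further: 8 texels, then 64, then 512.
        vStep *= (float)NUM_SAMPLES;
    }

    // The last pass's result for each line is then blended additively into
    // the star texture that gets composited over the scene.
}

The important bit is the last line of the loop: because each pass reads the previous pass's output and the step grows geometrically, three passes of 8 taps reach hundreds of texels while only costing 24 texture reads per pixel for that line.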

Does that help at all?

Check out this slide deck: http://www.ati.com/developer/gdce/Oat-ScenePostprocessing.pdf

Slides 17 through 23 (Kawase's Light Streak Filter) discuss this filter and give a sample HLSL implementation. The trick is getting the growable filter kernel right.
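
For what it's worth, my reading of the growable kernel from those slides (the arithmetic only, not their code - the attenuation and tap count below are just example numbers): in pass n, tap s sits at a texel distance of b^n * s along the streak direction and is weighted by a^(b^n * s), where b is the number of taps per pass.

#include <math.h>
#include <stdio.h>

// Prints the offsets/weights of the growable streak kernel so you can see
// how far each pass reaches.  a = attenuation (< 1), b = growth factor.
int main()
{
    const int   kTaps   = 4;            // taps per pass
    const int   kPasses = 3;
    const float a       = 0.95f;        // example attenuation
    const float b       = (float)kTaps; // growth factor between passes

    for( int n = 0; n < kPasses; n++ )
    {
        for( int s = 0; s < kTaps; s++ )
        {
            float offset = powf( b, (float)n ) * (float)s;  // in texels
            float weight = powf( a, offset );
            printf( "pass %d, tap %d: offset %5.1f texels, weight %.3f\n",
                    n, s, offset, weight );
        }
    }
    return 0;
}

So with 4 taps and 3 passes the streak already reaches 3 * 4^2 = 48 texels out from the source, which is why getting the growth factor right matters far more than adding taps.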

--Chris

Thanks for the replies - appreciated [smile]

That ATI slide deck looks perfect - I'm pretty sure I understand the concept now, or, more importantly, why mine was wrong [lol]


Cheers,
Jack
