lipsryme

Volumetric Scattering (screen-space) done efficiently?


Is there a way to do this efficiently without losing the sharpness/detail of the sun shafts (i.e. without downsampling to 1/2 or 1/4 resolution)?

Something like this: http://gaminggix.com/wp-content/uploads/2014/12/dai_emerald_graves1024.jpg

Doing this at full resolution is quite expensive and needs more than 100 samples to look good.

I've come across this presentation (http://crytek.com/download/GDC08_SousaT_CrysisEffects.ppt) which says they used an iterative approach with 8 samples per pass, adding up to 512 virtual samples. Has anyone successfully implemented this approach? I've yet to get it working...


You can use interleaved sampling to get better results with fewer samples. However, you will need to run a blur pass over the result so it does not look noisy.
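For a rough idea of what that can look like, here is a minimal HLSL sketch of interleaved sampling applied to a radial ray march. Everything here (ScatterInterleaved, the resource names, the 4x4 pattern size) is an illustrative assumption, not code from any particular engine:

// Minimal sketch of interleaved sampling for a screen-space radial ray march.
// All names here are illustrative assumptions, not from a specific implementation.

Texture2D    InputTexture       : register(t0);
SamplerState LinearClampSampler : register(s0);

cbuffer ScatterParams : register(b0)
{
    float2 SunPositionSS; // light position in texture space
    float  Density;       // overall blur length control
    int    NumSamples;    // e.g. 8 or 16
};

float3 ScatterInterleaved(float2 texCoord, float2 pixelPos)
{
    // One step along the ray towards the light
    float2 deltaTexCoord = (texCoord - SunPositionSS) * Density / NumSamples;

    // 4x4 interleaved pattern: each pixel in the block starts its march at a
    // different fraction of one step, so neighbouring pixels sample different
    // positions along the ray.
    float2 cell   = fmod(floor(pixelPos), float2(4.0, 4.0));
    float  offset = (cell.x + cell.y * 4.0) / 16.0;

    float2 uv    = texCoord - offset * deltaTexCoord;
    float3 color = 0.0;

    for (int i = 0; i < NumSamples; i++)
    {
        color += InputTexture.Sample(LinearClampSampler, uv).rgb;
        uv -= deltaTexCoord;
    }

    // A small blur (or temporal accumulation) afterwards hides the per-pixel
    // offset pattern this introduces.
    return color / NumSamples;
}

The idea is that a 4x4 block of pixels collectively covers 16 different positions along each step, so a cheap neighbourhood blur afterwards looks like 16x as many samples per pixel.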

 

I've not used the Crysis approach but will look at that paper. 


Oh, and Intel has a nice approach that uses epipolar lines. It looks great, and I started working on a version that used it, but it's not a drop-in kind of thing: there is a lot of preprocessing work, and you have to do some cleanup on the results to avoid artifacts.

 

There is also an uber-cheap yet great-looking approach that I don't fully understand, even after reading it, but I'll post the link to the paper in case you do. It uses 'rectified shadow maps':

 

http://jcgt.org/published/0003/03/02/


While these more advanced techniques are pretty interesting... what I'm trying to do right now is get the more basic screen-space radial blur working fast (and looking good!).

 

@DASTech can you give me more details about this interleaved sampling approach? I thought about some kind of temporal supersampling, but I'm not sure how to do that.


Crytek's approach gives really good quality for the performance if you don't want to downsample, though you really need the 8x8x8; 4 samples per pass doesn't cut it for a sun scatter. You just have to halve your sample distance after every blur pass. They don't say to do so in their slides, but I assume that's what they mean; it doesn't look good otherwise. Since each pass blurs the already-blurred result of the previous one, the taps compound, so three 8-sample passes behave like 8 x 8 x 8 = 512 samples. I quickly set up 3 radial blurs as an example for you.

 

One pass, 128 samples:

[attachment=25602:Full128Samples.png]

 

One pass, 512 samples:

[attachment=25603:Full512Samples.png]

 

Three passes, 8x8x8 samples (effectively 512):

[attachment=25604:3Pass8x8x8Samples.png]


So let me get this right... you use the blurred output from the first 8-sample pass as the input to the second 8-sample pass, and so on?

I'm just having a hard time seeing how 8 samples (basically 8 little ghosts/copies of the picture) combine into a smooth light beam.

Also, the sample distance... would that be the deltaTexCoord from the current texcoord to the sun position in screen space? So each time it has to be half the distance from before, meaning effectively 1/2, 1/4?

 

My result looks like this:

KUEApbR.png?1


I am doing volumetric shadows with quite aggressive temporal supersampling, with jittering and a bilateral upsample. This way you can get sharp details with just 8-12 samples. Screen-space effects just don't give the proper look.
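For anyone unfamiliar with the bilateral upsample part, here is a minimal illustrative HLSL sketch of a depth-aware upsample from a half-resolution scattering buffer. This is not kalle_h's actual code; names such as HalfResScatter, FullResDepth, HalfResDepth and DepthSharpness are assumptions:

Texture2D    HalfResScatter : register(t0); // half-resolution scattering result
Texture2D    FullResDepth   : register(t1); // full-resolution scene depth
Texture2D    HalfResDepth   : register(t2); // depth used when rendering the half-res buffer
SamplerState PointClamp     : register(s0);

cbuffer UpsampleParams : register(b0)
{
    float2 HalfResTexelSize; // 1.0 / half-res texture dimensions
    float  DepthSharpness;   // how strongly depth differences reject low-res taps
};

float3 BilateralUpsample(float2 uv)
{
    float refDepth = FullResDepth.Sample(PointClamp, uv).r;

    static const float2 offsets[4] =
    {
        float2(-0.5, -0.5), float2(0.5, -0.5),
        float2(-0.5,  0.5), float2(0.5,  0.5)
    };

    float3 result      = 0.0;
    float  totalWeight = 0.0;

    for (int i = 0; i < 4; i++)
    {
        float2 sampleUV    = uv + offsets[i] * HalfResTexelSize;
        float  sampleDepth = HalfResDepth.Sample(PointClamp, sampleUV).r;

        // Weight each low-res tap by how close its depth is to the full-res depth,
        // so the upsample does not smear light across depth discontinuities.
        float w = 1.0 / (abs(refDepth - sampleDepth) * DepthSharpness + 1e-4);

        result      += HalfResScatter.Sample(PointClamp, sampleUV).rgb * w;
        totalWeight += w;
    }

    return result / totalWeight;
}

The depth weighting is what keeps edges sharp even though the scattering itself was computed at reduced resolution.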

 

https://www.dropbox.com/s/hx517wmtok21c5w/VolumetricShadowsOn.png?dl=0

https://www.dropbox.com/s/u973s7t4g8rblrs/VolumetricShadowsOff.png?dl=0

 

Another one with just fog contribution. https://www.dropbox.com/s/j8xxil9zk1neyac/VolumetricShadows.png?dl=0

The grain comes from film grain, not from the technique.


Correct, you use the previously blurred texture as the input to the next blur pass, with the sample distance halved on every pass; the code from the example is below in GLSL. I'm not additively blending the samples together, but that should make no difference if blended correctly. Your result should look much smoother; it looks like you're still downsampling, and maybe this method doesn't play nicely with that, I've never tried.

 

I was shocked myself when I first tried Crytek's method for doing most types of blurs, at how smooth it was with so few samples and just multiple passes. Crytek said they used the same method for their motion blur in that presentation too.

#version 120

uniform sampler2D inputTexture;

// Light position in screen space (texture coordinates)
uniform float x;
uniform float y;

// Halve this each pass: 1.0, then 0.5, then 0.25, and so on
uniform float density = 1.0;

const int SAMPLES = 8;

vec4 radialBlur(vec2 texCoord)
{
    vec2 lightPos = vec2(x, y);

    // Vector from the current pixel towards the light, split into SAMPLES steps
    vec2 deltaTexCoord = texCoord - lightPos;
    deltaTexCoord *= (1.0 / float(SAMPLES)) * density;

    vec4 color = vec4(0.0);

    for (int i = 0; i < SAMPLES; i++)
    {
        // Step towards the light and accumulate
        texCoord -= deltaTexCoord;
        color += texture2D(inputTexture, texCoord);
    }

    // Average the accumulated samples
    return color / float(SAMPLES);
}

void main(void)
{
    gl_FragColor = radialBlur(gl_TexCoord[0].xy);
}

Hmm, I'm using similar code based on the sample from GPU Gems 3:

// Performs a directional blur from the light source to approximate volumetric scattering
float3 Scatter(in VSOutput input) : SV_Target
{
    // Calculate vector from pixel to light source in screen space
    float2 sunSS = SunPositionSS.xy;
    float2 deltaTexCoord = (input.TexCoord - sunSS.xy);

    // Divide by number of samples and scale by control factor
    deltaTexCoord *= (1.0f / numSamples) * Density;

    // Store initial sample
    float3 color = InputTexture0.Sample(LinearClampSampler, input.TexCoord).rgb;

    // Set up illumination decay factor
    float illuminationDecay = 1.0f;
    float2 uv = input.TexCoord;

    // Evaluate summation over numSamples iterations
    for (int i = 0; i < numSamples; i++)
    {
        // Step sample location along ray towards the light
        uv -= deltaTexCoord;

        // Retrieve sample at new location
        float3 sampleColor = InputTexture0.Sample(LinearClampSampler, uv).rgb;

        // Apply sample attenuation scale/decay factor
        sampleColor *= illuminationDecay * Weight;

        // Accumulate combined color
        color += sampleColor;

        // Update exponential decay factor
        illuminationDecay *= Decay;
    }

    return (exposure * color);
}

I just tried your code and it does seem to produce a different result, but still not as smooth as in your screenshot.


Try adding some jitter to the mix. Even a half-step jitter with a checkerboard pattern produces much better results. As an optimization, you can move the Weight multiply out of the inner loop.
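For illustration, here is a sketch of what that hoist could look like, using the same variable names as the GPU Gems style shader posted above; Weight is constant per pass, so it can be factored out of the accumulation and applied once after the loop (treat this as a sketch, not drop-in code):

// Same globals (InputTexture0, LinearClampSampler, SunPositionSS, numSamples,
// Density, Weight, Decay, exposure) as the shader above are assumed here.
float3 Scatter(in VSOutput input) : SV_Target
{
    float2 deltaTexCoord = (input.TexCoord - SunPositionSS.xy) * (1.0f / numSamples) * Density;

    // Initial, unweighted sample
    float3 color   = InputTexture0.Sample(LinearClampSampler, input.TexCoord).rgb;
    float3 blurred = 0.0f;
    float  illuminationDecay = 1.0f;
    float2 uv = input.TexCoord;

    for (int i = 0; i < numSamples; i++)
    {
        uv -= deltaTexCoord;

        // Accumulate decayed samples without the constant Weight factor
        blurred += InputTexture0.Sample(LinearClampSampler, uv).rgb * illuminationDecay;

        illuminationDecay *= Decay;
    }

    // Apply the constant Weight once instead of once per iteration
    color += blurred * Weight;

    return (exposure * color);
}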


Yeah, mine's just the pure radial blur without the GPU Gems physically-based additions. Weird. The only other thing is that my setup does look undersampled and close to yours if I don't divide the sample distance of the blur in half after each pass. Maybe try it with the same distance on each of the three passes.

 

All three 8-sample passes with the same density of 0.8 (no halving):

[attachment=25628:noOffset.png]


Unfortunately I can't upload a video here because of my crappy internet connection, but I exported screenshots from RenderDoc showing the 8x8x8 approach with a different density for each pass.

 

Note: it seems the export isn't gamma-corrected, which is why they look so dark here.

 

Input Mask:

JQ4qv0p.jpg

 

First 8 sample pass (Density = 1.0):

3PEVZX2.jpg

 

Second 8 sample pass (Density = 0.5):

1HfFd57.jpg

 

Final 8 sample pass (Density = 0.25):

S4FOxyL.jpg

 

 

If I multiply the final color by 0.2 or so it doesn't look as overblown, but the sky completely fades to black, which is not what I want...

 

The best quality/performance result I've gotten so far is downsampling to 1/2 with a wide Gaussian blur, scattering with ~96 samples, then upsampling again with a wide Gaussian. That runs at about 1.8 ms on my AMD Radeon 5750M. However, the result is obviously a little blurry and not as tight as a pure 128 samples at full resolution.

 

@kalle_h I'm not very familiar with the jittering you describe. How is this done? Multiply the Density by some random 2D vector?

Or could I use a different step size every frame and combine these ?


Are you doing your additive blending back into your main color only once, in a final blend pass? So the three intermediate blur passes are not additively blended, and then you combine them in your final additive combine shader (Blurpass1 > Blurpass2 > Blurpass3 > combinePass)? I guess it needs the same treatment as multi-pass bloom.


I'm not combining them separately in a final pass; I'm using the blurred output from the first pass as the input to the second, and the second's output as the input to the third.

By the way, the above screens were taken using your code. I don't get why we have such different results.


float2 uv = input.TexCoord;

// Checkerboard pattern from the pixel position: 0.0 on half the pixels, 0.5 on the other half
float jitter = frac(0.5 * (input.position.x + input.position.y));

// Offset the ray start by up to half a sample step
uv -= jitter * deltaTexCoord;

 

This basically starts every second pixel half a step away from the start position, so you trade banding for high-frequency noise. Another name for this kind of jittering is dithering. Here is a good source for broader knowledge: http://loopit.dk/banding_in_games.pdf
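And regarding the earlier question about using a different step size every frame: one option (just a sketch; FrameIndex is an assumed per-frame uniform, not part of any shader in this thread) is to shift the jitter each frame and let temporal accumulation resolve it:

// Checkerboard jitter as above, shifted by a different amount every frame.
// FrameIndex is an assumed per-frame uniform.
float checker = frac(0.5 * (input.position.x + input.position.y));
float jitter  = frac(checker + FrameIndex * 0.618034); // golden-ratio sequence keeps frames well distributed

uv -= jitter * deltaTexCoord;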

