Approximate Hybrid Raytracing - Work in Progress



// Resources assumed by this snippet (the original post only showed the shader body;
// names and types are inferred from usage):
Texture2D<float>    tCoarseDepth;   // coarse (downsampled) depth
Texture2D<float4>   tGI;            // coarse global illumination
Texture2D<float4>   tRFL;           // coarse reflections
RWTexture2D<float4> output;         // blurred GI
RWTexture2D<float4> outputRFL;      // blurred reflections

cbuffer BlurConstants
{
	float samples;    // number of taps on each side of the centre pixel
	float sigma;      // NOTE: actually holds sigma squared (see the reply below)
	float g_epsilon;  // avoids division by zero in the depth weight
};

float GaussianWeight(float x, float sigmaSquared)
{
	float alpha = 1.0f / sqrt(2.0f * 3.1416f * sigmaSquared);
	float beta = (x * x) / (2.0f * sigmaSquared);

	return alpha * exp(-beta);
}

// Depth-aware horizontal blur: each tap is weighted by its Gaussian falloff and by
// how close its depth is to the centre pixel's depth, so the blur does not bleed
// across depth discontinuities.
[numthreads(256, 1, 1)]
void BlurH( uint3 DTid : SV_DispatchThreadID )
{
	float z = tCoarseDepth[DTid.xy];

	float4 gi = tGI[DTid.xy];
	float4 rfl = tRFL[DTid.xy];
	float acc = 1;

	float2 spos = float2(1, 0);
	for (float i = 1; i <= samples; i++)
	{
		// Sample right
		//		Calculate weights
		float zWeightR = 1.0f / (g_epsilon + abs(z - tCoarseDepth[DTid.xy + spos * i]));
		float WeightR = zWeightR * GaussianWeight(i, sigma);

		//		Sample
		gi += tGI[DTid.xy + spos * i] * WeightR;
		rfl += tRFL[DTid.xy + spos * i] * WeightR;
		acc += WeightR;

		// Sample left
		//		Calculate weights
		float zWeightL = 1.0f / (g_epsilon + abs(z - tCoarseDepth[DTid.xy - spos * i]));
		float WeightL = zWeightL * GaussianWeight(i, sigma);

		//		Sample
		gi += tGI[DTid.xy - spos * i] * WeightL;
		rfl += tRFL[DTid.xy - spos * i] * WeightL;
		acc += WeightL;
	}

	output[DTid.xy] = gi / acc;
	outputRFL[DTid.xy] = rfl / acc;
}

This is a pretty classic brain fart. You're passing integer sample points [1..samples] to the Gaussian function, but the domain of the Gaussian is [-inf, +inf]. You should be passing GaussianWeight(i - centre, sigma_sq), where centre is the middle sample. Here centre = (samples + 1) / 2 = 4.5. People conventionally use an odd number of samples to get an integer centre. You've also labelled your sigma-squared constant as sigma, which was a little confusing.
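
For concreteness, here is a minimal sketch of what that suggestion amounts to; CentredGaussianWeight is a hypothetical helper, not code from the shader above:

float CentredGaussianWeight(float i, float samples, float sigmaSquared)
{
	// Offset of tap i from the middle of a window indexed 1..samples.
	// With samples = 8 the centre is 4.5; an odd tap count would make it an integer.
	float centre = (samples + 1.0f) * 0.5f;
	return GaussianWeight(i - centre, sigmaSquared);
}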

There's another thing I want/need to add to my blur shader, and that is to take into account the distance of the pixel, so everything gets the same amount of blur. Right now, things that are too far away get too much blur, and things that are too close don't get enough.
I need suggestions on how to do that; I suppose someone here has done it before.
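
One common approach (not from this thread, just a sketch): scale the screen-space step by the pixel's depth so the blur covers a roughly constant world-space width. DepthAwareRadiusScale and referenceDepth below are hypothetical names:

float DepthAwareRadiusScale(float z, float referenceDepth, float eps)
{
	// Pixels further away (larger z) get a smaller screen-space radius and
	// nearby pixels a larger one, so the filter footprint stays roughly
	// constant in world space. In practice this would be clamped to a maximum.
	return referenceDepth / (z + eps);
}

// Inside BlurH this would be computed once before the loop, e.g.
//   float scale = DepthAwareRadiusScale(z, referenceDepth, g_epsilon);
// and each tap would sample at DTid.xy + spos * round(i * scale)
// instead of DTid.xy + spos * i.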

This may not work well with a separable blur like this one; it would create anisotropic blur as different-sized kernels mix together in the second pass.


This is a pretty classic brain fart. You're passing integer sample points [1..samples] to the Gaussian function, but the domain of the Gaussian is [-inf, +inf]. You should be passing GaussianWeight(i - centre, sigma_sq), where centre is the middle sample. Here centre = (samples + 1) / 2 = 4.5. People conventionally use an odd number of samples to get an integer centre. You've also labelled your sigma-squared constant as sigma, which was a little confusing.

Thanks! I tried that, and it works better now. Still, I have some things to fix.
Does anyone know anything about the atomic average problem I mentioned earlier? I'm still struggling with that.

I managed to implement a better material model, similar to the one found in Blender's Cycles render engine (path tracing), but simplified.
Only two types of materials exist. One is a Diffuse material, which has its rays distributed around the normal hemisphere, and the other is a Glossy material, which has its rays distributed around the reflection hemisphere.
A value in the [0,1] range defines how much angle the rays can make with the hemisphere base (the normal for Diffuse materials, the reflection vector for Glossy materials).

Both materials can have their own tracing parameters, such as number of rays, samples per ray, sample displacement, etc., and can be combined as you like.
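
As an illustration only (this is not the AHR code; MaterialRayDir and spread are hypothetical names), a ray direction for the two material types described above could be generated roughly like this, with the [0,1] value controlling how far rays may tilt away from the hemisphere axis:

float3 MaterialRayDir(float3 N, float3 V, float spread, bool glossy, float2 rnd)
{
	// Hemisphere axis: the surface normal for Diffuse, the reflection vector for Glossy.
	// V points from the surface towards the camera, so -V is the incident direction.
	float3 axis = glossy ? reflect(-V, N) : N;

	// Orthonormal basis around the axis.
	float3 up = abs(axis.y) < 0.99f ? float3(0, 1, 0) : float3(1, 0, 0);
	float3 tangent = normalize(cross(up, axis));
	float3 bitangent = cross(axis, tangent);

	// Pick a direction inside a cone whose width grows with 'spread':
	// spread = 0 keeps rays on the axis, spread = 1 allows the full hemisphere.
	float cosTheta = lerp(1.0f, rnd.x, spread);
	float sinTheta = sqrt(saturate(1.0f - cosTheta * cosTheta));
	float phi = 2.0f * 3.1416f * rnd.y;

	return normalize(tangent * (sinTheta * cos(phi)) +
	                 bitangent * (sinTheta * sin(phi)) +
	                 axis * cosTheta);
}

Under this interpretation, a Diffuse material would use a spread near 1, while a mirror-like Glossy material would use a spread near 0.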

I'll try to upload a video tomorrow, but here are some screens. The screens show only a Glossy material, because it is the most interesting one. More materials and combinations later. Also, to combine direct and indirect lighting I'm just doing lerp(shadowMask, direct, indirect), which is why there's so much difference. I have to find a better way to mix the materials (maybe that's up to the artists?).

No blur:
[screenshots omitted]

8 Samples Depth-Aware Gaussian blur:
[screenshots omitted]

Well, that's it for today. If I don't get to post the video tomorrow, don't expect anything from me for the next two weeks. Exam period at college.

Just a quick post. I played a bit with raytraced shadows, using AHR. Here are some (preliminary) results:
[screenshot omitted]

Today I found out that my proposal to present this technique at SIGGRAPH was rejected, so I'm now open to talking about the technique, but I don't know how to continue from here. Should I do a tutorial, some kind of article, a downloadable demo? Maybe try to integrate it with some engine, like Unity or UDK? Should I make a new post here on GameDev? What other places are there where I can make this idea public?
If anyone can provide some suggestions, I would be really glad; as I said, I don't know where to take this from now on. I'll continue improving it, though.

How big of a presentation are you wanting to make? A journal series is best if you want to talk about the processes you went through during development; an article is best if you just want to talk about the technique itself; and if you want to discuss it, this thread should be perfect :-)


How big of a presentation are you wanting to make? A journal series is best if you want to talk about the processes you went through during development; an article is best if you just want to talk about the technique itself; and if you want to discuss it, this thread should be perfect :-)

OK, so I think that for now I'll stick with this thread and discuss it here. Ask away!
When I have it more polished, maybe I'll write an article.
I also plan to try to get into next year's GDC. It can't hurt to try, and according to the SIGGRAPH reviewers I was really close to making it.
As they pointed out, AHR needs more polish, and I also need to compare it to other techniques.

I uploaded a new video; it's on YouTube now:

Looks pretty good. I'd love to know more about your technique. What's hybrid about it? And what's approximate about it? Have you given an explanation (even in broad terms) anywhere about your process?

Could you share more details?

A playable demo maybe?

Looks really promising.

Are you taking the G-buffer into account to compute GI, or just world-space data?

What about spreading the work across frames? Or even guessing which areas need to be updated (based on lighting changes, moving objects, etc.) at runtime?

Could be really useful for architectural visualization.

Keep up the good work!

