Approximate Hybrid Raytracing - Work in Progress

Hi everyone,

I've been working for the last few months on a method to render reflections and GI (and other things too) in real time on mid-range hardware, and a few days ago I finished a presentation video and applied to the Real Time Live! showcase at SIGGRAPH.

Because of that I can't discuss details yet, but I wanted to show what I've accomplished.

The video is below:

As I said in the description, the demo actually runs more smoothly; Camtasia took some of it away.

Anyway, what I can say is that it's a combination of standard rasterization and raytracing. It supports fully dynamic scenes, doesn't require extra passes over the geometry (well, the triangles are duplicated, but inside a geometry shader), and is really simple to implement, which leaves a lot of room for improvement and for new techniques built on top of it.
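(Purely to illustrate the kind of geometry-shader duplication mentioned above, and not the author's code: VSOut and the routing of the second copy are assumptions. A pass-through GS that re-emits each triangle might look like this:)

struct VSOut
{
	float4 pos    : SV_Position;
	float3 normal : NORMAL;
};

[maxvertexcount(6)]
void DuplicateGS(triangle VSOut input[3], inout TriangleStream<VSOut> stream)
{
	// First copy: the normal rasterization path
	for (int i = 0; i < 3; ++i)
		stream.Append(input[i]);
	stream.RestartStrip();

	// Second copy: in practice this would be routed elsewhere,
	// e.g. captured via stream output for a raytracing stage
	for (int j = 0; j < 3; ++j)
		stream.Append(input[j]);
	stream.RestartStrip();
}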


Awesome,

Can you say which data structure you use to accelerate the intersection tests? I'm struggling to make my k-d tree efficient enough to use it dynamically.

I'm sorry, but I don't want to get into implementation details until I know whether I've been accepted or rejected to present at SIGGRAPH.

I can say that I don't use acceleration structures per se; it's more like a brute-force raytracer. Next month I should know the verdict, and I'll get into the details then.
Hope you understand.

Unfortunately, from my perspective, the only really interesting part of a demo like this is how it's done. Specifically: what makes your approach superior to the innumerable other demos that purport to accomplish the exact same thing :-)

It's also really hard to discuss a graphics demo without knowing the techniques behind it; I can't tell if your grossly over-saturated diffuse interreflection is an artifact of rendering method or just the result of using too much specular gloss on the floors, for instance.

Having worked in realtime GI back in the days before it was actually hardware-practical, I appreciate the work that goes into a project like this, and I'm certainly not trying to denigrate your efforts. In fact it's probably my background in RTGI that makes me overly critical. What I'd personally rather see (besides the stock castle/banner scene) is stuff like material demos where you illustrate how your technique handles various surface types, especially in combination.

One of my favorite test scenes is the Cornell box and its variants for exactly that reason. It's a standard scene (even though the castle scene is pretty close to the de facto GI demo standard these days), and for someone with a trained eye it makes the strengths and tradeoffs of a given rendering method immediately apparent.

Wielder of the Sacred Wands
[Work - ArenaNet] [Epoch Language] [Scribblings]

I totally understand your point; I know it's hard to talk about something without knowing how it works.
Also, I'm all for constructive criticism.

Now for the scene-related things:
You said you'd like to see different materials. I think I can put together a scene with a bunch of spheres with different reflection amounts, GI amounts, etc. in the next couple of hours. Is that what you had in mind?
Regarding the reflections, it's mostly that I had quite a problem combining the GI, reflections, and direct illumination buffers and making the result look good, but that's more of an "artistic" problem of sorts.

Something like that sounds cool, yeah - ideally something that showcases the strengths of your approach as much as you can manage, even if you don't get into any detail on how you pulled it off just yet.

I figured the reflection problem was likely an aesthetic tradeoff. It can be very hard to get multiple layers of GI to play nicely together depending on how you do your rendering, and definitely making them all look good when combined is a real trick.

Wielder of the Sacred Wands
[Work - ArenaNet] [Epoch Language] [Scribblings]

Great then! I should have that scene ready in a couple of hours.

Well, I ran into some unexpected problems, so it took longer than I thought.
Here are some screens of a quick and dirty Cornell box:

I disabled reflections, as I had some problems getting them to look good in that scene, so it's just GI for now.
The blur needs a lot of work too, but I believe the idea is there.
As you can see from the FPS, my poor GTX 750 Ti is trying its best, but it's not enough: the 75 W TDP and, mainly, the 128-bit bus really hurt performance, as this is a bandwidth-bound algorithm. I ran it a couple of times on a 7870, and it runs at twice the framerate or more (256-bit bus).

It depends a lot on the scene and the settings, but I can't get 30 FPS with the 750 Ti in the Sponza scene (which is actually faster, as a lot of rays collide really close to the start position).

EDIT: Does anyone know how I can embed an image album, or at least the images one by one? Nothing seems to work. For now, here's the link:
https://imgur.com/a/T0SJP

Well, I've been working on improving the scene representation, and I managed to increase performance by about 20-25% with just a 3% increase in memory usage and a small increase in GBuffer-stage cost, and I believe I could tune it up to 30-35%. Also, reflections now take the sky into account, and I added specular lighting to the GI calculation.

I'm having a problem, though, and I wanted to ask you guys: I can't get the blur to work correctly. I'm implementing a Gaussian blur, separated into two passes, with the weights calculated on the fly, as I like being able to tune the number of samples in the filter. The Gaussian weight is also multiplied by 1 / (epsilon + abs(z - depth[sample])) to account for depth discontinuities. It works, but instead of producing a nice smooth blur, it creates a black cross in the middle. Below is a screenshot; rays were reduced to 1 per pixel and the strength was turned up to better show the problem. The blur uses 8 samples and a sigma of 4.3.

[screenshot: ywTzM6l.png]

And here is the code for the horizontal pass. The vertical one is exactly the same, swapping float2(1, 0) for float2(0, 1). I read two images, do the horizontal blur into two temp images, do the vertical blur sampling the temp images, and output to the original images.

Texture2D<float> tCoarseDepth : register(t0);
Texture2D<float4> tCoarseNormal : register(t1);
Texture2D<float4> tGI : register(t2);
Texture2D<float4> tRFL : register(t3);

RWTexture2D<float4> output : register(u0);
RWTexture2D<float4> outputRFL : register(u1);

static const float samples = 8;
static const float g_epsilon = 0.0001f;
static const float sigma = 4.3f * 4.3f;	// note: stores sigma squared, matching GaussianWeight's sigmaSquared parameter

[numthreads(256, 1, 1)]
void BlurH( uint3 DTid : SV_DispatchThreadID )
{
	float z = tCoarseDepth[DTid.xy];

	float4 gi = tGI[DTid.xy];
	float4 rfl = tRFL[DTid.xy];
	float acc = 1;	// running weight sum; the center tap enters with weight 1

	float2 spos = float2(1, 0);	// blur direction; the vertical pass uses float2(0, 1)
	for (float i = 1; i <= samples; i++)
	{
		// Sample right
		//		Calculate weights
		float zWeightR = 1.0f / (g_epsilon + abs(z - tCoarseDepth[DTid.xy + spos*i]));
		float WeightR = zWeightR * GaussianWeight(i, sigma);
		
		//		Sample 
		gi += tGI[DTid.xy + spos*i]* WeightR;
		rfl += tRFL[DTid.xy + spos*i] * WeightR;
		acc += WeightR;
		
		// Sample left
		//		Calculate weights
		float zWeightL = 1.0f / (g_epsilon + abs(z - tCoarseDepth[DTid.xy - spos*i]));
		float WeightL = zWeightL * GaussianWeight(i, sigma);

		//		Sample 
		gi += tGI[DTid.xy - spos*i] * WeightL;
		rfl += tRFL[DTid.xy - spos*i] * WeightL;
		acc += WeightL;
	}
	
	output[DTid.xy] = gi / acc;
	outputRFL[DTid.xy] = rfl / acc;
}

If someone can shed some light on this, I would be grateful.
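One asymmetry in the code above that is worth ruling out (a sketch, not a confirmed diagnosis): the center sample enters the sum with a flat weight of 1, while on a flat surface each neighbor's depth weight approaches 1 / g_epsilon = 10000 before the Gaussian factor, so the center pixel is almost entirely drowned out. A rebalanced accumulator setup, assuming the rest of BlurH stays unchanged, would weight the center tap the same way:

	// Sketch only: give the center tap the same Gaussian-times-depth weighting.
	// Its depth delta is zero, so its bilateral factor is 1 / g_epsilon.
	float centerWeight = GaussianWeight(0, sigma) / g_epsilon;

	float4 gi = tGI[DTid.xy] * centerWeight;
	float4 rfl = tRFL[DTid.xy] * centerWeight;
	float acc = centerWeight;

Separately, DTid.xy - spos*i mixes unsigned and float math; near the left edge the result goes negative and wraps when converted back to an unsigned index, so an int2 offset with explicit clamping is safer.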

There's another thing I want to add to my blur shader: taking the pixel's distance into account, so everything gets the same amount of blur. Right now, things that are too far away get too much blur, and things that are too close don't get enough.
I need suggestions on how to do that; I suppose someone here has done it before.
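The usual approach (sketched here under assumed names, not drop-in code) is to size the kernel so it spans a roughly constant world-space radius: a feature of world size r at linear view depth viewZ covers about r * projScale / viewZ pixels, where projScale is the pixel footprint of one world unit at depth 1 (0.5 * viewportHeight * proj._m11 for a standard projection matrix). LinearEyeDepth and the g_* constants below are assumptions, not part of the shader above:

	// Sketch: depth-proportional blur footprint; all g_* names are assumed
	float viewZ = LinearEyeDepth(z);	// assumed helper that linearizes the GBuffer depth
	float radiusPixels = g_blurWorldRadius * g_projScale / max(viewZ, 0.01f);
	float step = max(1.0f, round(radiusPixels / samples));	// integer step keeps the [] indexer valid
	float2 spos = float2(1, 0) * step;	// replaces the fixed one-pixel step

The Gaussian weights would then be evaluated against each tap's world-space distance rather than the raw sample index, so near and far pixels see the same effective sigma.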

Well, that's it for now. I'll try to keep this thread updated more regularly, maybe a screenshot every two or three days.

EDIT: Here's the code for the GaussianWeight function


float GaussianWeight(float x, float sigmaSquared)
{
	float alpha = 1.0f / sqrt(2.0f * 3.1416f * sigmaSquared);
	float beta = (x * x) / (2 * sigmaSquared);

	return alpha * exp(-beta);
}

Does anyone have anything to say about the blur problem? I'm still fighting with it.

I'm also having another problem, which I posted about on Stack Overflow: http://stackoverflow.com/questions/23189808/interlocked-average-cas-not-working-on-hlsl

I'm trying to remove the flickering of the light by storing it using atomic ops, but I can't get it to work correctly. Here's the code I'm using:


[allow_uav_condition] while (true)
{
    // Read the current packed value
    uint old = irradianceVolume[vpos];

    // Average the stored irradiance with the new sample
    float3 avg = saturate((UnpackR5G5B5A1(old) + irradiance) * 0.5f);
    uint final = PackR5G5B5A1(float4(avg, 1));

    // Try to store: ret receives the value that was in memory before the
    // exchange, so the write only took effect if ret == old
    uint ret = -1;
    InterlockedCompareExchange(irradianceVolume[vpos], old, final, ret);

    if (ret == old)
        break;
}

If anyone can say anything, I would really appreciate it.
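Two things worth checking here, though neither is a confirmed diagnosis: D3D11 only supports atomics such as InterlockedCompareExchange on typed UAVs created with the DXGI_FORMAT_R32_UINT (or R32_SINT) format, and at 5 bits per channel the running average quantizes heavily, which can oscillate on its own. A bounded-retry variant of the same loop, assuming irradianceVolume is an RWTexture3D<uint> and that Pack/UnpackR5G5B5A1 behave as their names suggest:

// Sketch of the same CAS average with a retry cap; names follow the post
[allow_uav_condition]
for (uint attempt = 0; attempt < 16; ++attempt)
{
    uint old = irradianceVolume[vpos];

    // Blend in unpacked space; .rgb assumes UnpackR5G5B5A1 returns a float4
    float3 avg = saturate((UnpackR5G5B5A1(old).rgb + irradiance) * 0.5f);
    uint packed = PackR5G5B5A1(float4(avg, 1));

    uint ret;
    InterlockedCompareExchange(irradianceVolume[vpos], old, packed, ret);
    if (ret == old)
        break;	// no other thread wrote between the read and the exchange
}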

