Realtime Ambient Occlusion, part 2

Target hardware... Well, let's say a GeForce 8 or something. I haven't tried it yet; maybe I should just try it. Let's see how many nodes can be measured at a reasonable speed.

I can't use 8 cubeMaps though. The ambient output is placed in one of the deferred target textures, which also includes the emissive texture and reflections from a cubeMap. The ambient light itself uses a normalMap (and possibly a heightMap for parallax), and should be multiplied with a diffuse texture. I think I can use 2 or 3 cubeMaps at maximum; the other channels are occupied, and the deferred lighting approach makes it difficult to change that. Maybe I can squeeze a little bit more out of it with multipassing, but that costs extra performance of course. I also need to know how to blend between the cubeMaps. If each vertex gets 1 cubeMap, I could blend between 3 of them using the distance between the pixel and each vertex. However, all of this also makes the ambient lighting a lot more expensive. If each vertex is connected to just 1 color, the vertex shader could do a lot of the work.
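
Just to make that blending idea concrete for myself, here is a rough C++ sketch of the inverse-distance blend I have in mind. The Vec3/AmbientProbe types and the choice of exactly three probes are placeholders, not how it has to be done:

#include <cmath>

struct Vec3 { float x, y, z; };

// Hypothetical probe record: a position plus the ambient color measured there.
struct AmbientProbe {
    Vec3 position;
    Vec3 color;
};

static float Distance(const Vec3& a, const Vec3& b)
{
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Blend the ambient colors of the three nearest probes, weighting each one
// by the inverse of its distance to the shaded point. The weights are
// normalized so they always sum to 1.
Vec3 BlendAmbient(const Vec3& point, const AmbientProbe probes[3])
{
    float w[3];
    float total = 0.0f;
    for (int i = 0; i < 3; ++i) {
        // Small epsilon avoids a division by zero when the point sits on a probe.
        w[i] = 1.0f / (Distance(point, probes[i].position) + 1e-4f);
        total += w[i];
    }

    Vec3 result = { 0.0f, 0.0f, 0.0f };
    for (int i = 0; i < 3; ++i) {
        float k = w[i] / total;
        result.x += probes[i].color.x * k;
        result.y += probes[i].color.y * k;
        result.z += probes[i].color.z * k;
    }
    return result;
}

The same math could live in the vertex shader if each vertex only stores a single probe color, which is the cheap variant I mentioned.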

>> All this stuff is useless without proper indirect lighting (in my opinion).
Absolutely true in my case. Without ambient light, 80% of my scenes would be pitch black.

I will take a look into your paper, thanks for posting that :)

Greetings,
Rick
I apologise in advance if I have missed something in this thread, but it seems as if you are all regarding Crysis as using a fairly realistic Ambient Occlusion model. In fact they are using a screen space technique to fake AO, as they decided that true AO provided very little additional benefit, and cost a lot more.

Here is Crytek's paper on their visuals, which describes the technique (albeit somewhat lacking in detail):
Finding Next Gen – CryEngine 2

And here is Inigo Quilez' excellent IOTD thread (with pretty pictures) where he takes a good bash at implementing the technique:
SSAO
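
For anyone who hasn't read those yet, the core trick is simple enough to sketch. The snippet below is only an illustration of the depth-comparison idea (made-up names, uniform random samples, a plain depth array); the real implementations run in a pixel shader and work in view space with a proper sampling kernel:

#include <vector>
#include <algorithm>
#include <cstdlib>

// Rough CPU-side sketch of screen-space AO: for every pixel, compare its
// depth against a handful of nearby depth samples and darken it by the
// fraction of samples that lie in front of it.
float OcclusionAt(const std::vector<float>& depth, int width, int height,
                  int x, int y, int radius, int numSamples)
{
    const float centerDepth = depth[y * width + x];
    int occluded = 0;

    for (int i = 0; i < numSamples; ++i) {
        // Random offset inside a square kernel around the pixel.
        int sx = x + (std::rand() % (2 * radius + 1)) - radius;
        int sy = y + (std::rand() % (2 * radius + 1)) - radius;
        sx = std::max(0, std::min(width - 1, sx));
        sy = std::max(0, std::min(height - 1, sy));

        // A sample noticeably closer to the camera than the center pixel
        // is treated as an occluder.
        if (depth[sy * width + sx] < centerDepth - 0.01f)
            ++occluded;
    }

    // 1.0 = fully open, lower values = darker ambient term.
    return 1.0f - static_cast<float>(occluded) / static_cast<float>(numSamples);
}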

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]

Judging from the screenshots I posted earlier in this thread, the ambient lighting is indeed not that spectacular in Crysis. The outdoor scenes are very impressive, but the indoor locations are a little bit... flat, lacking atmosphere. Nevertheless, it's an interesting technique. Thanks for the links, now I have something to read before going to sleep :)

Cheers,
Rick
If you are targeting GeForce 8 hardware you should look into ATI's global illumination demo. They use the cubemap probe approach, but their use of geometry shaders greatly reduces the number of passes for the cubemap rendering. They also only update a fixed number of probes per frame to keep performance up. There is a PDF describing the technique buried somewhere in the zip file. It can be found on this page.

http://ati.amd.com/developer/SDK/Samples_Documents.html#d3d10
Ah, thanks again! It's not a bad idea to only update a few nodes per frame. I could place them in a queue or something.
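
Thinking out loud, the queue could be something as simple as the sketch below. ProbeId and the render callback are placeholders for whatever my engine ends up using; the point is just the fixed per-frame budget:

#include <deque>
#include <functional>
#include <cstddef>

using ProbeId = std::size_t;

class ProbeUpdateQueue {
public:
    void MarkDirty(ProbeId id) { m_pending.push_back(id); }

    // Call once per frame: refresh at most 'budget' probes, leaving the
    // rest queued for later frames so the per-frame cost stays bounded.
    void Update(std::size_t budget, const std::function<void(ProbeId)>& renderProbe)
    {
        for (std::size_t i = 0; i < budget && !m_pending.empty(); ++i) {
            ProbeId id = m_pending.front();
            m_pending.pop_front();
            renderProbe(id); // e.g. re-render that probe's cubemap
        }
    }

private:
    std::deque<ProbeId> m_pending;
};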

The ATI demo didn't run though. I have Windows Vista and DirectX 10 (I suppose), but the demo says "D3D device creation failed". I also had this error with other nVidia demos that would normally run. Any idea why that is?

The paper included in that demo talks about rendering "slices" of a 3D texture. With that, multiple nodes can be done in 1 pass... I don't really understand that part. I have never used 3D textures (what's the difference compared to a cubeMap?), and what do they mean by slices?

Greetings,
Rick
A 3D texture is basically an array of 2D textures, and a slice just refers to a single 2D texture in that array. A 3D texture with 6 slices is essentially like a cubemap. But D3D10 doesn't let you have an array of cubemaps, so you can only update one per pass, whereas with a 3D texture you can render to all the slices in a single pass.
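
On the API side the setup looks roughly like the untested sketch below (format and sizes are just placeholders): you create one render-target view that spans every slice, and the geometry shader then picks the destination slice per primitive via SV_RenderTargetArrayIndex.

#include <d3d10.h>

// Untested sketch: create a 3D texture and a render-target view covering
// all of its slices, so a single pass can write to every slice.
// Error handling is omitted for brevity.
ID3D10RenderTargetView* CreateVolumeRTV(ID3D10Device* device,
                                        UINT size, UINT slices)
{
    D3D10_TEXTURE3D_DESC texDesc = {};
    texDesc.Width     = size;
    texDesc.Height    = size;
    texDesc.Depth     = slices;   // e.g. 6 slices per probe, or 6 * numProbes
    texDesc.MipLevels = 1;
    texDesc.Format    = DXGI_FORMAT_R16G16B16A16_FLOAT;
    texDesc.Usage     = D3D10_USAGE_DEFAULT;
    texDesc.BindFlags = D3D10_BIND_RENDER_TARGET | D3D10_BIND_SHADER_RESOURCE;

    ID3D10Texture3D* tex = nullptr;
    device->CreateTexture3D(&texDesc, nullptr, &tex);

    D3D10_RENDER_TARGET_VIEW_DESC rtvDesc = {};
    rtvDesc.Format        = texDesc.Format;
    rtvDesc.ViewDimension = D3D10_RTV_DIMENSION_TEXTURE3D;
    rtvDesc.Texture3D.MipSlice    = 0;
    rtvDesc.Texture3D.FirstWSlice = 0;
    rtvDesc.Texture3D.WSize       = slices;   // the view covers every slice

    ID3D10RenderTargetView* rtv = nullptr;
    device->CreateRenderTargetView(tex, &rtvDesc, &rtv);
    tex->Release();  // the view keeps its own reference
    return rtv;
}

You can do the same thing with a Texture2D array via D3D10_RTV_DIMENSION_TEXTURE2DARRAY, which is how single-pass cubemap rendering is usually set up.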

I have the same problem with that demo, it crashes at startup for me (I have an 8800 GTS), so I haven't looked through the code much. Maybe it's not compatible with current nVidia drivers? I'm kind of in the same situation as you, trying to find a good way to do realtime GI. So far, using cubemap probes like this is the most appealing approach to me. It would be nice to be able to run this to see if it's worth trying to implement...

If anyone is able to run that, any comments on the performance and visual quality?
Quote:Original post by spek
Ah, thanks again! It's not a bad idea to only update a few nodes per frame. I could place them in a queue or something.

The ATI demo didn't run though. I have Windows Vista and DirectX 10 (I suppose), but the demo says "D3D device creation failed". I also had this error with other nVidia demos that would normally run. Any idea why that is?

Greetings,
Rick


It's a D3D10.1 demo. You need an HD3870/50 to run it.
Here is another paper on screen-space ambient occlusion.
