spek

Realtime Ambient Occlusion, part 2


spek    1240
Hi,

A couple of days ago I asked about ambient lighting. With the help of you guys here, I managed to create ambient occlusion lighting via a static baked map. After seeing a demo of the Crysis Sandbox editor, realtime AO seems very attractive to me. Implementing SSAO itself might not be that difficult, but I wonder how they link in the ambient colors from the surrounding world. The occlusion factor is one thing, but I also need an ambient color and light direction(s) for normalMapping.

In the previous post Harry Hunt came up with the idea of placing nodes in the world that measure the incoming light at that point. These nodes can be converted to a cubeMap so that I can perform normalMapping on the ambient light, or I can pass the colors from a node to the nearby vertices. This could be a realtime solution: if I change the node colors, the ambient light of the nearby surfaces changes as well. But... how do I measure the incoming colors at such a node fast enough to make it realtime?

Currently I was planning to render the surrounding world in multiple passes for a radiosity effect. The "ambient nodes" would measure the surrounding colors for 6 sides and store them. But obviously that is too slow for realtime applications. I wonder how games like Crysis do it. Anyone have a good idea?

I could simply use the light colors from nearby lights around a node, but I don't know if that gives the desired (indirect light) effect. It's also hard to tell if there is something between the node and the lightSource.

Greetings,
Rick
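To make the node idea concrete, here is a minimal Python sketch of how a 6-sided node's colors could be applied to a surface normal for normalMapping. The squared-normal weighting is the scheme Valve described for its "ambient cubes"; all names here are my own, not from any engine:

```python
def evaluate_ambient_cube(cube, normal):
    """Blend a node's six face colors by the squared components of the
    surface normal. cube maps '+x','-x','+y','-y','+z','-z' -> (r, g, b);
    normal is unit length, so the three weights sum to 1."""
    nx, ny, nz = normal
    wx, wy, wz = nx * nx, ny * ny, nz * nz
    cx = cube['+x'] if nx >= 0.0 else cube['-x']
    cy = cube['+y'] if ny >= 0.0 else cube['-y']
    cz = cube['+z'] if nz >= 0.0 else cube['-z']
    return tuple(wx * a + wy * b + wz * c for a, b, c in zip(cx, cy, cz))

# A node lit red from +x, green from +y, blue from +z, dark elsewhere:
cube = {'+x': (1.0, 0.0, 0.0), '-x': (0.0, 0.0, 0.0),
        '+y': (0.0, 1.0, 0.0), '-y': (0.0, 0.0, 0.0),
        '+z': (0.0, 0.0, 1.0), '-z': (0.0, 0.0, 0.0)}
evaluate_ambient_cube(cube, (0.0, 1.0, 0.0))  # → (0.0, 1.0, 0.0)
```

A per-pixel normal from a normalMap can be fed straight into this, which is exactly why the 6-color node is enough for normalMapped ambient.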

spek    1240
I don't think that is an option (yet). Even if I rendered the surrounding scene in a simple way at a low resolution, it would still give problems. If I render the surrounding area like you would for a cubeMap, I need to render the scene 6 times per node. Maybe that's not a problem for 1 node, but there will definitely be more (in fact, the more, the better). At least each corner of a room should have a node. Swapping FBO buffers already gives me slowdowns, and with so many passes even the vertex throughput could become a problem...

Not all nodes need to be updated all the time, of course. In many cases the lighting remains static. But figuring out which nodes should be updated when a light or part of the scenery changes is hard to manage.

I downloaded the Crysis demo because I wanted to see how the (realtime) ambient lighting looks there. But for some reason the demo won't do any shadowing. It looks worse than good old Far Cry, and it still runs terribly slowly. I guess my computer is not up to the task :|
Greetings,
Rick

Enrico    316
Quote:
Original post by spek
In the previous post Harry Hunt came up with the idea of placing nodes in the world that measure the incoming light at that point. These nodes can be converted to a cubeMap so that I can perform normalMapping on the ambient light, or I can pass the colors from a node to the nearby vertices. This could be a realtime solution: if I change the node colors, the ambient light of the nearby surfaces changes as well. But... how do I measure the incoming colors at such a node fast enough to make it realtime?

This sounds like what Half-Life 2 is doing. They place ambient cubemaps throughout a level, and these are then used for rendering. However, I think they use prerendered cubemaps for this...

spek    1240
Correct, they were pre-calculated. I'm trying to do the same thing, but if possible they should update themselves. HL2 measures the (indirect) light from the surrounding environment by rendering it with the radiosity lightMap. In my case there is no lightMap; everything is lit with realtime lighting (shadowMaps). Besides the cost of updating these nodes, it's hard to find out which ambient/indirect light a node catches. Crysis managed to do it somehow though...

Greetings,
Rick

Enrico    316
Quote:
Original post by spek
Correct, they were pre-calculated. I'm trying to do the same thing, but if possible they should update themselves. HL2 measures the (indirect) light from the surrounding environment by rendering it with the radiosity lightMap. In my case there is no lightMap; everything is lit with realtime lighting (shadowMaps). Besides the cost of updating these nodes, it's hard to find out which ambient/indirect light a node catches.

I am not sure what you are looking for. Are you looking for a way to calculate ambient lighting in realtime, or indirect diffuse lighting? The two may be closely related, but they might need different techniques.

When you know what you want, you need to ask yourself questions like "Is precomputation possible? If not, why not?". When you have answers to these questions, we can help you in a better way ;)

spek    1240
Pre-calculated is certainly possible. I have done it before, and since much of the lighting is pretty static, a pre-calculated solution wouldn't hurt. Most probably it would even give better, and certainly faster, results.

However, I was interested in realtime solutions like the one Crysis somehow manages. I guess pre-calculated maps won't disappear soon, but I think realtime lighting is the future. In my case it's just for learning new/different techniques. And it's nice for my level editor: earlier we had to wait hours before a lightMap was finished, check if it was any good, and then do it all over again. With a realtime solution an artist can see the result directly in the map editor. Nice in our case, since we don't have that much time for our hobby :)


I don't know the exact difference between indirect diffuse lighting and ambient lighting. As far as I know, ambient light doesn't really exist in the real world. But that is not a problem; we are not trying to achieve 100% physically correct lighting... it just has to look nice. In our case there are many indoor locations with just a few lights. We are using deferred lighting and shadowMaps. It looks nice, but everything outside the light cones is pitch dark. Since there are not many lights (think of dark basements, corridors, industrial settings), ambient light really becomes necessary. And we also want to apply normalMapping to the ambient portion. This is possible if those nodes measure the light like a cubeMap (just like HL2 performed normalMapping on dynamic objects via "ambient cubes").

I think a relatively simple solution would be to check which lightSources are near a node and add their colors to 1 or more sides of the cube (depending on the angle between the node and the light). It doesn't matter if a light is blocked by some small obstacle, since that somewhat simulates the indirect lighting effect. However, if a big wall is between the node and the light, the light shouldn't be used. That might be tricky to calculate.
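That accumulation could look something like this sketch (cosine weight per cube face plus a placeholder distance falloff; the wall-occlusion test is deliberately left out, since that's the hard part):

```python
import math

# Face directions of a node's cube, in a fixed order: +x, -x, +y, -y, +z, -z.
AXES = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def accumulate_light(node_pos, light_pos, light_color, faces):
    """Add one light's contribution to a node's six face colors.
    faces is a list of six mutable [r, g, b] lists, indexed like AXES.
    The 1/(1+d^2) falloff is a made-up placeholder, not from any engine."""
    delta = [l - n for l, n in zip(light_pos, node_pos)]
    dist = math.sqrt(sum(d * d for d in delta))
    if dist == 0.0:
        return
    direction = [d / dist for d in delta]
    atten = 1.0 / (1.0 + dist * dist)   # placeholder distance falloff
    for i, axis in enumerate(AXES):
        # Cosine weight: a face only receives light from its hemisphere.
        w = max(0.0, sum(a * d for a, d in zip(axis, direction)))
        for c in range(3):
            faces[i][c] += light_color[c] * w * atten
```

A white light 2 units along +x then only brightens the +x face; the big-wall case would need an extra visibility check (e.g. a ray test or a shadowMap lookup) before calling this.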

This is not a very realistic approach of course, but maybe it's good enough... I found a few shots of indoor locations in Crysis. I doubt they are using a much more realistic approach. There is probably a reason that 99 out of 100 screenshots are outdoor locations... the indoor shots are less spectacular.

http://crysis-online.com/Media/Screenshots/Screenshots/Carrier-01.jpg
http://media.pc.ign.com/media/694/694190/img_4948263.html
http://media.pc.ign.com/media/694/694190/img_4948266.html
http://media.pc.ign.com/media/694/694190/img_3533351.html

The realtime ambient occlusion does 50% of the job. But they also need the colors, and possibly the light directions, of the (indirect) light for the normalMapping.

Greetings,
Rick

Enrico    316
Ok, then I do not see why you would not create the cubemap probes dynamically at runtime. The cubemaps can be very small (maybe 32x32) and do not need any fancy filtering or anti-aliasing. And you do not need to update them every frame ;)

Or maybe you should try Spherical Harmonic lighting, which is good for this kind of ambient style lighting...

spek    1240
If I understand you right, you would:
- Place the camera in a node, point it towards 1 of the 6 sides
- Render the area in front of the camera with "simple" shading, at a low resolution
- Each surface gets its node (cubeMap) for the ambient part

You think this can be done fast enough? I have my doubts... I can indeed measure incoming light by rendering the scene without fancy effects: just render the lights (shadowMaps in my case), multiply by the surface "reflection" color (no texturing/normalMapping/etc.), and possibly add emissive.

However, everything is done on HDR buffers, which makes it a little slower. And a simple square room should have at least 4 nodes, I think (unless the lighting really doesn't differ much per corner). That would be 6 x 4 = 24 passes. Maybe that is still possible, and maybe I can do with fewer nodes. But I also expect more nodes for larger, complex scenes (outdoors, or a big warehouse or something).

Another little downside of using cubeMaps is that I can use only 1 per polygon. If I pass colors per vertex, I can blend between 3 nodes, which is especially nice for larger surfaces. I could get these colors by downscaling the 32x32 renderings to 1x1 and picking the pixel, but that would be much slower (reading back pixels is slow, and so is switching between FBOs).
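That per-vertex blending could look something like this sketch (nearest probes weighted by inverse distance; the falloff choice and the k=3 cutoff are just guesses):

```python
import math

def blend_probes(point, probes, k=3):
    """Blend the k nearest probes' colors by inverse distance.
    probes: list of ((x, y, z), (r, g, b)). Returns a blended (r, g, b).
    The epsilon avoids division by zero when a vertex sits on a probe."""
    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
    nearest = sorted(probes, key=lambda pr: dist(point, pr[0]))[:k]
    weights = [1.0 / (dist(point, pos) + 1e-6) for pos, _ in nearest]
    total = sum(weights)
    return tuple(
        sum(w * color[i] for w, (_, color) in zip(weights, nearest)) / total
        for i in range(3))
```

Computed once per vertex (e.g. when a nearby probe updates), the GPU then interpolates the result across the triangle for free, which is what makes the per-vertex route cheap at render time.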

Probably this will be possible sooner or later with faster cards. And maybe it's already possible, but I have to add that a lot of power already goes to deferred lighting, tone mapping, blurring shadowMaps, and so on...

I don't know how spherical harmonic lighting works; I shall look into that.
In the end it doesn't matter that much if we have to use pre-calculated nodes, especially not if the quality is much better. But I'd like to explore the currently available options. I hadn't done graphics programming for 1.5 years; I picked the hobby up again 2 months ago, so I want to be up to date again :)

Thanks for the tips!
Rick

Enrico    316
Quote:
Original post by spek
If I understand you right, you would:
- Place the camera in a node, point it towards 1 of the 6 sides
- Render the area in front of the camera with "simple" shading, at a low resolution
- Each surface gets its node (cubeMap) for the ambient part

You forgot to set the viewport to 32x32 for each cubemap face ;)

Quote:
You think this can be done fast enough? I have my doubts...

What is your target hardware, if you think it is too slow? Have you tried it? ;-) There is an article in GPU Gems 1 or 2 about realtime calculation of (ir)radiance maps. I need to look up the complete title at home, but this might be exactly what you want ;-)
As I said, you do not need to update the cubemaps every frame...

Quote:
Another little downside about using cubeMaps is that I can use only 1 per polygon.

No, you can use as many as you have texture units. 8 texture units = 8 cubemaps per polygon.

Quote:
Probably this will be possible sooner or later with faster cards. And maybe its already possible, but I'll have to add that a lot of power is already gone to deferred lighting, tonemapping, blurring shadowMaps, and so on...

All this stuff is useless without proper indirect lighting (in my opinion).

Quote:
I don't know how Spherical Harmonic lighting works,

Try my article/paper, and if you have problems, just ask. Oh, and the article does not talk about runtime SH lighting, where you don't need any preprocessing. I still need to add that...


Best regards,
Enrico

spek    1240
Target hardware... well, let's say a GeForce 8 or something. I haven't tried it yet, and maybe I just should. Let's see how many nodes can be measured at a reasonable speed.

I can't use 8 cubeMaps though. The ambient output is placed in one of the deferred target textures, which also holds the emissive texture and the reflection from a cubeMap. The ambient light itself uses a normalMap (possibly with a heightMap for parallax) and should be multiplied by a diffuse texture. I think I can use 2 or 3 cubeMaps at maximum; the other channels are occupied, and the deferred lighting approach makes it difficult to change that. Maybe I can squeeze a little more out of it with multipassing, but that costs extra of course. And I need to know how to blend between the cubeMaps. If each vertex gets 1 cubeMap, I could blend between 3 with the help of the distance between the pixel and the vertex. However, all of this also makes the ambient lighting a lot more expensive. If each vertex is connected to just 1 color, the vertex shader can do a lot of the work.

>> All this stuff is useless without proper indirect lighting (in my opinion).
Absolutely true in my case. Without ambient light, 80% of my scenes would be pitch black.

I will take a look into your paper, thanks for posting that :)

Greetings,
Rick

swiftcoder    18432
I apologise in advance if I have missed something in this thread, but it seems as if you are all regarding Crysis as using a fairly realistic ambient occlusion model. In fact they are using a screen-space technique to fake AO, as they decided that true AO provided very little additional benefit and cost a lot more.

Here is Crytek's paper on their visuals, which describes the technique (albeit somewhat lacking in detail):
Finding Next Gen – CryEngine 2

And here is Inigo Quilez' excellent IOTD thread (with pretty pictures) where he takes a good bash at implementing the technique:
SSAO
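The core of that screen-space trick is tiny: for each pixel, sample the depth buffer around it and count how many samples are closer to the camera. A toy Python version of the occlusion estimate (not Crytek's actual kernel; the bias value and tap pattern are placeholders):

```python
def ssao_factor(depth, x, y, offsets, bias=0.02):
    """Crude screen-space AO for one pixel. depth is a 2D list of linear
    depths; offsets are screen-space taps like (1, 0). Samples that are
    closer to the camera than this pixel (by more than bias) count as
    potential occluders. Returns 1.0 = fully open, 0.0 = fully occluded."""
    h, w = len(depth), len(depth[0])
    d = depth[y][x]
    occluded = 0
    for ox, oy in offsets:
        # Clamp taps to the screen edge.
        sx = min(max(x + ox, 0), w - 1)
        sy = min(max(y + oy, 0), h - 1)
        if depth[sy][sx] < d - bias:
            occluded += 1
    return 1.0 - occluded / len(offsets)
```

The real shader runs this per pixel with a randomized 3D sample kernel and a range check, but this is the whole idea: occlusion from the depth buffer alone, no knowledge of the actual scene geometry.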

spek    1240
Judging by the screenshots I posted earlier, the ambient lighting is indeed not that spectacular in Crysis. The outdoor scenes are very impressive, but the indoor locations are a little... flat, with no atmosphere. Nevertheless, it's an interesting technique. Thanks for the links; now I have something to read before going to sleep :)

Cheers,
Rick

Leafteaner    860
If you are targeting GeForce 8 hardware you should look into ATI's global illumination demo. They use the cubemap probe approach, but their use of geometry shaders greatly reduces the number of passes for the cubemap rendering. They also only update a fixed number of probes per frame to keep performance up. There is a PDF describing the technique buried somewhere in the zip file. It can be found on this page:

http://ati.amd.com/developer/SDK/Samples_Documents.html#d3d10
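The fixed-budget updating can be as simple as a round-robin queue. A sketch of the scheduling idea (class name and budget are illustrative, not taken from the demo):

```python
from collections import deque

class ProbeScheduler:
    """Re-render only a fixed number of probes per frame, round-robin,
    so the per-frame cost stays constant no matter how many probes exist."""

    def __init__(self, probe_ids, budget_per_frame=2):
        self.queue = deque(probe_ids)
        self.budget = budget_per_frame

    def next_frame(self):
        """Return the probe ids to re-render this frame."""
        batch = []
        for _ in range(min(self.budget, len(self.queue))):
            pid = self.queue.popleft()
            batch.append(pid)
            self.queue.append(pid)  # back of the line for a later update
        return batch
```

A refinement would be to push probes near a moved light to the front of the queue instead of strictly rotating, which addresses the "which nodes need updating" problem from earlier in the thread.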

spek    1240
Ah, thanks again! It's not a bad idea to only update a few nodes per frame. I could place them in a queue or something.

The ATI demo didn't run though. I have Windows Vista and DirectX 10 (I suppose), but the demo says "D3D device creation failed". I also got this error with other nVidia demos that would normally run. Any idea why that is?

The paper included with that demo talks about rendering "slices" of a 3D texture. With that, multiple nodes can be done in 1 pass... I don't really understand that part. I have never used 3D textures (what's the difference from a cubeMap?), and what do they mean by slices?

Greetings,
Rick

Leafteaner    860
A 3D texture is basically an array of 2D textures; a slice just refers to a single 2D texture in that array. A 3D texture with 6 slices is essentially like a cubemap. D3D10 doesn't let you have an array of cubemaps, so you can only update one per pass, but with a 3D texture you can render to all the slices in a single pass.
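A pure-Python picture of the slice idea, treating a 6-slice volume like a cubemap (the face ordering here is my own choice, not D3D's):

```python
# A 3D texture modeled as nested lists: volume[slice][y][x]. Selecting a
# slice index is like binding one layer of the texture as a render target.

FACE = {'+x': 0, '-x': 1, '+y': 2, '-y': 3, '+z': 4, '-z': 5}

def make_volume(width, height, slices):
    """A slices x height x width volume initialized to zero."""
    return [[[0.0] * width for _ in range(height)] for _ in range(slices)]

def write_face_texel(volume, face, x, y, value):
    """Write one texel of a 'cubemap face' by picking its slice."""
    volume[FACE[face]][y][x] = value
```

With several probes packed into one tall volume (6 slices per probe), a geometry shader can route each rendered triangle to the right slice, which is how the demo collapses many probe updates into one pass.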

I have the same problem with that demo; it crashes on startup for me (I have an 8800 GTS), so I haven't looked through the code much. Maybe it's not compatible with current nVidia drivers? I'm kind of in the same situation as you, trying to find a good way to do realtime GI. So far, using cubemap probes like this is the most appealing option to me. It would be nice to be able to run the demo to see if it's worth trying to implement...

If anyone is able to run that, any comments on the performance and visual quality?

Cypher19    768
Quote:
Original post by spek
Ah, thanks again! It's not a bad idea to only update a few nodes per frame. I could place them in a queue or something.

The ATI demo didn't run though. I have Windows Vista and DirectX10 (I suppose), but the demo says "D3D device creation failed". I also had this error with other nVidia demo's that would normally run. Any idea why that is?

Greetings,
Rick


It's a D3D10.1 demo. You need an HD3870/50 to get it running.

