
Real-time Ambient Maps (CryEngine)


Hi... I was watching CryEngine's GDC video and I was wondering what "ambient maps" could be. I haven't worked on any PRT-like effects (SH, ambient occlusion, etc.) and I don't know what they can really do. Does anyone have any idea how this is possible? The most impressive thing in the video (I think) is the moving light. If I remember correctly (I haven't read anything on the subject for a while), SH can be used for static lights, so this isn't SH. From running nVidia's Dynamic Ambient Occlusion demo on a GeForce 7800 GTX, I think it is too slow for a game environment: the frame rate was about 50 fps with 2 passes and about 90 fps with 1 pass. What do you think it is? Thanks in advance. HellRaiZer

I've been wondering the same thing. I posted a similar question on Beyond3D and no one really knew there either. I'm pretty sure, however, that it has nothing to do with Dynamic Ambient Occlusion as in the nVidia demos (and the GPU Gems article).

Maybe it's just a trick like in FEAR?

Quote:

Maybe it's just a trick like in FEAR?


I don't remember seeing anything like this in FEAR; do you have any references for the trick you are talking about?

Or if you remember any point in the game where it is obvious, please point it out. It's been some time since I last saw FEAR in action, so I don't remember much of it (apart from the scenario and a few things like the volumetric light from the windows and its soft shadows).

HellRaiZer

Quote:
Original post by HellRaiZer
...


Common SH are distance-independent, but they can be made to depend on distance.
Just a suggestion

I'm guessing they have ambient maps rendered offline at several locations, and the engine interpolates between them based on where the light is positioned, along with a shadow volume for the direct shadows.

Quote:
Original post by The Parrot
I'm guessing they have ambient maps rendered offline at several locations, and the engine interpolates between them based on where the light is positioned, along with a shadow volume for the direct shadows.


I think that is way too brute force and it would require an insane amount of texture memory.

Maybe they are using something like Precomputed Local Radiance Transfer for Real-Time Lighting Design

I'll explain the way I'm going to integrate radiosity-like effects into my engine (once it reaches the state of handling shaders + multiple passes properly); maybe Crytek is doing something similar. I have no idea yet whether it would work, look good, or be fast.
I'm treating dynamic objects (players, stuff flying around) differently from static geometry (the indoor map).

What we need for radiosity:

(1) the radiance emitted from every dynamic object, and
(2) the radiance that every object receives from the surrounding environment.



(1):
Render every object from several different directions (e.g. 5) into a texture, storing the color in RGB and the depth in A. With the depth stored in the texture we can compute the AO for every texel (kind of like shadow mapping with a single mesh but multiple lights) and multiply the AO value with the color in the environment map (maybe blur the environment map first).
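
To make that a bit more concrete, here is a rough CPU-side sketch of the accumulation I have in mind (my own illustration, not working engine code): each render direction acts like a shadow map, and a texel counts as "open" toward a view if nothing was rendered in front of it. The DepthView layout and the [-5,5] object bounds are made up:

#include <algorithm>
#include <vector>

struct Vec3 { float x, y, z; };
static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// One offline render of the object along 'dir': an orthographic depth map,
// i.e. the nearest surface distance along 'dir' per (u,v) cell. In the real
// thing this would be the depth stored in the alpha channel.
struct DepthView {
    Vec3 dir, right, up;          // orthonormal view basis
    int res;                      // depth map resolution
    std::vector<float> depth;     // res*res nearest-depth values

    float storedDepthAt(Vec3 p) const {
        // Assumes the object fits in [-5,5] on each view axis (made up).
        int u = (int)((dot(p, right) + 5.0f) / 10.0f * res);
        int v = (int)((dot(p, up)    + 5.0f) / 10.0f * res);
        u = std::min(std::max(u, 0), res - 1);
        v = std::min(std::max(v, 0), res - 1);
        return depth[v * res + u];
    }
};

// Fraction of views that can "see" the texel at p: 1 = fully open,
// 0 = fully occluded. Multiply with the (blurred) environment map color.
float texelAccessibility(Vec3 p, const std::vector<DepthView>& views)
{
    int open = 0;
    for (const DepthView& v : views)
        if (dot(p, v.dir) <= v.storedDepthAt(p) + 1e-3f)  // bias, like shadow mapping
            ++open;
    return views.empty() ? 1.0f : (float)open / views.size();
}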


(2):
This is a little more complicated. I'm planning to store some environment maps of the environment for certain points in my maps (at the room centers, for example). This isn't accurate, but one could also store the depth in the environment maps and use ray tracing to find the correct intersection point with the environment (still not 100% correct, but better; I have a paper about the ray-tracing part in my bookmarks if anyone is interested). You could also go for the correct solution, but that would be horribly slow, I think (rendering the map six times for every object).
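
To show what I mean by the depth-corrected lookup (again my own sketch, not taken from the paper): march along the gather ray and stop once the marched point lies beyond the depth stored for its direction. The envDepth/envColor callbacks are placeholders for real cube map fetches:

#include <cmath>
#include <functional>

struct Vec3 { float x, y, z; };
static Vec3 add(Vec3 a, Vec3 b) { return {a.x+b.x, a.y+b.y, a.z+b.z}; }
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
static Vec3 mul(Vec3 a, float s) { return {a.x*s, a.y*s, a.z*s}; }
static float len(Vec3 v) { return std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z); }
static Vec3 norm(Vec3 v) { float l = len(v); return {v.x/l, v.y/l, v.z/l}; }

// envDepth(dir): distance to the environment stored in the map for 'dir';
// envColor(dir): the color stored for 'dir'. Both stand in for cube map reads.
Vec3 correctedLookup(Vec3 envCenter, Vec3 rayOrigin, Vec3 rayDir,
                     const std::function<float(Vec3)>& envDepth,
                     const std::function<Vec3(Vec3)>& envColor)
{
    const int   kSteps = 16;
    const float kStep  = 0.25f;   // world units per step; tune per scene
    Vec3 p = rayOrigin;
    for (int i = 0; i < kSteps; ++i) {
        Vec3 rel = sub(p, envCenter);
        if (len(rel) >= envDepth(norm(rel)))
            break;                // the marched point crossed the stored surface
        p = add(p, mul(rayDir, kStep));
    }
    // Look up the color in the direction of the found intersection point,
    // instead of the raw ray direction from the capture center.
    return envColor(norm(sub(p, envCenter)));
}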

The radiance emitted from an object onto the map is still a problem. You'd need to determine whether (or how much of) a mesh is visible from a certain point of the map. I think the paper about AO Fields could help, but I haven't understood it properly yet.


This is the first idea I came up with when thinking about AO/GI and similar effects, so maybe Crytek's ambient maps are something like the method described above.


As stated before, I don't know if it's fast or even works, so if somebody spots a problem please tell me.

Edit: wow, PLRT looks cool, but if I read correctly it doesn't work on dynamic objects, right?


regards,
m4gnus

If I remember correctly, in FEAR there's a room with a red wall or something similar and a white light shining on it, and the adjacent walls appear lit by the bounced red light. I would say it looks staged (i.e. just some red lights shining on the adjacent walls), but I remember the light being dynamic somehow. I'm sure nothing advanced was involved and it was a trick of some sort.

I still can't tell from the video and screenshots what they're doing (and I'm far more interested in their skin/face rendering tech), but I did come up with something similar to what m4gnus described a while back.

The technique had a number of variations, but the general idea was to store a very low resolution cube map (around 32x32x6) for each light in a scene, regenerated based on the light's on/off state, or with an update triggered if the environment was in motion (which didn't happen often anyway because of the static BSP). Cube map ownership was based on the portal area it was in, and it could then be "reprojected" onto the scene geometry in a separate ambient pass. This obviously didn't work well with flickering lights unless you wanted to recalculate the ambient cube map each frame (not too bad, but scene setup is painfully slow and done for each face).

Because the cube map was so low-res, the upsampling gave you a nice blur which averaged the scene colors for you; that was the trick. Blurring it manually would make the technique prohibitively expensive (well, for the real-time version). The idea was to distribute the average scene color, not the actual reflected color off every surface, after all.
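
To illustrate, the per-point math of the ambient pass was basically just a cube map lookup along the light-to-point direction. Here's a stripped-down sketch with a stand-in 6-color "cube map" (the real one had 32x32 faces and hardware filtering; none of this is the actual engine code):

#include <cmath>

struct Vec3 { float x, y, z; };
static Vec3 normalize(Vec3 v) {
    float l = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    return {v.x/l, v.y/l, v.z/l};
}

// Stand-in for the low-res ambient cube map rendered from the light's
// position: one average color per face (+X,-X,+Y,-Y,+Z,-Z). The real
// map's bilinear upsampling of tiny faces is what gives the free blur.
struct AmbientCube {
    Vec3 face[6];
    Vec3 sample(Vec3 d) const {
        float ax = std::fabs(d.x), ay = std::fabs(d.y), az = std::fabs(d.z);
        if (ax >= ay && ax >= az) return face[d.x > 0 ? 0 : 1];
        if (ay >= az)             return face[d.y > 0 ? 2 : 3];
        return face[d.z > 0 ? 4 : 5];
    }
};

// Additive ambient term for one surface point: look up the averaged scene
// color along the light->point direction and modulate by the albedo.
Vec3 ambientTerm(Vec3 point, Vec3 albedo, Vec3 lightPos,
                 const AmbientCube& cube, float strength)
{
    Vec3 d = normalize({point.x - lightPos.x,
                        point.y - lightPos.y,
                        point.z - lightPos.z});
    Vec3 c = cube.sample(d);
    return { c.x * albedo.x * strength,
             c.y * albedo.y * strength,
             c.z * albedo.z * strength };
}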

Funny note: NVIDIA ended up doing something like this for the Mad Mod Mike demo. If you watch closely you'll see Mike's environment actually lending subtle color variations to his body. Very cool! VERY expensive though, since they DO use a dynamic cube map which they then blur in real time (!!).

I don't see the Crytek guys doing this, though. More likely it's something like Ambient Occlusion Fields. I'd be shocked to find it was a completely real-time dynamic procedure and not pre-calculated for efficiency in some way (memory constraints notwithstanding).


Thank you all for the replies.

Zemedelec:
Quote:

Common SH are distance-independent, but they can be made to depend on distance.
Just a suggestion

From your suggestion I found this paper:
Spherical Harmonic Gradients for Mid-Range Illumination
From a quick look, though, it doesn't look real-time.

DonnieDarko:
Thanks for the link. I haven't looked at it yet, but I will. From the abstract it looks close to what I'm looking for (and to what the guys at Crytek are doing). One note on what m4gnus said about dynamic objects: they don't show any dynamic object in this particular scene, so I assume they don't support dynamic objects. I haven't read it, so I can't comment further.

m4gnus:
Your explanation reminded me of nVidia's Mad Mod Mike demo (which Dirge mentioned in his last post). My concern there wasn't the performance, but the way they would use it, I mean in CryEngine: if they use this approach, how can it be applied to static geometry? In nVidia's demo it is done for the main character, not the room. It may be close, but it's not quite that.

Dirge:
Quote:

The technique had a number of variations, but the general idea was to store a very low resolution cube map (around 32x32x6) for each light in a scene, regenerated based on the light's on/off state, or with an update triggered if the environment was in motion (which didn't happen often anyway because of the static BSP). Cube map ownership was based on the portal area it was in, and it could then be "reprojected" onto the scene geometry in a separate ambient pass.

I don't really understand the process; can you make it clearer? What I understand from it is this:
You store a small cube map in each room at its center (e.g. a sector in a portal engine). At runtime you render all the lights into this cube map (only the lights), as colored spheres for example. Then, when you are done, you project the cube map onto all the faces in this sector. Is this close?
As you said, by using a very small cube map you don't need to blur it after rendering the lights, so this is good. :)
If I remember correctly there is a paper from nVidia about "diffuse cube maps", where they used a cube map to render the illumination on an object from many lights in one pass (except for the cube map update, of course). Or was it ATI's paper? I don't remember; I'll have to dig it out of my HD (or the net).

I'll have to try some of these ideas in action (probably the cube map thing first, because SH and PLRT seem a little more complex than that).

Thanks again for the useful replies. Any other suggestions are welcome.

HellRaiZer

PS.
Quote:

...and I'm far more interested in their skin/face rendering tech

I haven't seen any screenshots of that, but I do have skin rendering in mind (not particularly CryEngine's way) for when we finally finish our skeletal animation system. I'm more interested in "inorganic" geometry right now. :)

Quote:
Original post by HellRaiZer
From your suggestion I found this paper:
Spherical Harmonic Gradients for Mid-Range Illumination
From a quick look, though, it doesn't look real-time.



This paper talks about computing the coefficients in real time, which is not the case here. The real-time AO in CryEngine is for the static world only, and it is precomputed.

Hello,

Just wondering aloud here: has anyone ever tried simulating the look of an ambient point-light contribution with a simple hack? If you look at lots of videos or screenshots showing an ambient lighting solution, you notice that there is still a somewhat spherical falloff visible. I wonder if that could be simulated with a point light at the same position with a larger radius, ~20% of the strength, desaturated a bit, and biased so that even surfaces pointing away from it get a large contribution. Obviously that's a completely bullsh*t solution in terms of correctness, but it might give the impression of ambient light... especially when used in conjunction with the other dynamic light/shadowing calculations.
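
Concretely, the math I'm picturing is something like this (a plain-C++ sketch of the shader math; the 20% strength, the 0.5 wrap, and the 0.5 desaturation amount are all just guesses):

#include <algorithm>

struct Vec3 { float x, y, z; };
static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Fake "ambient" term of a point light: larger radius, ~20% strength,
// desaturated a bit, and a wrapped N.L so surfaces facing away from the
// light still get a contribution. n and toLight are unit vectors.
Vec3 fakeAmbient(Vec3 n, Vec3 toLight, float dist, float radius, Vec3 lightColor)
{
    // Wrapped diffuse: maps N.L from [-1,1] into [0,1] instead of clamping,
    // which is the "bias" that lights the back-facing hemisphere.
    float wrap = dot(n, toLight) * 0.5f + 0.5f;

    // Bounded linear falloff so the influence ends at 'radius'.
    float att = std::max(0.0f, 1.0f - dist / radius);

    // Desaturate halfway toward the light's luminance (amount is a guess).
    float lum = 0.299f*lightColor.x + 0.587f*lightColor.y + 0.114f*lightColor.z;
    Vec3 c = { lightColor.x + (lum - lightColor.x) * 0.5f,
               lightColor.y + (lum - lightColor.y) * 0.5f,
               lightColor.z + (lum - lightColor.z) * 0.5f };

    float s = 0.2f * wrap * att;   // ~20% of the direct light's strength
    return { c.x * s, c.y * s, c.z * s };
}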

Don't know how you'd do a similar thing with a directional light since it can't be bounded like a point light.

Sorry for the spam, just musing. Might try this at some point and post a pic, regardless of the outcome. :)

T

Hell: Almost, although it doesn't need to be placed at the center of the room (its origin can be at the light position). Also, I don't render the lights into the cube map, but the lit environment instead. The ambient cube map is meant to pick up the bounced light (the light reflected off of any diffuse surfaces).

The idea is very similar to diffuse cube maps, although their use is to provide a full lighting solution, so the diffuse lighting term is actually baked into the cube map (in cube world space) as you described.


Some Crysis face shots: [screenshots]

That's the best-looking real-time technique I've seen yet, and I've done a lot of research on the subject.


Quote:

Hell: Almost, although it doesn't need to be placed at the center of the room (its origin can be at the light position). Also, I don't render the lights into the cube map, but the lit environment instead. The ambient cube map is meant to pick up the bounced light (the light reflected off of any diffuse surfaces).


Thanks for clearing that up. I saw your previous reply (the one saying "Precisely...") and I was a little confused. I tried to visualize the whole thing (on paper), and it didn't look correct. I also thought of the correct version, and your confirmation made it clear.

Now I have to implement it as a simple demo to see what I can get. What I really need to test is whether it works with level geometry (non-convex, complex shapes where the light is inside), and whether the errors are too obvious.

I have a picture in mind where this approach won't work (it'll have errors), but I don't have time right now to put it on paper (and post it). I'll do that tomorrow.

Thanks for the help.

HellRaiZer

Yeah, I re-read your post and noticed you were saying to render all the lights as spheres instead of just rendering the lit scene, ha.

It should work fine with any kind of sealed environment, but yeah, it's definitely not going to be 100% accurate. Think of it as a supplemental ambient lighting hint generated from the light's perspective.

BTW, I should also mention that Quake 3 had an interesting concept called the ambient light grid. Essentially it's a 3D grid of color values pre-calculated during the radiosity stage. It looks pretty darn good even to this day. It would be interesting to see if anyone ever extends the concept to be hardware accelerated.
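
For reference, the runtime side of the grid is nothing more than a trilinear blend of the eight surrounding cells; here's a sketch of the lookup (Q3 also stores a directed color and direction per cell, which I'm leaving out):

#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };
static Vec3 lerp(Vec3 a, Vec3 b, float t) {
    return { a.x + (b.x-a.x)*t, a.y + (b.y-a.y)*t, a.z + (b.z-a.z)*t };
}

// Pre-baked 3D grid of ambient colors (the sort of thing Q3 bakes during
// its radiosity stage). Layout here is illustrative only.
struct LightGrid {
    int nx, ny, nz;            // grid resolution
    float cell;                // world-space cell size
    Vec3 origin;               // world position of cell (0,0,0)
    std::vector<Vec3> colors;  // nx*ny*nz entries

    Vec3 at(int x, int y, int z) const {
        auto c = [](int v, int n) { return v < 0 ? 0 : (v >= n ? n - 1 : v); };
        return colors[(c(z,nz)*ny + c(y,ny))*nx + c(x,nx)];
    }

    // Trilinear interpolation of the eight cells surrounding point p.
    Vec3 sample(Vec3 p) const {
        float fx = (p.x - origin.x) / cell;
        float fy = (p.y - origin.y) / cell;
        float fz = (p.z - origin.z) / cell;
        int x = (int)std::floor(fx), y = (int)std::floor(fy), z = (int)std::floor(fz);
        float tx = fx - x, ty = fy - y, tz = fz - z;
        Vec3 c00 = lerp(at(x,y,z),     at(x+1,y,z),     tx);
        Vec3 c10 = lerp(at(x,y+1,z),   at(x+1,y+1,z),   tx);
        Vec3 c01 = lerp(at(x,y,z+1),   at(x+1,y,z+1),   tx);
        Vec3 c11 = lerp(at(x,y+1,z+1), at(x+1,y+1,z+1), tx);
        return lerp(lerp(c00, c10, ty), lerp(c01, c11, ty), tz);
    }
};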

Good luck with that.

After some more research this morning, I found some interesting papers on the subject. The main idea revolves around obscurances, as presented in the paper "An Ambient Light Illumination Model" by Zhukov et al.
I haven't found that paper anywhere (see, the first page on Google :)), but I found a few others describing the technique. I'm posting them here in case someone is interested.

1. Fast realistic lighting for video games
2. Comparing hemisphere sampling techniques for obscurance computation
3. Combining light animation with obscurances for glossy environments
4. Real-time Obscurances with Color Bleeding (needs ACM account).
5. The same as 4, but as an article in ShaderX 4.

The only one I have read is the 1st. From the description, it looks like standard lightmapping, but instead of storing the indirect lighting from a specific light configuration, you store the obscurance of each point, which is independent of the light configuration. From the 1st paper:
Quote:

Roughly speaking, obscurance measures the part of the hemisphere obscured by the neighboring surfaces. E.g., near a corner of a room the obscurance of the patches is higher than in the central region. From the physics-of-light-transport point of view, obscurance expresses the lack of secondary (reflected) light rays reaching specific parts of the scene, thus making them darker. This is unlike radiosity, where secondary reflections are accounted for to increase the intensity.


If we assume that in CryEngine (because that was the question in the first place) they don't take dynamic objects into account, then this method gives results identical to their video, and it is 100% real-time. At bottom, it's standard lightmapping.
If they do take dynamic objects into account, then I think it is going to be a little painful. It is possible, according to the 4th paper, and you can see it in this video, but not exactly at what you'd call real-time game frame rates :) (7 fps for that scene, if I remember correctly from the paper).
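
For anyone curious, the per-patch precomputation boils down to roughly this (my own Monte Carlo sketch, not code from the papers; the linear rho() ramp and the sample count are arbitrary choices, and the ray caster is passed in as a placeholder):

#include <cmath>
#include <functional>
#include <random>

struct Vec3 { float x, y, z; };

// rho(): maps the distance of the first hit to [0,1] -- nearby geometry
// obscures fully, geometry beyond dMax not at all. The papers discuss
// better-shaped functions; a linear ramp is the simplest choice.
static float rho(float d, float dMax) { return d >= dMax ? 1.0f : d / dMax; }

// Monte Carlo obscurance of one patch at p with normal n (t, b span the
// tangent plane). castRay(origin, dir) must return the distance to the
// first intersection, or a huge value if nothing is hit.
// Result: 0 = fully obscured, 1 = fully open, as used in the ambient term.
float obscurance(Vec3 p, Vec3 n, Vec3 t, Vec3 b, float dMax,
                 const std::function<float(Vec3, Vec3)>& castRay,
                 int samples = 256)
{
    std::mt19937 rng(1234);
    std::uniform_real_distribution<float> uni(0.0f, 1.0f);
    float sum = 0.0f;
    for (int i = 0; i < samples; ++i) {
        // Cosine-weighted direction in tangent space (Malley's method),
        // so no explicit cos(theta) weight is needed in the average.
        float r = std::sqrt(uni(rng)), phi = 6.2831853f * uni(rng);
        float x = r * std::cos(phi), y = r * std::sin(phi);
        float z = std::sqrt(std::fmax(0.0f, 1.0f - x*x - y*y));
        Vec3 dir = { x*t.x + y*b.x + z*n.x,
                     x*t.y + y*b.y + z*n.y,
                     x*t.z + y*b.z + z*n.z };
        sum += rho(castRay(p, dir), dMax);
    }
    return sum / samples;
}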

Unfortunately I don't have ShaderX4, so I can't comment on the article or a possible demo that came with it.

Any ideas/comments on the method? Has anyone implemented this in the past? Any pitfalls that may arise, or suggestions for a possible implementation?

HellRaiZer

Quote:
The only one I have read is the 1st. From the description, it looks like standard lightmapping, but instead of storing the indirect lighting from a specific light configuration, you store the obscurance of each point, which is independent of the light configuration.


Hmm, that first paper sounds an awful lot like ambient occlusion, only packed into texture pages (like a lightmap or light atlas) for an entire level. Instead of using a radiosity generator, you use the patches it generates (similar to how they describe) to get the average obscurance (which actually sounds more efficient than shooting a ton of rays from every triangle of every surface in the world).

I don't know how Crysis could be doing this dynamically, but I wouldn't put it past them to do a separate ambient occlusion pass, and since lighting is additive, it would blend properly when introducing new lights into the scene... making it _dynamic_?? That's a long shot, though.

Quote:
Original post by HellRaiZer
The only one I have read is the 1st. From the description, it looks like standard lightmapping, but instead of storing the indirect lighting from a specific light configuration, you store the obscurance of each point, which is independent of the light configuration.


This is just an ambient occlusion map applied to the ambient lighting, nothing more.
It can't render local lighting caused by bounced light.
In the video, one can clearly see how back-facing surfaces get lit by the moving light.

Zemedelec:
Quote:

This is just an ambient occlusion map applied to the ambient lighting, nothing more.
It can't render local lighting caused by bounced light.
In the video, one can clearly see how back-facing surfaces get lit by the moving light.


You are right; I must learn to choose my words more carefully :) It's not "identical" (as I said), but it looks similar. I don't know how it will perform with back-facing triangles.

But from a quick look at the equation that includes the lights (eq. 5), there is a dependence of the ambient light intensity on the light's position (Is'). And as you can see from eq. 7, this ambient light intensity doesn't include the angle between the light vector and the normal (unlike eq. 6, which is for direct illumination).

The 1st paper isn't 100% accurate. If you read the 3rd paper, they describe a way to calculate a more correct value for Ia, which unfortunately assumes that the camera is fixed and needs some extra computation if the light moves. If I understood it correctly, you must recompute Ai (the set of patches visible from the light) and At (the total area visible from the light) whenever something moves.

Nevertheless, and because the initial question was about a more correct ambient model, I'll try it. It may not be exactly what they are doing in CryEngine 2, but it is better than the constant ambient term.

And one last thing: for those of you who have implemented AO maps, is this really AO, or is there some detail that makes it a different thing?

Thanks for the comments.

HellRaiZer

PS. I also found this paper describing the cube map process Dirge mentioned (not exactly, but close):
Cube-Map Data Structure for Interactive Global Illumination Computation in Dynamic Diffuse Environments. One part looks too slow (the cube map read-back and the SH coefficient calculation), but I don't know how that part works (the SH coefficients, I mean), so I can't comment on it.

Quote:
Original post by HellRaiZer
Nevertheless, and because the initial question was about a more correct ambient model, I'll try it. It may not be exactly what they are doing in CryEngine 2, but it is better than the constant ambient term.


There are a number of hacks that can be made to look like the real thing. It is very hard to estimate the amount of bounced light, so if there is ANY, the eye will just appreciate it.
I know some people who do just such hacks and got them working (bounced lighting in indoor environments).

The first step is to compute just the AO, plus a local ambient light that gets multiplied by that AO map. Use the AO map only on geometry that's in shadow. That way, when a light comes into a room/sector and you increase the ambient light strength, you'll get a similar effect of back faces being lit.
The main problem is that the whole sector gets the same amount of light.

The next step is to try SHs with distance information in them, per-vertex, with per-pixel AO. That can look better, but it is up to you to decide whether the effort is worth it (since the precomputation can be much slower).


Quote:
Original post by HellRaiZer
And one last thing: for those of you who have implemented AO maps, is this really AO, or is there some detail that makes it a different thing?

This is not AO. AO affects only the ambient lighting, whereas here we have a local light that changes the lighting in the whole room, including areas in shadow.

Anyway, AO-only looks sweet too, in outdoor scenes for example, where the ambient term is strong. We use it, and the difference is clearly visible, for sure :)

Hello again...

After approximately 2 days of coding (I know, I'm a little slow :)), I finally finished the obscurance method as described in the 1st of the 4 papers I mentioned in a previous post.
Here is a video showing the technique in action. It's in WMV format; the size is 2.71 MB and the duration is 46 secs.

I know it's nothing fancy, but I wanted to test whether it works like I expected, and it seems more or less OK. I mean, there is a change in ambient color with respect to the light's position, and back faces get some "bounced" light (it isn't really bounced light, but it fakes it).

In the video there is a room consisting of 62 triangles. The whole room is about 10 x 10 x 5 units, and it was subdivided into square patches of size 0.125 (approximately 26688 patches in total). The beta parameter from equation (7) was set to a high value (0.5) to exaggerate the back-face lighting. The light's intensity was also set to a high value, in order to keep the original equations intact and still be able to see the effect.

I didn't use lightmaps for the rendering; instead I rendered the patches themselves. That's why it looks like point filtering! :)

What do you think? Is this a good approximation of ambient lighting? Suggestions/comments are welcome.

HellRaiZer

Looks cool, nice job for 2 days of work. Definitely too squarish/jaggy though, and most certainly not a real-time solution, but whatever, still cool. Why don't you submit it as an Image of the Day? ;-)

Quote:

Looks cool, nice job for 2 days of work.

Thanks. :)

Quote:

Definitely too squarish/jaggy though, and most certainly not a real-time solution, but whatever, still cool.

As I said, I'm rendering the patches with a constant obscurance for each one; that's why it looks squarish/jaggy. These are the same results you get when you render radiosity illumination with patches. When I convert them to lightmaps (or, more accurately, obscurance maps), I think it will look better thanks to bilinear/trilinear filtering.

As for the real-time part: the rendering cost of the additional ambient term is negligible if you have the maps. Of course the preprocessing step is a little slow, but it can be optimized with some kind of tree (octree or BSP). What I'm trying to say is that it is 100% what you can safely call real-time. This is for static scenes: I was rendering the patches (27000 of them) and the fps was still around 300! I think that by using a texture it will be a little better :)

If you are talking about dynamic objects then, yes, my implementation can't handle that in real time :( But as you can see in the video from the 4th paper, the technique can handle dynamic objects quite well at near-real-time rates, with object-to-object interactions.

Or, as an alternative, you can skip the object-to-object obscurances (not so cool, though). That way, every rigid object (whether it moves or not) can be preprocessed, because obscurance doesn't depend on any light. For skinned models, of course, you could try something like nVidia's dynamic AO.

HellRaiZer

EDIT: One thing to mention about obscurance map sizes: as I said, in the video there are about 27000 patches. This means that for this kind of resolution (which I think is good enough for ambient lighting), and with good packing, a 256x128 obscurance map is more than enough. So I don't think texture memory is an issue for this technique.

Quote:
Original post by HellRaiZer
...


Sorry, I didn't read the 1st paper; is it about having an AO map for the constant ambient term?
This is cool, but I think you need more complex geometry to show the advantages of the technique.
Cornell-box-type tests are really color tests, not obstruction ones :)

P.S.: Good work for 2 days! Respect!

Quote:

Sorry, I didn't read the 1st paper; is it about having an AO map for the constant ambient term?

The equation for the ambient term is this (from the 1st paper):


I_ambient = (I_ambient_const + b * Sum_i(I_light[i] / r[i]^2)) * k_ambient * obscurance


where:

I_ambient_const is what it says,
b is a scaling factor in the range [0, 1],
I_light[i] is the i-th light's intensity,
r[i] is the distance from the i-th light to the surface,
k_ambient is the material's ambient reflectivity factor, and
obscurance is a number in the range [0, 1], where 0 means the surface is completely closed (obscured) and 1 means it is completely open.


This is the ambient term alone; you can add to it whatever lighting function you want (per-vertex, dot3 diffuse, dot3 specular, etc.).
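
Or, written out as code, a direct transcription of the equation with made-up types (note the unclamped 1/r^2, which blows up when a light sits right on a surface):

#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

struct Light { Vec3 pos; float intensity; };

static float dist(Vec3 a, Vec3 b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx*dx + dy*dy + dz*dz);
}

// Direct transcription of the ambient term above; 'obscurance' is read
// from the precomputed patch/map (0 = fully closed, 1 = fully open).
float ambientTerm(Vec3 p, float iAmbientConst, float b,
                  const std::vector<Light>& lights,
                  float kAmbient, float obscurance)
{
    float sum = 0.0f;
    for (const Light& l : lights) {
        float r = dist(p, l.pos);
        sum += l.intensity / (r * r);   // may want a clamp for very small r
    }
    return (iAmbientConst + b * sum) * kAmbient * obscurance;
}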

Quote:

This is cool, but I think you need more complex geometry to show the advantages of the technique.
Cornell-box-type tests are really color tests, not obstruction ones :)

Unfortunately, I made the geometry as simple as possible, just to test the technique. It took me about half an hour to make (I'm no artist :)). I'll try to make something better/more complex for when I convert the patches to textures.
Also, one more reason I kept the geometry simple is that I don't know how the patch generator will handle more complex things, like rounded pillars, etc.

HellRaiZer
