Real-time Ambient Maps (CryEngine)

Quote:Original post by HellRaiZer
Thank you all for the replies.

Zemedelec:
Quote:
Common SHs are distance-irrelevant, but they can be made to depend on distance.
Just a suggestion

From your suggestion I found this paper:
Spherical Harmonic Gradients for Mid-Range Illumination
From a quick look, though, it doesn't look real-time.



This paper talks about computing coefficients in real time, which is not the case here. The real-time AO in CryEngine is for the static world only, and it is precomputed.

Hello,

Just wondering aloud here - has anyone ever tried simulating the look of an ambient point-light contribution with a simple hack? If you look at a lot of videos or screenshots showing an ambient lighting solution, you notice that there is still a somewhat spherical falloff visible. I wonder if that could be simulated with a point light at the same position, with a larger radius, ~20% of the strength and a slightly desaturated color, and then biased so that even surfaces pointing away from it get a large contribution. Obviously, that's a completely bullsh*t solution in terms of correctness, but it might give the impression of ambient light... especially when used in conjunction with the other dynamic light/shadowing calcs.
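
Something like this, roughly, per shaded point (a C++ sketch just to illustrate; the function and type names and the exact bias/scale constants are made up):

    // Rough sketch of the fake "ambient point light" hack described above.  All
    // names (Vec3, ambientHack, ...) and the exact constants are made up just to
    // illustrate the idea; this evaluates one surface point on the CPU.
    #include <algorithm>
    #include <cmath>

    struct Vec3 { float x, y, z; };

    static Vec3  sub(Vec3 a, Vec3 b)    { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
    static Vec3  scale(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }
    static float dot(Vec3 a, Vec3 b)    { return a.x * b.x + a.y * b.y + a.z * b.z; }
    static float length(Vec3 a)         { return std::sqrt(dot(a, a)); }
    static Vec3  normalize(Vec3 a)      { return scale(a, 1.0f / length(a)); }

    // Fake ambient contribution of a point light at lightPos for a surface point
    // with normal n: larger radius than the real light, ~20% of its strength, a
    // pre-desaturated color, and an N.L term biased so that surfaces facing away
    // from the light still receive most of the contribution.
    Vec3 ambientHack(Vec3 surfacePos, Vec3 n, Vec3 lightPos,
                     Vec3 desaturatedColor, float ambientRadius)
    {
        Vec3  toLight = sub(lightPos, surfacePos);
        float dist    = length(toLight);
        if (dist >= ambientRadius)
            return { 0.0f, 0.0f, 0.0f };

        // Spherical falloff, as for a normal point light but over the larger radius.
        float atten = 1.0f - dist / ambientRadius;

        // Heavily biased N.L: even a surface pointing straight away still gets 0.7.
        float ndotl  = dot(n, normalize(toLight));
        float biased = 0.7f + 0.3f * std::max(ndotl, 0.0f);

        return scale(desaturatedColor, 0.2f * atten * biased);
    }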

Don't know how you'd do a similar thing with a directional light since it can't be bounded like a point light.

Sorry for the spam, just musing. Might try this at some point and post a pic, regardless of the outcome. :)

T
Hell: Almost, although it doesn't need to be placed at the center of the room (its origin can be at the light position). Also, I don't render the lights into the cube map, but the lit environment instead. That ambient cube map is meant to pick up the bounced light (the light reflected off of any diffuse surfaces).

The idea is very similar to diffuse cube maps, although those are meant to provide a full lighting solution, so the diffuse lighting term is actually baked into the cube map (in cube world space) as you described.
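
The lookup itself would work like a diffuse cube map fetch, e.g. sample with the world-space normal and treat the result as the bounced-light ambient contribution. A rough C++ sketch (the CubeMap type, the face layout and the normal-based lookup are only my illustration, not engine code):

    // Rough sketch: CubeMap here stands for the blurred "ambient" cube map that
    // was rendered from the light position with the lit environment in it.
    #include <cmath>
    #include <vector>

    struct Color { float r, g, b; };
    struct Vec3  { float x, y, z; };

    struct CubeMap
    {
        int size;                        // size x size texels per face
        std::vector<Color> faces[6];     // +X, -X, +Y, -Y, +Z, -Z

        // Nearest-texel lookup in direction d (standard cube-map face selection).
        Color sample(Vec3 d) const
        {
            float ax = std::fabs(d.x), ay = std::fabs(d.y), az = std::fabs(d.z);
            int face; float u, v, ma;
            if (ax >= ay && ax >= az) { face = d.x > 0 ? 0 : 1; ma = ax; u = -d.z * (d.x > 0 ? 1 : -1); v = -d.y; }
            else if (ay >= az)        { face = d.y > 0 ? 2 : 3; ma = ay; u =  d.x; v =  d.z * (d.y > 0 ? 1 : -1); }
            else                      { face = d.z > 0 ? 4 : 5; ma = az; u =  d.x * (d.z > 0 ? 1 : -1); v = -d.y; }
            int x = (int)((u / ma * 0.5f + 0.5f) * (size - 1));
            int y = (int)((v / ma * 0.5f + 0.5f) * (size - 1));
            return faces[face][y * size + x];
        }
    };

    // Per-pixel ambient term: bounced light from the cube map times the diffuse albedo.
    Color ambientFromCube(const CubeMap& cube, Vec3 worldNormal, Color albedo)
    {
        Color c = cube.sample(worldNormal);
        return { c.r * albedo.r, c.g * albedo.g, c.b * albedo.b };
    }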


Some Crysis face shots:

[images]

That's the best-looking real-time technique I've seen yet, and I've done a lot of research on the subject.

[Edited by - Dirge on May 9, 2006 11:41:49 AM]
"Artificial Intelligence: the art of making computers that behave like the ones in movies."www.CodeFortress.com
Quote:
Hell: Almost, although it doesn't need to be placed at the center of the room (its origin can be at the light position). Also, I don't render the lights into the cube map, but the lit environment instead. That ambient cube map is meant to pick up the bounced light (the light reflected off of any diffuse surfaces).


Thanks for clearing that up. I saw your previous reply (the one saying "Precisely...") and I was a little confused. I tried to visualize the whole thing (on paper), and it didn't look correct. I also thought of the correct solution, and your confirmation made it clear.

I now have to implement it as a simple demo to see what I can get. What I really need to test is whether it works with level geometry (non-convex, complex shapes where the light is inside), and whether the errors are too obvious.

I have a picture in mind where this approach won't work (it'll have errors), but I don't have time right now to put it on paper (and post it). I'll do that tomorrow.

Thanks for the help.

HellRaiZer
Yeah, I re-read your post and noticed you were talking about rendering all the lights as spheres instead of just rendering the lit scene, ha.

It should work fine with any kind of sealed environment, but yeah, it's definitely not going to be 100% accurate. Think of it as a supplemental ambient lighting hint generated from the light's perspective.

BTW, I should also mention that Quake 3 had an interesting concept called the ambient light grid. Essentially it's a 3D grid of color values pre-calculated during the radiosity stage. It looks pretty darn good even to this day. It would be interesting to see if anyone ever extends the concept to be hardware-accelerated.
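
Not id's actual code, but the gist is just a 3D array of colors filled offline during the radiosity pass and trilinearly interpolated at an entity's position, something like this sketch (names and layout are mine):

    // Sketch of an ambient light grid: a 3D grid of colors over the level,
    // precomputed offline and sampled with trilinear interpolation at an
    // object's position to get its ambient term.
    #include <algorithm>
    #include <cmath>
    #include <vector>

    struct Color { float r, g, b; };

    struct AmbientLightGrid
    {
        int nx, ny, nz;                  // grid resolution
        float originX, originY, originZ;
        float cellSize;                  // world units per cell
        std::vector<Color> cells;        // nx * ny * nz entries, filled offline

        Color at(int x, int y, int z) const
        {
            x = std::clamp(x, 0, nx - 1);
            y = std::clamp(y, 0, ny - 1);
            z = std::clamp(z, 0, nz - 1);
            return cells[(z * ny + y) * nx + x];
        }

        // Trilinear lookup at a world-space position.
        Color sample(float wx, float wy, float wz) const
        {
            float gx = (wx - originX) / cellSize;
            float gy = (wy - originY) / cellSize;
            float gz = (wz - originZ) / cellSize;
            int   x0 = (int)std::floor(gx), y0 = (int)std::floor(gy), z0 = (int)std::floor(gz);
            float fx = gx - x0, fy = gy - y0, fz = gz - z0;

            Color c = { 0, 0, 0 };
            for (int k = 0; k < 2; ++k)
                for (int j = 0; j < 2; ++j)
                    for (int i = 0; i < 2; ++i)
                    {
                        float w = (i ? fx : 1 - fx) * (j ? fy : 1 - fy) * (k ? fz : 1 - fz);
                        Color s = at(x0 + i, y0 + j, z0 + k);
                        c.r += w * s.r; c.g += w * s.g; c.b += w * s.b;
                    }
            return c;
        }
    };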

Good luck with that.
"Artificial Intelligence: the art of making computers that behave like the ones in movies."www.CodeFortress.com
After some more research this morning I found some interesting papers on the subject. The main idea revolves around obscurances, as presented in the paper "An Ambient Light Illumination Model" by Zhukov et al.
I haven't found that paper itself anywhere (see, the 1st page on Google :)), but I found a few others describing the technique. I'm posting them here in case someone is interested.

1. Fast realistic lighting for video games
2. Comparing hemisphere sampling techniques for obscurance computation
3. Combining light animation with obscurances for glossy environments
4. Real-time Obscurances with Color Bleeding (needs ACM account).
5. The same as 4, but as an article in ShaderX 4.

The only one I have read is the 1st. From the description it looks like standard lightmapping, but instead of storing the indirect lighting from a specific light configuration, you store the obscurance of the point, which is independent of the light configuration. From the 1st paper:
Quote:
Roughly speaking, obscurance measures the part of the hemisphere obscured by the neighbor surfaces. E.g., near a corner of a room the obscurance of the patches is higher than in the central region. From the point of view of the physics of light transport, obscurance expresses the lack of secondary (reflected) light rays coming to the specific parts of the scene, thus making them darker. This is unlike radiosity, where secondary reflections are accounted for to increase the intensity.
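
The way I read it, the per-lightmap-texel precomputation would be roughly this (my own hemisphere-sampling sketch of the obscurance estimate, not the paper's algorithm; castRay, rho and the vector type are placeholders):

    // My own sketch of the precomputation.  The obscurance of a point is
    // estimated by shooting rays over the hemisphere:
    // W(P) = (1/pi) * integral of rho(d(P, w)) * cos(theta) dw, where d(P, w) is
    // the distance to the first hit in direction w and rho maps it to [0, 1].
    #include <cmath>
    #include <cstdlib>
    #include <functional>

    struct Vec3 { float x, y, z; };

    // Distance-to-obscurance mapping; anything monotonic that reaches 1 at dMax
    // works (the papers suggest something like sqrt(d / dMax)).
    static float rho(float hitDistance, float dMax)
    {
        if (hitDistance >= dMax) return 1.0f;   // open beyond dMax: no obscurance
        return std::sqrt(hitDistance / dMax);
    }

    // Monte-Carlo estimate of the obscurance of point p with normal n.  tangent
    // and bitangent complete an orthonormal basis around n.  castRay(origin, dir)
    // returns the distance to the nearest hit, or a huge value if nothing is hit.
    float obscurance(Vec3 p, Vec3 n, Vec3 tangent, Vec3 bitangent, float dMax,
                     int numSamples,
                     const std::function<float(Vec3, Vec3)>& castRay)
    {
        float sum = 0.0f;
        for (int i = 0; i < numSamples; ++i)
        {
            // Cosine-weighted direction in the hemisphere around n; the cos(theta)
            // factor of the integral is folded into the sampling density.
            float u1 = (float)std::rand() / RAND_MAX;
            float u2 = (float)std::rand() / RAND_MAX;
            float r  = std::sqrt(u1), phi = 6.2831853f * u2;
            float lx = r * std::cos(phi), ly = r * std::sin(phi), lz = std::sqrt(1.0f - u1);

            Vec3 dir = { tangent.x * lx + bitangent.x * ly + n.x * lz,
                         tangent.y * lx + bitangent.y * ly + n.y * lz,
                         tangent.z * lx + bitangent.z * ly + n.z * lz };

            sum += rho(castRay(p, dir), dMax);
        }
        return sum / numSamples;   // 1 = fully open, 0 = fully obscured
    }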


If we assume that in CryEngine (because that was the question in the first place) they don't take dynamic objects into account, then this method gives identical results to their video, and it is 100% real-time. At its core, it's standard lightmapping.
If they do take dynamic objects into account, then I think it is going to be a little painful. It is possible, according to the 4th paper, and you can see it in this video, but it's not exactly what you'd call real-time game frame rates :) (7 fps for this scene, if I remember correctly from the paper).

Unfortunately I don't have ShaderX 4, so I can't comment on the article or a possible demo that came with it.

Any ideas/comments on the method? Has anyone implemented this in the past? Are there any pitfalls that may arise, or suggestions for a possible implementation?

HellRaiZer
Quote:The only one I have read is the 1st. From the description it looks like standard lightmapping, but instead of storing the indirect lighting from a specific light configuration, you store the obscurance of the point, which is independent of the light configuration.


Hmm, that first paper sounds an awful lot like ambient occlusion, only packed into texture pages (like a lightmap or light atlas) for an entire level. Instead of shooting a ton of rays from every triangle on every surface in the world, you use the patches from a radiosity generator (much like the ones they describe) to get the average obscurance, which actually sounds more efficient.
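
My own reading of that patch-based idea, as a sketch (definitely not the papers' code): reuse the radiosity patches and their form factors, and treat whatever part of the hemisphere sees no patch as fully open:

    // Patch-based obscurance estimate: the receiver's obscurance is the
    // form-factor-weighted average of rho(distance) over the patches it sees.
    #include <cmath>
    #include <vector>

    struct VisiblePatch
    {
        float formFactor;   // fraction of the receiver's hemisphere this patch covers
        float distance;     // receiver-to-patch distance
    };

    // Distance-to-obscurance mapping, 1 beyond dMax (far geometry doesn't obscure).
    static float rho(float d, float dMax)
    {
        return d >= dMax ? 1.0f : std::sqrt(d / dMax);
    }

    float patchObscurance(const std::vector<VisiblePatch>& visible, float dMax)
    {
        float covered = 0.0f;   // total form factor of the visible patches
        float sum     = 0.0f;
        for (const VisiblePatch& p : visible)
        {
            covered += p.formFactor;
            sum     += p.formFactor * rho(p.distance, dMax);
        }
        float open = covered < 1.0f ? 1.0f - covered : 0.0f;
        return sum + open;      // 1 = fully open, 0 = fully obscured
    }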

I don't know how Crysis could be doing this dynamically, but I wouldn't put it past them to do a separate ambient occlusion pass, and since lighting is additive, it would blend properly when introducing new lights into the scene... making it _dynamic_?? That's a long shot, though.

"Artificial Intelligence: the art of making computers that behave like the ones in movies."www.CodeFortress.com
Quote:Original post by HellRaiZer
The only one I have read is the 1st. From the description it looks like standard lightmapping, but instead of storing the indirect lighting from a specific light configuration, you store the obscurance of the point, which is independent of the light configuration. From the 1st paper:


This is just an ambient occlusion map applied to the ambient lighting, nothing more.
It can't render local lighting caused by bounced light.
In the video, one can clearly see how back-facing surfaces get lit by the moving light.
Zemedelec:
Quote:
This is just an ambient occlusion map applied to the ambient lighting, nothing more.
It can't render local lighting caused by bounced light.
In the video, one can clearly see how back-facing surfaces get lit by the moving light.


You are right. I must learn to choose more appropriate words :) It's not "identical" (as I said) but it looks similar. I don't know how it will perform with back-facing triangles.

But from a quick look at the equation that includes the lights (equ. 5), there is a dependence of the ambient light intensity on the light's position (Is'). And as you can see from equ. 7, this ambient light intensity doesn't include the angle between the light vector and the normal (unlike equ. 6, which is for direct illumination).

The 1st paper isn't 100% accurate. The 3rd paper describes a way to calculate a more correct value for Ia, which unfortunately assumes that the camera is fixed and needs some extra computations if the light moves. If I understood it correctly, you must find the new Ai (all patches visible from the light) and At (the total area visible from the light) again whenever something moves.
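
For reference, the basic shape of the obscurance-modulated ambient term these papers build on is, as far as I remember it (general form only, not their exact equations):

    I_{\mathrm{ambient}}(P) \;\approx\; I_A \, R(P) \, W(P),
    \qquad
    W(P) \;=\; \frac{1}{\pi} \int_{\Omega} \mu\bigl(d(P,\omega)\bigr)\, \cos\theta \; d\omega

where R(P) is the diffuse reflectance at P, I_A is the ambient light intensity, d(P, omega) is the distance to the first surface hit in direction omega, and mu maps that distance to [0, 1] (1 for open space). The light-position dependence mentioned above would then come in through the ambient intensity, if I'm reading equ. 5/7 right.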

Nevertheless, and because the initial question was about a more correct ambient model, I'll try it. This may not be exactly what they are doing in CryEngine 2, but it is better than the constant ambient term.

And one last thing. For those of you who have implemented AO maps: is this really AO, or is there some detail that makes it a different thing?

Thanks for the comments.

HellRaiZer

PS. I also found this paper describing the cubemap process Dirge mentioned (not exactly, but it's close):
Cube-Map Data Structure for Interactive Global Illumination Computation in Dynamic Diffuse Environments. There is one thing that looks too slow (the cubemap readback and the SH coefficient calculations), but I don't know how this is done (the SH coefficients, that is), so I can't comment on that.
Quote:Original post by HellRaiZer
Nevertheless, and because the initial question was about a more correct ambient model, I'll try it. This may not be exactly what they are doing in CryEngine 2, but it is better than the constant ambient term.


There are a number of hacks that can be made to look like the real thing. It is very hard to estimate the amount of bounced light, so if there is ANY, the eye will simply appreciate it.
I know some people who do just such hacks and got it working (bounced lighting in indoor environments).

The first step is to compute just AO, plus a local ambient light that is multiplied by that AO map. Use the AO map only on geometry that's in shadow. That way, when a light comes into a room/sector and you increase the ambient light strength, you get a similar effect of back faces being lit.
The main problem is that the whole sector gets the same amount of light.
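
Per shaded point, that first step is basically just the following (a rough C++-style sketch, names are mine; shadowFactor is 0 in shadow and 1 when fully lit):

    // Rough sketch of the first step.  ao comes from the baked AO map and
    // localAmbient is the per-room/sector ambient color that you raise when a
    // light enters the sector.
    struct Color { float r, g, b; };

    Color shade(Color albedo, Color directLight, float shadowFactor,
                Color localAmbient, float ao)
    {
        // AO-modulated local ambient, faded out where the point is directly lit,
        // so the AO map only shows up on shadowed geometry.
        float ambientWeight = 1.0f - shadowFactor;
        Color ambient = { localAmbient.r * ao * ambientWeight,
                          localAmbient.g * ao * ambientWeight,
                          localAmbient.b * ao * ambientWeight };

        return { albedo.r * (directLight.r * shadowFactor + ambient.r),
                 albedo.g * (directLight.g * shadowFactor + ambient.g),
                 albedo.b * (directLight.b * shadowFactor + ambient.b) };
    }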

The next one is to try SHs with distance information in them: per-vertex, with per-pixel AO. That can look better, but it is up to you to decide whether the effort is worth it (since the precomputation can be way slower).


Quote:Original post by HellRaiZer
And one last thing. For those of you who have implemented AO maps: is this really AO, or is there some detail that makes it a different thing?

This is not AO. AO affects only the ambient lighting, whereas here we have a local light that changes the lighting in the whole room, including areas in shadow.

Anyway, AO-only looks sweet too, in outdoor scenes for example, where the ambient term is strong. We use that, and the difference is visible, for sure :)
