Real-time Ambient Maps (CryEngine)

Started by
25 comments, last by Dirge 17 years, 11 months ago
Hi... I was watching CryEngine's GDC video and I was wondering what "Ambient maps" could be. I haven't worked with any PRT-like effects (SH, ambient occlusion, etc.) and I don't know what they can really do. Does anyone have any idea how this is possible? The most impressive thing in the video (I think) is the moving light. If I remember correctly (I haven't read anything on the subject for a while), SH can be used for static lights, so this isn't SH. From running nVidia's Dynamic Ambient Occlusion demo on a GeForce 7800 GTX, I think it is too slow for a game environment: the frame rate was about 50 fps with 2 passes and about 90 fps with 1 pass. What do you think it is? Thanks in advance.
HellRaiZer
I've been wondering the same thing. I posted a similar question on Beyond3D and no one really knew there either. I'm pretty sure, however, that it has nothing to do with Dynamic Ambient Occlusion as in the nVidia demos (and the GPU Gems article).

Maybe it's just a trick like in Fear?
"Artificial Intelligence: the art of making computers that behave like the ones in movies."www.CodeFortress.com
Quote:
Maybe it's just a trick like in Fear?


Because I don't remember seeing something like this in FEAR, do you have any references for the trick you are talking about?

Or if you remember any point in the game where it is really obvious, please point it out. It's been some time since I last saw FEAR in action, so I don't remember much of it (except for the scenario and some things like the volumetric light from the windows and its soft shadows).

HellRaiZer
Quote:Original post by HellRaiZer
...


Common SH is distance-independent, but it can be made to depend on distance.
Just a suggestion
I'm guessing they have ambient maps rendered offline at several locations and they interpolate between them based on where the light is positioned, along with a shadow volume for the direct shadows.
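Something like this, maybe (a purely hypothetical sketch of that interpolation; the AmbientMap layout and the one-dimensional light parameter are my own assumptions, not anything Crytek has shown):

#include <cstddef>
#include <vector>

// One ambient map baked offline for a particular light position along its path.
// The texel layout doesn't matter here; it's just a flat array of values.
struct AmbientMap {
    float lightParam;              // where along the light's path this map was baked (0..1)
    std::vector<float> texels;     // baked ambient colors, e.g. RGB triples per texel
};

// Blend the two baked maps that bracket the current light position.
// Assumes 'baked' is non-empty, sorted by lightParam, and all maps have the same size.
std::vector<float> blendAmbientMaps(const std::vector<AmbientMap>& baked, float lightParam)
{
    const AmbientMap* lo = &baked.front();
    const AmbientMap* hi = &baked.back();
    for (std::size_t i = 0; i + 1 < baked.size(); ++i) {
        if (baked[i].lightParam <= lightParam && lightParam <= baked[i + 1].lightParam) {
            lo = &baked[i];
            hi = &baked[i + 1];
            break;
        }
    }
    float t = (hi->lightParam > lo->lightParam)
            ? (lightParam - lo->lightParam) / (hi->lightParam - lo->lightParam)
            : 0.0f;

    std::vector<float> out(lo->texels.size());
    for (std::size_t i = 0; i < out.size(); ++i)
        out[i] = lo->texels[i] * (1.0f - t) + hi->texels[i] * t;   // plain per-texel lerp
    return out;
}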
--Jeroen Stout - WebsiteCreator of-Divided
Quote:Original post by The Parrot
I'm guessing they have ambient maps rendered offline at several locations and they interpolate between them based on where the light is positioned, along with a shadow volume for the direct shadows.


I think that is way too brute force and it would require an insane amount of texture memory.

Maybe they are using something like Precomputed Local Radiance Transfer for Real-Time Lighting Design
Quote:Original post by DonnieDarko
I think that is way too brute force and it would require an insane amount of texture memory.

Maybe they are using something like Precomputed Local Radiance Transfer for Real-Time Lighting Design


Agreed.
I'll explain the way I'm going to integrate radiosity-like effects in my engine (once it can handle shaders and multiple passes properly); maybe Crytek is doing something similar.
I have no idea if it would work, look good, or be fast, though.
I'm treating dynamic objects (players, stuff flying around) differently from static geometry (the indoor map).

What we need for radiosity:

(1) the radiance emitted from every dynamic object, and
(2) the radiance that every object receives from the surrounding environment.



(1):
Render every object from a few different directions (e.g. 5) into a texture, storing the color in RGB and the depth in A. With the depth stored in the texture we can compute the AO for every texel (kind of like shadow mapping with a single mesh but multiple lights) and multiply the AO value with the color in the environment map (maybe blur the environment map beforehand).
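A rough CPU-side sketch of the AO part of (1), just to show what I mean (DepthCapture, the orthographic projection and the bias value are my own assumptions; the real thing would of course run on the GPU):

#include <vector>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// One of the (e.g. 5) captures of the object: an orthographic camera that stored
// the closest depth per texel (the A channel of the render described above).
struct DepthCapture {
    Vec3 right, up, forward;        // orthonormal basis of the capture camera
    Vec3 origin;                    // camera position
    int  width = 64, height = 64;
    float extent = 2.0f;            // half-size of the orthographic window
    std::vector<float> depth;       // closest depth per texel, size width * height

    // Project a world-space point into this capture; returns false if it falls outside.
    bool project(const Vec3& p, int& tx, int& ty, float& d) const {
        Vec3 rel = { p.x - origin.x, p.y - origin.y, p.z - origin.z };
        float u = dot(rel, right) / extent;    // [-1,1] across the window
        float v = dot(rel, up)    / extent;
        d = dot(rel, forward);                 // depth along the view axis
        if (u < -1 || u > 1 || v < -1 || v > 1 || d < 0) return false;
        tx = int((u * 0.5f + 0.5f) * (width  - 1));
        ty = int((v * 0.5f + 0.5f) * (height - 1));
        return true;
    }
};

// AO for one surface texel: the fraction of captures that can "see" its world position
// (a depth compare, just like shadow mapping). 1 = fully open, 0 = fully occluded.
float estimateAO(const Vec3& worldPos, const std::vector<DepthCapture>& captures,
                 float bias = 0.01f)
{
    int visible = 0, total = 0;
    for (const DepthCapture& c : captures) {
        int tx, ty;
        float d;
        if (!c.project(worldPos, tx, ty, d)) continue;
        ++total;
        if (d <= c.depth[ty * c.width + tx] + bias) ++visible;  // nothing in front of us
    }
    return total ? float(visible) / float(total) : 1.0f;
}

The AO value would then just be multiplied with the (blurred) environment map color for that texel.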


(2):
This is a little more complicated. I'm planning to store some environment maps of the surroundings for certain points in my maps (at the room centers, for example). This isn't accurate, but one could also store the depth in the environment maps and use raytracing to find the correct intersection point with the environment (still not 100% correct, but better; I have a paper about the raytracing part in my bookmarks if anyone is interested). You could also go for the exact solution, but I think that would be horribly slow (rendering the map six times for every object).
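For the "store the depth and raytrace" part, the lookup could look roughly like this (my own sketch; sampleDepth, the step count and the range are assumptions, and a fixed-step march is about the crudest way to find the intersection):

#include <cmath>
#include <functional>

struct Vec3 { float x, y, z; };

static Vec3  add(Vec3 a, Vec3 b)  { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3  sub(Vec3 a, Vec3 b)  { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3  mul(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }
static float len(Vec3 a)          { return std::sqrt(a.x*a.x + a.y*a.y + a.z*a.z); }

// sampleDepth(dir) = distance from the environment map's center to the nearest surface
// in direction 'dir' (the depth stored alongside the color). Returns the direction
// (from the map's center) to use for the actual color lookup.
Vec3 depthCorrectedLookupDir(Vec3 shadingPoint, Vec3 rayDir /* normalized */, Vec3 mapCenter,
                             const std::function<float(Vec3)>& sampleDepth,
                             float maxDistance = 50.0f, int steps = 64)
{
    Vec3 hit = add(shadingPoint, mul(rayDir, maxDistance));   // fallback: end of the ray
    for (int i = 1; i <= steps; ++i) {
        Vec3 p   = add(shadingPoint, mul(rayDir, maxDistance * float(i) / float(steps)));
        Vec3 toP = sub(p, mapCenter);
        if (len(toP) >= sampleDepth(toP)) { hit = p; break; } // ray passed behind the stored surface
    }
    return sub(hit, mapCenter);   // fetch the color environment map with this direction
}

Without the stored depth you'd just look the color up with rayDir directly, which is only correct when the shading point sits exactly at the map's center.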

The radiance emitted from the object onto the map is still a problem. You'd need to determine whether (or how much of) a mesh is visible from a certain point of the map. I think the paper about AO Fields could help, but I haven't understood it properly yet.


This is the first idea I came up with when thinking about AO/GI and similar effects, so maybe Crytek's ambient maps are something like the method described above.


As stated before, I don't know if it's fast or even works, so if somebody spots a problem, please tell me.

Edit: wow, PLRT looks cool, but if I read it correctly it doesn't work on dynamic objects, right?


regards,
m4gnus
"There are 10 types of people in the world... those who understand binary and those who don't."
If I remember correctly, in Fear there's a room with a red wall or something similar and a white light shining on it, and the adjacent walls appear lit by the bounced red light. I would say it looks staged (i.e. just some red lights shining on the adjacent walls), but I remember the light being dynamic somehow. I'm sure nothing advanced was involved and it was a trick of some sort.

I still can't speculate from the video and screenshots what they're doing (and I'm far more interested in their skin/face rendering tech) but I did come up with something similar to what m4gnus described a while back.

The technique had a number of variations, but the general idea was to store a very low resolution cube map (around 32x32x6) for each light in a scene, regenerated based on its on/off state or when an update was triggered because the environment was in motion (which didn't happen often anyway because of the static BSP). Cube map ownership was based on the portal area it was in, and it could then be "reprojected" onto the scene geometry in a separate ambient pass. This obviously didn't work well with flickering lights unless you wanted to recalculate the ambient cube map dynamically each frame (not too bad, but scene setup is painfully slow and has to be done for each face).

Because the cube map was so low-res, the upsampling gave you a nice blur that averaged the scene colors for you; that was the trick. Blurring it manually would make the technique prohibitively expensive (well, the real-time version). The idea was to distribute the average scene color, not the actual reflected color off every surface, after all.
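To make the ambient pass part concrete, here's a stripped-down sketch (all names are invented, and I've reduced the cube map to one averaged color per face to keep it short; the real version fetches the 32x32 faces with bilinear filtering, which is where the free blur comes from):

struct Vec3  { float x, y, z; };
struct Color { float r, g, b; };

// The per-portal-area "ambient cube map", collapsed here to six averaged colors
// (one per face: +X, -X, +Y, -Y, +Z, -Z).
struct AmbientCube {
    Color face[6];
};

static Color scale(Color c, float s) { return { c.r * s, c.g * s, c.b * s }; }
static Color addc (Color a, Color b) { return { a.r + b.r, a.g + b.g, a.b + b.b }; }

// Ambient term for one shaded point: blend the six face colors by how strongly
// the (unit) world-space normal points towards each face, then modulate by albedo.
Color ambientTerm(const AmbientCube& cube, const Vec3& n, const Color& albedo)
{
    Color amb = { 0, 0, 0 };
    amb = addc(amb, scale(n.x > 0 ? cube.face[0] : cube.face[1], n.x * n.x));
    amb = addc(amb, scale(n.y > 0 ? cube.face[2] : cube.face[3], n.y * n.y));
    amb = addc(amb, scale(n.z > 0 ? cube.face[4] : cube.face[5], n.z * n.z));
    return { amb.r * albedo.r, amb.g * albedo.g, amb.b * albedo.b };
}

In the engine this would of course live in a pixel shader, with the cube map picked by the portal area the surface belongs to.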

Funny note: NVIDIA ended up doing something like this for the Mad Mod Mike demo. If you watch closely you'll see Mike's environment actually lending subtle color variations to his body. Very cool! VERY expensive though, since they DO use a dynamic cube map which they then blur in real-time (!!).

I don't see the Crytek guys doing this. More likely it'd be something like Ambient Occlusion Fields. I'd be shocked to find it was a completely real-time dynamic procedure and not pre-calculated for efficiency in some way (memory constraints notwithstanding).

[Edited by - Dirge on May 9, 2006 11:58:35 AM]
"Artificial Intelligence: the art of making computers that behave like the ones in movies."www.CodeFortress.com
Thank you all for the replies.

Zemedelec:
Quote:
Common SH is distance-independent, but it can be made to depend on distance.
Just a suggestion

From your suggestion I found this paper:
Spherical Harmonic Gradients for Mid-Range Illumination
From a quick look, though, it doesn't look real-time.

DonnieDarko:
Thanks for the link. I haven't looked at it yet, but I will. From the abstract it looks close to what I'm looking for (and to what the guys at Crytek are doing). One note on what m4gnus said about dynamic objects: they don't show any dynamic objects in this particular scene, so I assume they don't support them. I haven't read it, so I can't comment further.

m4gnus:
Your explanation reminded me of nVidia's Mad Mod Mike demo (which Dirge mentioned in his last post). My concern with it wasn't the performance, but how it would be used in CryEngine: if they use this approach, how can it be applied to static geometry? In nVidia's demo it is done for the main character, not for the room. It may be close, but it's not quite that.

Dirge:
Quote:
The technique had a number of variations, but the general idea was to store a very low resolution cube map (around 32x32x6) for each light in a scene, regenerated based on its on/off state or when an update was triggered because the environment was in motion (which didn't happen often anyway because of the static BSP). Cube map ownership was based on the portal area it was in, and it could then be "reprojected" onto the scene geometry in a separate ambient pass.

I don't really understand the process. Can you make it clearer? What I understand from it is this:
You store a small cubemap in each room at its center (e.g. per sector in a portal engine). At runtime you render all the lights (only the lights) into this cubemap as, e.g., colored spheres. Then, when you are done, you project the cubemap onto all the faces in that sector. Is this close?
As you said, by using a very small cubemap you don't need to blur it after rendering the lights, so this is good. :)
If I remember correctly, there is a paper from nVidia about "Diffuse Cubemaps", where they used a cubemap to render the illumination on an object from many lights in one pass (except for the cubemap update, of course). Or was it ATI's paper? I don't remember; I'll have to dig it up from my HD (or the net).
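If it's the technique I'm thinking of, the idea is roughly this (my own sketch, not code from that paper): bake the diffuse response of all the lights into a small cubemap indexed by the normal, so one lookup at runtime replaces evaluating every light. This ignores per-pixel distance falloff, so it's only a fair approximation for lights that are far away relative to the object:

#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3  { float x, y, z; };
struct Color { float r, g, b; };
struct Light { Vec3 dir; Color color; };        // dir = normalized direction towards the light

static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Sum of the diffuse contributions of all lights for one normal direction.
Color diffuseForNormal(const Vec3& n, const std::vector<Light>& lights)
{
    Color sum = { 0, 0, 0 };
    for (const Light& l : lights) {
        float ndotl = std::max(0.0f, dot(n, l.dir));
        sum.r += l.color.r * ndotl;
        sum.g += l.color.g * ndotl;
        sum.b += l.color.b * ndotl;
    }
    return sum;
}

// Fill one face of the diffuse cubemap (just +Z shown; the other five faces work the
// same way with their own direction mapping). At runtime a single cubemap lookup with
// the surface normal then returns the summed diffuse lighting.
void bakePositiveZFace(std::vector<Color>& face, int size, const std::vector<Light>& lights)
{
    face.assign(size * size, Color{ 0, 0, 0 });
    for (int y = 0; y < size; ++y)
        for (int x = 0; x < size; ++x) {
            float u = (x + 0.5f) / size * 2.0f - 1.0f;      // texel center in [-1,1]
            float v = (y + 0.5f) / size * 2.0f - 1.0f;
            float invLen = 1.0f / std::sqrt(u*u + v*v + 1.0f);
            Vec3 n = { u * invLen, v * invLen, invLen };    // direction through this texel
            face[y * size + x] = diffuseForNormal(n, lights);
        }
}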

I'll have to try some of these ideas in action (probably the cubemap thing first, because SH and PLRT seem a little more complex than that).

Thanks again for the useful replies. Any other suggestions are welcome.

HellRaiZer

PS.
Quote:
...and I'm far more interested in their skin/face rendering tech

I haven't seen any screenshots of that, but I have it in mind (skin rendering, I mean, not particularly CryEngine's way) for when we finally finish our skeletal animation system. I'm more interested in "inorganic" geometry right now. :)
HellRaiZer

This topic is closed to new replies.
