spek

Realtime Ambient lighting with radiosity, results


Hi, realtime ambient lighting again. But this time no questions; it's time to show some results. Everyone who took time to help me with my long and boring questions: it was worth it :)

http://img241.imageshack.us/my.php?image=realtimegi5wi8.jpg
http://img295.imageshack.us/my.php?image=realtimegi6hy1.jpg
http://img295.imageshack.us/my.php?image=realtimegi7tg4.jpg
http://img501.imageshack.us/my.php?image=realtimegi8ps7.jpg

[ How it's working ]
What you see in the screenshots is a scene with a radiosity lightmap, the same stuff as used in Half-Life 2 (where there are actually 3 lightmaps, to enable per-pixel normal mapping). The big difference is that the lightmap is updated in realtime. The direct lighting uses shadow mapping; in this test scene I have a simple green and a red spotlight.

The lightmap generation updates a couple of patches per frame, by putting the camera inside each patch and rendering the environment with direct lighting (simplified to gain speed):

  color = emissive + albedo * sum( light[x] * shadowMap[x] )

These tiny (16x16) textures are later used to generate 1 point on the lightmap. With the help of a cosine lookup texture, the average incoming color is calculated for each patch.

The scene you see uses 5 bounces, which means the lightmap is generated 5 times. In the first bounce we use direct lighting only, bounce 2 = direct lighting + the bounce 1 map, bounce 3 = direct lighting + the bounce 2 result, and so on. The final bounce generates 3 points, for 3 different light directions. This is somewhat similar to "radiosity normal mapping" in the HL2 Source engine. Using multiple bounces is essential for a good looking result. With only 1 bounce, the light will not spread out over the scene. For example, a ceiling light shines down; in bounce 1 the ceiling catches reflected light, but the surroundings are still dark. The more bounces, the better the result (until there is no real difference anymore between bounce X and bounce X+1).

So in short, for each bounce:
1.- Update x patches, render their environment to 16x16 textures.
    Environment color = direct lighting (simplified) + emissive + results from the previous bounce
2.- Convert the 16x16 textures to an average color (or 3, for the final pass).
    average = sum( patchEnvironmentTex_16x16[x,y] * cosineLookup[x,y] ) / cosineSum

The average is adjusted though. I had a lot of problems with light disappearing too soon. When rendering the environment, all pixels are black unless they are lit. If the distance between the patch (camera) and the lit/emissive surface increases, fewer 'bright' pixels end up in the result. So patches near a lit surface will catch indirect lighting, but it fades out very quickly as the distance increases == the scene stays way too dark. You can't just power the light intensity up: brighter lights have more effect in the lightmap, but surfaces close to light sources get overbrightened. Instead, I measure the brightest pixel in the 16x16 texture and compare the average luminance with the brightest pixel luminance:

  scale = sqrt(sqrt( averageLumi / brightestLumi ));
  lightMap.pixelColor = lerp( averageColor, brightestColor, scale );

Now relatively weak or very small lights have a chance to spread out through the scene as well.
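To make the downsample step more concrete, here is a minimal shader-style sketch of it (texture names, the luminance weights and the loop layout are just illustrative, not my exact code):

#version 120
// Collapses one 16x16 patch environment snapshot into a single lightmap texel.
uniform sampler2D patchEnvTex;   // rendered environment for this patch (16x16)
uniform sampler2D cosineTex;     // precomputed cosine falloff weights (16x16)

float luminance(vec3 c) { return dot(c, vec3(0.299, 0.587, 0.114)); }

void main()
{
    vec3  weightedSum = vec3(0.0);
    float cosineSum   = 0.0;
    vec3  brightest   = vec3(0.0);

    for (int y = 0; y < 16; ++y)
    for (int x = 0; x < 16; ++x)
    {
        vec2  uv = (vec2(x, y) + 0.5) / 16.0;
        vec3  c  = texture2D(patchEnvTex, uv).rgb;
        float w  = texture2D(cosineTex, uv).r;

        weightedSum += c * w;
        cosineSum   += w;
        if (luminance(c) > luminance(brightest)) brightest = c;
    }

    vec3 average = weightedSum / cosineSum;

    // The 'ambient hack': pull the result towards the brightest pixel, so small
    // or distant lights still get a chance to spread through the scene.
    float scale = sqrt(sqrt(luminance(average) / max(luminance(brightest), 0.0001)));
    gl_FragColor = vec4(mix(average, brightest, scale), 1.0);
}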
[ Performance ]
Currently I update 15 patches per frame. Since I have 5 bounces, there are actually 75 patches updated. The more we can update the better, not only for the performance itself, but also because the lightmap should refresh relatively quickly. If I switch on a light, it should not take minutes before the ambient lighting has adjusted. The scene, a relatively simple one (4 chambers, a sphere, a pillar, and corridors between the chambers), uses 1842 patches right now. At 30 frames per second it takes ~4 seconds to update the entire lightmap. I use a low-resolution lightmap of course, but as the scene grows, the lightmap will grow as well. Therefore the next thing I'll have to do is make a smarter update system: instead of updating all the patches, only update nearby/visible patches. Another trick I'd like to implement is rendering to 2 lightmaps and blending between them. Right now you can see too obviously which points have just been updated. Maybe it's a good idea to use the previously completed lightmap and blend towards the new one for a smoother transition.

Anyway, the performance isn't bad at all. The scene you see is not that beautiful, but quite a package of techniques is used here:
- Realtime ambient
- Screen Space Ambient Occlusion for darkening the corners
- Realtime reflections (cubemap updated every frame)
- 2 lights with shadowmaps that are updated every frame
- HDR & tone mapping
- DoF
- Surfaces use normal mapping, parallax, etc.
- 1600 x 1003 resolution (although the speed is limited by switching FBOs here, not by drawing pixels)

The framerate is ~28 to 33 here. Not bad for a GeForce 8800 GTS, I guess...

Greetings,
Rick

Nice to see you back working on your radiosity processor after 3 years.

Can you post some screens without any textures/HDR/effects, so that we can just see the pure light? I mean something like:
Screen 1
Screen 2

Quote:

Currently I update 15 patches per frame. Since I have 5 bounces, there are actually 75 patches updated

Why only so few patches?
Quote:

The scene, a relatively simple one (4 chambers, a sphere, a pillar, and corridors between the chambers), uses 1842 patches right now. At 30 frames per second it takes ~4 seconds to update the entire lightmap.
Last time I played with real-time radiosity, 2 years ago, I had a scene with 30,079 patches and computed a full radiosity solution (i.e. 30,079 x 30,079 form factors), which took 586 seconds on a lowly 1.8 GHz single-core CPU. That is about 1.5M form factors per second, or, at your desired framerate, about 51,456 form factors per frame. And that's without any major optimizations; I'd love to see SSE crunch this, but there's no time :-(
So, you must be doing something fundamentally wrong or different. I assume you don't recompute visibility for the form factor calculation since you use shadow maps (i.e. you don't use the H(i,j) term of the form factor equation).

Quote:

If the distance between the patch (camera) and the lit/emissive surface increases, fewer 'bright' pixels end up in the result.
Of course, that's the effect of the 1/(PI * r^2) term in the classical form factor equation:

  F(i,j) = ( cos(Q) * cos(R) ) / ( PI * r^2 ) * H(i,j) * dA(j)
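In code, the classical point-to-point form factor looks roughly like this, written in the same C-like style as the other snippets in this thread (names are illustrative; I evaluate this on the CPU):

// Point-to-point form factor approximation between patches i and j.
// visibilityIJ is the H(i,j) term (0..1), areaJ is dA(j).
float formFactor(vec3 posI, vec3 normalI,
                 vec3 posJ, vec3 normalJ,
                 float areaJ, float visibilityIJ)
{
    vec3  d    = posJ - posI;
    float r2   = max(dot(d, d), 0.0001);
    vec3  dir  = d * inversesqrt(r2);
    float cosQ = max(dot(normalI,  dir), 0.0);
    float cosR = max(dot(normalJ, -dir), 0.0);
    return (cosQ * cosR) / (3.14159265 * r2) * visibilityIJ * areaJ;
}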

You could use an ambient term hack to brighten the patches with low gathered energy and save up to 95% of calculations.
Consider the following screen: Passes comparison

The difference between 50 and 1000 passes is visually just the brightness of the ambient patches, so it can easily be tweaked by the ambient hack and the player won't notice anything.
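One possible form of such an ambient hack, purely as an illustration (the falloff and the names are made up, not actual code):

// Lift patches that gathered little energy with a flat ambient term,
// instead of running hundreds of extra passes.
vec3 applyAmbientHack(vec3 gathered, vec3 ambientColor, float ambientStrength)
{
    float lum = dot(gathered, vec3(0.299, 0.587, 0.114));
    // the darker the patch, the more of the ambient term it receives
    return gathered + ambientColor * ambientStrength * (1.0 - clamp(lum, 0.0, 1.0));
}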

@Dave

I'm not sure what exactly you mean with an operator for direct lighting, but I think the answer is 'no'. In the past I made completely pre-rendered lightmaps. I also tried pre-calculated ambient occlusion maps. These work for outdoor scenes, but in my case there is a lot of indoor scenery, so ambient occlusion alone is not enough: I need information about multiple lights (their colors, occlusion, and positions) to do normal mapping on the ambient portion as well.

I tried realtime lighting with tiny cubemaps per vertex, and later on spherical harmonics like one of the ATI demos does. Unfortunately, each approach has downsides. Measuring incoming light at a certain point is not so difficult, but you also need to apply that data somewhere. In most approaches you need a lot of probes, either placed in a uniform grid or placed manually at 'tactical points'.

Updating that big amount is one problem (especially since a lot of them aren't even used all the time), but you also need to know which data to pick when rendering pixel X in the final pass. This is easy with a 3D texture that holds all the final data in a grid, but that requires a lot of memory for big scenes, and most of the points are not used. Assigning one or more probes per vertex is much more efficient, BUT I also want dynamic objects to use ambient lighting. How do you pick the right probes for them, since they can move around all the time? Besides, large polygons don't have enough information with only a few manually assigned probes; 'vertex lighting' becomes painfully visible.

Another problem with probes is that I had to render 6 faces per probe, or 2 with dual paraboloid mapping (which requires a highly tessellated scene, which is not very practical either). Converting these captures to spherical harmonic coefficients costs even more energy.

Before the radiosity maps, I also tried projecting a dynamic grid onto the screen. 9 probes (or more) are spread over the screen in a simple grid. The probe positions depend on intersection points: the top-left probe, for example, shoots a ray towards the top-left corner of the screen; the ray collides somewhere, and that becomes the position of the probe. When rendering the final pass, I can blend between the 9 probes based on the pixel's screen x,y position. A huge advantage is that I only need a small number of probes, no matter how big the scene is. Unfortunately, the camera moves and rotates all the time, which means the probes get new positions every frame as well. Updating all of them each frame was not a real problem, but the lighting changes continuously when rotating the camera. Another big problem was the lack of multiple-bounce support: since ambient information is only available for what's on screen, you can't let the light reflect multiple times.




I think I tackled most of these problems with realtime radiosity. Updating patches is way faster than rendering cubemaps or dual paraboloid maps. If you want to do it really well, you should render 1 paraboloid map, or half a cubemap (a hemicube), for each patch. But I found out that just 1 texture with a camera FoV of 90 degrees works as well. The shader that converts the small textures to 1 average color (or 3, for the final pass) is also way faster than generating spherical harmonic coefficients. I can update ~75 patches per frame without a problem, while I could only do ~8 or ~10 cubemaps with SH lighting. Because I can do so many patches per frame, I can put more effort into multiple bounces, which makes the results much better than just 1 bounce.

Furthermore, a lightmap is more efficient than a 3D texture that covers the entire world, because all its patches are actually used, while in a 3D texture most of the points are floating in the void. You can skip those points of course, but they cost memory anyway. For example, a big outdoor scene (1000 x 1000 meters, 100 meters height difference) with a probe for each cubic meter is 100 million probes == 762 MB (at 8 bytes per probe). And most approaches need more than one 3D texture... Of course, you don't need so much detail for an outdoor scene; 1 probe per 10 m3 would probably be fine as well. But how do you mix that with a small indoor part in the same scene? A lightmap is much more flexible: big outdoor surfaces can get low detail, while indoor surfaces can get more points on the map. In the end, far fewer points are needed, making the update cycle shorter and costing (way) less memory.

Another problem with placing probes on a grid is the chance that a probe ends up behind a wall instead of in front of it. Your pixels could pick the wrong probe, especially when the grid resolution is relatively low. You never have this problem with a lightmap; all the patches are exactly at the right location.

The remaining problems are normal mapping and dynamic objects. Normal mapping is done by rendering 3 lightmaps: 1 for light coming from above, 1 for light coming from the bottom-left, and another for the bottom-right. This is less accurate than using a cubemap or spherical harmonics, but in most cases you won't be able to tell whether the lighting is correct once it comes from all directions after a few bounces.
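For reference, a sketch of how the 3 lightmaps can be combined per pixel. The basis below is the standard Half-Life 2 one; my own 3 directions are chosen differently (above / bottom-left / bottom-right), so treat this purely as an illustration:

#version 120
// Blend three directional lightmaps by the tangent-space normal.
uniform sampler2D lightMap0, lightMap1, lightMap2;
uniform sampler2D normalMap;
varying vec2 lightMapUV;
varying vec2 texUV;

const vec3 basis0 = vec3(-0.40825,  0.70711, 0.57735);
const vec3 basis1 = vec3(-0.40825, -0.70711, 0.57735);
const vec3 basis2 = vec3( 0.81650,  0.0,     0.57735);

void main()
{
    vec3 n = texture2D(normalMap, texUV).rgb * 2.0 - 1.0;   // tangent-space normal

    vec3 w = vec3(max(dot(n, basis0), 0.0),
                  max(dot(n, basis1), 0.0),
                  max(dot(n, basis2), 0.0));
    w /= (w.x + w.y + w.z + 0.0001);                         // normalize the weights

    vec3 ambient = texture2D(lightMap0, lightMapUV).rgb * w.x
                 + texture2D(lightMap1, lightMapUV).rgb * w.y
                 + texture2D(lightMap2, lightMapUV).rgb * w.z;

    gl_FragColor = vec4(ambient, 1.0);
}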

The final problem is dynamic objects. A big plus of using a 3D grid is that an object can directly pick the proper data at its position: just convert the world coordinates to 3D texture coordinates, and that's it. This is not really possible with manually placed probes, a lightmap, or the dynamic grid projected on the screen. I thought this would be a killer for the radiosity lightmap approach, but maybe there is still a way. I could render cubemaps (possibly with SH) at the points where dynamic objects are, but I could also try to make good use of the lightmap. Imagine an X, Y, Z axis at the center of an object. Each axis will collide somewhere with a wall, floor or ceiling. These collision coordinates can be converted to lightmap coordinates. When rendering the dynamic object, I can pick 6 colors: one for +X, one for -X, etcetera. Based on the normal, I can blend between these 6 values.

This makes good use of the already calculated lightmaps. It's not really accurate of course, and I'm a little bit afraid the lighting could change too abruptly on a moving object in some situations. Nevertheless, it's worth a try.
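A rough sketch of how the dynamic object could then be shaded. The six colors would be fetched from the lightmap on the CPU (via the axis ray hits) and passed in as uniforms; all names here are made up:

#version 120
// Ambient cube for a dynamic object: blend six lightmap samples by the normal.
uniform vec3 ambientPosX, ambientNegX;
uniform vec3 ambientPosY, ambientNegY;
uniform vec3 ambientPosZ, ambientNegZ;
varying vec3 worldNormal;

void main()
{
    vec3 n  = normalize(worldNormal);
    vec3 n2 = n * n;   // squared components sum to 1, so the weights do too

    vec3 ambient = n2.x * mix(ambientNegX, ambientPosX, step(0.0, n.x))
                 + n2.y * mix(ambientNegY, ambientPosY, step(0.0, n.y))
                 + n2.z * mix(ambientNegZ, ambientPosZ, step(0.0, n.z));

    gl_FragColor = vec4(ambient, 1.0);
}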

Greetings,
Rick

@VladR

It has been a while indeed. I switched to direct lighting with shadowmaps and thought I wouldn't need lightmaps anymore... until the lack of ambient light became too much. Doom 3, Quake 4 and F.E.A.R. got away with it, but I feel it's time for realtime ambient now, one way or the other.

>> Can you post some screens without any textures/HDR/effects
That will be quite difficult, since most of the lighting exceeds the 0..1 range; it would probably just wash out. However, in the fourth screenshot you can see the lightmap in the bottom-right corner, and I have a couple of shots here that only show the ambient lighting (no SSAO, no DoF, no cubemap reflections, no direct lighting, no albedo used in the ambient portion). You still see some textures because normal mapping is enabled.

http://img392.imageshack.us/my.php?image=realtimegi9ye5.jpg
http://img528.imageshack.us/my.php?image=realtimegi10dz9.jpg

The small red and green dots are the light positions. Keep in mind that the map is very low-res, so there won't be small details in the corners, for example.


>> Why only so few patches?
Well, if I do more, there is no speed left for the other techniques; those are already quite heavy to do, so... I can't compare the results with other projects though. Most probably I could double or even triple the update count by disabling all the other techniques. But then again, it's supposed to be a realtime radiosity solution used in combination with all the other stuff going on in a game, not a lightmap generator purely focused on producing that map.

Maybe I can do more after some optimizations. If I understand you right, you did radiosity on the CPU (correct me if I'm wrong!). I do everything on the GPU: no raytracing or anything, just rendering to small textures and picking the average color with a shader. For each bounce I must change to another render target, each patch can have a different set of visible lights, and I need to switch to another slice of a big 3D texture for each patch (each patch gets 1 slice in that texture). So it's quite a complicated process. I think I can win some extra speed by simplifying the "patch environment render" pass. Right now the geometry is rendered with

  color = emissiveTexture + albedoTexture * ( lights[x] * shadowMap[x] )

The emissiveTexture and albedoTexture can be replaced with a simple emissive/reflectance color per vertex. The patch textures are too small to show texture details anyway, plus it's a more flexible system. Another bottleneck could be calculating the average colors: for each point, my shader must loop through 256 pixels (16x16 textures are used), and for each pixel I must also do a lookup in a cosine texture. That means 512 texture lookups for each point on the lightmap that is updated. Maybe I can push out some patches with geometry shaders, use an even smaller target texture to capture the environment, and do fewer bounces by compensating with a stronger 'ambient term' like you said.
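As an illustration, the simplified patch-environment pass could look roughly like this (uniform/varying names are made up, and only the first light gets a shadow map in this sketch):

#version 120
// Shades the geometry seen by a patch: emissive + reflectance * direct lighting.
#define MAX_LIGHTS 4
uniform vec3 lightPos[MAX_LIGHTS];
uniform vec3 lightColor[MAX_LIGHTS];        // unused slots are black
uniform sampler2DShadow shadowMap0;
varying vec3 worldPos;
varying vec3 worldNormal;
varying vec4 shadowCoord0;
varying vec3 emissiveColor;                 // per vertex, instead of an emissive texture
varying vec3 reflectanceColor;              // per vertex, instead of an albedo texture

void main()
{
    vec3 n   = normalize(worldNormal);
    vec3 lit = vec3(0.0);
    for (int i = 0; i < MAX_LIGHTS; ++i)
    {
        vec3  l      = normalize(lightPos[i] - worldPos);
        float nDotL  = max(dot(n, l), 0.0);
        float shadow = (i == 0) ? shadow2DProj(shadowMap0, shadowCoord0).r : 1.0;
        lit += lightColor[i] * nDotL * shadow;
    }
    // color = emissive + albedo * sum( light * shadow ), as above
    gl_FragColor = vec4(emissiveColor + reflectanceColor * lit, 1.0);
}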

On the other hand, I expect other processes to take more energy in the future as well (transparent surfaces, dynamic objects, more lights, point/cascaded lights, and so on). So I don't think I can really boost the patch count per frame, unless I did something stupid or new fast hardware comes out. But who knows, maybe there is another bottleneck in my rendering pipeline:

0.- Update shadowMaps for moved lights (render new depth maps)
1.- Update screen maps with albedo, worldnormals, specular, and emissive data.
This is somewhat similar to deferred rendering. Upcoming passes don't have to recalculate
the world normal, parallax effect, specular or albedo again. Handy if the albedo color
is quite complex (a mix of a lot of textures, such as a terrain). This data is
later on used in the ambient final pass, and all direct lights.
2.- Update reflection cubeMap (256x256 faces, downscaled to smaller blurred variants)
3.- Update SSAO and DoF depth buffers
4.- Update Ambient Patches:
For each bounce:
1.- Switch to the 3D texture that holds all updated patch textures.
If we update 15 patches per frame, this 3D texture has 15 slices.
2.- Loop through the 15 to-be-updated patches, for each patch
A.- Set the camera, pass some shader parameters
B.- Switch to the proper target slice in the 3D texture
C.- Render environment with direct lighting and possibly the
previous bounce's lightMap to a 16x16 texture. With portal culling
the visible geometry is selected, plus a limited list of the
nearest/strongest lights is activated:
2 spotlights with shadowmap, 2 without
1 pointlight with shadowmap, x without (dunno how many yet)
1 cascaded light with shadowmap (the sun, moon or other strong source)
* For bounce 2 and higher, the results from the first pass are used (texture overlay)
so we don't have to redo this for every bounce.
3.- Switch to lightMap target texture, and loop again through
the to-be-updated (15) points. For each point:
A.- Render 1 single point on the right position in the lightMap.
Shader loops through the 16x16 texture to get the average
and apply the "ambient hack". In the final pass we render
to 3 lightMaps at the same time.
4.- Blur the final 3 lightmaps lightly to remove artifacts and thicken the edges
to make sure polygons are not picking black pixels outside the polygons.

5.- Render ambient + cubeMap reflections. The ambient shader uses the
3 final (blurred) lightMaps. Reflection amount depends on material Fresnel settings.
6.- Render screen quad with SSAO on top of ambient portion
7.- Add direct lighting with shadowMaps. Render the affected geometry for each light
on top with additive blending.
8.- Transparent surfaces. Not implemented yet
9.- Measure screen luminance for toneMapping
10.- Disable HDR, render to screen with toneMapping
11.- Apply DoF

I also have to put a mirror pass somewhere (for water reflections or a mirror requiring correct reflections). I'm not sure if it's a very good pipeline, but at least it supports a very wide range of effects, and as you can see, a lot more than just updating patches is going on.


Just as important as updating as many patches as possible is generating good atlas texture coordinates. Since the direct lighting is done with shadow mapping instead of a lightmap, I don't need highly detailed lightmaps; the fewer patches needed, the better. But my atlas coordinate generator is pretty dumb: sometimes small unimportant surfaces get too many patches while others get almost none.

I could also consider doing the entire lighting with a realtime lightmap, meaning the direct lighting would be stored in the map as well. That would save quite a lot of speed, which could be used to update more patches. But in that case I would need many more patches to get sharp shadows, and proper per-pixel normal mapping gets more difficult.


>> Ambient Hack
My 'hack' is to lerp towards the brightest pixel measured in the captured snapshot. It works pretty well, although there are probably better methods. First I tried multiplying pixel colors based on distance, for example:

  pixel[x,y] *= sqrt( 1 + distance(pixel[x,y]) * <factor> )

It helps to brighten the dark patches and to catch light from bigger distances, but adjusting that "factor" is too much work and differs for each situation. Probably because this formula just sucks :) The lerping method works better, and indeed, it's pretty easy to brighten the entire scene without needing a lot of passes, and also without nasty side effects such as overbrightening in corners close to a light source.

Greetings,
Rick

Quote:
Original post by spek
http://img392.imageshack.us/my.php?image=realtimegi9ye5.jpg
http://img528.imageshack.us/my.php?image=realtimegi10dz9.jpg

The small red and green dots are the light positions. Keep in mind that the map is very low-res, so there won't be small details in the corners, for example.
Very low-res indeed. I wonder what's the point of having radiosity compute it (other than it being cool, of course), if it's so low-res that the nicest benefits of radiosity are not visible at all.

Quote:
Original post by spek
>> Why only so few patches?
Well, if I do more, there is no speed left for the other techniques; those are already quite heavy to do, so...
I see. So you're using radiosity just as an additional effect, sort of.

Quote:
Original post by spek
Maybe I can do more after some optimizations. If I understand you right, you did radiosity on the CPU (correct me if I'm wrong!).
Correct. I calculate it on CPU.

Quote:
Original post by spek
On the other hand, I expect other processes to take more energy in the future as well (transparent surfaces, dynamic objects, more lights, point/cascaded lights, and so on). So I don't think I can really boost the patch count per frame
The question then is: why bother with radiosity if its only benefit is ambient lighting that is indistinguishable from a regular point light with falloff, which is an order of magnitude cheaper to compute?

My main point is that with radiosity you can get a beautiful light boundary, which is impossible to get by other means (other than area lights). Also, multiple lights blend together very well (obviously, since the light physics is taken into account during the computation).

What is the difference between SSAO and the radiosity ambient? Do you have some comparison shots? I wonder if your current radiosity implementation adds anything visible on top of SSAO.

The goal was to make a non-static ambient lighting technique, on top of all the other "normal" techniques such as direct lighting with shadowmaps. So far, most games use precalculated ambient data, very simple/predictable ('fake') ambient lighting (outdoor scenes), or no ambient light at all (everybody complained about the pitch-dark scenes in Doom 3).

So I was looking to create something dynamic/realtime, yet fast and flexible enough to run in a game on today's hardware. There are certainly much better (realtime) ambient lighting methods out there, but the papers/demos often focus on that technique only. That often delivers beautiful results, but they are not practical (yet) for games for all kinds of reasons: too slow, too much memory usage, very limited light count, no support for moving dynamic objects, can't do normal mapping, not suitable for huge scenes, etcetera.

I'm not an expert either, but of the methods I've tried, this was the most practical one so far: relatively fast, memory friendly, flexible for huge scenes, can be combined with normal mapping, can be used for dynamic objects with some tricks, and... it looks quite nice. Not superb, but good enough for now in my opinion. There is always a sacrifice in either speed, flexibility or memory; this method is well balanced, which makes it useful for games and such. As you can see, my focus is not on creating a very good/new radiosity lighting engine, it's on creating ambient lighting suitable for games somehow.




The point here is that ambient lighting is not that important (yet). That sounds crazy, but the casual user won't see whether the lighting is correct or not, especially not indirect lighting. Of course this will change in the future, but for now relatively simple ambient is fine. Maybe I'm wrong, but I can't name one game that has proper lighting, not even GTA IV or Crysis. However, people won't accept the lack of ambient light anymore. Doom 3 barely got away with it, but now it would get lynched, I guess.

Low-res radiosity maps won't give you subtle smooth details such as nice dark corners or slight color bleeding, but at least they spread the light through the scene in quite a good way. I use Screen Space Ambient Occlusion on top of it to 'fake' the dark corners/nearby occlusion. You can't see it in the shots I gave (well, barely), since the scene is too simple, but more complex scenes (I tried a house with a kitchen) show dark regions beneath tables, in corners, etcetera. Not really correct again, but it does not look bad either. I can post a shot of that tomorrow... if I have some time left. Tomorrow I'll have to pick up my girl and our newborn baby from the hospital :)

I don't know if radiosity is the best way to do ambient lighting for game purposes. But at least it works, and when faster hardware comes (or very good optimizations are found), higher-resolution maps will become possible as well. That is another nice thing about this technique: scaling the quality along with graphics card capacity is very simple. In theory a monster card could even produce high-res maps, enabling all the good stuff you mentioned and making extra techniques such as SSAO obsolete.

Greetings,
Rick

