Radiosity in Hardware

Radiosity calculations in hardware. I'm working on a project (who isn't) with some guys, and I'm in charge of the lighting section. I've come up with some fairly decent ideas, but wanted some input from people who have implemented lighting systems.

We're planning (for the first run) to do vertex lighting, but precalculate it and bake it in (lightmaps, but with no map, just vertex color). All textures will then be drawn with Modulate (commutative property of multiplication and all that jazz). We want to support some high-quality lighting, and hopefully a global illumination solution.

Here's the question: will I be able to use the video card + RenderToTexture to obtain the values for incoming light, or will the lack of precision slaughter the quality? I've seen Hugo Elias' article and his suggestions. The main issue is that I hope to use full-color lighting, so I can't just hijack all the ARGB data for brightness. What I need to know is: where has this been done, so I can see the result? Or have you done it yourself before? If so, what was your impression (screenshot?)

Note: I plan to use either 3 or 4 passes. I'm not going for 100% convergence, or even 99.8% ;-) I just want it to have a realistic feel and to help light up some corners fairly realistically.

While I'm at it, I'll also ask: I'm planning to use RenderToTexture with AutoGenMipMaps, taking the lowest-resolution (1x1) mipmap as the final "sum" of the pixels. Is there any faster way to calculate the sum than creating the whole chain of mipmaps? (I'm planning to use 32x32 resolution source maps.) Is mipmap generation done in hardware? If not, can I see the routine somewhere?

For this project I'm OK with using vertex or pixel shaders to speed up anything possible. This is a precomputed step, so I can impose unrealistic requirements on the video card, since it only has to run on the dev team's machines.

Thanks guys,
-Michael

P.S. Yes, I'm going to take Lambert's cosine law and the ratio of angle size over cubemap texel size into the calculations, but that can (I'm guessing) be done with alpha blending. I'd love to post a screenshot or two once it's up and going.
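To make the mipmap question above concrete, here's a rough, untested sketch (D3D9) of the kind of sum/readback I have in mind. One wrinkle I've read about is that the sublevels of an autogen-mipmap texture can't be locked directly, so this version just halves a render target with filtered StretchRects instead; "dev" and the 32x32 size are placeholders, not final code.

    // Untested sketch of the GPU "sum": render incoming light into a 32x32
    // target, halve it down to 1x1 with filtered StretchRects (each step
    // averages 2x2 blocks), then read the single pixel back.
    // "dev" is assumed to be an IDirect3DDevice9*.
    IDirect3DSurface9* chain[6]; // 32, 16, 8, 4, 2, 1
    for (int i = 0; i < 6; ++i)
        dev->CreateRenderTarget(32 >> i, 32 >> i, D3DFMT_A8R8G8B8,
                                D3DMULTISAMPLE_NONE, 0, FALSE, &chain[i], NULL);

    // ... render the incoming light for this vertex into chain[0] ...

    for (int i = 1; i < 6; ++i)
        dev->StretchRect(chain[i - 1], NULL, chain[i], NULL, D3DTEXF_LINEAR);

    // Copy the 1x1 result to system memory and read it.
    IDirect3DSurface9* readback = NULL;
    dev->CreateOffscreenPlainSurface(1, 1, D3DFMT_A8R8G8B8,
                                     D3DPOOL_SYSTEMMEM, &readback, NULL);
    dev->GetRenderTargetData(chain[5], readback);

    D3DLOCKED_RECT lr;
    readback->LockRect(&lr, NULL, D3DLOCK_READONLY);
    D3DCOLOR average = *(const D3DCOLOR*)lr.pBits; // average of the 32x32 map
    readback->UnlockRect();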
Maybe I'm getting something wrong, but are you seriously talking about per-vertex lighting, radiosity and global illumination(!) at the same time? That's... weird, to say the least.
You might want to take a look at this paper on using graphics hardware to accelerate radiosity. If you can set your own requirements then precision shouldn't be an issue - just use SM 3.0 hardware and you can have 32 bit precision everywhere.

Game Programming Blog: www.mattnewport.com/blog

Hello there.

This has been done before, so it's definitely possible. I don't remember exactly which article talked about it, but the author recommended disabling mip mapping and the like, to get more accuracy.

I do believe the precision will be fine for radiosity. There shouldn't be a problem with that... And about shaders: perhaps you could apply your cosine law with them, which could speed things up. Mipmaps? You may not want to enable them. In any case, in OpenGL, I'm quite sure the glu mipmap-generation function builds them in software. You can even generate your own if you want. If you want a final sum very fast, you could sum the pixels yourself with a custom ASM-optimized routine (MMX/SSE, perhaps?). But even there, a shader could probably do it for you as well (I can't say for sure, since I don't know shaders too well either).
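Just to illustrate, the plain-C version of that CPU sum would be something along these lines (untested; an MMX/SSE routine would only vectorize the inner loop). The 32x32 size, BGRA byte layout, and the "pixels"/"pitch" variables are assumptions on my part.

    // Untested sketch: sum/average the RGB of a 32x32 BGRA8 surface on the CPU.
    // "pixels" points at the first row, "pitch" is the row stride in bytes.
    unsigned long sumR = 0, sumG = 0, sumB = 0;
    for (int y = 0; y < 32; ++y)
    {
        const unsigned char* row = pixels + y * pitch;
        for (int x = 0; x < 32; ++x)
        {
            sumB += row[x * 4 + 0];
            sumG += row[x * 4 + 1];
            sumR += row[x * 4 + 2];
        }
    }
    // Divide by the pixel count (32*32 = 1024) to get the average incoming light.
    float avgR = sumR / (1024.0f * 255.0f);
    float avgG = sumG / (1024.0f * 255.0f);
    float avgB = sumB / (1024.0f * 255.0f);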

I don't believe you could do dynamic lighting at a decent speed this way (unless your scene is extremely simple), but you should be able to do radiosity, even for a complex scene, very fast (the scene could then be rendered in realtime, using lightmaps). I still think it's a good idea. It will definitely be very fast in hardware. It could make a very good lightmap compiler.

I wish you good luck on your project ;)

Looking for a serious game project?
www.xgameproject.com
@mikeman

Sorry, I wasn't very specific about the whole thing.
Here's the deal:
If I referenced GI, I meant solely the (slightly modified) radiosity solution we plan to implement.
The radiosity solution is the same technology used in Quake 2, except instead of calculating lightmaps, I'm calculating the lighting at each vertex in the scene. The resulting value is then assigned to the vertex's diffuse color. Later on, when textures are applied, the default source for colorarg2 (or you can specify it explicitly) is the vertex diffuse value; all you have to do is set the stage's colorop to Modulate (with the texture as colorarg1), which multiplies the texture and the light value together (the way real light works... except we're limited to the RGB colorspace).
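In D3D9 fixed-function terms, the stage setup I mean is roughly this (a sketch, with "dev" standing in for the device):

    // Stage 0: texture (colorarg1) modulated by the baked vertex diffuse (colorarg2).
    dev->SetTextureStageState(0, D3DTSS_COLORARG1, D3DTA_TEXTURE);
    dev->SetTextureStageState(0, D3DTSS_COLORARG2, D3DTA_DIFFUSE);
    dev->SetTextureStageState(0, D3DTSS_COLOROP,   D3DTOP_MODULATE);
    dev->SetRenderState(D3DRS_LIGHTING, FALSE); // use the precomputed vertex colors as-is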
So, anyway, all these vertex colors are to be precomputed for the static geometry in the scene. Models get their lighting differently... we won't go into that here, check out my post in the DirectX/Direct3d forum for info on that project.

There are a few other details, to help make the solution work with any environment map, but that's irrelevant until the rest of the radiosity system is in place.

As for the per-vertex lighting... hmm, yeah, we hope to apply another layer for local light sources, and have hopes of implementing shadowmaps, but that's in the future and doesn't pertain to this topic.

Thanks to mattnewport for the link. I checked it out (and downloaded it to read in full a little later); unfortunately they seem to use the color registers only for brightness, which I wanted to move away from - I'd like to keep the full-color solution. (You'll notice his results don't involve any sort of color bleeding - check out the Cornell box especially. That has a tendency to make the scene feel very cold and hard.)

I thought I recalled someone doing radiosity in hardware about 1.5 years ago, on Flipcode's IOTD. I recall not being greatly impressed, but it might be encouraging/enlightening (pun not intended) nonetheless, so I went and checked. I didn't find it, but I did find another person working on some realtime radiosity, which actually looked halfway decent (just really blurry). I'm encouraged. His precision didn't seem to require a higher bit depth for brightness.

@Max_Payne - that was probably Hugo Elias' article; his site is here.
As I mention above, though, I'd rather not hijack the ARGB format for just brightness - I want color bleeding. Thanks for the encouragement - and I think I've seen enough (from Flipcode's screenies) that I don't see any problems with moving ahead. And as you mentioned, this won't work well for dynamic lighting, but for a lightmap (or in this case, baked-in vertex colors for lighting) it's obviously feasible in realtime - after the solution has been calculated.

I'd encourage anyone who's interested in my ideas for the realtime radiosity solution (for dynamic models) to check out my other thread:

Here
and check out a kinda recent screenshot
Here (car)
Skull (1)
Skull (2)
Skull (3)
Ignore the fps; these calculations are meant to be done in a separate thread, not per frame, so the fps is lower than it could be because of stalls from switching the render target many times per frame.

Thanks for the articles, etc. I'll see if I can't have a decent product within the next few weeks.

-Michael
MIP map generation is done in hardware on most newer cards.

If you don't have hardware, MIP map generation typically just uses a box filter, which sums each component of a block of 4 pixels (2x2) and divides by 4, to generate the new color.

In the ideal case, you'd actually gamma correct the values before you sum them, and then gamma-un-correct the result before writing it back to the MIP map, but life's too short to do everything correctly if you're working alone.
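For what it's worth, the gamma-correct version of that box filter looks something like this per channel (a sketch, assuming a 2.2 gamma):

    #include <math.h>

    // Linearize each sample, average the 2x2 block, then re-encode to gamma space.
    static float toLinear(unsigned char c) { return powf(c / 255.0f, 2.2f); }
    static unsigned char toGamma(float v)  { return (unsigned char)(powf(v, 1.0f / 2.2f) * 255.0f + 0.5f); }

    unsigned char boxFilter(unsigned char a, unsigned char b,
                            unsigned char c, unsigned char d)
    {
        float avg = 0.25f * (toLinear(a) + toLinear(b) + toLinear(c) + toLinear(d));
        return toGamma(avg);
    }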
enum Bool { True, False, FileNotFound };
>>Will I be able to use the videocard + RenderToTexture to
>>attain the values for incoming light, or will the lack of
>>precision slaughter the quality?

Depends on the precision! Normally, video card color values only have 8 bits per channel, so they are rather bad at handling strong differences in lighting (for example, a bright white light is usually a *lot* brighter, to the viewer, than a wall it reflects from).

Luckily, modern hardware supports floating-point pixel arithmetic - that should be accurate enough.
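For example, on SM 3.0-class hardware you can just ask D3D9 for a float render target (a sketch; check support with CheckDeviceFormat first, and "dev" is assumed to be the device):

    // 32-bit float per channel render target, so bright lights don't clamp at 255.
    IDirect3DTexture9* fpTarget = NULL;
    dev->CreateTexture(32, 32, 1, D3DUSAGE_RENDERTARGET,
                       D3DFMT_A32B32G32R32F, D3DPOOL_DEFAULT, &fpTarget, NULL);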

- Mikko

