Sk8ash

DX11 Beginning Radiosity


Hi, I'm currently a games development student about to go into my third and final year, and for my project I'm writing a global illumination renderer in DirectX 10 using HLSL. For the past couple of weeks I've been stuck deciding what to use for indirect illumination. I've read countless articles on the radiosity method as well as the different techniques (PRT, instant radiosity, radiosity normal mapping, irradiance volumes, etc.), every single forum post on here that mentions radiosity, and also MJP's DX11 radiosity method.

I have no idea which technique would be best to use, and when I decide on one I get very confused about where to start. Would anybody care to offer help and suggestions? Cheers

My renderer needs to be able to run at interactive speeds (for games)

Well, the first thing you'll need to decide is whether you're looking to use precomputed global illumination or something that works in real time. If you want the latter, there are techniques that work with static geometry but dynamic lighting, as well as techniques that allow both to be fully dynamic. Deciding on this will narrow down the field considerably.

I'm hoping to do static geometry with dynamic lights, and then if I get that done maybe check out dynamic geometry. I've been leaning more towards the instant radiosity method with VPLs.

If you happen to go with instant radiosity with VPLs, I've got an example of that with NVIDIA's ray-tracing library OptiX and DirectX: http://graphicsrunner.blogspot.com/2011/03/instant-radiosity-using-optix-and.html
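For anyone following along, here's a minimal CPU-side sketch of the instant radiosity idea: trace one bounce from the light, deposit virtual point lights where rays land, then light receivers by summing the VPL contributions. This is a toy one-plane scene, not the linked article's implementation; the clamp constant and all names are illustrative assumptions, and a real version also needs per-VPL visibility tests.

```python
import math
import random

# A hypothetical minimal scene: one diffuse floor plane at y = 0.
FLOOR_Y, FLOOR_ALBEDO = 0.0, 0.5

def generate_vpls(light_pos, light_flux, n_vpls, seed=0):
    """Trace one bounce from the light: each ray hitting the floor
    deposits a VPL carrying a share of the bounced flux."""
    rng = random.Random(seed)
    vpls = []
    for _ in range(n_vpls):
        # Sample a random downward direction (toward the floor).
        d = (rng.uniform(-1, 1), -rng.uniform(0.1, 1.0), rng.uniform(-1, 1))
        length = math.sqrt(d[0]**2 + d[1]**2 + d[2]**2)
        d = (d[0] / length, d[1] / length, d[2] / length)
        t = (FLOOR_Y - light_pos[1]) / d[1]        # ray/plane intersection
        hit = (light_pos[0] + t * d[0], FLOOR_Y, light_pos[2] + t * d[2])
        # VPL flux: share of the light's flux times surface albedo.
        vpls.append((hit, (0.0, 1.0, 0.0), light_flux * FLOOR_ALBEDO / n_vpls))
    return vpls

def indirect_at(p, n, vpls, clamp_d2=0.25):
    """Sum VPL contributions; clamp d^2 to hide the 1/d^2 singularity."""
    total = 0.0
    for (vp, vn, flux) in vpls:
        to_p = (p[0] - vp[0], p[1] - vp[1], p[2] - vp[2])
        d2 = max(to_p[0]**2 + to_p[1]**2 + to_p[2]**2, clamp_d2)
        d = math.sqrt(d2)
        w = (to_p[0] / d, to_p[1] / d, to_p[2] / d)
        cos_e = max(0.0, vn[0]*w[0] + vn[1]*w[1] + vn[2]*w[2])   # at the VPL
        cos_r = max(0.0, -(n[0]*w[0] + n[1]*w[1] + n[2]*w[2]))   # at the receiver
        total += flux * cos_e * cos_r / (math.pi * d2)
    return total

vpls = generate_vpls((0.0, 2.0, 0.0), light_flux=100.0, n_vpls=64)
# Indirect light on a point facing down toward the lit floor:
print(indirect_at((1.0, 1.0, 0.0), (0.0, -1.0, 0.0), vpls))
```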

Cheers. Obviously this technique works well with deferred shading, but I'm not looking to build a deferred renderer, so does anyone recommend a better technique to use with forward rendering?

You could do some research into what Geomerics does for Enlighten. As far as I know they precompute form factors for static geometry at regularly-sampled points on a mesh surface (basically like a lightmap), then at runtime they compute the lighting at those points and solve the system of equations using Gauss-Seidel (or something similar). They do that on the CPU, but it's possible to do it on the GPU as well.

Another interesting approach is approximating surfaces as discs, which was pioneered by Michael Bunnell. There's an older GPU Gems article about it, and there are some descriptions of his updated algorithm from his SIGGRAPH talk last year. It's intended for use with dynamic geometry, but I did some experiments with precomputing visibility for static geometry and the results were promising.
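To give a feel for the disc idea, here's a point-to-disc form factor sketch. The formula F ≈ A·cosθ_e·cosθ_r / (π·d² + A) is a common disc approximation in the spirit of that work; treat the exact constants as an assumption rather than a transcription of the GPU Gems code.

```python
import math

def disc_form_factor(receiver_p, receiver_n, disc_p, disc_n, disc_area):
    """Approximate form factor from a receiver point to a disc element."""
    v = tuple(d - r for d, r in zip(disc_p, receiver_p))
    d2 = sum(c * c for c in v)
    d = math.sqrt(d2)
    w = tuple(c / d for c in v)                # receiver -> disc direction
    cos_r = max(0.0, sum(a * b for a, b in zip(receiver_n, w)))
    cos_e = max(0.0, -sum(a * b for a, b in zip(disc_n, w)))
    # The +A term in the denominator keeps F bounded as d -> 0.
    return disc_area * cos_r * cos_e / (math.pi * d2 + disc_area)

# A disc of area 0.1, one unit above a receiver, the two facing each other:
ff = disc_form_factor((0, 0, 0), (0, 1, 0), (0, 1, 0), (0, -1, 0), 0.1)
print(ff)
```

Summing this over all disc elements (with some occlusion handling) gives the gathered indirect light at a receiver, which is what makes the disc representation so cheap compared to full visibility ray casts.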

There's also the paper from SIGGRAPH this year about cone-tracing through a voxelized scene. That definitely looked pretty neat, and with static geo you could pre-compute the voxelization.

My personal favorite is the dynamic lightmap generation from Lionhead (for a cancelled game called Milo) and Battlefield 3. Lionhead's approach used spherical harmonics, and their presentation was at GDC earlier this year. Geomerics and DICE use a not-too-dissimilar approach (at least in some respects) for Battlefield 3. It gets you dynamic objects lit by "partially" dynamic geometry (you can remove or add the light-bouncing geometry, but a bunch of extra stuff has to be calculated if you do).
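To make the spherical harmonics part concrete, here's a sketch of what a band-0/band-1 SH lightmap texel stores per color channel and how it gets evaluated at runtime. This is a generic SH illustration under my own naming, not Lionhead's actual pipeline.

```python
import math

# Real-valued SH basis, bands 0 and 1: the 4 coefficients a typical
# low-order SH lightmap texel stores per color channel.
def sh_basis(d):
    x, y, z = d
    return (0.282095,            # Y_0^0  (constant term)
            0.488603 * y,        # Y_1^-1
            0.488603 * z,        # Y_1^0
            0.488603 * x)        # Y_1^1

def project_directional(light_dir, intensity):
    """Project radiance arriving from one direction into 4 SH coefficients."""
    return tuple(intensity * b for b in sh_basis(light_dir))

def eval_sh(coeffs, d):
    """Reconstruct the stored signal in direction d (dot with the basis)."""
    return sum(c * b for c, b in zip(coeffs, sh_basis(d)))

coeffs = project_directional((0.0, 1.0, 0.0), intensity=1.0)
# Reconstruction peaks toward the light and falls off away from it:
print(eval_sh(coeffs, (0.0, 1.0, 0.0)), eval_sh(coeffs, (1.0, 0.0, 0.0)))
```

The appeal for dynamic relighting is that the bounce solve only has to update these few coefficients per texel, and any surface normal can then be shaded from them with one dot product.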

Battlefield 3 has presentations... everywhere. Just go to DICE's site and you'll find stuff.




I took quite a detailed look into the voxel cone tracing method and was extremely impressed and interested by it, but I'm afraid it's much too complicated for me. I don't quite understand the Geomerics approach; the only info I could find on it was their talk with Crytek, and it's quite brief. Also, I don't know what the system of equations is, or Gauss-Seidel. I took a look at the Bunnell approach and understood it, but couldn't find much documentation on it past the GPU Gems article.

It would probably help if I had a really good book on GI. I have one small book but it's pretty rubbish; do you know of any really good ones?

Cheers

Well if you haven't already, you definitely want to download the GI total compendium. The basic idea behind using a system of equations is that for each sample point, you can compute the lighting of that point as the sum of the lighting at all other points multiplied with the form factor. Together these form a large system of equations that looks like this:

A = B * FFab + C * FFac + D * FFad ...
B = A * FFba + C * FFbc + D * FFbd ...
C = A * FFca + B * FFcb + D * FFcd ...
etc.

That forms a matrix, which you can use to solve the system of equations to get the lighting at each sample point (you probably learned how to do that in algebra class). Gauss-Seidel is just a method for solving such a system. Altogether that matrix might be very large for a complex scene, and Geomerics deals with that by breaking the scene into different "zones", where a sample point in one zone is only assumed to be affected by sample points within the same zone. They can also compress the matrices because they end up being sparse (lots of zeros wherever one sample point has no visibility to another).
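A toy Gauss-Seidel solve of that kind of system might look like this. The 3-patch scene, its reflectances, and its form factors are made-up numbers for illustration; the point is just that each sweep updates every patch in place using the latest values of the others, which converges when reflectance times the form-factor row sums stays below 1.

```python
def gauss_seidel_radiosity(emission, reflectance, form_factors, iters=50):
    """Solve B_i = E_i + rho_i * sum_j F_ij * B_j by Gauss-Seidel sweeps."""
    n = len(emission)
    B = list(emission)                      # start from direct emission
    for _ in range(iters):
        for i in range(n):
            gathered = sum(form_factors[i][j] * B[j]
                           for j in range(n) if j != i)
            B[i] = emission[i] + reflectance[i] * gathered
    return B

# A hypothetical 3-patch scene: patch 0 emits, patches 1 and 2 only reflect.
E   = [1.0, 0.0, 0.0]
rho = [0.5, 0.5, 0.5]
F   = [[0.0, 0.3, 0.3],
       [0.3, 0.0, 0.3],
       [0.3, 0.3, 0.0]]
B = gauss_seidel_radiosity(E, rho, F)
print(B)   # patch 0 stays brightest; 1 and 2 pick up bounced light
```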

Bunnell gave a talk about his GI tech last year at SIGGRAPH, and you can get the PDF here: http://cgg.mff.cuni.cz/~jaroslav/gicourse2010/index.htm
