Bombshell93

[Idea] Voxel based indirect illumination


Just a thought for indirect illumination / radiosity.
In most ambient occlusion techniques, the exposure of each pixel is calculated relative to a light source (quite often assumed to be the sun). Using the same technique, the occlusion relative to all lights could be calculated, though that would be too expensive.
Obviously it would need to be scaled down to run in real time, so I propose running the calculation relative to major light sources in a multi-level octree.
Each element of the octree contains six very basic directional lights. The occlusion of higher-level octree elements is calculated relative to major light sources, treating up, down, north, east, south, and west as normals. The values of the higher levels are used to determine the lower levels, and the lower levels are then used to estimate light bounces based on surface colour and normals. Bounces are diffused the same way the higher levels calculated occlusion relative to major light sources, only now with smaller light sources.
As long as you don't let the octree divide ridiculously small, it should run well.
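To make the idea concrete, here's a minimal sketch of how the per-node directional lights might be stored and pushed down the hierarchy. The class layout, the names, and the simple "child inherits parent's value" propagation rule are all my own assumptions, just to illustrate the structure, not a worked-out implementation:

```python
# Each octree node stores six directional light intensities
# (up/down/north/east/south/west). Children start from the parent's
# coarse values and would then refine them with local bounce estimates.

DIRECTIONS = ("up", "down", "north", "east", "south", "west")

class OctreeNode:
    def __init__(self, depth, max_depth):
        # one scalar intensity per axis-aligned direction
        self.light = {d: 0.0 for d in DIRECTIONS}
        self.children = []
        if depth < max_depth:
            self.children = [OctreeNode(depth + 1, max_depth) for _ in range(8)]

    def propagate(self):
        """Push this node's directional values down to its children."""
        for child in self.children:
            for d in DIRECTIONS:
                # children inherit the parent's coarse value as a starting point
                child.light[d] = self.light[d]
            child.propagate()

root = OctreeNode(0, max_depth=2)
root.light["up"] = 1.0   # e.g. sky light arriving from above
root.propagate()
```

In a real version the children would not just copy the parent; they'd modulate it by local occlusion and surface bounce, as described above.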

Using a simple normal-and-depth G-buffer, apply the higher-level lights, then the lower-level lights, and so on.
Recalculation is only needed when a major light changes or a large scene element moves; movable objects don't need to be taken into account when calculating the light bounces.
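Per pixel, that composition step could look something like the following. The Lambert-style max(0, N·L) weighting is my assumption here; all I'm committing to above is that the six directional lights of a cell get applied from the normal-and-depth G-buffer:

```python
# Accumulate the six directional lights of the octree cell a pixel falls
# in, weighted by how much the pixel's normal faces each direction.

DIRS = {
    "up": (0.0, 1.0, 0.0), "down": (0.0, -1.0, 0.0),
    "north": (0.0, 0.0, 1.0), "south": (0.0, 0.0, -1.0),
    "east": (1.0, 0.0, 0.0), "west": (-1.0, 0.0, 0.0),
}

def shade_pixel(normal, cell_lights):
    """One G-buffer pixel: sum the cell's directional lights, Lambert-weighted."""
    total = 0.0
    for name, d in DIRS.items():
        ndotl = sum(n * c for n, c in zip(normal, d))
        total += max(0.0, ndotl) * cell_lights[name]
    return total

# a pixel facing straight up receives only the "up" light
print(shade_pixel((0.0, 1.0, 0.0), {k: 1.0 for k in DIRS}))  # → 1.0
```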

EDIT: I HAVE RETHOUGHT MY APPROACH:
New plan! I've noticed quite a few implementations of real-time radiosity using micro-facets (terminology taken from the Hedgehog Engine, correct me if that's wrong); my main inspiration at the moment is the Hedgehog Engine. But I notice a recurring issue with the scalability of the facets: on smaller faces, the division used to create the micro-facets produces too many micro-facets for that surface alone. (This is often resolved by dividing by a distance factor, but that leads to an inconsistent density of micro-facets.)

I'm planning to use similar techniques to calculate light bounces, but with better consistency and scalability across different shapes and densities of geometry, via voxelization algorithms. After voxelization, a structure holding the surface and lighting information will be built for the mesh. Deforming objects will not go through this; instead they will have six directional lights whose values are determined similarly to surfaces.

I'm working on the voxelization algorithm now. I'll update when I've made progress / thought through how I'll calculate the light bounces between the surfaces.
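To illustrate what I mean by voxelization, here's a crude point-sampling sketch: sample a triangle with barycentric coordinates and bin the samples into voxel cells. This is just to show the idea; it's not necessarily the algorithm I'll end up with (a proper one would be conservative rather than sampled):

```python
def voxelize_triangle(a, b, c, voxel_size, samples=16):
    """Approximate surface voxelization of one triangle: sample points
    over the triangle via barycentric coordinates and bin each sample
    into an integer voxel coordinate."""
    voxels = set()
    for i in range(samples + 1):
        for j in range(samples + 1 - i):
            u = i / samples
            v = j / samples
            w = 1.0 - u - v
            # barycentric combination of the three vertices
            p = tuple(u * pa + v * pb + w * pc for pa, pb, pc in zip(a, b, c))
            voxels.add(tuple(int(coord // voxel_size) for coord in p))
    return voxels

# a small triangle near the origin lands entirely in voxel (0, 0, 0)
tri = voxelize_triangle((0, 0, 0), (0.5, 0, 0), (0, 0.5, 0), voxel_size=1.0)
print(tri)  # → {(0, 0, 0)}
```

A full mesh would just run this per triangle and union the sets; dense sampling (or a conservative test) is needed so thin triangles don't leave gaps.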

I'm going to be experimenting with this in the next few days. If anyone has any input (criticisms, papers where it's been done before, suggestions), please let me know.
Thanks for reading,
Bombshell.


> in most ambient occlusion techniques the exposure of the pixels is calculated relative to a light source (quite often assuming the sun). using the same technique, though it would be too expensive, the occlusion relative to all lights could be calculated.

For your information: yes, it has been done for ages. That's why radiosity took 30 minutes on a P-133 and a couple of hours on an AMD K5 to run. The poor thing had to compute occlusion for all the patches (although calling it "occlusion" is fairly improper) at the very least, and then bounce. By the way, no radiosity solution I recall "assumes" anything: light and sun positions were given as parameters.

> Each element of the octree containing 6 very basic directional lights.

It rings a bell: lighting volumes. I think I also read something about an RGB cube and Valve. nVIDIA also presented a hierarchical algorithm in GPU Gems 2 involving a tree of disks (it has been on my to-do list for a few years). The algorithm nVIDIA proposed effectively ran in real time on midrange hardware several years ago... on a really simple example. I'd be interested in seeing some performance scaling.

> as long as you don't have the octree divide ridiculously small it should run well.

The problem is the definition of "ridiculously small". For a Quake2/3 map you could probably make it run in real time; for anything more complicated... uh, I'm not so sure. How small is your "ridiculously small"? What hardware do you plan to use? What is your assets' complexity level?

> recalculation is only needed when a major light changes or a large scene element moves, movable objects don't need to be taken into account when calculating the light bounces.

Uhm, take care: bounces often propagate much more information than you might expect... in today's HDR world those bounces can be really long. I suggest scheduling much more than a "few days"... at least a few weeks, bare minimum. Edited by Krohm

The "few days" was a bit vague; I meant something like two weeks.
Scene complexity isn't as important, since the method I'm considering is aimed at deferred rendering: the light bounces would be calculated against a low-poly scene representing the world.
"Ridiculously small" refers to division down to roughly 16x16-pixel levels and lower; lowering the LOD of the calculations / lighting passes further from the camera would help the octree elements avoid dividing down to such levels when far away.
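Roughly, the cutoff I have in mind could be expressed like this; the simple pinhole projection formula and the exact 16-pixel threshold are just illustrative assumptions:

```python
import math

def projected_size_px(node_world_size, distance, fov_y_rad, screen_h):
    """Approximate on-screen size in pixels of a node of the given world
    size at the given camera distance (simple pinhole camera model)."""
    if distance <= 0:
        return float("inf")
    return node_world_size * screen_h / (2.0 * distance * math.tan(fov_y_rad / 2.0))

def should_subdivide(node_world_size, distance, fov_y_rad=1.0,
                     screen_h=1080, min_px=16.0):
    # stop dividing once a node would cover roughly 16x16 pixels or fewer
    return projected_size_px(node_world_size, distance, fov_y_rad, screen_h) > min_px
```

So a 1-unit node right in front of the camera keeps subdividing, while the same node far away stops early, which is the distance-based LOD behaviour described above.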
I don't have any major target hardware. I guess at the moment I'm looking for something that would run at 60-80+ fps (including the other parts of my rendering pipeline, of course) on a system running:
E2500 @ 3GHz
4GB DDR2
896MB GTX260 (the only system I have running at the moment, so the only one I can test on; nothing I can do about that, I don't have any money to spare)
