Michael Davidog

Radiosity C++ DirectX 11 implementation


Recommended Posts

17 minutes ago, Michael Davidog said:

directx 11/OpenGL

The visualization/rendering part (special-purpose GPU programming) is IMHO the easiest part.

Iteratively solving your system of equations efficiently in parallel (general-purpose GPU programming) is the harder part. You can try to use the compute pipelines (e.g. DirectCompute, etc.) or some higher-level GPGPU programming languages like CUDA, which you can use in combination with OpenGL and Direct3D (though for the latter I could only find info from NVidia itself rather than other tutorials). The benefits of CUDA are the many large math APIs available.

On the other hand, I don't really have an idea about the "real-time" aspect of radiosity (how many trade-offs does this impose on the indirect illumination?).

Edited by matt77hias


The algorithm described in the linked paper requires that all geometry (and local light emitters) be static in order for the light transfer approximation to be pre-computable. If the geometry changes, then so does each transfer function of the probes that "see" the changed geometry either directly or indirectly. The environment light map (an infinitely far skybox, essentially) does not need to be static, as the spherical harmonic functions are evaluated against its contribution at runtime.

The algorithm allows dynamic meshes (as in moving characters and objects) to be lit with the existing radiosity data by taking the n nearest probes to the mesh and evaluating the object's lighting from them, but the dynamic objects cannot easily contribute to the radiosity solution, so shadows and light bleeding of said dynamic objects would not work without further processing.
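To illustrate why a dynamic environment map is cheap while dynamic geometry is not: relighting a precomputed probe against a new skybox reduces to a per-band dot product in SH space, whereas a geometry change invalidates the precomputed transfer coefficients themselves. A minimal sketch, where the band count, coefficient layout, and the name `relightProbe` are my assumptions, not the paper's code:

```cpp
#include <array>
#include <cassert>

// 3rd-order SH: 9 coefficients per colour channel (illustrative choice).
constexpr int kShBands = 9;
using Sh = std::array<float, kShBands>;

// Radiance reaching the probe = <transfer, environment> in SH space.
// 'transfer' is the precomputed coefficient vector (visibility and
// cosine term baked in); 'envLight' is the sky projected to SH at runtime.
float relightProbe(const Sh& transfer, const Sh& envLight)
{
    float result = 0.0f;
    for (int i = 0; i < kShBands; ++i)
        result += transfer[i] * envLight[i];
    return result;
}
```

Because only `envLight` changes per frame, a skybox change costs one dot product per probe and channel, while any geometry change would require recomputing `transfer`.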

Edited by Nik02

7 hours ago, matt77hias said:

The visualization/rendering part (special-purpose GPU programming) is IMHO the easiest part.

I don't have practical experience and I don't know how to make all these things work in code, so maybe I need to start with graphics programming (using DirectX 11/OpenGL)?
 

 

7 hours ago, matt77hias said:

Iteratively solving your system of equations efficiently in parallel (general-purpose GPU programming) is the harder part.

Using a hybrid approach (CPU + GPU) will give more benefits than a pure-CPU or pure-GPU solution, so I want to build an architecture like Enlighten's.
 

 

7 hours ago, matt77hias said:

You can try to use the compute pipelines (e.g. DirectCompute, etc.) or some higher-level GPGPU programming languages like CUDA, which you can use in combination with OpenGL and Direct3D (though for the latter I could only find info from NVidia itself rather than other tutorials). The benefits of CUDA are the many large math APIs available.

I want to make a hardware- and API-agnostic system.
 

 

6 hours ago, Nik02 said:

If the geometry changes, then so does each transfer function of the probes that "see" the changed geometry either directly or indirectly.

Is this an indirect lighting cache that interpolates indirect light onto dynamic (e.g. animated)/small/complex objects using light probes?
So is this a spherical harmonic function (irradiance volumes)?
 

 

6 hours ago, Nik02 said:

If the geometry changes, then so does each transfer function of the probes that "see" the changed geometry either directly or indirectly.

But how is ray tracing used to make new lightmaps?
Or do lightmaps get information (normal and shadow mapping) from SH, with one light probe per pixel?
"The radiosity technique uses scalar form factors to describe the mutual influence of patches. In our solution, these form factors are replaced. For every light probe, each surface group as seen from the position of the probe is projected to SH coefficients and stored. This directional information allows to shade surfaces with normal mapping and dynamic objects whose normals are unknown during precomputation. For the shading of dynamic objects, light probes are placed in a regular 3D lightgrid. For static geometry, it is preferable to use lightmaps in order to avoid light bleeding, i.e. light being interpolated through solid objects. A lightmap is a texture that stores some kind of lighting information, in our case one light probe per pixel. To use lightmaps, a special set of UV coordinate is created for the scene. The surfaces are unwrapped in range [0, 1]^2 of the UV space without overlaps. The result is that every surface point references a unique position on the texture. Traditionally, lightmaps have been used to precalculate high-frequency static lighting and save it into an RGB texture. In our approach, a set of three textures for the tristimulus values are used to store spherical SH information about low-frequency indirect lighting only. The resolution of these textures is thus considerably lower than the resolution of commonly used lightmaps."
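The "one light probe per pixel" idea in the quoted passage can be sketched as follows: each lightmap texel stores low-order SH coefficients, and shading a normal-mapped surface evaluates that probe in the per-pixel normal direction. This assumes 2-band (L1) SH with 4 coefficients per colour channel and the standard real SH basis constants; the name `evalShL1` is mine, not the paper's:

```cpp
#include <array>
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Evaluate an L1 SH probe (coefficients c[0..3] for one colour channel)
// in direction n (assumed normalized). Basis constants:
// Y00 = 0.282095, Y1-1 = 0.488603*y, Y10 = 0.488603*z, Y11 = 0.488603*x.
float evalShL1(const std::array<float, 4>& c, const Vec3& n)
{
    return 0.282095f * c[0]
         + 0.488603f * (n.y * c[1] + n.z * c[2] + n.x * c[3]);
}
```

This is why the precomputation does not need the final normals: the probe stores directional information, and the (possibly normal-mapped or dynamic) normal is only plugged in at shading time.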

 

 

6 hours ago, Nik02 said:

but the dynamic objects cannot easily contribute to the lighting so shadows and light bleeding of said dynamic objects would not work

Yes, neither this approach nor Enlighten's allows that.
 

 

6 hours ago, Nik02 said:

further processing.

Is it possible somehow?

Edited by Michael Davidog


Here is a similar technique: http://copypastepixel.blogspot.co.at/2017/04/real-time-global-illumination.html

1 hour ago, Michael Davidog said:

Is it possible somehow?

Some techniques designed for dynamic scenes:

Light Propagation Volumes (very approximate)

Voxel Cone Tracing (more accurate, but still very limited, as voxels cannot represent scenes at meaningful resolution)

Many LoDs (very accurate, but the micro framebuffer's resolution is too low to cover direct lighting)

Reflective shadow maps in combination with SDF occlusion (the actual CryEngine approach: limited accuracy but very practicable performance)

... to name just a few.

Paper comparing many of them: https://people.mpi-inf.mpg.de/~ritschel/Papers/GISTAR.pdf

 

Personally, I have been working on an algorithm to overcome all those limitations for ten years. (So I would be disappointed if you got it to work in a short time :D )

 

 

26 minutes ago, JoeJ said:

Personally, I have been working on an algorithm to overcome all those limitations for ten years. (So I would be disappointed if you got it to work in a short time

I just want to start making things by myself =) I need practical experience; there are so many papers around, but I can't find any "paper to code" guide. So I don't know how to make existing solutions work (in code that I can use further in an engine).

 

 

29 minutes ago, JoeJ said:

Some techniques designed for dynamic scenes:

Light Propagation Volumes (very approximate)

Voxel Cone Tracing (more accurate, but still very limited, as voxels cannot represent scenes at meaningful resolution)

Many LoDs (very accurate, but the micro framebuffer's resolution is too low to cover direct lighting)

Reflective shadow maps in combination with SDF occlusion (the actual CryEngine approach: limited accuracy but very practicable performance)

... to name just a few.

Paper comparing many of them: https://people.mpi-inf.mpg.de/~ritschel/Papers/GISTAR.pdf

I know there are a lot of techniques, but I can't implement any of them. Personally, I would like to realize some Enlighten/Kuri ray-tracing + radiosity hybrid method.
Here something similar
http://www.nada.kth.se/~burenius/BureniusExjobb2009.pdf
This guy also figured out how to implement it somehow
https://blog.molecular-matters.com/2012/05/04/real-time-radiosity/


I found it a lot easier to do it all on the CPU for research. Basically I tried pretty much every idea that was around, and the GPU is too cumbersome to get things going quickly. You also need some experience with GPU to know if your algorithm can utilize it properly, but the first thing to do is to learn how the math works.

The methods you are interested in work by calculating a form factor between two surfaces, and the form factor tells how much light interacts between them. I suggest you begin with this and get a simulation to work that converges to ground truth. That's easy - radiosity math is very easy - the only hard part is performance.

Personally, I divide every surface into small disks and interreflect light as described. Enlighten does the same using polygons; they are fast because they precalculate form factor times visibility.
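As a concrete starting point, a disk-to-disk form factor can be sketched like this. The exact formula varies between implementations; the `+ eArea` term in the denominator is one common way to tame the near-field singularity and is my assumption here, not necessarily what JoeJ or Enlighten uses:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };
Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Approximate form factor from a receiver disk to an emitter disk,
// treating the emitter as a differential area of size eArea. Returns 0
// when either disk faces away from the other.
float formFactor(Vec3 rPos, Vec3 rNormal, Vec3 ePos, Vec3 eNormal, float eArea)
{
    Vec3 d = ePos - rPos;
    float dist2 = dot(d, d);
    float invDist = 1.0f / std::sqrt(dist2);
    float cosR = dot(rNormal, d) * invDist;   // receiver tilted toward emitter?
    float cosE = -dot(eNormal, d) * invDist;  // emitter facing receiver?
    if (cosR <= 0.0f || cosE <= 0.0f) return 0.0f;
    const float kPi = 3.14159265f;
    return (cosR * cosE * eArea) / (kPi * dist2 + eArea);
}
```

Visibility between the two samples would be handled separately (e.g. by ray casting) and multiplied in, which is exactly the product Enlighten reportedly precomputes.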

The simulation works like this:

while (1)
  foreach (surfaceSample receiver)
    receiver->receivedLight = (0,0,0)
    foreach (surfaceSample emitter)
      receiver->receivedLight += emitter->outgoingLight * visibility(receiver, emitter) * formFactor(receiver, emitter)
    receiver->outgoingLight = receiver->color * receiver->receivedLight + receiver->emittingLight

 

Note that due to the visibility term the runtime becomes O(n^3), but for a very small scene (a Cornell box with 500 samples) this is fast enough to learn the math. After that you can focus on optimizations and precalculating things.
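To make the loop above concrete, here is a runnable single-float version of one gathering iteration, under simplifying assumptions: visibility and form factor are folded into one precomputed weight matrix, and colours are scalars instead of RGB. The names (`Surfel`, `gatherIteration`) and the Jacobi-style double buffering are my choices, not literally what was described:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Surfel {
    float albedo = 0.0f;    // receiver->color
    float emitted = 0.0f;   // receiver->emittingLight
    float outgoing = 0.0f;  // emitter->outgoingLight
    float received = 0.0f;  // receiver->receivedLight
};

// One radiosity iteration: every receiver gathers from every emitter.
// weight[r][e] stands for visibility(r,e) * formFactor(r,e).
// Outgoing light from the previous iteration is snapshotted first, so
// each call adds exactly one bounce (Jacobi-style update).
void gatherIteration(std::vector<Surfel>& s,
                     const std::vector<std::vector<float>>& weight)
{
    const size_t n = s.size();
    std::vector<float> prevOut(n);
    for (size_t i = 0; i < n; ++i) prevOut[i] = s[i].outgoing;

    for (size_t r = 0; r < n; ++r) {
        s[r].received = 0.0f;
        for (size_t e = 0; e < n; ++e)
            if (e != r)
                s[r].received += prevOut[e] * weight[r][e];
        s[r].outgoing = s[r].albedo * s[r].received + s[r].emitted;
    }
}
```

Starting from `outgoing = emitted`, repeated calls converge because the albedo is below one; that convergence is what the `while (1)` in the pseudocode stands for.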

I can dig up the exact math for you if you want, but first let's see if you want to go this way...

 

 

 

 

21 hours ago, JoeJ said:

I found it a lot easier to do all on CPU for research. Basically i tried pretty any idea that was around and GPU is too cumbersome to get things going quickly.

Enlighten splits the work: direct lighting on the GPU and indirect on the CPU (Enlighten computes indirect lighting only, on the CPU):
1. Point-sample the geometry surface - done via final gathering (without any irradiance caching) with denoising, a VPL/instant radiosity method - I guess there are no surfels/disks (GPU)
2. Project onto a low-detail environment mesh -
"The framebuffer of the previous frame is sampled sparsely on the GPU and subsequently those samples are transferred to CPU memory as input for the radiosity algorithm. The samples are projected on the low resolution proxy mesh which initializes the radiosity algorithm. Only one iteration of radiosity propagation is executed per frame. Multiple light bounces are simulated by using the previous frame as light input for the computation."
3. Compute radiosity -
"The radiosity algorithm itself is executed on the CPU using a low resolution proxy mesh of the original mesh which resides in GPU memory."
4. Transfer radiosity textures to the GPU -
"After the radiosity solution has been computed, the solution is transferred back to the GPU in a lightmap format." - I guess using wavelet compression to make the data flow faster.
5. Sample radiosity on the high-detail mesh -
"The lightmap is sampled on the high resolution mesh with the use of a smart upsampling technique." - I guess using a UV unwrapper.
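The feedback in step 2 ("using the previous frame as light input") is what turns one radiosity iteration per frame into multiple bounces over time. A toy scalar sketch of that feedback loop, with made-up names and a made-up transfer weight of 0.5, just to show the accumulation:

```cpp
#include <cassert>

// Toy single-value "scene" wiring the quoted steps together.
struct FrameState {
    float lightmap = 0.0f; // last frame's radiosity solution (steps 4-5 output)
};

// One frame: sample last frame's total lighting (step 1), run one
// radiosity iteration on it (steps 2-3), store the result back as the
// new lightmap (steps 4-5). 0.5f is an arbitrary transfer weight.
void runFrame(FrameState& fs, float directLight, float albedo)
{
    float sampled = directLight + fs.lightmap;  // step 1: framebuffer sample
    float bounce  = albedo * 0.5f * sampled;    // steps 2-3: one iteration
    fs.lightmap   = bounce;                     // steps 4-5: upload + upsample
}
```

After the first frame the lightmap holds one bounce; after the second it holds light that has bounced twice, and so on, without ever iterating to convergence within a single frame.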
 

 

21 hours ago, JoeJ said:

You also need some experience with GPU to know if your algorithm can utilize it properly, but the first thing to do is to learn how the math works.

Can you please recommend some good books/resources?
 

 

21 hours ago, JoeJ said:

The methods you are interested in work by calculating a form factor between two surfaces, and the form factor tells how much light interacts between them

Is this the same principle as "patches", but with clusters that are hierarchical and bigger?
 

 

21 hours ago, JoeJ said:

I suggest you begin with this and get a simulation to work that converges to ground truth. That's easy - radiosity math is very easy - the only hard part is performance.

How can I begin?
 

 

21 hours ago, JoeJ said:

Personally, I divide every surface into small disks and interreflect light as described. Enlighten does the same using polygons; they are fast because they precalculate form factor times visibility.

So it is possible to divide a surface into either surfels or polygons? Which is better? And how can I make it with polygons?
 

 

21 hours ago, JoeJ said:

The simulation works like this:

while (1)
  foreach (surfaceSample receiver)
    receiver->receivedLight = (0,0,0)
    foreach (surfaceSample emitter)
      receiver->receivedLight += emitter->outgoingLight * visibility(receiver, emitter) * formFactor(receiver, emitter)
    receiver->outgoingLight = receiver->color * receiver->receivedLight + receiver->emittingLight

Thanks! But I still don't understand it.
Will it look like this?
 

#include <iostream>
using namespace std;

struct Vector3 { float x, y, z; };

struct Receiver {
  Vector3 receivedLight;
  Vector3 outgoingLight;
  Vector3 color;
  Vector3 emittingLight;
};

struct Emitter {
  Vector3 outgoingLight;
};

// Should formFactor and visibility be functions of both samples?
float formFactor(const Receiver&, const Emitter&);
float visibility(const Receiver&, const Emitter&);

int main() {
  Receiver* r = new Receiver;
  Emitter* e = new Emitter;

  while (1) { // What does "1" mean - one bounce?
    // foreach (surfaceSample receiver)
    r->receivedLight = {0, 0, 0};
    // foreach (surfaceSample emitter)
    //   r->receivedLight += e->outgoingLight * visibility(*r, *e) * formFactor(*r, *e);
    // r->outgoingLight = r->color * r->receivedLight + r->emittingLight; // I don't understand this line
  }

  delete r;
  delete e;
}

?
 

 

21 hours ago, JoeJ said:

I can dig up the exact math for you if you want, but first let's see if you want to go this way...

Yes, I want! I still don't understand all this code.
I found some implementations:
https://github.com/AlexanderTolmachev/radiosity-engine
https://github.com/ands/lightmapper
http://www.cs.cmu.edu/~./radiosity/dist/
http://dudka.cz/rrv

but I don't understand all this code, and I want to understand =)

Radiosity: A Programmer's Perspective by Ian Ashdown (1994) - what do you think about this book (it's 23 years old, hehe)?

Edited by Michael Davidog

5 minutes ago, Michael Davidog said:

Radiosity: A Programmer's Perspective by Ian Ashdown (1994) - what do you think about this book (it's 23 years old, hehe)?

I just found that reference myself in the "Further Reading and Resources" section of the "Global Illumination" chapter of the "Real-Time Rendering" book. The authors of the latter state: "Implementing radiosity algorithms is not for the faint of heart. A good practical introduction is Ashdown's book, sadly now out of print."

8 minutes ago, Michael Davidog said:

Seems to me for offline (i.e. not real-time) rendering.

9 minutes ago, Michael Davidog said:

Are you sure this uses the radiosity algorithm?

1 hour ago, matt77hias said:

I just found that reference myself in the "Further Reading and Resources" section of the "Global Illumination" chapter of the "Real-Time Rendering" book. The authors of the latter state: "Implementing radiosity algorithms is not for the faint of heart. A good practical introduction is Ashdown's book, sadly now out of print."

You can download it from here:
http://www.helios32.com/resources.htm
But if I follow this book, will I get modern knowledge of how to implement things like Enlighten and plug it into an engine?
Does Enlighten use radiosity with final gathering (i.e. photon mapping?)?
https://twitter.com/pigselated/status/676711568990711808

Edited by Michael Davidog

