Global illumination techniques

solenoidz

Guys, thanks for the explanations.

I have a few more questions, though...

 

Let's say I have built the world grid of SH probes. I can understand the part about rendering a cube map at every probe position and converting each cube map to SH coefficients for storage and bandwidth reduction.

What I don't really get is how I'm supposed to render level geometry with that grid of SH probes. The papers I've read only explain how to render movable objects and characters.

I can understand that: they find the closest probe to the object and shade it with that probe. But they seem to do it per object, because movable objects and characters are small relative to the grid spacing.

But I want to render level geometry and be able to beautifully illuminate indoor scenes. In my test room I have many light probes; the room is big, and I don't think I should search for the single closest probe to the room, since they are all contained within it.

I need to do it per vertex, I guess. Am I supposed to find the closest probe to every vertex and calculate lighting for that vertex from that probe's SH coefficients? But I render the whole room in one draw call, so how do I pass that many ambient probes to the vertex shader and, in the shader, find the closest probe for every vertex?

And what if I want to do it per pixel? There is definitely something I'm missing here.
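Roughly what I have in mind, as a CPU-side sketch (a regular axis-aligned grid, 2-band SH, and every name here is just my own assumption, not from any paper):

```cpp
// Sketch: per-vertex ambient lookup from a regular grid of SH probes.
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// 2-band (4 coefficient) SH per color channel, one record per probe.
struct SHProbe { float r[4], g[4], b[4]; };

struct ProbeGrid {
    Vec3 origin;      // world-space position of probe (0,0,0)
    float spacing;    // distance between neighbouring probes
    int nx, ny, nz;   // grid dimensions
    std::vector<SHProbe> probes; // nx*ny*nz entries, x-major

    const SHProbe& at(int x, int y, int z) const {
        x = std::clamp(x, 0, nx - 1);
        y = std::clamp(y, 0, ny - 1);
        z = std::clamp(z, 0, nz - 1);
        return probes[(z * ny + y) * nx + x];
    }

    // Nearest-probe lookup for a vertex position: snap to the closest cell.
    const SHProbe& nearest(const Vec3& p) const {
        int x = int(std::round((p.x - origin.x) / spacing));
        int y = int(std::round((p.y - origin.y) / spacing));
        int z = int(std::round((p.z - origin.z) / spacing));
        return at(x, y, z);
    }
};

// Evaluate the 2-band SH ambient term for a unit-length vertex normal n.
Vec3 evaluateAmbient(const SHProbe& sh, const Vec3& n) {
    const float c0 = 0.282095f;            // Y(0,0)
    const float c1 = 0.488603f;            // Y(1,-1), Y(1,0), Y(1,1)
    float basis[4] = { c0, c1 * n.y, c1 * n.z, c1 * n.x };
    Vec3 out = { 0, 0, 0 };
    for (int i = 0; i < 4; ++i) {
        out.x += sh.r[i] * basis[i];
        out.y += sh.g[i] * basis[i];
        out.z += sh.b[i] * basis[i];
    }
    return out;
}
```

The same grid indexing would presumably map straight onto a volume texture if I ever try the per-pixel route.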

 

Thanks in advance.

Edited by solenoidz

Ok, thanks. 

If I have to choose between the method you propose and a volume texture covering the entire level, what would you people suggest? A volume texture could be a huge memory hog; for a big level it would need a pretty high resolution to prevent noticeable errors in the ambient illumination, especially ambient light (or shadow) bleeding through thin level geometry. I'm also not sure how to build that volume texture, because Direct3D 9.0 doesn't support rendering to a volume texture. I could probably unwrap it into a single 2D texture, but then I'd need to interpolate manually in the shaders between the different "depth" slices, I guess.
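The addressing I have in mind for the unwrapped volume would be something like this (a CPU-side sketch of the math; all names are made up, and the final lerp would actually happen in the pixel shader):

```cpp
// Sketch: a WxHxD volume unwrapped into a single 2D atlas of D slices
// laid out side by side, with the "depth" interpolation done by hand.
#include <cmath>

struct Vec2 { float u, v; };

// Map normalized volume coords (u,v,w) in [0,1]^3 to two atlas UVs plus
// a blend factor for the manual slice interpolation.
void volumeToAtlas(float u, float v, float w, int depth,
                   Vec2& uv0, Vec2& uv1, float& blend)
{
    float slice = w * (depth - 1);          // continuous slice index
    int s0 = int(std::floor(slice));
    int s1 = s0 + 1 < depth ? s0 + 1 : s0;  // clamp at the last slice
    blend = slice - float(s0);

    float sliceWidth = 1.0f / float(depth); // each slice occupies 1/depth of U
    uv0 = { (float(s0) + u) * sliceWidth, v };
    uv1 = { (float(s1) + u) * sliceWidth, v };
    // Shader side: lerp(tex2D(atlas, uv0), tex2D(atlas, uv1), blend)
}
```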

Edited by solenoidz

If you're using deferred shading, you can render a sphere volume at the location of your probes, pass the data in, and shade the volume using the g-buffer. You can even merge the closer ones together and pass them in groups into the shader to save on fill rate if that's a problem.

 

Thank you, but I still have some uncertainties.

Yes, I'm using a deferred renderer, and so does Crytek, but their method is much more complex: after generating the SH data for every light probe in space, they end up with three volume textures and sample from those per pixel.

In my case, I'm not sure how big this sphere volume should be. For my point lights I make the volume as big as the light radius, but a light probe doesn't store distance to the surface, just the incoming radiance. Also, as far as I understand it, I need to shade every pixel with its closest probe. If I simply draw sphere volumes as you suggested, I could shade pixels that are far behind the probe I'm currently rendering (in screen space) and that should really be shaded by a different probe behind the current one. Should I pass all the probe positions and other data that fall within the view frustum and, in the shader, for every pixel, reconstruct its world-space position, find the closest probe, and shade with that?

That seems a bit limiting and slow as well.

Thanks.

Edited by solenoidz

I think you have LPV and environment probes confused. ;)

 

Environment probes follow the same principles as deferred lights (well... they basically are deferred lights): you render a volume shape (quad, sphere, etc.) at the position of your "probe", with depth culling and such to avoid shading unnecessary pixels. The size of this object is based on the radius of your probe. The shader reads the g-buffer and accesses a cubemap based on the normal and reflection vectors (for diffuse and specular shading, respectively). These objects are blended over the ambient lighting results (before the lights). You need to sort them, though, to avoid popping from the alpha blending, or you can come up with some neat blending approach that handles it (there are a few online; I think Guerrilla had some ideas?).

 

The data for the probes is stored as cubemaps, like I mentioned. A 1x1x1 cubemap stores irradiance for every direction (you can use the lowest mip-map of your specular cubemap as an approximation) and an NxNxN cubemap stores the specular data (it depends on the scene; scenes with low-glossiness objects can use a lower-resolution cubemap for memory savings). The specular probe's mip-maps contain convolved versions of the highest mip level (usually some Gaussian function that matches your lighting BRDF). You index into this mip level based on the glossiness of the pixel you're shading.
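In shader terms that lookup is just a normal-driven fetch for diffuse and a reflection-driven fetch for specular, with the mip picked from glossiness. Here's a rough CPU-style sketch of that math (the sampling functions are stand-ins for texCUBE/texCUBElod, and every name is made up, not engine code):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 operator*(const Vec3& a, float s) { return { a.x * s, a.y * s, a.z * s }; }
Vec3 operator-(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Stand-ins for cubemap sampling; a real renderer fetches textures here.
Vec3 sampleIrradianceCube(const Vec3&)      { return { 0.1f, 0.1f, 0.1f }; }
Vec3 sampleSpecularCube(const Vec3&, float) { return { 0.2f, 0.2f, 0.2f }; }

Vec3 shadeEnvironmentProbe(const Vec3& normal, const Vec3& viewDir,
                           float glossiness, int specularMipCount,
                           const Vec3& albedo, const Vec3& specColor)
{
    // R = reflect(-V, N) = 2*dot(N,V)*N - V, with V pointing towards the camera.
    Vec3 r = (normal * (2.0f * dot(normal, viewDir))) - viewDir;

    // Rougher surfaces read blurrier (higher) mips of the convolved cubemap.
    float mip = (1.0f - glossiness) * float(specularMipCount - 1);

    Vec3 diffuse  = sampleIrradianceCube(normal);
    Vec3 specular = sampleSpecularCube(r, mip);

    return { albedo.x * diffuse.x + specColor.x * specular.x,
             albedo.y * diffuse.y + specColor.y * specular.y,
             albedo.z * diffuse.z + specColor.z * specular.z };
}
```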

 

For LPV, Crytek uses a 64x64x64 volume (IIRC). They place it over the camera and shift it towards the look direction to get the maximum amount of GI around the camera (while still keeping GI from behind the camera); it's shifted by something like 20% of the distance. As you'd expect, leaking can be a problem, but there are ways around it (you really just need some occlusion information for the propagation portion of the technique). You can also cascade it (mentioned in the paper) to get better results. The final lighting is done only once over the entire screen using a full-screen quad.
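A rough sketch of that placement logic (the 20% figure is from memory and exactly what it's 20% of is my guess; snapping to whole cells just keeps the grid from swimming as the camera moves, and all names are made up):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 lpvVolumeOrigin(const Vec3& cameraPos, const Vec3& lookDir,
                     float volumeSize /* world-space edge length */,
                     int cells /* e.g. 64 */)
{
    float cellSize = volumeSize / float(cells);

    // Center the volume on the camera, pushed ~20% of the half-extent forward.
    float shift = 0.2f * (volumeSize * 0.5f);
    Vec3 center = { cameraPos.x + lookDir.x * shift,
                    cameraPos.y + lookDir.y * shift,
                    cameraPos.z + lookDir.z * shift };

    // Min corner of the volume, snapped to whole cells.
    Vec3 origin = { center.x - volumeSize * 0.5f,
                    center.y - volumeSize * 0.5f,
                    center.z - volumeSize * 0.5f };
    origin.x = std::floor(origin.x / cellSize) * cellSize;
    origin.y = std::floor(origin.y / cellSize) * cellSize;
    origin.z = std::floor(origin.z / cellSize) * cellSize;
    return origin;
}
```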

Edited by Styves

Environment probes follow the same principles as deferred lights (well... they basically are deferred lights): you render a volume shape (quad, sphere, etc.) at the position of your "probe", with depth culling and such to avoid shading unnecessary pixels. The size of this object is based on the radius of your probe. The shader reads the g-buffer and accesses a cubemap based on the normal and reflection vectors (for diffuse and specular shading, respectively). These objects are blended over the ambient lighting results (before the lights). You need to sort them, though, to avoid popping from the alpha blending, or you can come up with some neat blending approach that handles it (there are a few online; I think Guerrilla had some ideas?).

 

Environment probes do not work like deferred lights. They are points in the scene where you calculate the incoming lighting in a pre-processing step.

At runtime, you apply them to an object by finding the nearest probes to that object and interpolating between them accordingly. If I'm not mistaken, this is usually done in a forward rendering pass because you need to find the nearest probes for each object and pass them to the shader.

 

 

 


The data for the probes is stored as cubemaps, like I mentioned. A 1x1x1 cubemap stores irradiance for every direction (you can use the lowest mip-map of your specular cubemap as an approximation) and an NxNxN cubemap stores the specular data (it depends on the scene; scenes with low-glossiness objects can use a lower-resolution cubemap for memory savings)

 

A 1x1x1 cubemap will give you a very poor irradiance representation; you'll need more resolution than that. And yes, you can use the lower mipmaps of the specular cubemap to approximate diffuse, but once again the quality will be poor. The best way is to generate a cubemap for diffuse with a diffuse convolution and a separate, higher-resolution cubemap for specular generated with a specular convolution. The mipmaps of the specular map can then be sampled to approximate different specular roughnesses, as you said.
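A minimal sketch of what that diffuse (cosine-weighted) convolution pre-process looks like. The environment is abstracted as a list of direction/radiance/solid-angle samples (e.g. one per source cubemap texel); the structure and names are my own simplifications:

```cpp
#include <algorithm>
#include <vector>

struct Vec3 { float x, y, z; };

struct EnvSample {
    Vec3 dir;        // unit direction of this texel
    Vec3 radiance;   // incoming radiance from that direction
    float solidAngle;
};

float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Irradiance arriving at a surface with normal n, divided by pi so it can
// be multiplied directly by a Lambertian albedo.
Vec3 diffuseConvolution(const std::vector<EnvSample>& env, const Vec3& n)
{
    Vec3 sum = { 0, 0, 0 };
    for (const EnvSample& s : env) {
        float cosTheta = std::max(0.0f, dot(n, s.dir));
        float w = cosTheta * s.solidAngle;
        sum.x += s.radiance.x * w;
        sum.y += s.radiance.y * w;
        sum.z += s.radiance.z * w;
    }
    const float invPi = 0.31830988f;
    return { sum.x * invPi, sum.y * invPi, sum.z * invPi };
}
// Running this once per texel of the destination cubemap (n = texel
// direction) yields the diffuse probe; the specular probe uses a
// roughness-dependent lobe instead of the plain cosine weight.
```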

Edited by jcabeleira

Environment probes do not work like deferred lights. They are points in the scene where you calculate the incoming lighting in a pre-processing step.
At runtime, you apply them to an object by finding the nearest probes to that object and interpolating between them accordingly. If I'm not mistaken, this is usually done in a forward rendering pass because you need to find the nearest probes for each object and pass them to the shader.

 
 
Read that last line again, because you just defeated your own argument.
 
What I described is an alternative to picking the nearest cubemap in a forward pass, like you mentioned. The environment probes I'm talking about are exactly like deferred lights, except they don't do point lighting in the pixel shader; they do cubemap lighting instead. This is the exact same technique Crytek uses for environment lighting (in fact, much of the code is shared between them and regular lights). So, in short, they do work like deferred lights, just not like deferred point lights.
 
 

A 1x1x1 cubemap will give you a very poor irradiance representation; you'll need more resolution than that. And yes, you can use the lower mipmaps of the specular cubemap to approximate diffuse, but once again the quality will be poor. The best way is to generate a cubemap for diffuse with a diffuse convolution and a separate, higher-resolution cubemap for specular generated with a specular convolution. The mipmaps of the specular map can then be sampled to approximate different specular roughnesses, as you said.

 

 

A diffuse convolution into a separate texture is, of course, better. I never said it wasn't (but I did forget to mention it). But those who can't/don't want to/don't know how to do this can use the lowest mip-level of their specular texture as a rough approximation, or for testing. This works well in practice even if it's not accurate. It really comes down to what kind of approximations you are willing to accept.

 

 

 



What I described is an alternative to picking the nearest cubemap in a forward pass, like you mentioned. The environment probes I'm talking about are exactly like deferred lights, except they don't do point lighting in the pixel shader; they do cubemap lighting instead. This is the exact same technique Crytek uses for environment lighting (in fact, much of the code is shared between them and regular lights). So, in short, they do work like deferred lights, just not like deferred point lights.

 

You're right, I didn't know they used light probes like that. It's a neat trick, useful when you want to give GI to some key areas of the scene. 


Thank you, people. The floating-point texture that stores color and the number of times a pixel has been lit, plus selecting the closest pixels to each probe with this kind of rendering, were interesting things to learn.


Would you guys be interested if I polished the source code up a little (practically no comments anywhere in there) and published it online along with the paper?

 

Of course! Code is always great to have.


 

Would you guys be interested if I polished the source code up a little (practically no comments anywhere in there) and published it online along with the paper?

 

Of course! Code is always great to have.

 

 

 

Is the paper already online? I would like to read it.

 

Good to hear. The paper isn't uploaded anywhere yet, but I'll get around to it as soon as possible. I want to write an accompanying blog post to go into a bit more detail on certain sections (I cut a lot from the thesis' original form to stay concise and a bit more implementation-neutral). No promises on a delivery date yet, as I still have to prepare and give a presentation about the thesis, but it's high on my priority list now :)


Thanks for that link. Their higher-order spherical harmonics probably do a lot to fix the light bleeding (due to light propagation in the wrong direction) present in the original algorithm, which was by far the biggest issue left. If you disregard the fact that it's a discretization of continuous light flowing around in space, and thus cannot represent physically accurate GI, I think it's pretty much a perfect first step for introducing GI for completely dynamic scenes into games before we get the computing power for ray tracing, at least in small to medium-sized scenes, which make up the vast majority of games today.
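If I understand it right, "higher order" here means carrying the nine band-0..2 SH coefficients per cell instead of the four that 2-band SH gives you. For reference, a quick sketch of those basis functions (standard real SH normalization constants; the code itself is just my own throwaway illustration):

```cpp
#include <array>

struct Vec3 { float x, y, z; };

// The 9 real SH basis functions (bands 0-2) evaluated for a unit direction d.
std::array<float, 9> shBasis9(const Vec3& d)
{
    return {
        0.282095f,                              // Y(0, 0)
        0.488603f * d.y,                        // Y(1,-1)
        0.488603f * d.z,                        // Y(1, 0)
        0.488603f * d.x,                        // Y(1, 1)
        1.092548f * d.x * d.y,                  // Y(2,-2)
        1.092548f * d.y * d.z,                  // Y(2,-1)
        0.315392f * (3.0f * d.z * d.z - 1.0f),  // Y(2, 0)
        1.092548f * d.x * d.z,                  // Y(2, 1)
        0.546274f * (d.x * d.x - d.y * d.y)     // Y(2, 2)
    };
}
```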

 

Otherwise, even the unoptimized version I implemented was pretty fast. I'm looking forward to getting my hands on the UE4 codebase to see how they optimized the different stages using compute shaders. I implemented compute shader versions as well, but they ended up being a tad slower than the vertex/geometry/fragment shader based implementations, because I'm not good at compute shader optimization (yet), so I left compute shaders out of the thesis entirely.



Voxel cone-tracing was mentioned as the state-of-the-art for RTGI, but I think real-time photon mapping with GPU-based initial bounce and final gather (by rasterizing volumes representing the photons) is still the state-of-the-art, even though it's now four years old: http://graphics.cs.williams.edu/papers/PhotonHPG09/

I especially like that you don't have to voxelize your scene. In a modern implementation, you'd use NVIDIA's OptiX to do the incoherent phase of the photon tracing on the GPU as well, instead of CPU like in the original paper.
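For anyone unfamiliar with the splatting idea, here's a very rough CPU-style sketch of the final gather: each stored photon is splatted onto nearby g-buffer pixels with a distance kernel and a cosine term. The structures and the kernel are my own simplifications, not the paper's exact formulation:

```cpp
#include <algorithm>
#include <vector>

struct Vec3 { float x, y, z; };
Vec3 operator-(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

struct Photon { Vec3 position; Vec3 incomingDir; Vec3 power; float radius; };
struct GBufferPixel { Vec3 worldPos; Vec3 normal; };

void splatPhotons(const std::vector<Photon>& photons,
                  const std::vector<GBufferPixel>& gbuffer,
                  std::vector<Vec3>& indirect /* same size as gbuffer */)
{
    for (const Photon& p : photons) {
        // The GPU version rasterizes a volume of radius p.radius so only
        // nearby pixels are touched; here we just test every pixel.
        for (size_t i = 0; i < gbuffer.size(); ++i) {
            Vec3 d = gbuffer[i].worldPos - p.position;
            float dist2 = dot(d, d);
            if (dist2 > p.radius * p.radius) continue;

            // Smooth falloff with distance, plus the Lambert cosine term.
            float falloff = 1.0f - dist2 / (p.radius * p.radius);
            Vec3 toLight = { -p.incomingDir.x, -p.incomingDir.y, -p.incomingDir.z };
            float cosTheta = std::max(0.0f, dot(gbuffer[i].normal, toLight));
            float w = falloff * cosTheta;

            indirect[i].x += p.power.x * w;
            indirect[i].y += p.power.y * w;
            indirect[i].z += p.power.z * w;
        }
    }
}
```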

Edited by Prune

Voxel cone-tracing was mentioned as the state-of-the-art for RTGI, but I think real-time photon mapping with GPU-based initial bounce and final gather (by rasterizing volumes representing the photons) is still the state-of-the-art, even though it's now four years old: http://graphics.cs.williams.edu/papers/PhotonHPG09/

I especially like that you don't have to voxelize your scene. In a modern implementation, you'd use NVIDIA's OptiX to do the incoherent phase of the photon tracing on the GPU as well, instead of CPU like in the original paper.

 

RTGI has plenty of issues, e.g. when the camera clips the photon volumes, instability when lights move around, and it's limited to diffuse GI.


 

Voxel cone-tracing was mentioned as the state-of-the-art for RTGI, but I think real-time photon mapping with GPU-based initial bounce and final gather (by rasterizing volumes representing the photons) is still the state-of-the-art, even though it's now four years old: http://graphics.cs.williams.edu/papers/PhotonHPG09/

I especially like that you don't have to voxelize your scene. In a modern implementation, you'd use NVIDIA's OptiX to do the incoherent phase of the photon tracing on the GPU as well, instead of CPU like in the original paper.

 

RTGI has plenty of issues, e.g. when the camera clips the photon volumes, instability when lights move around, and it's limited to diffuse GI.

 

Did you even look at the paper I linked to? It is not limited to diffuse GI; it handles specular and any other BSDF, including in the intermediate bounces, as a cursory glance at the images in the paper would have shown you (what's more obvious non-diffuse GI than caustics?).

Edited by Prune


Did you even look at the paper I linked to? It is not limited to diffuse GI; it handles specular and any other BSDF, including in the intermediate bounces, as a cursory glance at the images in the paper would have shown you (what's more obvious non-diffuse GI than caustics?).

 

Didn't read the paper. Caustics are fine and dandy, but I see no examples of glossy reflections, which is something voxel cone tracing is pretty good at.

0

Share this post


Link to post
Share on other sites

They explicitly state that they support arbitrary BSDFs, and that includes glossy reflections. In any case, other than the first bounce (in which the rays are coherent) and the final gather, which are both done by rasterization, the intermediate bounces are all computed on the CPU, not the GPU, so there is no limitation whatsoever.

 

The real downside of this method is actually something else entirely: the general limitation of all screen-space methods, namely stuff beyond the edges of the screen. It's the same issue as with SSAO when objects move in or out of the screen, so I always use an extended border with such methods.
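One common way to get that extended border, sketched with made-up names (render the screen-space pass into a slightly larger target with a proportionally wider frustum, then crop the border away when compositing):

```cpp
struct GuardBandSetup {
    int bufferWidth, bufferHeight;   // enlarged render target size
    float tanHalfFovX, tanHalfFovY;  // plug these into the projection matrix
};

GuardBandSetup makeGuardBand(int screenWidth, int screenHeight, int borderPixels,
                             float tanHalfFovX, float tanHalfFovY)
{
    GuardBandSetup s;
    s.bufferWidth  = screenWidth  + 2 * borderPixels;
    s.bufferHeight = screenHeight + 2 * borderPixels;

    // Keep the per-pixel angular resolution, so the central crop of the
    // enlarged buffer matches the original view exactly.
    s.tanHalfFovX = tanHalfFovX * float(s.bufferWidth)  / float(screenWidth);
    s.tanHalfFovY = tanHalfFovY * float(s.bufferHeight) / float(screenHeight);
    return s;
}
// When compositing, sample the enlarged buffer offset by borderPixels so
// only the central screenWidth x screenHeight region is used.
```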

 

The biggest benefit is that it's very general, everything is completely dynamic, and it requires zero precomputation of anything.

Edited by Prune
