
Using a 2D Texture for Point-Light Shadows


I’m about to shift our pipeline over to using a single 2D texture instead of a cube texture for point-light shadows.

The only non-trivial part is manually sampling the 2D texture given a 3D direction.
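For concreteness, here is a minimal CPU-side sketch of what that manual sampling involves, assuming the standard cube-map face-selection math and an arbitrary 3x2 face layout inside the 2D texture (the layout and all names are illustrative only, not from any particular engine):

[code]
#include <cmath>
#include <cstdio>

struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };

// Pick the dominant axis of the direction and project the other two components
// onto that face, the same way cube-map hardware does
// (faces ordered +X, -X, +Y, -Y, +Z, -Z as 0..5).
static void DirToFaceUv(const Vec3 &d, int &face, Vec2 &uv) {
    const float ax = std::fabs(d.x), ay = std::fabs(d.y), az = std::fabs(d.z);
    float ma, sc, tc;                                   // major axis, face-local s and t
    if (ax >= ay && ax >= az) {
        ma = ax; face = d.x > 0.0f ? 0 : 1;
        sc = d.x > 0.0f ? -d.z : d.z;  tc = -d.y;
    } else if (ay >= az) {
        ma = ay; face = d.y > 0.0f ? 2 : 3;
        sc = d.x;                      tc = d.y > 0.0f ? d.z : -d.z;
    } else {
        ma = az; face = d.z > 0.0f ? 4 : 5;
        sc = d.z > 0.0f ? d.x : -d.x;  tc = -d.y;
    }
    uv.x = 0.5f * (sc / ma + 1.0f);                     // [-1,1] -> [0,1] within the face
    uv.y = 0.5f * (tc / ma + 1.0f);
}

// Remap a face-local UV into the 3x2 atlas: faces 0..2 in the first row, 3..5 in the second.
static Vec2 FaceUvToAtlasUv(int face, Vec2 uv) {
    const float col = static_cast<float>(face % 3);
    const float row = static_cast<float>(face / 3);
    return { (col + uv.x) / 3.0f, (row + uv.y) / 2.0f };
}

int main() {
    int face; Vec2 uv;
    DirToFaceUv({ 0.2f, -0.9f, 0.3f }, face, uv);
    const Vec2 atlas = FaceUvToAtlasUv(face, uv);
    std::printf("face %d  faceUV (%.3f, %.3f)  atlasUV (%.3f, %.3f)\n",
                face, uv.x, uv.y, atlas.x, atlas.y);
    return 0;
}
[/code]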

 

Since I know this has been explored plenty of times, there is no reason for me to reinvent the wheel, but I absolutely cannot find any mention of it in any papers, tutorials, etc.

 

Anyone have any references for this?

 

 

L. Spiro


Those will be useful for creating my own solution from scratch if it comes to that.  For whatever reason, I wasn’t getting anything like this via my own searches.

 

 

L. Spiro


Is it one large 2D texture that you split into six parts and render depth into each of them? It sounds strictly better than cube mapping; are there any cons to it?

Is it one large 2D texture that you split into six parts and render depth into each of them? It sounds strictly better than cube mapping; are there any cons to it?

I'm assuming what he's doing is still rendering the scene six times with six views, just changing the glViewport to write to a different portion of the texture.

 

What you (Johan) have suggested can't be done if you are implying that you render the scene once and just have it all map to several parts of a texture. The problem is that with six squares packed into a single 2D texture, any given triangle, if rendered only once, can obviously span multiple "squares". There is no way to send one part of the triangle to square 1 and another part to square 2. If you could write, in the pixel shader, the exact pixel location you want a fragment to end up at, then it would work. But the pixel shader runs after a triangle has been rasterized in screen space and the pixel locations that need shading have already been chosen.

 

My question for the original post is: whether you render six times to a cube map or six times to a single texture acting as a cube map, why not just use the cube map? Cube maps are hardware supported, and when you sample on the edge between two faces the hardware filters across them properly. To do bilinear filtering across your "faces" in an atlas, you would have to spend pixel-shader time resampling the texture yourself instead of letting the hardware do it.

 

Of course, there are also dual paraboloid maps.



why not just use the cube map?

Bilinear filtering should not be used on shadow depth textures, so being able to automatically filter across neighboring faces is not really a benefit.

Instead, the benefits are that you can apply all the same soft-shadow techniques, such as PCF, to your point lights rather than just to your spot lights and directional lights.


L. Spiro


The benefits are that you can apply all the same soft-shadow techniques, such as PCF, to your point lights rather than just to your spot lights and directional lights.

You can't use PCF / comparison sampling on cube maps? Is that a restriction on certain platforms?

What you (Johan) have suggested can't be done if you are implying that you render the scene once and just have it all map to several parts of a texture. The problem is that with six squares packed into a single 2D texture, any given triangle, if rendered only once, can obviously span multiple "squares". There is no way to send one part of the triangle to square 1 and another part to square 2.

You can do this on modern hardware (with either a manually packed 2D target or a cube target) by binding six viewports and using a geometry shader to duplicate triangles and output them to specific viewports.
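As a rough illustration of the CPU side of that setup under OpenGL (assuming GL 4.1+ / ARB_viewport_array; the 3x2 layout and names below are mine, not from the post): six indexed viewports are laid out over one 2D depth target, and the geometry shader duplicates each triangle and writes gl_ViewportIndex to route each copy to its tile.

[code]
#include <GL/glew.h>   // any loader exposing glViewportIndexedf; assumes a current GL 4.1+ context

// Lay the six cube faces out as a 3x2 grid of viewports on the bound 2D depth target.
// A geometry shader emits each triangle six times and writes gl_ViewportIndex = face
// so each copy is clipped and rasterized into its own tile.
void SetupCubeAtlasViewports(int faceSize) {
    for (GLuint face = 0; face < 6; ++face) {
        const GLfloat x = static_cast<GLfloat>((face % 3) * faceSize);
        const GLfloat y = static_cast<GLfloat>((face / 3) * faceSize);
        glViewportIndexedf(face, x, y,
                           static_cast<GLfloat>(faceSize),
                           static_cast<GLfloat>(faceSize));
    }
}
[/code]

Whether amplifying every triangle six times in the geometry shader beats six separate passes depends on the hardware and the scene.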


You can't use PCF / comparison sampling on cube maps? Is that a restriction on certain platforms?

[EDIT]Do PlayStation 4 and Xbox One have hardware support for more than PCF1 (not really PCF, but we just call it that) on cube textures?[/EDIT]

Texture atlases for shadow mapping are the norm in major studios these days for several of the other benefits they bring, but right now I need to move my desk to the 19th floor.


L. Spiro


You can do this on modern hardware (with either a manually packed 2D target or a cube target) by binding six viewports and using a geometry shader to duplicate triangles and output them to specific viewports.

 

Links to this technique?


You can do this on modern hardware (with either a manually packed 2D target or a cube target) by binding six viewports and using a geometry shader to duplicate triangles and output them to specific viewports.

Links to this technique?

http://www.gamedev.net/page/resources/_/technical/graphics-programming-and-theory/cube-map-rendering-techniques-for-direct3d10-r2735


I once used projections onto the four sides of a tetrahedron for my shadow-mapping needs (one shadow map for all cases, for simplicity).

This, of course, requires four projection matrices that each target a specific part of the texture, plus clipping planes. PCF works if you use rays (and dynamically determine which triangle is affected). Hope this helps.
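A rough sketch of that setup, assuming the four view directions point at the vertices of a regular tetrahedron and each view renders into one quadrant of the single 2D map (the direction choice, quadrant layout, and names are illustrative assumptions; real implementations pack triangular regions and need carefully chosen FOVs and clip planes):

[code]
#include <cmath>

struct Vec3 { float x, y, z; };
struct Viewport { int x, y, w, h; };

static Vec3 Normalize(Vec3 v) {
    const float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// One entry per tetrahedron view: a look direction plus the region of the
// single shadow map that this view's depth is rendered into.
struct TetraView { Vec3 dir; Viewport vp; };

static void BuildTetraViews(int mapSize, TetraView out[4]) {
    const int h = mapSize / 2;
    const Vec3 dirs[4] = {                       // vertices of a regular tetrahedron
        Normalize({  1.0f,  1.0f,  1.0f }),
        Normalize({  1.0f, -1.0f, -1.0f }),
        Normalize({ -1.0f,  1.0f, -1.0f }),
        Normalize({ -1.0f, -1.0f,  1.0f }),
    };
    const Viewport vps[4] = { { 0, 0, h, h }, { h, 0, h, h },
                              { 0, h, h, h }, { h, h, h, h } };
    for (int i = 0; i < 4; ++i) {
        out[i] = { dirs[i], vps[i] };
        // Each view still needs a projection wide enough (well over 90 degrees)
        // that the four frusta together cover the whole sphere, plus the
        // clipping planes mentioned above to trim the overlap between views.
    }
}
[/code]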


Why not use two paraboloid shadow maps (dual paraboloid) for point lights? You can use the filtering methods you would normally be able to use for directional shadows.

 

http://graphicsrunner.blogspot.com/2008/07/dual-paraboloid-shadow-maps.html
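For reference, the core of the paraboloid projection is tiny: a unit direction from the light maps to p = d.xy / (1 + |d.z|), with the sign of d.z choosing the front or back map. Axis and sign conventions vary between implementations; the names below are illustrative only.

[code]
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

// Project a light-to-point vector onto one of the two paraboloid maps.
// Outputs the map index (0 = front hemisphere, 1 = back), the 2D coordinate
// in [-1,1]^2, and the radial distance stored as depth.
static void ParaboloidProject(Vec3 lightToPoint, int &map, float &u, float &v, float &depth) {
    depth = std::sqrt(lightToPoint.x * lightToPoint.x +
                      lightToPoint.y * lightToPoint.y +
                      lightToPoint.z * lightToPoint.z);
    const Vec3 d = { lightToPoint.x / depth, lightToPoint.y / depth, lightToPoint.z / depth };
    map = d.z >= 0.0f ? 0 : 1;
    const float denom = 1.0f + std::fabs(d.z);      // the paraboloid "divide"
    u = d.x / denom;
    v = d.y / denom;
}

int main() {
    int map; float u, v, depth;
    ParaboloidProject({ 1.0f, 2.0f, -2.0f }, map, u, v, depth);
    std::printf("map %d  uv (%.3f, %.3f)  depth %.3f\n", map, u, v, depth);
    return 0;
}
[/code]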

 

I imagine you might even be able to render to it with one pass using a geometry shader too.

 

Thoughts?


PCF filtering over dual paraboloid maps is non-trivial, to say the least. Although the curvature of the results could yield softening with fewer block-shaped artifacts (Poisson-disk sampling is often used to get around those), the density of the pixels changes depending on how far you are from the center, so you would effectively be sampling closely spaced shadow points near the center and widely spaced points near the edges.

 

 

Ultimately I was wrong about the type of filtering we wanted to apply to the shadows: we did at one point want to use anti-aliasing with VSM shadows, but because of the large gap between our near and far planes there were too many artifacts.

We are sticking to PCF, which has hardware support for blending 4 shadow samples at a time, but we need more than that.

 

Since this is a run-time sampling setup we need to be able to manually sample left, right, up, and down from the center texel, so we need to sample the cube texture manually. Otherwise we would have to calculate in spherical coordinates where the neighboring texels are, which would be incredibly slow. PlayStation 4 and Xbox One have intrinsics to calculate where to sample on cube textures; sampling a cube texture natively emits those same intrinsics once for every sample, whereas by using the intrinsics manually we can reuse their results and generate the absolute minimum of code for the multiple samples we need.
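As a concrete illustration of that last point, here is a sketch under the same assumed 3x2 layout as earlier: resolve the face and face-local UV once, then take the PCF taps as plain 2D texel offsets from that single result, clamped so a tap never reads a neighboring face's tile. The helper callbacks and names are hypothetical stand-ins, not our engine's API.

[code]
#include <algorithm>
#include <functional>

struct Vec2 { float x, y; };

// 3x3 PCF over a cube-faces-in-an-atlas shadow map. atlasUvOf() stands in for
// the face/atlas remap computed once per pixel; compareDepth() stands in for a
// depth-compare fetch returning 0 or 1 per tap.
static float PcfAtlas(int face, Vec2 faceUv, float texelsPerFace,
                      const std::function<Vec2(int, Vec2)> &atlasUvOf,
                      const std::function<float(Vec2)> &compareDepth) {
    const float texel = 1.0f / texelsPerFace;        // one texel in face-local UV units
    float sum = 0.0f;
    for (int dy = -1; dy <= 1; ++dy) {
        for (int dx = -1; dx <= 1; ++dx) {
            Vec2 uv = { faceUv.x + dx * texel, faceUv.y + dy * texel };
            // Clamp to the face so the kernel never crosses into another
            // face's tile (there is no hardware edge handling on a 2D atlas).
            uv.x = std::clamp(uv.x, 0.5f * texel, 1.0f - 0.5f * texel);
            uv.y = std::clamp(uv.y, 0.5f * texel, 1.0f - 0.5f * texel);
            sum += compareDepth(atlasUvOf(face, uv));
        }
    }
    return sum / 9.0f;                               // average of the nine taps
}
[/code]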

 

 

Simply drawing the cube texture's faces to a single 2D texture would not by itself reduce this complexity, but it helps us shift towards texture atlases for shadows, which will in the end allow us to apply all the shadows in our scene in one pass.

 

 

L. Spiro


Ultimately I was wrong about the type of filtering we wanted to apply to the shadows: we did at one point want to use anti-aliasing with VSM shadows, but because of the large gap between our near and far planes there were too many artifacts.

You could try moment shadow mapping (MSM), which is the latest evolution of variance shadow mapping, but it still has light bleeding in certain cases.
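For what it's worth, the per-texel storage for MSM is just the first four power moments of the depth; the resolve later reconstructs an upper bound on the shadowing from them. Any quantization transform or bias is omitted here, and the names are illustrative.

[code]
#include <array>

// Four power moments of the (normalized) depth written at shadow-map render
// time: (z, z^2, z^3, z^4). The lookup pass turns these into a bound on the
// fraction of blockers closer than the receiver.
static std::array<float, 4> MsmMoments(float z) {
    const float z2 = z * z;
    return { z, z2, z2 * z, z2 * z2 };
}
[/code]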

 

Simply drawing the cube texture's faces to a single 2D texture would not by itself reduce this complexity, but it helps us shift towards texture atlases for shadows, which will in the end allow us to apply all the shadows in our scene in one pass.

Will you render all the shadow maps into a single Texture2D atlas, use it to draw deferred shadows with a fullscreen quad, and then render the deferred lighting on top of that?



You could try moment shadow mapping (MSM), which is the latest evolution of variance shadow mapping, but it still has light bleeding in certain cases.

My coworker showed me a video on it yesterday; that will be a goal for the future. In the video, PCF was running at 400 FPS and MSM at 500-550 (and with fewer artifacts). It's a separate task I will propose down the road.


Will you render all the shadow maps into a single Texture2D atlas, use it to draw deferred shadows with a fullscreen quad, and then render the deferred lighting on top of that?

Our current implementation has to make two passes to render shadowed lights (my description of it may be wrong; I only know roughly how that part of the engine works): one to apply shadows to the stencil buffer and another to add lighting. Yes, deferred. And yes, we will be able to merge all of the shadows into the same pass as the lighting if all the shadows are in a single texture.

Even in forward rendering this would be a huge benefit. Typical implementations also use one pass per shadow, and in that case you have to submit all the geometry again for each pass.


L. Spiro



My coworker showed me a video on it yesterday; that will be a goal for the future. In the video, PCF was running at 400 FPS and MSM at 500-550 (and with fewer artifacts). It's a separate task I will propose down the road.

According to MJP, moving to MSM is very straightforward; it should not take a long time.


According to MJP, moving to MSM is very straightforward; it should not take a long time.

That’s fine.
I will bring it up in the future, when we’ve stabilized the current set of changes and are ready for the next set of optimizations.


L. Spiro


There's an upcoming article on this topic in GPU Pro 6 btw: http://hd-prg.com/tileBasedShadows.html

(Code included...if you want to dig into that)

 

Looks very interesting. Would be nice to see a comparison between this and http://www.cse.chalmers.se/~olaolss/main_frame.php?contents=publication&id=clustered_with_shadows_siggraph_2014


There's an upcoming article on this topic in GPU Pro 6 btw: http://hd-prg.com/tileBasedShadows.html
(Code included...if you want to dig into that)

 
Looks very interesting. Would be nice to see a comparison between this and http://www.cse.chalmers.se/~olaolss/main_frame.php?contents=publication&id=clustered_with_shadows_siggraph_2014

My coworker and a guy from AMD are going to release a paper soon which is, I believe, a significant improvement on this: many times higher quality and many times faster.
I can't say much about it right now, except that you would be better off waiting for it (assuming I am correct that this is the technique it replaces) than implementing this.


L. Spiro


 

 

My coworker and a guy from AMD are going to release a paper soon which is, I believe, a significant improvement on this: many times higher quality and many times faster.
I can't say much about it right now, except that you would be better off waiting for it (assuming I am correct that this is the technique it replaces) than implementing this.


L. Spiro

 

 

Thanks for the heads up, that's very useful to know. I'm about to go into research mode on many-light shadows for our next project. 
