Using a 2D Texture for Point-Light Shadows


I’m about to shift our pipeline over to using a single 2D texture instead of a cube texture for point-light shadows.

The only area worth noting is manually sampling the texture given 3D coordinates.

Since I know this has been explored plenty of times, there is no reason for me to reinvent the wheel, but I absolutely cannot find any mention of it in any papers, tutorials, etc.

Anyone have any references for this?

L. Spiro

I restore Nintendo 64 video-game OST’s into HD! https://www.youtube.com/channel/UCCtX_wedtZ5BoyQBXEhnVZw/playlists?view=1&sort=lad&flow=grid


Haven't tried it, but it should work: http://www.nvidia.com/object/cube_map_ogl_tutorial.html (near the end, look for the "Mapping Texture Coordinates to Cube Map Faces" section).

EDIT: Btw you can also google "Mapping Texture Coordinates to Cube Map Faces" to find more info
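For what it's worth, the math in that section boils down to something like the untested sketch below: pick the face from the dominant axis of the direction vector, compute the face-local (s,t) the same way the hardware would, then offset into whichever region of the 2D texture that face lives in. The 3x2 atlas layout and the function name here are placeholder assumptions, not anything from the tutorial.

```
// Untested sketch: map a light-to-fragment direction to a UV inside a 2D
// shadow atlas. Assumes the six faces are packed in a 3x2 grid
// (+X -X +Y on the top row, -Y +Z -Z on the bottom row).
vec2 cubeDirToAtlasUV( vec3 dir )
{
    vec3 a = abs( dir );
    float ma;   // Magnitude of the major axis.
    vec2 sc;    // Face-local (s,t) before normalization.
    int face;   // 0..5 = +X, -X, +Y, -Y, +Z, -Z.

    if ( a.x >= a.y && a.x >= a.z ) {
        ma = a.x;
        face = dir.x > 0.0 ? 0 : 1;
        sc = dir.x > 0.0 ? vec2( -dir.z, -dir.y ) : vec2( dir.z, -dir.y );
    }
    else if ( a.y >= a.z ) {
        ma = a.y;
        face = dir.y > 0.0 ? 2 : 3;
        sc = dir.y > 0.0 ? vec2( dir.x, dir.z ) : vec2( dir.x, -dir.z );
    }
    else {
        ma = a.z;
        face = dir.z > 0.0 ? 4 : 5;
        sc = dir.z > 0.0 ? vec2( dir.x, -dir.y ) : vec2( -dir.x, -dir.y );
    }

    // Standard cube-map (s,t) in [0,1] within the chosen face.
    vec2 faceUV = sc / ma * 0.5 + 0.5;
    // Offset into that face's cell of the 3x2 atlas.
    vec2 cell = vec2( float( face % 3 ), float( face / 3 ) );
    return ( cell + faceUV ) / vec2( 3.0, 2.0 );
}
```

Calling it with the same light-to-fragment vector you would have fed to the cube sampler should land you in the matching face's rectangle of the atlas.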

Those will be useful for creating my own solution from scratch if it comes to that. For whatever reason, I wasn’t getting anything like this via my own searches.

L. Spiro

I restore Nintendo 64 video-game OST’s into HD! https://www.youtube.com/channel/UCCtX_wedtZ5BoyQBXEhnVZw/playlists?view=1&sort=lad&flow=grid

Is it one large 2D texture and then you split it into 6 parts and render depth to each one of them? It sounds strictly better than cubemapping; are there any cons to it?

Is it one large 2D texture and then you split it into 6 parts and render depth to each one of them? It sounds strictly better than cubemapping; are there any cons to it?

I'm assuming what he's doing is still rendering the scene 6 times with 6 views, just changing the glViewport to write to a different portion of the texture.

What you (Johan) have suggested can't be done if you are implying that you render the scene once and just have it all map to several parts of a texture. The problem is that if you have 6 squares randomly packed into a single 2D texture, then when you generate the vertex coordinates, any given triangle, if rendered only once, can obviously span multiple "squares". There is no way to get one part of the triangle to go to square 1 and another part to go to square 2. If you could write in the pixel shader the exact pixel location you want a fragment to end up at, then it would work. But the pixel shader runs after a triangle has been rasterized in screen space and the pixel locations that need to be shaded have already been chosen.

My question to the original post is: whether you render 6 times to a cube map or 6 times to a single texture acting as a cube map, why not just use the cubemap? Not to mention that cube maps are hardware supported, and when you sample on the edge between two of the faces, the hardware will filter them properly. In order to do bilinear filtering across your "faces" in an atlas, you would have to waste pixel shader time resampling the texture instead of letting the hardware do it.

Of course, there are also dual paraboloid maps.

NBA2K, Madden, Maneater, Killing Floor, Sims http://www.pawlowskipinball.com/pinballeternal

why not just use the cubemap?

Bilinear filtering should not be used on shadow textures, so being able to automatically filter across neighboring faces is not really a benefit.

Instead, the benefits are that you can apply all the same soft-shadow techniques, such as PCF, to your point lights rather than just to your spot lights and directional lights.
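As a rough illustration of that (untested, and the sampler and function names are made up), a plain PCF kernel can run directly on the atlas once the direction has been flattened to a 2D coordinate, e.g. with the cubeDirToAtlasUV() mapping sketched earlier in the thread:

```
// Untested sketch: 3x3 PCF on the atlas. gShadowAtlas is an assumed name;
// cubeDirToAtlasUV() is the mapping sketched earlier in the thread.
uniform sampler2DShadow gShadowAtlas;

float pcfPointShadow( vec3 lightToFrag, float compareDepth )
{
    vec2 uv = cubeDirToAtlasUV( lightToFrag );
    vec2 texel = 1.0 / vec2( textureSize( gShadowAtlas, 0 ) );
    float sum = 0.0;
    for ( int y = -1; y <= 1; ++y ) {
        for ( int x = -1; x <= 1; ++x ) {
            // Hardware depth comparison per tap.
            sum += texture( gShadowAtlas, vec3( uv + vec2( x, y ) * texel, compareDepth ) );
        }
    }
    return sum / 9.0;
}
```

One caveat: taps near a face border should be clamped to that face's rectangle of the atlas (omitted above), otherwise the kernel reads depths belonging to an adjacent face.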


L. Spiro

I restore Nintendo 64 video-game OST’s into HD! https://www.youtube.com/channel/UCCtX_wedtZ5BoyQBXEhnVZw/playlists?view=1&sort=lad&flow=grid

The benefits are that you can apply all the same soft-shadow techniques, such as PCF, to your point lights rather than just to your spot lights and directional lights. - L. Spiro

You can't use PCF/comparison-sampling on cube-maps? Is that a restriction on certain platforms?

What you (Johan) have suggested can't be done if you are implying that you render the scene once and just have it all map to several parts of a texture. The problem is that if you have 6 squares randomly packed into a single 2D texture, then when you generate the vertex coordinates, any given triangle, if rendered only once, can obviously span multiple "squares". There is no way to get one part of the triangle to go to square 1 and another part to go to square 2.

You can do this on modern hardware (with either a manually packed 2D target or a cube target) by binding 6 viewports and using a geometry shader to duplicate triangles and output them to specific viewports.
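Roughly, the geometry-shader part looks like the untested sketch below (GL 4.1 / ARB_viewport_array style). The uniform name and the assumption that the vertex shader passes world-space positions through gl_Position are mine, not part of any particular implementation; the host side binds six viewports over the render target with glViewportIndexedf() and issues a single draw call.

```
// Untested sketch: emit each input triangle once per cube face and route
// each copy to its own viewport in the 2D atlas (or cube-face layer).
#version 410 core
layout( triangles ) in;
layout( triangle_strip, max_vertices = 18 ) out;

// Assumed uniform: the six face view-projection matrices of the point light.
uniform mat4 gFaceViewProj[6];

void main()
{
    for ( int face = 0; face < 6; ++face ) {
        for ( int v = 0; v < 3; ++v ) {
            gl_ViewportIndex = face;  // Route this copy to viewport 'face'.
            gl_Position = gFaceViewProj[face] * gl_in[v].gl_Position;
            EmitVertex();
        }
        EndPrimitive();
    }
}
```

max_vertices = 18 covers one copy of the triangle per face; whether this beats six separate passes depends on the hardware's geometry-shader throughput.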

You can't use PCF/comparison-sampling on cube-maps? Is that a restriction on certain platforms?

EDIT: Do PlayStation 4 and Xbox One have hardware support for more than PCF1 (not really PCF, but we just call it that) on cube textures?

Texture atlases for shadow-mapping are the norm in major studios these days for several of the other benefits they bring, but right now I need to move my desk to the 19th floor.


L. Spiro

I restore Nintendo 64 video-game OST’s into HD! https://www.youtube.com/channel/UCCtX_wedtZ5BoyQBXEhnVZw/playlists?view=1&sort=lad&flow=grid

You can do this on modern hardware (with either a manually packed 2D target or a cube target) by binding 6 viewports and using a geometry shader to duplicate triangles and output them to specific viewports.

links to this technique?

NBA2K, Madden, Maneater, Killing Floor, Sims http://www.pawlowskipinball.com/pinballeternal

You can do this on modern hardware (with either a manually packed 2D target or a cube target) by binding 6 viewports and using a geometry shader to duplicate triangles and output them to specific viewports.

links to this technique?

http://www.gamedev.net/page/resources/_/technical/graphics-programming-and-theory/cube-map-rendering-techniques-for-direct3d10-r2735

