Hodgman

Screen-space shadowing

15 posts in this topic

How about two-level DSSDO? One for small-scale detail and one for larger, then pick the better value per pixel?


You could just use imperfect shadow maps: http://www.mpi-inf.mpg.de/~ritschel/Papers/ISM.pdf

 

But any screen-space technique is just asking to be unstable; wide search areas for SSAO already end up looking like a weird kind of unsharp mask as it is. Not to mention you'd get light bleeding everywhere, since you've only got screen space to work off of.

 

I mean, it's a neat idea for some sort of "better than SSAO" or directional-SSAO technique. But I'd be skeptical of doing any more in screen space than is already done for effects that aren't inherently screen-space. Even SSR looks weird and kind of wonky in practice, e.g. in Crysis 3 and Killzone Shadow Fall.


The 'problem' with SSAO is, imo, not a problem with the technique at all, but with how it is commonly used in current games (e.g. the Far Cry 3 case).

I think it all started when some modders exaggerated the subtle SSAO in Crysis 1, and at first everyone was raving about how awesome it looked.

It can work very well for short-range AO.


Why not voxels? The idea is not so crazy anymore, and it's certainly used for real-time GI in games (e.g. Crysis 3).

 

It may sound crazy due to the memory requirements. However, for shadow mapping you just need 1 bit per voxel.

A 1024x1024x1024 voxel volume would only need 128 MB, which suddenly starts feeling appealing.
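To make the arithmetic concrete: 1024³ voxels at 1 bit each is 2³⁰ bits = 128 MB. Here's a minimal sketch of such a bit-packed grid (my own illustration, not from the post; names and layout are assumptions, and a real implementation would live in GPU memory):

```cpp
// Bit-per-voxel occupancy grid: 1024^3 voxels * 1 bit = 2^30 / 8 bytes = 128 MB.
#include <cstdint>
#include <vector>

struct BinaryVoxelGrid
{
    static constexpr uint32_t kDim = 1024; // voxels per side
    std::vector<uint64_t> bits =
        std::vector<uint64_t>(uint64_t(kDim) * kDim * kDim / 64); // 128 MB total

    static uint64_t index(uint32_t x, uint32_t y, uint32_t z)
    {
        return (uint64_t(z) * kDim + y) * kDim + x;
    }
    void set(uint32_t x, uint32_t y, uint32_t z)
    {
        uint64_t i = index(x, y, z);
        bits[i / 64] |= uint64_t(1) << (i % 64);
    }
    bool occupied(uint32_t x, uint32_t y, uint32_t z) const
    {
        uint64_t i = index(x, y, z);
        return (bits[i / 64] >> (i % 64)) & 1;
    }
};
```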

 

Perhaps the biggest blocker right now is that there is no way to fill this volume with occlusion data in real time.

The most efficient way I see would be regular rasterization, but where the shader (or the rasterizer) decides on the fly which layer of the 3D texture the pixel should be rendered to, based on its interpolated depth (quantized). However, I'm not aware of any API or GPU that has this capability. It would be highly parallel.

 

Geometry shaders allow selecting which render target a triangle should be rendered to, but there is no way to select which render target a pixel should be rendered to (this could be fixed function; it doesn't necessarily need shaders).

 

I like this line of thinking.

 

You probably don't need 1024 vertical voxels, so it's possible to spend more on horizontal ones, or to store, say, an 8-bit distance instead of a 1-bit solid/empty flag.

 

You could keep two separate voxel structures: a static one, and a dynamic one based on slices. You would sample both of them when required (when a surface is near both a light and a dynamic object).

 

You could also do some tricks so that highly dense but noisy things like leaves are faked with an appropriate noise function, say, rather than traced directly.


I've thought about the voxel approaches. Even with an octree they are extremely complicated, especially if you want self-shadowing. There are lots of ideas to speed them up, like varying quality based on distance to the camera, but even then you end up having to voxelize geometry for occluders or find other methods to mark which voxels are shadowed and which aren't.

 

That said, I think a really fast theoretical approach (as in, I made this up a while ago) would be to use RTW (rectilinear texture warping) in a single pass with a low-resolution texture (like 64x64). You find all the objects within the radius of your light source, then generate a frustum at your point light. Point the frustum at (1, 0, 0) and cut the world into 8 quadrants. For all the objects in front of the near plane of the frustum, do nothing. Assign each object behind the near plane to one of the 4 quadrants there; if an object overlaps two (or four) quadrants, duplicate it into all of the quadrants it touches. In 2D:

[Attached image: pointshadowmap.png]

 

Now render all the geometry in each quadrant, passing the center of the point light into a shader and transforming the vertices into world space for each quadrant. Then normalize the vertices' angles from the 0-180 degree range into 0-45 degrees so they're all inside the frustum. If your triangles are small enough there should be essentially no artifacts. Here's a 2D example of what I mean by artifacts: the red line represents our geometry, and when we normalize the angle it gets squished into the frustum. Looking at all the points along it, the line is distorted into the blue line; looking only at the vertices, we see the magenta line. You then render a depth map using RTW. If you're good with math, you can probably write a fragment shader that correctly interpolates the vertices and calculates the correct depth (removing the artifact). What you'd end up with is an RTW'd, low-resolution spherical map. When you want to know whether a point is in shadow, you perform a look-up on that texture for each light source.
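As a rough 2D illustration of the angle-normalization step (my own sketch; the function names and the choice of +X as the quadrant axis are assumptions, not part of the original idea):

```cpp
// Squash a vertex's angle around the light from the +/-180 degree range down to
// +/-45 degrees, so everything behind the near plane fits inside a 90-degree
// frustum. Distance to the light is preserved.
#include <cmath>

struct Vec2 { float x, y; };

Vec2 compressAngleAroundLight(Vec2 vertex, Vec2 lightPos)
{
    float dx = vertex.x - lightPos.x;
    float dy = vertex.y - lightPos.y;
    float radius = std::sqrt(dx * dx + dy * dy);
    float angle  = std::atan2(dy, dx);  // angle from the quadrant axis (+X assumed)
    float squashed = angle * 0.25f;     // 180 degrees -> 45 degrees
    return { lightPos.x + radius * std::cos(squashed),
             lightPos.y + radius * std::sin(squashed) };
}
```

Applying this remap only at the vertices is exactly what produces the straight-line distortion described above; doing the equivalent correction per fragment would remove it.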

 

You'd only need a texture for lights that collide with geometry, and you can choose a texture size based on the distance to the camera (RTW will also warp to give higher resolution closer to the camera). I hope that makes sense; I worked it out mostly on paper a few months ago and haven't been able to run it by anyone to see if it's viable.


I've been meaning to come back to this, but I've been working full time on stuff that pays the bills.

 

Here are some GIFs that I actually produced months ago. Most of the lighting in the scene is from a cube-map, with a few (green) dynamic lights in there too. There are no other shadows, so the "SSSVRT" adds all of the shadowing seen in the odd frames of the GIFs:

http://imgur.com/a/k3L78

 

Seems to work really well on shiny env-map-lit (IBL) objects to 'ground' them.

 

 

Re voxels: that's a challenge for another day.

I imagine you could use both. Screen-space stuff like this is great for capturing the really fine details (which would require an insane amount of memory in a voxel system), so you could combine it with a voxel method for coarser-detail / longer-distance rays, and/or image probes for really long-range rays.

Edited by Hodgman

That's awesome!

Is all the shadowing done in screen-space, or are there traditional techniques used as well?

 

A typical cascaded shadow map might be contributing from the moonlight, but I can't be sure because those point lights are so much brighter than everything else. There's also a temporally smoothed SAO variation with multi-bounce lighting that contributes to the fully shadowed areas quite well. https://www.dropbox.com/s/x7tvd8bags5x3pj/GI.png


I think it's best to combine screen-space shadow tracing with tiled shading. For directional lights we surely need shadow maps, but point and spot lights usually have small ranges, and we can assume most of the shadow casters that are going to affect the final result are in the G-buffer.

 

This way we can create ray-trace jobs for each light and each pixel. Consider the pixel at (0, 0): we know from tiled shading that, say, 16 lights may light this pixel, so we create 16 trace jobs (each with a start point and direction). We can then dispatch a thread per trace job in a compute shader, write the results to a buffer, and use that data when shading.
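A minimal host-side sketch of how those trace jobs could be laid out (my own assumptions: a per-tile light list built by tiled shading, and a compute pass elsewhere that consumes the jobs and writes occlusion results):

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

struct Float3 { float x, y, z; };

struct TraceJob
{
    Float3   origin;     // reconstructed position of the pixel
    Float3   direction;  // normalized direction from the pixel towards the light
    uint32_t pixelIndex; // where the compute shader writes the occlusion result
    uint32_t lightIndex; // which light this job shadows
};

// Hypothetical helper: for one pixel, append one trace job per light in its tile.
void appendJobsForPixel(uint32_t pixelIndex, Float3 pixelPos,
                        const std::vector<uint32_t>& lightsInTile,
                        const std::vector<Float3>& lightPositions,
                        std::vector<TraceJob>& jobs)
{
    for (uint32_t lightIndex : lightsInTile)
    {
        Float3 L = lightPositions[lightIndex];
        Float3 d = { L.x - pixelPos.x, L.y - pixelPos.y, L.z - pixelPos.z };
        float len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
        jobs.push_back({ pixelPos,
                         { d.x / len, d.y / len, d.z / len },
                         pixelIndex, lightIndex });
    }
}
```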

 

Take a look at the AMD Leo demo; I think they used a somewhat similar approach.


I also believe a combination is best. In games there are usually many static light sources, which can be handled efficiently by CSMs that are updated only every n'th frame, so they won't consume much time. Animated objects like players and enemies are usually small and completely visible in the scene, so screen-space shadows could handle them efficiently rather than requiring another render pass.
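A small sketch of the "update only every n'th frame" idea, amortizing static shadow-map updates across frames (my own illustration; StaticLight and renderStaticShadowMap are hypothetical):

```cpp
#include <cstdint>
#include <vector>

struct StaticLight { /* shadow map handle, light transform, etc. */ };

void renderStaticShadowMap(StaticLight&) { /* platform-specific rendering */ }

void updateStaticShadows(std::vector<StaticLight>& lights,
                         uint64_t frameIndex, uint64_t updateEveryNFrames)
{
    // Each light's map is refreshed once every N frames, offset by its index so
    // that roughly lights.size() / N maps are re-rendered per frame.
    for (size_t i = 0; i < lights.size(); ++i)
    {
        if ((frameIndex + i) % updateEveryNFrames == 0)
            renderStaticShadowMap(lights[i]);
    }
}
```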


You could use a variant of the KinectFusion algorithm to build a volumetric representation of the scene.  The basic idea is to get a depth image (or a depth buffer in the rendering case) and then you find the camera location relative to your volume representation.  Then for each pixel of the depth image you trace through the volume, updating each voxel as you go with the distance information you have from the depth image.  The volume representation is the signed distance from a surface at each voxel.  For the next frame, the volume representation is used to find out where the Kinect moved to and the process is repeated.  The distances are updated over a time constant to eliminate the noise from the sensor and to allow for moving objects.

 

This is a little bit of a heavy algorithm to do in addition to all of the other stuff you do to render a scene, but there are key parts of the algorithm that wouldn't be needed anymore.  For example, you don't need to solve for the camera location, but instead you already have it.  That signed distance voxel representation could easily be modified and/or used to calculate occlusion.  That might be worth investigating further to see if it could be used in realtime...
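For reference, here's a rough sketch of a KinectFusion-style TSDF integration step under my own assumptions (dense float grid, pinhole camera at the origin looking down +Z, simple running average); the common formulation sweeps the voxels and projects them into the depth image, which amounts to the same update the post describes per depth pixel:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct TsdfVolume
{
    int   dim;                 // voxels per side
    float voxelSize;           // world units per voxel
    float truncation;          // truncation band, e.g. 4 * voxelSize
    std::vector<float> tsdf;   // signed distance in [-1, 1], sized dim^3
    std::vector<float> weight; // accumulated confidence per voxel, sized dim^3
};

// depth[v * width + u] holds metric depth along +Z (0 = invalid sample).
void integrateDepth(TsdfVolume& vol, const std::vector<float>& depth,
                    int width, int height, float fx, float fy, float cx, float cy)
{
    for (int z = 0; z < vol.dim; ++z)
    for (int y = 0; y < vol.dim; ++y)
    for (int x = 0; x < vol.dim; ++x)
    {
        // Voxel center in camera space; the volume is centered on the view axis.
        float px = (x + 0.5f - 0.5f * vol.dim) * vol.voxelSize;
        float py = (y + 0.5f - 0.5f * vol.dim) * vol.voxelSize;
        float pz = (z + 0.5f) * vol.voxelSize;

        // Project into the depth image.
        int u = static_cast<int>(fx * px / pz + cx);
        int v = static_cast<int>(fy * py / pz + cy);
        if (u < 0 || u >= width || v < 0 || v >= height) continue;

        float measured = depth[v * width + u];
        if (measured <= 0.0f) continue;

        // Signed distance along the viewing ray, truncated and normalized.
        float sdf = measured - pz;
        if (sdf < -vol.truncation) continue;             // far behind the surface
        float d = std::min(1.0f, sdf / vol.truncation);  // clamp the free-space side

        // Running average over time smooths out sensor noise.
        size_t i = (static_cast<size_t>(z) * vol.dim + y) * vol.dim + x;
        float w = vol.weight[i];
        vol.tsdf[i]   = (vol.tsdf[i] * w + d) / (w + 1.0f);
        vol.weight[i] = w + 1.0f;
    }
}
```

A surface lies wherever the stored distance crosses zero, so occlusion queries reduce to marching the grid and testing the sign.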

