rocklobster

Deferred Shading lighting stage

Hey guys,

 

I've been trying to implement deferred shading lately, and I'm a bit confused about the lighting stage.

 

This is the algorithm for what I do at the moment:

FirstPass

- Bind render target
- Bind my G-buffer shader program
- Render the scene

SecondPass

- Bind my normal shader for rendering the quad
  (fragment shader takes Position, Normal and Diffuse samplers as uniforms)
- Render full-screen quad
- Swap buffers

So am I right in assuming that, to light the scene, I should change the second pass to this:

SecondPass

- Bind my normal shader for rendering the quad
  (fragment shader takes Position, Normal and Diffuse samplers as uniforms)
- FOR EACH LIGHT IN THE SCENE
    - Set this light as a uniform for the shader
    - Render full-screen quad
- END FOR
- Swap buffers

And in my fragment shader I'll have code that shades each pixel based on the type of light passed in (directional, point or spot).
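To make that concrete, here's a rough sketch of what I imagine the point-light case looking like in GLSL (the sampler and uniform names are placeholders I've made up, and I've left out specular for brevity):

#version 330 core

// G-buffer inputs (names are placeholders for whatever you bind).
uniform sampler2D u_Position; // view-space position
uniform sampler2D u_Normal;   // view-space normal
uniform sampler2D u_Diffuse;  // albedo

uniform vec3 u_LightPos;      // point light position, view space
uniform vec3 u_LightColor;

in vec2 v_TexCoord;
out vec4 FragColor;

void main()
{
    vec3 P      = texture(u_Position, v_TexCoord).xyz;
    vec3 N      = normalize(texture(u_Normal, v_TexCoord).xyz);
    vec3 albedo = texture(u_Diffuse, v_TexCoord).rgb;

    vec3 toLight = u_LightPos - P;
    float dist   = length(toLight);
    vec3 L       = toLight / dist;

    float NdotL = max(dot(N, L), 0.0);
    float atten = 1.0 / (1.0 + dist * dist); // simple quadratic falloff

    // One light's contribution only; the loop over lights would
    // accumulate these with additive blending.
    FragColor = vec4(albedo * u_LightColor * NdotL * atten, 1.0);
}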

 

Cheers for any help


That seems more or less correct to me. At some point you'll want to implement some sort of light culling so that you're not shading every pixel with every light.

But yeah, typically you'll have one shader for every material type, and one shader for every light type (directional, point, ambient, etc.). In your first pass, you bind the G-buffer as your render target and render the scene using the material shaders, which output normals, diffuse, etc. to their respective textures in the G-buffer. In the second pass, you go one light at a time and render a full-screen quad using the correct light shader for that light. The light shader samples each of the G-buffer textures and applies the appropriate lighting equations to get the final shading value for that light. Make sure to additively blend the light fragments at this stage.
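To make the one-shader-per-light-type idea concrete, a directional light pass might look something like this (just a sketch; all the names here are made up rather than taken from your code):

#version 330 core

// G-buffer inputs (placeholder names).
uniform sampler2D u_Normal;   // view-space normals
uniform sampler2D u_Diffuse;  // albedo

uniform vec3 u_LightDir;      // unit vector pointing towards the light, view space
uniform vec3 u_LightColor;

in vec2 v_TexCoord;
out vec4 FragColor;

void main()
{
    vec3 N      = normalize(texture(u_Normal, v_TexCoord).xyz);
    vec3 albedo = texture(u_Diffuse, v_TexCoord).rgb;

    float NdotL = max(dot(N, u_LightDir), 0.0);

    // The application should set additive blending, e.g.
    // glBlendFunc(GL_ONE, GL_ONE), so each light accumulates.
    FragColor = vec4(albedo * u_LightColor * NdotL, 1.0);
}

The point and spot light shaders follow the same pattern, with position, attenuation, and (for spots) a cone test added in.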

 

What a lot of people do, in order to cut down on the number of pixels being shaded with a given light, is to have some geometric representation for the light (a sphere for a point light, for example). Before rendering the full-screen quad with the point light shader, they'll stencil the sphere in so that they can be sure to only shade pixels within that sphere. Some even go as far as to treat the sphere almost like a shadow volume, stenciling in only the portions where scene geometry intersects the sphere. This gives you near pixel-perfect accuracy, but it might be overkill. I've been reading lately that some people just approximate the range of the point light using a billboarded quad, because the overhead in rendering a sphere into the stencil buffer (let alone doing the shadow volume thing) is greater than the time spent unnecessarily shading pixels inside the quad that the light can't reach.

 

Of course, a real point light can reach anywhere. If you were to use a sphere to approximate the extent of the point light, you'd have to use a somewhat phony falloff function so that the light attenuates to zero at the edge of the sphere.
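For example (this exact formula is just one possibility), you can use a falloff that is forced to reach zero at the bounding radius:

// Attenuation forced to zero at the light's bounding radius.
// dist   = distance from the shaded point to the light
// radius = radius of the bounding sphere
float falloff(float dist, float radius)
{
    float x = clamp(1.0 - dist / radius, 0.0, 1.0);
    return x * x; // squaring gives a softer tail than a linear ramp
}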


Thanks. Right now I'm outputting Position, Normal, Diffuse, Tangent and Bitangent. Should I also output specular and ambient properties, or would that use too much memory?


What I would recommend doing is outputting your positions and normals in view space. If you have any tangent-space normals, transform them in your material shader before writing them to your G-buffer; that way, you don't need to store tangents or bitangents. I do think it would be a good idea to output specular and ambient properties.

If you find that memory bandwidth becomes a problem, there are some optimizations you could try. For instance, you could reconstruct the position from the depth buffer, getting rid of the position texture in your G-buffer. You could also store just two components of each normal (say, x and y) and use math to reconstruct the third in your light shaders. Even though these reconstructions take time, they're often worth it because of the memory bandwidth savings. Also, if you haven't already, you can try a 16-bit half-float format instead of a full 32-bit floating-point format.
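As a sketch of the two-component normal trick (with a caveat: this naive version assumes view-space z is non-negative, which can break for surfaces at glancing angles, so fancier encodings exist):

// Store only x and y of a unit view-space normal in the G-buffer...
vec2 encodeNormal(vec3 n)
{
    return n.xy;
}

// ...and reconstruct z in the light shader.
vec3 decodeNormal(vec2 xy)
{
    float z = sqrt(max(1.0 - dot(xy, xy), 0.0));
    return vec3(xy, z);
}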


Just one last thing: what do you mean when you say "one shader for every material type"?

 

My definition of a material is just something like:

class Material
{
    Vec3 diffuse;       // per-material colour terms
    Vec3 ambient;
    Vec3 specular;

    Tex2 diffuse_Map;   // optional texture maps
    Tex2 normal_Map;
    Tex2 specular_Map;
};

so I don't know what you mean.


That material class probably covers all you need at this stage in terms of material options. However, you might find it useful to have separate shaders to cover the cases where:

- you have untextured geometry,
- you have geometry with a diffuse texture, but no specular or normal map,
- you have a diffuse texture and a normal map, but no specular,
- and so on.

If you handle all of these cases with the same shader, you end up sampling textures that aren't there (or else introducing conditional branching into your shader code). Of course, if everything in your game has all of these textures, then it isn't a problem. Unfortunately, I'm not that lucky: I have to render some legacy models, some of which have untextured geometry.
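One common way to manage those variants without duplicating code (a sketch; the #define names are invented for illustration) is to compile several permutations of a single shader source, injecting the appropriate #defines after the #version line on the application side:

#version 330 core

// The application injects lines such as "#define HAS_NORMAL_MAP" and/or
// "#define HAS_SPECULAR_MAP" after the #version directive before compiling,
// producing one specialized program per variant, with no runtime branching.

uniform sampler2D u_DiffuseMap;
#ifdef HAS_NORMAL_MAP
uniform sampler2D u_NormalMap;
#endif
#ifdef HAS_SPECULAR_MAP
uniform sampler2D u_SpecularMap;
#endif

in vec2 v_TexCoord;
in vec3 v_Normal;        // view-space vertex normal
#ifdef HAS_NORMAL_MAP
in mat3 v_TBN;           // tangent-to-view-space basis
#endif

layout(location = 0) out vec3 gNormal;
layout(location = 1) out vec4 gDiffuseSpec; // rgb = albedo, a = specular intensity

void main()
{
#ifdef HAS_NORMAL_MAP
    // Unpack the tangent-space normal and move it into view space.
    vec3 n  = texture(u_NormalMap, v_TexCoord).xyz * 2.0 - 1.0;
    gNormal = normalize(v_TBN * n);
#else
    gNormal = normalize(v_Normal);
#endif

    vec3 albedo = texture(u_DiffuseMap, v_TexCoord).rgb;

#ifdef HAS_SPECULAR_MAP
    float spec = texture(u_SpecularMap, v_TexCoord).r;
#else
    float spec = 0.0;
#endif

    gDiffuseSpec = vec4(albedo, spec);
}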

 

As you start working with more advanced materials, you may find that your shader inputs grow in number and become more specialized, and so the number of material shaders you use will grow as well.


Ah OK, I get it, thanks. I was wondering about this too, because not all of my geometry has tangents and bitangents, which is a bit annoying. Thanks for all your help.
