OpenGL Deferred Shading Issues

Posted by BionicBytes

Hi, I'm having a spot of trouble with the lighting stage of my simple deferred renderer in OpenGL. I've built a simple (mostly working) deferred renderer based on reading various articles etc. on the web. For reference, my chosen render targets are all RGBA_F16 formats, which seems to give good performance whilst offering good precision. The geometry pass works, and I can view the output of each stage as debug textures.

The specific problem I am having is with the lighting stage. According to the (various) resources available on the web, lights can be split into two kinds: those that operate globally, such as directional lights like the sun, and those that are spot or point lights. For global lighting I use a 2D orthographic view and perform a full 2D pass over each screen pixel, modifying the render target as appropriate for the global/directional light source. This seems to do the right thing.

The second type is the point and spot light, and right now I've focused on the spotlight. What is the correct approach here? The articles/web references are not all that clear and seem to omit some key details. For example, do I:

a) Perform a 2D pass over a sub-range of 2D screen space? The spotlight's bounding volume can be projected to the screen. I have tried this, and although I have my 'clipped screen-space rect' I do not think this is entirely correct - what happens to shadows?

b) An NVidia paper ("6800 Leagues...") talks about first rendering a cone using the stencil buffer - in effect creating a volume mask area. This of course I can do. But do I then only need a 2D pass, or do I still require a 3D pass using the material setup associated with the spotlight? Within this spotlight pass I could also attempt to incorporate shadows using a shadow map.

Does anyone have any real practical advice they can share?
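For reference, my global pass fragment shader is roughly along these lines (simplified here, with placeholder sampler/uniform names rather than my actual code):

// Full-screen directional light pass over the g-buffer (sketch).
uniform sampler2D gbufNormal;   // eye-space normal RT
uniform sampler2D gbufAlbedo;   // diffuse albedo RT
uniform vec3 lightDirEye;       // directional light vector, eye space
uniform vec3 lightColour;

void main ()
{
    vec2 uv      = gl_TexCoord[0].xy;
    vec3 n       = normalize (texture2D (gbufNormal, uv).xyz);
    vec3 albedo  = texture2D (gbufAlbedo, uv).rgb;
    float nDotL  = max (dot (n, -lightDirEye), 0.0);
    gl_FragColor = vec4 (albedo * lightColour * nDotL, 1.0);
}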

Quote:
Original post by BionicBytes

a) Perform a 2D pass over a sub-range of 2D screen space? The spotlight's bounding volume can be projected to the screen. I have tried this, and although I have my 'clipped screen-space rect' I do not think this is entirely correct - what happens to shadows?

b) An NVidia paper ("6800 Leagues...") talks about first rendering a cone using the stencil buffer - in effect creating a volume mask area. This of course I can do. But do I then only need a 2D pass, or do I still require a 3D pass using the material setup associated with the spotlight? Within this spotlight pass I could also attempt to incorporate shadows using a shadow map.



Both are valid approaches, and have different advantages. The first method, simply rendering a quad big enough to cover the region of the screen affected by the light source, has the advantages of

- Least vertex work
- Easy batching
- Coherent rasterization and g-buffer memory access

With the major disadvantage of

- Pixels are wasted in areas of the quad that fall outside the light's affected region, and also where there are no lit pixels (i.e. the light is behind a wall, or floating in the air)


The second method, using a multi-pass approach with light volumes, has the exact opposite pros and cons. You shade the least pixels since you do a check to see if the light's volume is visible and since you also use a 3D volume for a "tighter fit" with the affected region. However there's more vertex work, and you can't easily batch since this method requires a state change between passes.
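If it helps, the stencil marking for the volume approach goes something like this (a rough sketch in GL calls; DrawSpotConeGeometry() stands in for however you draw the cone mesh, and depth testing is assumed to be on against your geometry-pass depth buffer):

// Pass 1: mark pixels inside the light volume in the stencil buffer.
// Colour and depth writes are off; depth is read-only.
Gl.glEnable (Gl.GL_STENCIL_TEST);
Gl.glEnable (Gl.GL_CULL_FACE);
Gl.glClearStencil (0);
Gl.glClear (Gl.GL_STENCIL_BUFFER_BIT);
Gl.glColorMask (Gl.GL_FALSE, Gl.GL_FALSE, Gl.GL_FALSE, Gl.GL_FALSE);
Gl.glDepthMask (Gl.GL_FALSE);
Gl.glStencilFunc (Gl.GL_ALWAYS, 0, 0xFF);

Gl.glCullFace (Gl.GL_FRONT);                          // draw back faces only
Gl.glStencilOp (Gl.GL_KEEP, Gl.GL_INCR, Gl.GL_KEEP);  // increment on depth fail
DrawSpotConeGeometry ();

Gl.glCullFace (Gl.GL_BACK);                           // draw front faces only
Gl.glStencilOp (Gl.GL_KEEP, Gl.GL_DECR, Gl.GL_KEEP);  // decrement on depth fail
DrawSpotConeGeometry ();

// Pass 2: run the spotlight shader only where the stencil is non-zero,
// i.e. where scene geometry sits inside the cone.
Gl.glColorMask (Gl.GL_TRUE, Gl.GL_TRUE, Gl.GL_TRUE, Gl.GL_TRUE);
Gl.glStencilFunc (Gl.GL_NOTEQUAL, 0, 0xFF);
Gl.glStencilOp (Gl.GL_KEEP, Gl.GL_KEEP, Gl.GL_KEEP);
// ... bind the spotlight shader + g-buffer textures and draw the cone again ...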

So the key here is that neither of these techniques, and probably any other technique you could come up with, is going to be perfect for all situations. What gives you the best performance is going to depend on the shape of the light, the screen-space size of the light, how many lights of each type you're rendering, what your scene geometry is like (are there lots of occluders that could bury the light? could the light easily be floating in the air?), and what sort of hardware your renderer is working with. You may want to use some sort of heuristic to determine which method to use (if lightVolumeRadius > x, do this; see the sketch below); that's what I used to do in my old deferred renderer. I also tried playing around with different variations of techniques: rendering a volume with stencil and then rendering a full-screen quad, marking the stencil in the initial geometry pass in order to mark off areas where lights would be "floating in the sky", batching large numbers of simple cube volumes with z-test for particles... there are lots of possibilities. Ultimately you'll want to minimize your per-pixel work if you want maximum performance, so try to pick your methods with that in mind.
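For example, the pick-a-method heuristic can be as simple as this (the threshold and helper names here are made up, just to illustrate):

// Hypothetical example: choose the cheaper method per light.
if (light.ScreenSpaceRadius (camera) > 0.4f)   // big on screen: quad wins
    RenderScreenQuadLight (light);
else                                           // small on screen: volume is tighter
    RenderStencilVolumeLight (light);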

Oh, and about that first method and shadows: any of these methods should work fine with shadow maps; it's pretty much a completely orthogonal problem. All you need to do is make sure that the area you're rendering covers the full volume of the light source.

[Edited by - MJP on April 22, 2008 9:42:01 AM]

Okay - thanks for the info. Good to know both are valid techniques.

So - currently I'm trying to do spotlights using a screen-aligned, clipped 2D rect rendered with a 2D orthographic projection.
My G-buffer's vertex position is stored in eye space in a 3-component F16 RT.

But isn't there some problem here? The vertex data being sent from the video card to the pixel shader is now of course just a 2D quad, and the rasterizer is interpolating across the entire (or clipped) region of the screen.
The real vertex data is inside the G-buffer, so we must first fetch it from the G-buffer's vertex-position RT using the incoming fragment's interpolated 2D position.

Question:
As you render the 2D quad, do you also generate a texture coordinate (containing the unprojected 2D screen-space coordinates) and pass this to the shader?

Second question: what does your app's VertexToLightSpaceProjectionMatrix look like? Here is mine:
Gl.glMatrixMode (Gl.GL_TEXTURE);
Gl.glLoadIdentity ();
Gl.glTranslatef (0.5f, 0.5f, 0.5f);                  // bias: clip space [-1,1] ...
Gl.glScalef (0.5f, 0.5f, 0.5f);                      // ... to texture space [0,1]
Gl.glMultMatrixf (lightCam.fp_glProjectionMatrix);   // light projection
Gl.glMultMatrixf (lightCam.fp_glModelViewMatrix);    // light view (world -> light eye space)
Gl.glMultMatrixf (aCam.fp_glInvModelViewMatrix);     // camera eye space -> world space

Again, I'm assuming you pass a matrix like this to the pixel shader, so that you can then compute a LightSpaceVertexPos using it.

Quote:
Original post by BionicBytes

But isn't there some problem here? The vertex data being sent from the video card to the pixel shader is now of course just a 2D quad, and the rasterizer is interpolating across the entire (or clipped) region of the screen.
The real vertex data is inside the G-buffer, so we must first fetch it from the G-buffer's vertex-position RT using the incoming fragment's interpolated 2D position.

Question:
As you render the 2D quad, do you also generate a texture coordinate (containing the unprojected 2D screen-space coordinates) and pass this to the shader?



Sure, you can send texture coordinates if you'd like. You can also calculate them from your post-projection 2D coordinates; I do that when rendering light volumes. All you have to do is pass the projected position of your vertex to your pixel shader, then in the pixel shader divide by w and convert x and y from the range [-1,1] to [0,1] (like you do for your shadow-map projection). I've also seen implementations that always render full-screen quads but use scissor rectangles to limit rendering to the affected region; that would let you set your texture coordinates once and then forget about them.
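In GLSL terms it only takes a few lines; something like this, where gbufPosition is a placeholder name for your eye-space position RT:

// Vertex shader: forward the post-projection position.
varying vec4 projPos;
void main ()
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    projPos     = gl_Position;
}

// Fragment shader: derive g-buffer texcoords from the projected position.
uniform sampler2D gbufPosition;   // eye-space position RT (placeholder name)
varying vec4 projPos;
void main ()
{
    vec2 ndc    = projPos.xy / projPos.w;   // perspective divide -> [-1,1]
    vec2 uv     = ndc * 0.5 + 0.5;          // remap to [0,1]
    vec3 eyePos = texture2D (gbufPosition, uv).xyz;
    // ... do your spotlight shading with eyePos from here on ...
    gl_FragColor = vec4 (eyePos, 1.0);      // placeholder output
}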


Quote:
Original post by BionicBytes

Second question: what does your app's VertexToLightSpaceProjectionMatrix look like? Here is mine:
Gl.glMatrixMode (Gl.GL_TEXTURE);
Gl.glLoadIdentity ();
Gl.glTranslatef (0.5f, 0.5f, 0.5f);                  // bias: clip space [-1,1] ...
Gl.glScalef (0.5f, 0.5f, 0.5f);                      // ... to texture space [0,1]
Gl.glMultMatrixf (lightCam.fp_glProjectionMatrix);   // light projection
Gl.glMultMatrixf (lightCam.fp_glModelViewMatrix);    // light view (world -> light eye space)
Gl.glMultMatrixf (aCam.fp_glInvModelViewMatrix);     // camera eye space -> world space

Again, I'm assuming you pass a matrix like this to the pixel shader, so that you can then compute a LightSpaceVertexPos using it.


Yeah, I do something similar for lights that have shadow maps, except using D3DX matrix functions. I also had the code for converting to texture space in my shader, since in HLSL any code that evaluates to the same result for every vertex or pixel gets pulled out and run as a pre-shader on the CPU.
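On the shader side that works out to something like this (uniform names are placeholders, and it assumes your shadow map was created with depth compare enabled so sampler2DShadow works):

// eyeToLightTex = bias * lightProj * lightView * inverse (cameraView),
// i.e. the matrix you built above, passed in as a uniform.
uniform mat4 eyeToLightTex;
uniform sampler2DShadow shadowMap;
uniform sampler2D gbufPosition;
varying vec4 projPos;
void main ()
{
    vec2 uv       = (projPos.xy / projPos.w) * 0.5 + 0.5;
    vec3 eyePos   = texture2D (gbufPosition, uv).xyz;
    vec4 lightPos = eyeToLightTex * vec4 (eyePos, 1.0);    // light texture space
    float lit     = shadow2DProj (shadowMap, lightPos).r;  // 0 = shadowed, 1 = lit
    gl_FragColor  = vec4 (lit);   // modulate your spotlight term by this
}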
