
Deferred shading and High Dynamic Range


4 replies to this topic

#1 larspensjo   Members   -  Reputation: 1526

Posted 26 August 2012 - 04:05 AM

I am looking for advice on how best to set this up. I already have an implementation of deferred lighting, but I want to separate the various deferred effects into several stages (shader programs). As I am using HDR, the idea is to use a color image of GL_RGBA16F rather than GL_RGBA8. I see two alternative ways:
  • Let each stage render to the same color image, using blending functions to add effects.
  • Let each stage read the current color image as a texture and produce a new color image as output.
The disadvantage of the first solution is that the manipulations are limited to what can be selected with glBlendFunc() and glBlendEquation(). If I understand the documentation correctly, "The results of these equations are clamped to the range [0, 1]", which means the whole idea of HDR is lost.

The disadvantage of the second solution is that I need at least two images: one for the source (texture image) and one for the destination (render target). Maybe it is possible to use the same image, though reading from a texture that is also the current render target is generally not recommended.
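The ping-pong idea in the second alternative can be modeled on the CPU (a minimal sketch: plain floats stand in for GL_RGBA16F texels, and the stage functions are hypothetical, just to show the data flow and that values above 1.0 survive between stages):

```c
#include <assert.h>

#define W 4 /* tiny single-channel "image" for illustration */

typedef struct { float px[W]; } image;

/* hypothetical stage: add a bright light contribution */
static void stage_add_light(const image *src, image *dst) {
    for (int i = 0; i < W; ++i) dst->px[i] = src->px[i] + 1.5f;
}

/* hypothetical stage: simple exposure scale */
static void stage_scale(const image *src, image *dst) {
    for (int i = 0; i < W; ++i) dst->px[i] = src->px[i] * 0.5f;
}

/* Each stage reads images[src] and writes the other image, then the
 * roles are swapped, just like ping-ponging two FBO color attachments. */
float pingpong_result(void) {
    image images[2] = {{{0.8f, 0.8f, 0.8f, 0.8f}}, {{0}}};
    int src = 0;
    stage_add_light(&images[src], &images[1 - src]); src = 1 - src;
    stage_scale(&images[src], &images[1 - src]);     src = 1 - src;
    return images[src].px[0]; /* (0.8 + 1.5) * 0.5 = 1.15 */
}
```

Note that after the first stage the intermediate value is 2.3, well above 1.0; with an 8-bit target that would have been lost.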
Current project: Ephenation.
Sharing OpenGL experiences: http://ephenationopengl.blogspot.com/


#2 Yours3!f   Members   -  Reputation: 1252


Posted 26 August 2012 - 04:22 AM

Since you're doing deferred LIGHTING (and I assume you know the difference between deferred lighting and deferred shading), there is a nice solution from CryEngine 3. The CE3 slides explain how they did it.
Essentially it goes like this:
1. render the opaque geometry to the G-buffer (24-bit depth + 8-bit stencil; RGBA8 with RGB for normals and A for the specular exponent)
2. do the shading pass (i.e. read in the 2 textures, do Blinn-Phong shading based on them, and save the result to 2 textures: RGBA16F for diffuse, RGBA16F for specular)
3. re-render the opaque geometry, read in the 2 RGBA16F textures, use early-z with the depth buffer, and combine the shading result with the material properties (i.e. surface diffuse color and surface specular color), plus add the ambient term; save the result into an RGBA16F texture

Use this RGBA16F texture as input for further post-processing.

You usually use this resulting texture for depth of field and bloom/HDR/tonemapping. These are usually dependent on each other, so you don't need to blend.
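The combine in step 3 can be sketched for one color channel on the CPU (a minimal model; the names and test values are hypothetical, and the real work happens in a fragment shader reading the two RGBA16F lighting buffers while the geometry is re-rendered):

```c
#include <assert.h>

/* One channel of the step-3 combine: lighting accumulated in step 2
 * (from the two RGBA16F buffers) is modulated by the material
 * properties fetched during the re-render, and an ambient term is
 * added. All values here are hypothetical. */
float combine_channel(float light_diffuse, float light_specular,
                      float mat_diffuse, float mat_specular, float ambient) {
    return light_diffuse * mat_diffuse     /* diffuse lighting x albedo   */
         + light_specular * mat_specular   /* specular lighting x color   */
         + ambient * mat_diffuse;          /* ambient term                */
}
```

Because the inputs and the output are floats, lighting values above 1.0 carry through unchanged into the post-processing chain.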

Edited by Yours3!f, 26 August 2012 - 04:27 AM.


#3 larspensjo   Members   -  Reputation: 1526


Posted 26 August 2012 - 12:01 PM

Since you're doing deferred LIGHTING (and I assume you know the difference between deferred lighting and deferred shading), there is a nice solution from CryEngine 3. The CE3 slides explain how they did it.
Essentially it goes like this:
1. render the opaque geometry to the G-buffer (24-bit depth + 8-bit stencil; RGBA8 with RGB for normals and A for the specular exponent)
2. do the shading pass (i.e. read in the 2 textures, do Blinn-Phong shading based on them, and save the result to 2 textures: RGBA16F for diffuse, RGBA16F for specular)
3. re-render the opaque geometry, read in the 2 RGBA16F textures, use early-z with the depth buffer, and combine the shading result with the material properties (i.e. surface diffuse color and surface specular color), plus add the ambient term; save the result into an RGBA16F texture

They didn't save the diffuse color from step 1, so they can have a smaller G-buffer? But the disadvantage is that you need to render the geometry twice? Maybe the second pass is more efficient, given the use of "early-z" culling.

Use this RGBA16F texture as input for further post-processing.

You usually use this resulting texture for depth of field and bloom/HDR/tonemapping. These are usually dependent on each other, so you don't need to blend.


Thanks for the comment, it was interesting. I didn't know about the distinction between "deferred shading" and "deferred lighting". I suppose the key point is that deferred lighting renders the geometry twice. Though I think it should be feasible to save the diffuse color from the first pass these days, shouldn't it?

In my case, I have a large voxel-based geometry that is difficult to cut down. On the other hand, being a voxel-based world, I don't need that much GPU memory for predefined textures.

You say CryEngine 3 uses an 8-bit stencil with the depth buffer. What is that used for?
Current project: Ephenation.
Sharing OpenGL experiences: http://ephenationopengl.blogspot.com/

#4 Yours3!f   Members   -  Reputation: 1252


Posted 26 August 2012 - 01:44 PM

Ummm, please read at least the Wikipedia article on the given topic BEFORE trying to implement something.

To answer your questions:

They didn't save the diffuse color from step 1, so they can have a smaller G-buffer? But the disadvantage is that you need to render the geometry twice?

They didn't save it because that is how the technique (deferred lighting) works. Yes, this way the G-buffer is only 64 bits per pixel, which is VERY friendly. And yes, you do need to render the geometry twice, BUT:
-since you optimize rendering, you'd only render the content of the view frustum twice
-this geometry would be perfectly occlusion culled because of early-z culling
-therefore you don't sample the material buffers too much, which makes this (third) pass easy on the GPU

Thanks for the comment, it was interesting. I didn't know about the distinction between "deferred shading" and "deferred lighting". I suppose the key point is that deferred lighting renders the geometry twice. Though I think it should be feasible to save the diffuse color from the first pass these days, shouldn't it?

See the wiki. Otherwise, if you save the diffuse colors, then it would be feasible to save everything that belongs to shading, right? Then you'd be doing deferred shading. Therefore, read the wiki article :D The point is that with deferred lighting the G-buffer is 64 bpp, whereas with deferred shading it'd be at least 128 bpp, BUT you don't need to re-render the geometry.
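The size argument can be checked with simple arithmetic (a sketch; the extra RGBA16F attachment for deferred shading and the 1920x1080 resolution are just example assumptions):

```c
#include <assert.h>

/* Back-of-the-envelope G-buffer sizes from the discussion above:
 * deferred lighting: 24-bit depth + 8-bit stencil + RGBA8 = 64 bpp
 * deferred shading:  assume one extra RGBA16F attachment  = 128 bpp */
unsigned lighting_bpp(void) { return 24 + 8 + 32; }
unsigned shading_bpp(void)  { return lighting_bpp() + 64; }

/* total G-buffer size in bytes at a given resolution */
unsigned long long gbuffer_bytes(unsigned bpp, unsigned w, unsigned h) {
    return (unsigned long long)w * h * bpp / 8;
}
```

At 1920x1080 the 64 bpp layout costs about 16 MB; doubling the bits per pixel doubles both the memory and, more importantly, the bandwidth each pass consumes.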

In my case, I have a large voxel-based geometry that is difficult to cut down. On the other hand, being a voxel-based world, I don't need that much GPU memory for predefined textures.

So here comes the answer when you ask which is better: the one that suits your case. From your description, I gather you're having trouble drawing lots of geometry. This means that deferred shading would suit your case better.

You say CryEngine 3 uses an 8-bit stencil with the depth buffer. What is that used for?

Well, it's rather 24 bits of depth with 8 bits of stencil. The depth buffer stores the distance between the viewer and the given pixel (this is why it is called a depth buffer). The stencil buffer is used for masking out areas that don't need to be drawn, so you can optimize work away.
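Conceptually, stencil masking works like this sketch (plain C arrays stand in for the stencil buffer and the render target; the "shading work" is a hypothetical placeholder):

```c
#include <assert.h>

#define N 4 /* tiny render target for illustration */

/* A per-pixel stencil value gates whether the (expensive) shading work
 * runs at all: only pixels whose stencil equals the reference value are
 * touched, the rest are skipped. The 8-bit stencil lives alongside the
 * 24-bit depth in the same 32-bit buffer. */
void shade_masked(float color[N], const unsigned char stencil[N],
                  unsigned char ref) {
    for (int i = 0; i < N; ++i)
        if (stencil[i] == ref)
            color[i] = 1.0f; /* stand-in for the real shading work */
}
```

In a real renderer this test happens in fixed-function hardware (glStencilFunc and friends), often early enough that the fragment shader never runs for masked-out pixels.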

#5 larspensjo   Members   -  Reputation: 1526


Posted 27 August 2012 - 02:24 AM

I have now found that the documentation of glBlendFunc() and glBlendEquation() was misleading: there is no clamping for floating-point or integer color buffers, only for fixed-point ones. So the first alternative can work for HDR after all.
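So additive blending with glBlendEquation(GL_FUNC_ADD) and glBlendFunc(GL_ONE, GL_ONE) simply computes src + dst per channel, and on a float target the sum is kept as-is (a minimal sketch of the per-channel math; the input values are just examples):

```c
#include <assert.h>

/* What GL_FUNC_ADD with glBlendFunc(GL_ONE, GL_ONE) computes per channel.
 * On a floating-point target such as GL_RGBA16F the result is NOT clamped
 * to [0, 1], so HDR contributions from several passes accumulate. */
float blend_add(float src, float dst) { return src + dst; }
```

On a fixed-point target (e.g. GL_RGBA8) the same blend would clamp the result to 1.0, which is exactly the problem raised in the first post.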

Edited by larspensjo, 27 August 2012 - 02:25 AM.

Current project: Ephenation.
Sharing OpenGL experiences: http://ephenationopengl.blogspot.com/






