Deferred shading and objects without diffuse component?

You don't multiply the contribution from different light sources, you add them. Adding a red light and a blue light should give you a purplish color. This is why Guerrilla's approach works: they add the contribution from the light maps first (you can consider the light maps to be one "light source"), then the contribution from the sun, and then the contribution from local point/spot lights.
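To make the "add, never multiply" point concrete, here is a minimal HLSL sketch (all names here are made up for illustration; this is not Guerrilla's actual shader):

    sampler LightmapSampler;

    float3 SunDir;          // normalized, pointing from the sun into the scene
    float3 SunColor;
    float3 PointLightColor;

    float4 AccumulatePS(float2 uv      : TEXCOORD0,
                        float3 normal  : TEXCOORD1,
                        float3 toLight : TEXCOORD2,   // surface -> point light
                        float  atten   : TEXCOORD3) : COLOR0
    {
        float3 lightmapTerm = tex2D(LightmapSampler, uv).rgb;
        float3 sunTerm      = SunColor * saturate(dot(normal, -SunDir));
        float3 localTerm    = PointLightColor * atten *
                              saturate(dot(normal, normalize(toLight)));

        // Red light + blue light = purplish: the contributions sum
        return float4(lightmapTerm + sunTerm + localTerm, 1.0f);
    }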
Hi

And if I'm correct, when you've finished 'adding' the light information in this buffer, you then multiply it with the albedo from the G-Buffer, isn't it?

And if I'm correct too, with this added data from all the light sources, you multiply it by the 'intensity' channel (see the Guerrilla diagram) and you can have values higher than the [0.0f - 1.0f] range, obtaining an HDR effect, isn't it?

Thanks for your time and help :)

LLORENS
Quote: Original post by arkangel2803
And if I'm correct, when you've finished 'adding' the light information in this buffer, you then multiply it with the albedo from the G-Buffer, isn't it?


Right. Well actually, there are two ways to do it:

foreach light source
    totalLightContribution += diffuseLight * diffuseAlbedo + specularLight * specularAlbedo;


or

foreach light source
    totalDiffuse += diffuseLight;
    totalSpecular += specularLight;

totalLightContribution = totalDiffuse * diffuseAlbedo + totalSpecular * specularAlbedo;


The latter will give you better precision, but requires you to keep track of diffuse + specular separately.
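As a sketch, the final combine pass for the second method could look something like this in HLSL (the sampler names are made up for illustration, assuming the diffuse and specular totals were each accumulated into their own render target):

    sampler TotalDiffuseSampler;   // accumulated diffuse light
    sampler TotalSpecularSampler;  // accumulated specular light
    sampler DiffuseAlbedoSampler;  // from the G-Buffer
    sampler SpecularAlbedoSampler; // from the G-Buffer

    float4 CombinePS(float2 uv : TEXCOORD0) : COLOR0
    {
        float3 totalDiffuse   = tex2D(TotalDiffuseSampler,   uv).rgb;
        float3 totalSpecular  = tex2D(TotalSpecularSampler,  uv).rgb;
        float3 diffuseAlbedo  = tex2D(DiffuseAlbedoSampler,  uv).rgb;
        float3 specularAlbedo = tex2D(SpecularAlbedoSampler, uv).rgb;

        // Albedo is applied once, after all the light has been summed
        return float4(totalDiffuse * diffuseAlbedo +
                      totalSpecular * specularAlbedo, 1.0f);
    }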


Quote: Original post by arkangel2803
And if I'm correct too, with this added data from all the light sources, you multiply it by the 'intensity' channel (see the Guerrilla diagram) and you can have values higher than the [0.0f - 1.0f] range, obtaining an HDR effect, isn't it?


Well that's how they do it... they didn't want to use floating-point buffers (since they're expensive on the PS3), so that's what they came up with. If you wanted, you could just accumulate HDR values in a floating-point buffer or some encoded format.
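One possible encoding along those lines, sketched in HLSL (the constant and layout are assumptions in the spirit of the 'intensity' channel in the Guerrilla diagram, not their exact scheme):

    static const float MAX_INTENSITY = 8.0f;   // assumed upper bound

    // Store normalized color in RGB and the per-pixel scale in alpha
    float4 EncodeRGBI(float3 hdrColor)
    {
        float intensity = max(max(hdrColor.r, hdrColor.g), hdrColor.b);
        intensity = clamp(intensity, 1e-4f, MAX_INTENSITY);
        return float4(hdrColor / intensity, intensity / MAX_INTENSITY);
    }

    // Multiplying the color back up by the intensity restores the HDR range
    float3 DecodeRGBI(float4 encoded)
    {
        return encoded.rgb * (encoded.a * MAX_INTENSITY);
    }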

What is wrong with just adding the lighting result for each light to the backbuffer? That is the first method described by MJP, isn't it? In that case you don't need another render target for the light results.
A question about the second method:
What kind of render target would you need for totalDiffuse/totalSpecular? Assuming that the lighting is colored, you would need 3x2 channels for this?
Hi MJP and thanks for your help.

I need help clarifying a concept. When we talk about 8-bit render targets and 16-bit render targets, of course we mean R8G8B8A8 and likewise R16G16B16A16, but what happens with the output from a pixel shader? Let me explain:

When we output a value like float4(1.0f, 1.0f, 1.0f, 1.0f) from our shader to an R8G8B8A8 render target, the graphics card writes a byte with the value 255 in it. I suppose the same happens with an R16G16B16A16 render target; the only difference is that we get better precision across the 0.0f to 1.0f range. So... if I have an R16G16B16A16 render target, how can I output values higher than the 1.0f top of the range? Do I need to do all my lighting calculations (and everything else) treating my [0.0f - 1.0f] range as if it were [0.0f - 0.5f], then multiply by 2.0f in the HDR process to get valid HDR information?

I don't know if I was clear :S

Thanks a lot for your time :)

LLORENS
Your output is scaled to the range [0, 1] only for fixed-point surface formats. So this includes A8R8G8B8, A16B16G16R16, A2R10G10B10, etc. In these cases you would pick some "maximum value", and divide your output by this to scale it back to the [0, 1] range.

This does *not* apply to floating-point formats like A16B16G16R16F, R32F, and A32B32G32R32F. These surface formats can represent values that go well beyond 1... in fact they have a sign bit, so you can even store negative values in them.
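A minimal sketch of that "maximum value" scaling for fixed-point targets (MAX_HDR_VALUE is an assumed constant, not something from the posts above):

    static const float MAX_HDR_VALUE = 8.0f;

    // When writing lighting results to a fixed-point target such as A16B16G16R16
    float3 EncodeHDR(float3 hdrColor)
    {
        return hdrColor / MAX_HDR_VALUE;   // compress into [0, 1]
    }

    // When reading it back, e.g. in the tone-mapping pass
    float3 DecodeHDR(float3 encoded)
    {
        return encoded * MAX_HDR_VALUE;    // restore the original range
    }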
Hi MJP, and thanks again :P

So, with a fixed-point render target, you need some kind of scaling scheme to get values beyond 1.0f, even in a 16-bits-per-channel format.

On another note, what happens if you want to do a buffer distortion effect on screen (imagine we want to render some kind of water, liquid, or glass object)? You need to build the G-Buffer first and then do the lighting stage in a later pass, but if you then want to render a buffer distortion effect for the water/liquid/glass, that effect needs to be lit too, so it receives its specular and diffuse illumination. The only solution I see is building two G-Buffers, one without these objects and another with the distortion objects, and merging the information in a later stage. Of course, this will eat a lot of graphics card memory, but I think there must be another way that I'm missing.

Thanks for your time

LLORENS

