Deferred shading: ugly Phong

Hello,
I am writing a deferred shading renderer (in XNA) with 3 buffers:
1. Albedo
2. Normals, remapped from range [-1,1] to [0,1]
3. Depth [0,1], with world-space position reconstructed via the inverse ViewProjection matrix (rough sketch below)
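
Roughly, this is how I decode the G-buffer (a simplified sketch with made-up names, not my exact shader code):

// Simplified sketch (hypothetical names, not my full shader).
float3 DecodeNormal(float3 stored)
{
    // Undo the [0,1] packing back to [-1,1].
    return normalize(stored * 2.0f - 1.0f);
}

float3 ReconstructWorldPos(float2 clipXY, float depth, float4x4 InvViewProj)
{
    // depth is the stored [0,1] z/w value; rebuild the clip-space position...
    float4 clipPos = float4(clipXY, depth, 1.0f);
    // ...then transform by the inverse ViewProjection and undo the w divide.
    float4 worldPos = mul(clipPos, InvViewProj);
    return worldPos.xyz / worldPos.w;
}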

After (finally) adding Phong I noticed it looks ugly. There are some kind of square shapes near the specular highlight, so I think I did something wrong with the specular term. It also looks like I'm using too little precision for the depth map.
I attach 2 screenshots. The first one looks better, but it uses HalfVector4 for Albedo and Normals and Vector2 for Depth (I did that just as a test). The surface formats I actually want to use are Color for Albedo and Normals and Single for Depth, but the second screenshot shows what happens then.
Here's the Phong function:

float3 doPhong(float3 Normal, float3 Position)
{
    float3 LightVector  = normalize(Light - Position.xyz);
    float3 CameraVector = normalize(Eye - Position.xyz);
    // Reflect the light direction about the normal for the Phong specular term.
    float3 LightReflect = reflect(-LightVector, Normal);
    float  NdL          = max(0.0f, dot(Normal, LightVector));
    float  specular     = pow(max(0.0f, dot(LightReflect, CameraVector)), 20);
    return saturate(NdL * LightColor) + specular;
}

For those who suspect I have bad precision: my nearPlane is 0.1f and farPlane is 30.
I feel so stupid posting a question like this, since it's the second deferred shading engine I've written...
Your normals are too quantized. The simplest solution would be to use a higher-precision normal buffer, or take a look at Crytek's "Reaching the Speed of Light" paper, where they explain how to efficiently use 3 x 8 bits to store normals with good precision. Also, please use Blinn-Phong. Phong is evil. ;D
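
For example, a minimal Blinn-Phong version of your function might look like this (just a sketch, assuming the same Light/Eye/LightColor globals as your code; note that Blinn-Phong needs a higher exponent, roughly 4x, for a similar highlight size):

float3 doBlinnPhong(float3 Normal, float3 Position)
{
    float3 LightVector  = normalize(Light - Position);
    float3 CameraVector = normalize(Eye - Position);
    // The half-vector between light and view directions replaces
    // Phong's reflection vector.
    float3 HalfVector   = normalize(LightVector + CameraVector);
    float  NdL          = saturate(dot(Normal, LightVector));
    // Exponent bumped from 20 to ~80 to keep a comparable highlight size.
    float  specular     = pow(saturate(dot(Normal, HalfVector)), 80.0f);
    return saturate(NdL * LightColor) + specular;
}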

Your normals are too quantized. The simplest solution would be to use a higher-precision normal buffer, or take a look at Crytek's "Reaching the Speed of Light" paper, where they explain how to efficiently use 3 x 8 bits to store normals with good precision.

Gonna read this for sure :D I switched to HdrBlendable (64-bit on PC); I think it's enough and not too much, since I saw a recommendation for this surface format in the past.
And about the squares... I looked at different deferred shading implementations and noticed them there too, so I think it's OK...

Also, please use Blinn-Phong. Phong is evil. ;D

Phong was intentional, since it's more accurate (I am aware it's slower). I will switch to Blinn later; from my observation, people who are not into graphics don't see the difference xD

Phong was intentional, since it's more accurate (I am aware it's slower).


Actually, Blinn-Phong produces more accurate results (try both at glancing angles and you'll see how bad Phong looks!). For even better results, try energy-conserving Blinn-Phong.

Phong was intentional, since it's more accurate (I am aware it's slower).

Actually, Blinn-Phong produces more accurate results (try both at glancing angles and you'll see how bad Phong looks!). For even better results, try energy-conserving Blinn-Phong.
QFE. The tl;dr version is that Blinn-Phong is actually an approximation to evaluating a Gaussian centered on the halfway vector.

In plainer English: you're using some statistics hacks to guess what fraction of the total surface area being shaded is angled in such a way as to bounce light towards you (or give you laser eye surgery), given that the light starts out coming from the light source in question.
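
If you want the energy-conserving flavour mentioned above, the usual trick is to scale the specular term by (n + 8) / (8 * pi). A quick sketch (my naming, not anyone's shipping code):

float EnergyConservingBlinnPhongSpec(float3 Normal, float3 HalfVector, float n)
{
    // (n + 8) / (8 * pi) keeps the total reflected energy roughly constant
    // as the shininess exponent n changes.
    const float PI = 3.14159265f;
    return ((n + 8.0f) / (8.0f * PI)) * pow(saturate(dot(Normal, HalfVector)), n);
}

Multiply the result by NdL and the light colour when you accumulate it, just as with the plain specular term.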

EDIT: And for extra credit, use Toksvig filtering to account for actual texture detail in the normal map!

EDIT 2: Also,
float NdL = max(0.0f, dot(Normal, LightVector));
makes me really, really angry. You wouldn't like me when I'm angry. Do
float NdL = saturate(dot(Normal, LightVector));
instead to avoid my wrath.

For clarification: you're wasting precious GPU time on those max() operations, which you could be getting for free with a saturate modifier. You might think the compiler can optimize this. You'd be wrong, though -- remember that the dot product itself does not have a defined range, and the compiler generally lacks sufficient context to know that you're dotting normalized vectors.

Do
float NdL = saturate(dot(Normal, LightVector));
instead to avoid my wrath.
For clarification: you're wasting precious GPU time on those max() operations, which you could be getting for free with a saturate modifier.

I saw a lot of people doing it the way I did, but now that you mention it... I migrated to HLSL from GLSL, and there you have no saturate; you use clamp or max to do the same thing, so it's a habit. Thanks for the tip, I will keep this in mind :)
Yeah, reason #3289472 why OpenGL is a design trainwreck. Remember, kiddos: adding a clamp instruction is fine, but adding a special-case, higher-performance one that can be implemented in terms of the former somehow breaks hardware compatibility (???)

Good job, Khronos. You make us all so very, very proud.

Yeah, reason #3289472 why OpenGL is a design trainwreck. Remember, kiddos: adding a clamp instruction is fine, but adding a special-case, higher-performance one that can be implemented in terms of the former somehow breaks hardware compatibility (???)
saturate is not a valid keyword / intrinsic function in GLSL -- search the spec for the word "saturate"... However, it does (incorrectly) compile on nVidia drivers.

This isn't OpenGL's fault at all -- this is nVidia's OpenGL driver purposefully accepting invalid GLSL code, in order to make developers think that AMD's GL driver is broken (and hopefully even ship their product with invalid code, so that AMD's correct driver appears broken to the consumer). That's some damn evil Microsoft-esque behaviour on nVidia's part, and the only fault OpenGL (Khronos) has in this particular matter is that they're unable to hold IHVs accountable for these harmful tactics.

These are the same dirty tactics that gave us the browser wars and the browser incompatibility issues of the '90s...

Yes, you shouldn't use max in GLSL for this purpose either; you should use clamp with hard-coded 0.0 and 1.0 bounds, which will compile down to the special-case free instruction modifier, equivalent to saturate in HLSL.
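
i.e. something like this (this exact line happens to be valid in both GLSL and HLSL):

// clamp with literal 0.0/1.0 bounds folds down to the free saturate
// modifier on hardware that supports it.
float NdL = clamp(dot(Normal, LightVector), 0.0, 1.0);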


[edit] Sorry, I down-voted your post where you recommended the use of saturate, because I misread it as advising its use in GLSL rather than HLSL. It was actually a very good post. Forgive me!

