# OpenGL: Which is better for lighting quality?

## Recommended Posts

I just got the Gems6 book this week and read the chapter on vertex texturing with PS3.0 for terrain rendering. The author states that lighting quality is better if you calculate a normal map and use that in the FS than if you calculate the normals in the VS and send them to the FS. Right now I calculate my normals, send them to OpenGL, and in the vertex shader do

normal = normalize(gl_NormalMatrix * gl_Normal);


then send that normal to the FS. Is this true? Thanks
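For context, here is a minimal sketch of the interpolated-normal approach being asked about (GLSL 1.10-era built-ins; the `normal` varying name is hypothetical):

```glsl
// Vertex shader: transform the normal once per vertex;
// the rasterizer then interpolates it across each triangle.
varying vec3 normal;

void main()
{
    normal = normalize(gl_NormalMatrix * gl_Normal);
    gl_Position = ftransform();
}
```

The fragment shader then receives this interpolated `normal` and does the lighting math with it.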

##### Share on other sites
The way you do it, you have one normal per vertex.
Those are interpolated across each fragment. It looks good for spheres and such things, but you get no detail.
A normal map can make it appear like you have loads more vertices than you actually have; just look at Doom3. :D

Another problem is that you normalize your normals in the vertex program.
When these are later interpolated and sent to the fragment program, some of them will NOT be normalized anymore, and the lighting quality will suffer...
Normalize in your fragment program, or use a normal texture for kick-ass lighting. :D
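A minimal sketch of the renormalization fix (assuming a hypothetical `normal` varying carrying the interpolated vertex normal, and a directional light):

```glsl
// Fragment shader: the interpolated normal is generally NOT unit
// length, so renormalize it before lighting.
varying vec3 normal;

void main()
{
    vec3 n = normalize(normal);
    vec3 l = normalize(vec3(gl_LightSource[0].position)); // directional light
    float NdotL = max(dot(n, l), 0.0);
    gl_FragColor = gl_FrontLightProduct[0].diffuse * NdotL;
}
```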

##### Share on other sites
Ok, so even if I renormalize the normal in the FS, the normal map will still be better quality? Thanks for the info.

##### Share on other sites
Quote:
Original post by gulgi
Normalize in your fragment program, or use a normal-texture for kick-ass lighting. :D

and you should prefer normalisation in the FS over a texture on newer hardware (9700 and up, GF6 and up), as using a texture wastes bandwidth and cache.

##### Share on other sites
Quote:
Original post by phantom
Quote:
Original post by gulgi
Normalize in your fragment program, or use a normal-texture for kick-ass lighting. :D

and you should prefer normalisation in the FS over a texture on newer hardware (9700 and up, GF6 and up), as using a texture wastes bandwidth and cache.

..if we were talking about a normalization cubemap, that is. :D
I just meant use a normal map OR interpolated, renormalized vertex normals. And yes, on newer gfx cards normalize with math; on something as old as my GeForce 5800, a cubemap is way faster.. :)

##### Share on other sites
Ok, I am trying to implement a normal map for my PPL lighting. I am having an issue with my lighting moving around when I move my camera; it must have something to do with the way I am calculating my lightDir. I am using a directional light.

// VS
lightDir = normalize(vec3(gl_LightSource[0].position));

// FS
vec3 n = texture2D(normalmap, gl_TexCoord[0].xy).xyz; // texture2D returns vec4, take .xyz
n = vec3(2.0) * (n - vec3(0.5)); // unpack from [0,1] to [-1,1]
vec4 lightColor = ambient;
float NdotL = max(dot(n, lightDir), 0.0);

thanks for any help...

##### Share on other sites
Quote:
Original post by MARS_999
Ok, I am trying to implement a normal map for my PPL lighting. I am having an issue with my lighting moving around when I move my camera; it must have something to do with the way I am calculating my lightDir. I am using a directional light. *** Source Snippet Removed *** thanks for any help...

Right now it seems that you're calculating lightDir as the vector from the eye point to the light source. (Assuming gl_LightSource[0].position is specified in view-space, I always forget).

Anyway, you need to calculate the vector from the vertex to the light source.

##### Share on other sites
Quote:
Original post by mhamlin
Quote:
Original post by MARS_999
Ok, I am trying to implement a normal map for my PPL lighting. I am having an issue with my lighting moving around when I move my camera; it must have something to do with the way I am calculating my lightDir. I am using a directional light. *** Source Snippet Removed *** thanks for any help...

Right now it seems that you're calculating lightDir as the vector from the eye point to the light source. (Assuming gl_LightSource[0].position is specified in view space, I always forget).

Anyway, you need to calculate the vector from the vertex to the light source.

So this, then? If so, that isn't working either... :(
lightDir = normalize(vec3(gl_LightSource[0].position) - vec3(gl_Vertex));
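One thing worth checking (an assumption, since the full shaders aren't shown): gl_LightSource[0].position is given in eye space, while gl_Vertex is in object space, so subtracting them mixes coordinate spaces. A consistent eye-space sketch might look like:

```glsl
// Vertex shader: keep everything in eye space.
varying vec3 lightDir;

void main()
{
    // Directional light: position.w == 0, so the position IS the direction.
    lightDir = normalize(vec3(gl_LightSource[0].position));

    // For a point light you would instead subtract the eye-space vertex:
    // vec3 eyeVertex = vec3(gl_ModelViewMatrix * gl_Vertex);
    // lightDir = normalize(vec3(gl_LightSource[0].position) - eyeVertex);

    gl_Position = ftransform();
}
```

Note also that if the normal map stores tangent-space normals (as later posts in this thread do), lightDir additionally has to be rotated into tangent space before the dot product.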

##### Share on other sites
Well, I think I got it working. I am doing bump mapping in tangent space, and the lighting now looks a lot better than the VS/FS lighting... I thought doing the lighting in the FS was supposed to be PPL, but it didn't look this good... Anyone care to comment, to help me understand why? Thanks

##### Share on other sites
It is difficult to give you a definitive answer without the complete VS/FS.

Therefore, I will just suggest having a look at the Lighthouse3D GLSL tutorial, which explains how to perform PPL for directional/spot/omni lights and compares PPL to vertex lighting.

##### Share on other sites
I was wondering if YannL would care to comment on why the lighting quality is better? Thanks

##### Share on other sites
Quote:
Original post by MARS_999
Well, I think I got it working. I am doing bump mapping in tangent space, and the lighting now looks a lot better than the VS/FS lighting... I thought doing the lighting in the FS was supposed to be PPL, but it didn't look this good... Anyone care to comment, to help me understand why? Thanks

When you do the lighting 'per pixel' but only pass down normals from the vertices, you are still working with an interpolated normal at each pixel.

When you use a normal map you have a 'true' normal at each pixel; as such your lighting better reflects the surface you are modelling and looks much, much better [smile]

vertex lighting with a per pixel shader
normal mapped per pixel lighting
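The normal-mapped path being described can be sketched in the fragment shader like this (hedged: assumes the vertex shader already rotated the light direction into tangent space via a per-vertex TBN basis, and that `tsLightDir` is a hypothetical varying name):

```glsl
// Fragment shader: tangent-space normal mapping (sketch).
uniform sampler2D normalmap;
varying vec3 tsLightDir; // light direction, already in tangent space

void main()
{
    // Unpack the stored normal from [0,1] to [-1,1] and renormalize.
    vec3 n = normalize(texture2D(normalmap, gl_TexCoord[0].xy).xyz * 2.0 - 1.0);
    float NdotL = max(dot(n, normalize(tsLightDir)), 0.0);
    gl_FragColor = gl_LightSource[0].ambient
                 + gl_FrontLightProduct[0].diffuse * NdotL;
}
```

Because `n` comes from the texture at every pixel, the lighting picks up surface detail that interpolated vertex normals cannot represent.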

##### Share on other sites
Hi Phantom, thanks for the help. I thought it might be like that but wasn't sure; it makes sense that the normals are spread across each surface... So will newer hardware allow us to calculate the normal for each pixel instead of using a normal map? One thing: my diffuse lighting is now a bit darker than it was with the VS/FS PPL... With my directional light, the polygons on the back side look darker than I would like. I have tried to bump up the diffuse but that ends up looking like crap... Thanks for the help. ;)

##### Share on other sites
Quote:
Original post by MARS_999
So will newer hardware allow us to calculate the normal for each pixel instead of using a normal map?

Well, you can now, but it won't really look like anything because you don't have the surface details to work with at that point, which is why normal maps are used.

Newer hardware, D3D10 style, will have a geometry shader unit in it, which will allow you to generate vertices in the pipeline. This could be used for adding extra detail with vertices, at which point the per-vertex normals might be enough... how you generate that detail, however, is another matter [smile]
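As a rough illustration of what a geometry shader does (using GLSL 1.50 syntax, which postdates this thread; the amplification scheme here is purely illustrative):

```glsl
#version 150
// Geometry shader: takes one triangle in, emits extra geometry out.
layout(triangles) in;
layout(triangle_strip, max_vertices = 6) out;

void main()
{
    // Pass the original triangle through...
    for (int i = 0; i < 3; ++i) {
        gl_Position = gl_in[i].gl_Position;
        EmitVertex();
    }
    EndPrimitive();

    // ...and emit a second, shrunken copy as "extra detail".
    vec4 c = (gl_in[0].gl_Position + gl_in[1].gl_Position
            + gl_in[2].gl_Position) / 3.0;
    for (int i = 0; i < 3; ++i) {
        gl_Position = mix(c, gl_in[i].gl_Position, 0.5);
        EmitVertex();
    }
    EndPrimitive();
}
```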

##### Share on other sites
But how does the driver know where/how to generate the new geometry? Most likely from a texture! So it won't buy you anything (though of course a heavily tessellated object will look better than normal maps).
mars_999, I find your question strange (you've been doing GL for at least 3-6 months)

##### Share on other sites
Quote:
Original post by zedzeek
But how does the driver know where/how to generate the new geometry? Most likely from a texture! So it won't buy you anything (though of course a heavily tessellated object will look better than normal maps).

Well, it won't be the driver, it'll be your geometry shader code that does the grunt work, so you could for example feed in 4 vertices and use a geometry shader to generate a grid which forms half a sphere from it. [grin]

##### Share on other sites
Quote:
Original post by phantom
Quote:
Original post by zedzeek
But how does the driver know where/how to generate the new geometry? Most likely from a texture! So it won't buy you anything (though of course a heavily tessellated object will look better than normal maps).

Well, it won't be the driver, it'll be your geometry shader code that does the grunt work, so you could for example feed in 4 vertices and use a geometry shader to generate a grid which forms half a sphere from it. [grin]

Yep! That's what I have been hearing lately, and I can't wait. Bring on the DX10 hardware, I want some. zedzeek, I've been coding GL for longer than that; it's just the last year that I've been getting better at it, because I've had more time to sit down, work with it a lot, and stick with it to keep my mind fresh... Now off to scratch my head some more....
