Which is better for lighting quality?

Started by
15 comments, last by MARS_999 17 years, 11 months ago
I was wondering if YannL would care to comment on why the lighting quality is better. Thanks
Quote:Original post by MARS_999
Well, I think I got it working, but I am doing bump mapping with tangent space, and as of now I am thinking the lighting looks a lot better than the VS/FS lighting... I thought doing the lighting in the FS was supposed to be PPL (per-pixel lighting)... But it doesn't look this good... Anyone care to comment on this, to help me understand why? Thanks


When you do the lighting 'per pixel' but only pass down the normals from the vertices, you are still working with an interpolated normal at each pixel.

When you use a normal map you have a 'true' normal at each pixel; as such, your lighting better reflects the surface you are modelling and looks much, much better [smile]

vertex lighting with a per pixel shader
normal mapped per pixel lighting
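
To make the difference concrete, here is a minimal GLSL fragment shader sketch of normal-mapped per-pixel lighting (not code from the thread; `uNormalMap`, `vTexCoord`, and `vLightDirTS` are placeholder names, and it assumes the vertex shader has already transformed the light direction into tangent space). With plain per-pixel lighting you would instead use an interpolated `varying` normal, which only varies linearly across each triangle:

```glsl
// Fragment shader: diffuse lighting from a tangent-space normal map.
uniform sampler2D uNormalMap;   // hypothetical normal-map sampler
varying vec2 vTexCoord;
varying vec3 vLightDirTS;       // light direction in tangent space

void main()
{
    // Unpack the stored normal from [0,1] back into [-1,1].
    vec3 n = normalize(texture2D(uNormalMap, vTexCoord).rgb * 2.0 - 1.0);
    vec3 l = normalize(vLightDirTS);
    float diffuse = max(dot(n, l), 0.0);
    gl_FragColor = vec4(vec3(diffuse), 1.0);
}
```

Because `n` comes from the texture rather than from interpolated vertex attributes, the lighting picks up every bump encoded in the map.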
Hi phantom, thanks for the help. I thought it might be like that but wasn't sure, and it makes sense that the normals are spread across each surface... So will newer hardware allow us to calculate the normal for each pixel instead of using a normal map? One thing is that now my diffuse lighting is a bit darker than it was with the VS/FS PPL... So now, with my directional light, the polygons on the back side look darker than I would like. I have tried to bump up the diffuse, but that ends up looking like crap... Thanks for the help. ;)
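(Aside on the darkness issue: the thread never answers this, but two standard tricks for keeping back-facing geometry from going fully black under a directional light are an ambient floor and a wrapped "half-Lambert" diffuse. A sketch, with illustrative constants:)

```glsl
float ndotl = dot(n, l);

// Option 1: add a small constant ambient floor to the clamped diffuse.
float diffuse = max(ndotl, 0.0) + 0.15;

// Option 2: wrapped ("half-Lambert") diffuse, which never reaches zero.
float wrapped = ndotl * 0.5 + 0.5;
```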
Quote:Original post by MARS_999
So will newer hardware allow us to calculate the normal for each pixel instead of using a normal map?


Well, you can now, but the results won't really look like anything, because you don't have the surface details to work with at that point; that is why normal maps are used.

Newer hardware, D3D10 style, will have a geometry shader unit in it, which will allow you to generate vertices in the pipeline. This could be used for adding extra detail with vertices, at which point per-vertex normals might be enough... how you generate that detail, however, is another matter [smile]

But how does the hardware know where/how to generate the new geometry? Most likely from a texture! So it won't buy you anything (though of course a heavily tessellated object will look better than normal maps).
MARS_999, I find your question strange (you've been doing GL for at least 3-6 months).
Quote:Original post by zedzeek
But how does the hardware know where/how to generate the new geometry? Most likely from a texture! So it won't buy you anything (though of course a heavily tessellated object will look better than normal maps).


Well, it won't be the driver; it'll be your geometry shader code that does the grunt work. So you could, for example, feed in 4 vertices and use a geometry shader to generate a grid which forms half a sphere from it. [grin]
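
GLSL geometry shaders did eventually ship (first as an extension, then in core OpenGL 3.2, after this thread), so the amplification idea above can be sketched in modern syntax. This is an illustrative example, not the poster's code: it subdivides each incoming triangle into four, and the edge midpoints are exactly where a height texture could displace new detail:

```glsl
#version 150
layout(triangles) in;
layout(triangle_strip, max_vertices = 12) out;

void emitTri(vec4 a, vec4 b, vec4 c)
{
    gl_Position = a; EmitVertex();
    gl_Position = b; EmitVertex();
    gl_Position = c; EmitVertex();
    EndPrimitive();
}

void main()
{
    vec4 p0 = gl_in[0].gl_Position;
    vec4 p1 = gl_in[1].gl_Position;
    vec4 p2 = gl_in[2].gl_Position;

    // Midpoints of each edge (a displacement texture lookup could go here).
    vec4 m01 = 0.5 * (p0 + p1);
    vec4 m12 = 0.5 * (p1 + p2);
    vec4 m20 = 0.5 * (p2 + p0);

    // One input triangle becomes four output triangles.
    emitTri(p0, m01, m20);
    emitTri(m01, p1, m12);
    emitTri(m20, m12, p2);
    emitTri(m01, m12, m20);
}
```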

Quote:Original post by phantom
Quote:Original post by zedzeek
But how does the hardware know where/how to generate the new geometry? Most likely from a texture! So it won't buy you anything (though of course a heavily tessellated object will look better than normal maps).


Well, it won't be the driver; it'll be your geometry shader code that does the grunt work. So you could, for example, feed in 4 vertices and use a geometry shader to generate a grid which forms half a sphere from it. [grin]


Yep! That's what I have been hearing lately, and I can't wait. Bring on the DX10 hardware, I want some. Zedzeek, I've been coding GL for longer than that; it's in the last year that I've been getting better at it, because I have had more time to sit down, work with it a lot, and stick with it to keep my mind fresh... Now off to scratch my head some more...

