per pixel normals


You could pass the vertex normal from the vertex shader to the fragment shader; it will be interpolated across the primitive much like vertex colors are.
You could also get it from a texture: bump mapping reads its per-pixel normal from a normal map. These can be generated from high-poly models, or directly from textures with programs like CrazyBump or NVIDIA's Photoshop filter (CrazyBump gives better results, but NVIDIA's plugin is free).
You could probably generate tangents and binormals in the shader, but usually both are passed in as vertex attributes, or one is passed in and the other is computed as the cross product of the vertex normal and the tangent (or binormal).
You need textured models to generate them offline, because they are tied to the direction of the texture coordinates.
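For illustration, the cross-product relationship mentioned above can be sketched on the CPU like this (plain Python with made-up orthonormal vectors; real code would do this per vertex in the shader or offline):

```python
# Sketch: deriving the third basis vector of a tangent frame via a cross
# product. Vectors are plain tuples; the inputs are assumed orthonormal.

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

# Given a vertex normal and tangent, the binormal (bitangent) completes
# the right-handed tangent-space basis.
normal = (0.0, 0.0, 1.0)
tangent = (1.0, 0.0, 0.0)
binormal = cross(normal, tangent)  # -> (0.0, 1.0, 0.0)
```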

Quote:
Original post by NumberXaero
You could pass the vertex normal from the vertex shader to the fragment shader; it will be interpolated across the primitive much like vertex colors are.


I currently do this, but the output is not what I am expecting.

Here is what I'm doing:

[vertex]
Normal = normalize( gl_NormalMatrix * gl_Normal );

[fragment]
vec3 n = normalize( Normal );

gl_FragData[0] = vec4( n, 1.0 );


I expect this to draw output like the torus found here:
http://www.filterforge.com/filters/6568-normal.jpg
(by the way, how do I post URLs?)

but I don't get anything like that.

[Vertex]

varying vec3 vv_Normal;

void main(void)
{
gl_Position = ftransform();

vv_Normal = gl_NormalMatrix * gl_Normal;
}

[Fragment]

varying vec3 vv_Normal;

void main(void)
{
gl_FragColor = vec4(vv_Normal, 1.0);
}

Try ATI's RenderMonkey; it's great for quick tests like this.

From the FAQ link at the top right of the page:

Links can be added to your posts using regular HTML. For example:
<a href="http://your_url_here">your_link_text</a>
Don't forget the http://, or the link won't work correctly!

If you are rendering to your backbuffer, which is probably an RGBA8 format, values will be clamped to [0, 1]. You need to rescale them:

varying vec3 vv_Normal;

void main(void)
{
vec3 normal = normalize(vv_Normal);
normal = normal * 0.5 + 0.5;
gl_FragColor = vec4(normal, 1.0);
}
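To see why the rescale matters, here is a plain-Python sketch simulating what an RGBA8 channel does to a stored value (the numbers are made-up examples):

```python
# An RGBA8 render target clamps each channel to [0, 1] and quantizes it
# to 8 bits. Negative normal components are destroyed unless remapped.

def store_rgba8(v):
    """Clamp to [0, 1] and quantize to 8 bits, like an RGBA8 buffer."""
    clamped = min(max(v, 0.0), 1.0)
    return round(clamped * 255) / 255

n_x = -0.5                               # a negative normal component
direct = store_rgba8(n_x)                # stored raw -> 0.0, sign lost
encoded = store_rgba8(n_x * 0.5 + 0.5)   # remapped to [0, 1] first
recovered = encoded * 2.0 - 1.0          # decodes back to roughly -0.5
```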

Quote:
Original post by V-man
If you are rendering to your backbuffer, which is probably an RGBA8 format, values will be clamped to [0, 1]. You need to rescale them:

varying vec3 vv_Normal;

void main(void)
{
vec3 normal = normalize(vv_Normal);
normal = normal * 0.5 + 0.5;
gl_FragColor = vec4(normal, 1.0);
}


Thanks for the help ^^
I'm rendering to an MRT using Shader Maker (RenderMonkey is Windows-only :[). The code you gave me looks MUCH better. The only difference is that the colors are slightly off from what I'm expecting. See above for a link to the torus I'm expecting; I'm looking for the same with a sphere:
sphere

By the way, here is my current code:

[vertex]
varying vec3 Normal;

void main(){
Normal = gl_NormalMatrix * gl_Normal;
gl_Position = ftransform();
}

[frag]
varying vec3 Normal;

void main(){
vec3 n = normalize( Normal );
n = n * 0.5 + 0.5;

gl_FragData[0] = vec4( n, 1.0 );
}

Hi, cherryyosh.

I'm a little bit curious: why are you saying the screenshot you posted is better?

Per-pixel normals themselves are not something interesting for end users to see. So, although visualizing them helps with debugging, it is probably not the final outcome. Didn't you try plugging your per-pixel normals into the actual application?

I think you are probably working on the BRDF. So...

If you are trying to implement Phong shading (i.e. per-pixel lighting), the way you calculate the per-pixel normal is already correct.

If you are trying to do deferred rendering (i.e. saving the attributes in screen-space textures) for more complicated effects, you'll need to render your per-pixel normals to a floating-point texture using an FBO as the render target. The way you calculate the per-pixel normal is still correct in this case.

By the way, tangent calculation has been discussed thoroughly many times on this forum. Just search the forum and you will find the answer you need (including why it is not useful to generate vertex tangents on the GPU).

Quote:
Original post by ma_hty
Hi, cherryyosh.

I'm a little bit curious: why are you saying the screenshot you posted is better?

Per-pixel normals themselves are not something interesting for end users to see. So, although visualizing them helps with debugging, it is probably not the final outcome. Didn't you try plugging your per-pixel normals into the actual application?

I think you are probably working on the BRDF. So...

If you are trying to implement Phong shading (i.e. per-pixel lighting), the way you calculate the per-pixel normal is already correct.

If you are trying to do deferred rendering (i.e. saving the attributes in screen-space textures) for more complicated effects, you'll need to render your per-pixel normals to a floating-point texture using an FBO as the render target. The way you calculate the per-pixel normal is still correct in this case.

By the way, tangent calculation has been discussed thoroughly many times on this forum. Just search the forum and you will find the answer you need (including why it is not useful to generate vertex tangents on the GPU).



Thank you for the help. I am working on deferred rendering, using the normal for the map. I found out early on that the normals were correct; the pictures just didn't show them head on. Now I'm trying to figure out how to properly produce the position map for it. Sadly, it doesn't seem to be as easy as Position = gl_ModelViewMatrix * gl_Vertex; (though that would be nice, if I need to know which quad it is in :P).

I'll look into how to find the tangent, thanks for the information.

Quote:
Original post by cherryyosh
... using the normal for the map; I found out early on that the normals were correct, the pictures just didn't show them head on. ...


Okay, I understand why you thought the image you posted is correct. However, it is only considered correct because it has been in use for a while; it could have been wrong ever since it was written. In fact, it was probably wrong in the first place.

Quote:
Original post by ma_hty
Quote:
Original post by cherryyosh
... using the normal for the map; I found out early on that the normals were correct, the pictures just didn't show them head on. ...


Okay, I understand why you thought the image you posted is correct. However, it is only considered correct because it has been in use for a while; it could have been wrong ever since it was written. In fact, it was probably wrong in the first place.


Yeah, basically that is the reason I believed it to be right. I mean, it made it onto the Blender site, it has to be right?? lol. Also, the per-pixel normal map doesn't come out looking like a standard normal map. But then I guess every normal in the per-pixel version is facing one point, whereas in a texture they're all just facing up.

Anyway, anyone got ideas on the position map? :P

I'm sorry, I don't quite understand what you meant.

Anyway, if you render the interpolated normals of a sphere to a screen-space texture and display it on an ordinary monitor, it should look like this:

[image: a sphere rendered with its interpolated normals]

For a normal vector, each coordinate is in the range [-1, 1]. You SHOULD NOT modify the negative part if you are actually using the normal for some evaluation.

Sometimes people map the range [-1, 1] to [0, 1] for visualization, because an ordinary monitor can logically only display colors in the range [0, 1]. However, that is just for visualization; a normal vector with its coordinates mapped to [0, 1] is useless for any other purpose.
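A tiny numeric sketch of that point (plain Python; the normal and light direction are made-up examples): a normal read back from a visualization texture must be mapped from [0, 1] back to [-1, 1] before any lighting math, or results like N·L come out wrong.

```python
# Compare a lighting dot product using the display-encoded normal versus
# the properly decoded one.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

normal = (0.0, 0.0, -1.0)          # a normal facing away from the light
light_dir = (0.0, 0.0, 1.0)

encoded = tuple(c * 0.5 + 0.5 for c in normal)   # as stored for display
decoded = tuple(c * 2.0 - 1.0 for c in encoded)  # mapped back to [-1, 1]

wrong = dot(encoded, light_dir)    # encoded normal gives a bogus result
right = dot(decoded, light_dir)    # decoded normal behaves correctly
```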

By the way, besides the programmer, no one is interested in visualizing the interpolated normals. Why bother?

[Edited by - ma_hty on December 7, 2008 12:02:33 AM]

Quote:
Original post by ma_hty
I'm sorry, I don't quite understand what you meant.

Anyway, if you render the interpolated normals of a sphere to a screen-space texture and display it on an ordinary monitor, it should look like this:

[image: a sphere rendered with its interpolated normals]

For a normal vector, each coordinate is in the range [-1, 1]. You SHOULD NOT modify the negative part if you are actually using the normal for some evaluation.

Sometimes people map the range [-1, 1] to [0, 1] for visualization, because an ordinary monitor can logically only display colors in the range [0, 1]. However, that is just for visualization; a normal vector with its coordinates mapped to [0, 1] is useless for any other purpose.

By the way, besides the programmer, no one is interested in visualizing the interpolated normals. Why bother?


Man, you're great, that's the picture I'm getting! That puts all my doubts to rest. Thanks.

Also, I got the position map working; I just have to figure out why gl_DepthRange.far isn't right. (I think I read somewhere that it is a value in 0~1... but then how would I get the real size so I can divide the depth by it? I might just wind up passing in a 'far' value.)

Also, one more question; maybe you can save me from making ANOTHER post on this. Would it be better to store the full position (x, y, and z) in a position map, or should I just store Z and reconstruct x and y from it (I think I read somewhere that you can do that)? Doing that would cost me some clocks, but would let me compact the MRTs down to two, saving a LOT of space.
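For reference, a sketch of the "store only Z" idea (plain Python; reconstruct_view_pos is a hypothetical helper, not from this thread, and assumes a symmetric perspective projection with view space looking down -Z, as in OpenGL):

```python
import math

# Reconstruct a view-space position from only a stored linear depth,
# using the pixel's normalized device coordinates and the projection
# parameters. ndc_x/ndc_y are in [-1, 1]; depth is distance along -Z.

def reconstruct_view_pos(ndc_x, ndc_y, depth, fov_y_deg, aspect):
    tan_half_fov = math.tan(math.radians(fov_y_deg) / 2.0)
    x = ndc_x * tan_half_fov * aspect * depth
    y = ndc_y * tan_half_fov * depth
    return (x, y, -depth)  # view space looks down -Z

pos = reconstruct_view_pos(0.25, 0.5, 4.0, fov_y_deg=90.0, aspect=1.0)
# -> approximately (1.0, 2.0, -4.0)
```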

Also, should I compress the diffuse to take up 32 bits and expand specular to 24 (making them fit in the same MRT, with some room left :P), rather than having diffuse take up 64 bits and only having a specular power? What's your take?
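For what it's worth, here is a rough sketch of the kind of channel packing being discussed (plain Python; the 8-8-8 layout and helper names are assumptions for illustration, not anything from this thread):

```python
# Pack three [0, 1] color channels into the low 24 bits of a 32-bit
# word, leaving 8 bits free for something like a specular power.

def pack_rgb8(r, g, b):
    """Pack three [0, 1] floats into one 24-bit integer."""
    to8 = lambda v: int(min(max(v, 0.0), 1.0) * 255 + 0.5)
    return (to8(r) << 16) | (to8(g) << 8) | to8(b)

def unpack_rgb8(packed):
    """Recover the three channels (with 8-bit quantization error)."""
    return (((packed >> 16) & 0xFF) / 255.0,
            ((packed >> 8) & 0xFF) / 255.0,
            (packed & 0xFF) / 255.0)
```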

You'd better store the position as a 3-component vector for the moment; optimizations are less important until you have something working.

I'm sorry, I have no idea what you are doing. So, if you want my opinion, please first describe in detail what you are doing.

Just to remind you, the textures you render to must be floating-point textures; otherwise, the negative values will be truncated.

Quote:
Original post by ma_hty
You'd better store the position as a 3-component vector for the moment; optimizations are less important until you have something working.

I'm sorry, I have no idea what you are doing. So, if you want my opinion, please first describe in detail what you are doing.

Just to remind you, the textures you render to must be floating-point textures; otherwise, the negative values will be truncated.


Yeah, the formats I'm using are GL_RGBA16F_ARB with GL_FLOAT for the type. I figured out last night, when I tried to write out what the packing would look like, that I would still need 3 textures as long as I kept spec and emissive. The only thing that would change by packing the diffuse color into 24 bits is that I could then have full spec and emissive colors, rather than just power variables. But then I got to thinking: do these really need to be there? Could I just do spec and emissive via a light?

By the way, I am creating a deferred renderer. The first demo (the one I'm working on right now) will be able to support multiple lights of different types (point, spotlight, etc.) and will also support different shaders. I don't know how I'll do that yet, but I want to plug my parallax bump map shader into it. Later I plan to add shadows and such.

