# OpenGL Normal Interpolations?

## Recommended Posts

Hey, so I recently decided to start playing around with DirectX 11. I have some previous experience with fixed-function OpenGL. Right now I'm having trouble getting DirectX to interpolate the normals of my model (a low-poly sphere). In a fixed-function pipeline I remember you just called ShadeMode(smooth) and your normals were interpolated automatically. I've read that DirectX can automatically interpolate values between the vertex and pixel shaders using interpolation modifiers, but despite hours of messing around and searching I have yet to find a solution.

My setup is pretty simple:

A low-poly sphere lit by a basic directional light with only ambient and diffuse components. I have set up buffers for specular light as well, but I don't want to add that until I've figured out how to interpolate the model's normals.

[code]
// NOTE: the texture resource below was missing from the posted code; the
// name "shaderTexture" is assumed -- match it to whatever your C++ side binds.
Texture2D shaderTexture;
SamplerState sampleType;

cbuffer LightBuffer
{
    float4 ambientColor;
    float4 diffuseColor;
    float4 specularColor;
    float specularPower;
    float3 lightDirection;
};

struct PixelInput
{
    float4 position : SV_POSITION;
    float2 tex : TEXCOORD0;
    float3 viewDirection : TEXCOORD1;
    float3 normal : NORMAL;
};

// The entry point signature was missing from the posted code; this is the
// usual form for a pixel shader taking PixelInput.
float4 LightPixelShader(PixelInput input) : SV_TARGET
{
    float4 finalColor;
    float4 textureColor;
    float3 lightDir;
    float lightIntensity;

    // Sample the diffuse texture (the posted line multiplied an uninitialized
    // textureColor by itself). Note that textureColor is not yet folded into
    // finalColor below.
    textureColor = shaderTexture.Sample(sampleType, input.tex);

    // Start with the ambient term.
    finalColor = ambientColor;

    // Invert the light direction so it points from the surface toward the light.
    lightDir = -lightDirection;
    lightIntensity = saturate(dot(input.normal, lightDir));

    if (lightIntensity > 0.0f)
    {
        finalColor += (diffuseColor * lightIntensity);
    }

    return finalColor;
}
[/code]

##### Share on other sites
You'd want to take a look at the HLSL [url="http://msdn.microsoft.com/en-us/library/bb509668%28v=vs.85%29.aspx"]interpolation modifiers[/url] introduced in SM 4.0. However, normals (and vertex shader outputs in general) are linearly interpolated by default, so for basic lighting you don't have to do anything special: just pass the vertex normal through as an output of the vertex shader. Most likely something else is going on, so maybe post the complete effect (including the vertex shader) and a picture of the problem, or run the app through PIX; your normals may just be messed up to begin with.
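For reference, here is a minimal vertex shader sketch that just passes the normal through. It is not your actual shader; the MatrixBuffer/CameraBuffer names and their members are assumed for illustration and should be matched to whatever your application binds:

[code]
// Minimal sketch, not the original poster's shader: MatrixBuffer,
// CameraBuffer, and their member names are assumed for illustration.
cbuffer MatrixBuffer
{
    matrix worldMatrix;
    matrix viewMatrix;
    matrix projectionMatrix;
};

cbuffer CameraBuffer
{
    float3 cameraPosition;
    float padding;
};

struct VertexInput
{
    float4 position : POSITION;
    float2 tex : TEXCOORD0;
    float3 normal : NORMAL;
};

struct PixelInput
{
    float4 position : SV_POSITION;
    float2 tex : TEXCOORD0;
    float3 viewDirection : TEXCOORD1;
    float3 normal : NORMAL;   // interpolated linearly across the triangle by default
};

PixelInput LightVertexShader(VertexInput input)
{
    PixelInput output;
    float4 worldPosition;

    // Standard world/view/projection transform (row-vector convention).
    input.position.w = 1.0f;
    worldPosition = mul(input.position, worldMatrix);
    output.position = mul(worldPosition, viewMatrix);
    output.position = mul(output.position, projectionMatrix);

    output.tex = input.tex;

    // Rotate the normal into world space and normalize it; the rasterizer
    // interpolates this value per pixel with no extra modifiers needed.
    output.normal = normalize(mul(input.normal, (float3x3)worldMatrix));

    // View direction for the (not yet enabled) specular term.
    output.viewDirection = normalize(cameraPosition - worldPosition.xyz);

    return output;
}
[/code]

If you want to be explicit you can write "linear float3 normal : NORMAL;" in the pixel shader input struct, but linear is already the default, so it changes nothing.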

##### Share on other sites
Normals passed through an interpolator technically need to be normalized again before use in the pixel shader, but depending on the art and how far they deviate from the correct value, it can be skipped as an optimization.
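In code terms that's just one extra line at the top of the pixel shader; a tiny fragment, assuming the PixelInput struct from the original post:

[code]
// Renormalize the interpolated normal before lighting; linearly interpolating
// between unit vectors generally yields a slightly shorter-than-unit vector.
input.normal = normalize(input.normal);
lightIntensity = saturate(dot(input.normal, lightDir));
[/code]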

In either case, if for some reason you have mixed positive and negative z values for your surface-local vertex normals, you can end up with a singularity for a pixel or two where the interpolated value hits (0, 0, 0) and cannot be normalized. This turns out to be a very rare case unless your artists are a bit strange, and we all know some of them are.

##### Share on other sites
Just thought I would also mention something (I'm not sure if this is the problem): since normals are interpolated automatically, if you're having problems where it looks like they're not, you might be using face normals, where every vertex's normal is actually the normal of its face. You will want to average your vertex normals, so that the normals of all the faces sharing a vertex are averaged together and then normalized. That will give you nice smooth lighting across faces.

##### Share on other sites
Thanks for the quick replies; PIX revealed the problem almost instantly. The issue appears when parsing a .obj model into the engine's model format: every vertex of a given face is assigned the FACE normal rather than its own averaged normal. If every vertex of a face has the same normal, interpolating the normals has no effect. This would also explain a very weird bug I'm having while creating a simple silhouette edge.
