
## Introducing normals makes objects "twisted"


11 replies to this topic

### #1 korvax (Members)
Posted 13 January 2013 - 03:05 PM

Hi,

I have an irritating problem in my hobby engine which I have had for a while now. I'm not sure what I'm doing wrong and am hoping that someone can explain what's going on. In short: I'm trying to introduce normals to my small project. For now I'm not trying to do anything with them, just trying to display an object correctly with normals in my vertex structure. Without the "normal" part of the code everything displays fine, but as soon as I have normal "support" in my vertex, everything gets twisted.

This is my vertex structure with normals:

struct Vertex
{
    XMFLOAT3 pos;
    XMFLOAT2 tex;
    XMFLOAT3 normal;
};

and I'm creating a D3D11_INPUT_ELEMENT_DESC with normal "support":

D3D11_INPUT_ELEMENT_DESC layout[] =
{
    { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0,                            D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT,    0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "NORMAL",   0, DXGI_FORMAT_R32G32B32_FLOAT, 0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0 },
};

and I'm creating the object with normals (and the usual indices as well, not included here). This object displays fine when I delete all the new "normal" code:

UINT fsize = 20;
UINT nPlane_Vertices = 4;
Vertex plane_vertice[] =
{
    { XMFLOAT3( -1.0f*fsize, 0.0f,  1.0f*fsize ), XMFLOAT2( 0.0f,       0.0f       ), XMFLOAT3( 0.0f, 1.0f, 0.0f ) },
    { XMFLOAT3(  1.0f*fsize, 0.0f,  1.0f*fsize ), XMFLOAT2( 1.0f*fsize, 0.0f       ), XMFLOAT3( 0.0f, 1.0f, 0.0f ) },
    { XMFLOAT3( -1.0f*fsize, 0.0f, -1.0f*fsize ), XMFLOAT2( 0.0f,       1.0f*fsize ), XMFLOAT3( 0.0f, 1.0f, 0.0f ) },
    { XMFLOAT3(  1.0f*fsize, 0.0f, -1.0f*fsize ), XMFLOAT2( 1.0f*fsize, 1.0f*fsize ), XMFLOAT3( 0.0f, 1.0f, 0.0f ) },
};
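For reference, the offsets that D3D11_APPEND_ALIGNED_ELEMENT resolves to for this layout can be checked outside the engine with a minimal standalone sketch. F3 and F2 are stand-in structs for XMFLOAT3/XMFLOAT2, assuming those are plain float aggregates (which they are in DirectXMath):

```cpp
#include <cassert>
#include <cstddef>

// Stand-ins for XMFLOAT3 / XMFLOAT2 (both are plain float aggregates).
struct F3 { float x, y, z; };
struct F2 { float x, y; };

struct Vertex
{
    F3 pos;     // POSITION  -> byte offset 0
    F2 tex;     // TEXCOORD0 -> byte offset 12 (via D3D11_APPEND_ALIGNED_ELEMENT)
    F3 normal;  // NORMAL    -> byte offset 20
};

static_assert(offsetof(Vertex, pos)    == 0,  "POSITION offset");
static_assert(offsetof(Vertex, tex)    == 12, "TEXCOORD offset");
static_assert(offsetof(Vertex, normal) == 20, "NORMAL offset");
static_assert(sizeof(Vertex)           == 32, "stride for IASetVertexBuffers");
```

If any of these fired, the appended offsets or the stride would not match what the shader and IASetVertexBuffers are being told.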

Last, the shaders. The VS:

cbuffer VSConstants : register( cb0 )
{
    matrix World;
    matrix View;
    matrix Projection;
}
//--------------------------------------------------------------------------------------
struct VS_INPUT
{
    float4 Pos    : POSITION;
    float2 Tex    : TEXCOORD0;
    float3 Normal : NORMAL;
};

struct PS_INPUT
{
    float4 Pos    : SV_POSITION;
    float2 Tex    : TEXCOORD0;
    float3 Normal : NORMAL;
};

//--------------------------------------------------------------------------------------
//--------------------------------------------------------------------------------------
PS_INPUT main( VS_INPUT input )
{
    PS_INPUT output = (PS_INPUT)0;
    output.Pos = mul( input.Pos, World );
    output.Pos = mul( output.Pos, View );
    output.Pos = mul( output.Pos, Projection );
    output.Tex = input.Tex;
    // Cast explicitly to float3x3 so the float3 normal isn't multiplied
    // against a float4x4 via implicit truncation (compiler warning X3206).
    output.Normal = mul( input.Normal, (float3x3)World );
    output.Normal = normalize( output.Normal );
    return output;
}


and the PS:

//--------------------------------------------------------------------------------------
// Constant Buffer Variables
//--------------------------------------------------------------------------------------
Texture2D txDiffuse : register( t0 );
SamplerState samLinear : register( s0 );

cbuffer cbPerObject : register( b0 )
{
    float4 diffuse;
    bool isTextured;
}

//--------------------------------------------------------------------------------------

struct PS_INPUT
{
    float4 Pos    : SV_POSITION;
    float2 Tex    : TEXCOORD0;
    float3 Normal : NORMAL;
};

//--------------------------------------------------------------------------------------
//--------------------------------------------------------------------------------------
float4 main( PS_INPUT input ) : SV_Target
{
    if (isTextured)
        return txDiffuse.Sample( samLinear, input.Tex );
    else
        return diffuse;
}


My feeling is that it might be a size mismatch for some variable between application space and GPU space in the vertex shader, but I'm not sure. Out of ideas right now. Anyone, please?

### #2 TiagoCosta (Members)

Posted 13 January 2013 - 04:00 PM

Post the function call that binds the vertex buffer. Are you using the correct stride? The vertex shader is correct, so the problem is most likely on the CPU side...

Edited by TiagoCosta, 13 January 2013 - 04:01 PM.

### #3 Jason Z (Members)

Posted 13 January 2013 - 08:54 PM

Your vertex input structure is declared with float4 for the position, but your input layout (and your intended input vertex structure too) are using a float3 for position.  This should probably cause errors or warnings when you bind the vertex shader and the input layout together - have you seen any error messages in the console output window?

Anyhow, if you are getting some texture coordinate data as a w-coordinate instead of always having 1.0 as your starting w-coordinate, then it is probable that you would get twisting like you are seeing now. You should change the shader declaration to match the input data, and then use something like the following for your transformation instructions:

output.Pos = mul( float4(input.Pos,1.0f), World );

That will properly expand the float3 input data into a float4 for your mul instruction.
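To make this failure mode concrete, here is a hypothetical standalone sketch (plain C++, not D3D code; `fetch_pos_as_float4` is an invented helper): if a float4 position were fetched raw from the packed vertex memory, with nothing padding w to 1.0, w would pick up the first texture coordinate.

```cpp
#include <cstring>

struct Float4 { float x, y, z, w; };

// Hypothetical illustration: read 16 raw bytes starting at the position
// offset of a packed { float3 pos, float2 tex, float3 normal } vertex.
inline Float4 fetch_pos_as_float4(const float* vertex)
{
    Float4 p;
    std::memcpy(&p, vertex, sizeof(p));
    return p;
}
```

Here w would end up holding tex.x rather than 1.0, and a w other than 1 skews every position once it goes through the perspective divide, which is exactly the kind of twisting described above. (As MJP notes in the next reply, the D3D11 IA actually pads the missing w with 1.0, so this particular hazard does not apply in this case.)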

Jason Zink :: DirectX MVP

Direct3D 11 engine on CodePlex: Hieroglyph 3

Games: Lunar Rift

### #4 MJP (Moderators)

Posted 14 January 2013 - 12:15 AM

> Your vertex input structure is declared with float4 for the position, but your input layout (and your intended input vertex structure too) are using a float3 for position. This should probably cause errors or warnings when you bind the vertex shader and the input layout together - have you seen any error messages in the console output window?

That's actually allowed. The IA will append a 1.0 to a float3 if the shader expects a float4, so it will do the right thing in this case. Personally I'd rather be explicit and add the 1.0 in the shader, but that's just me.

Anyway I'm with Tiago on this one, it's probably an incorrect stride passed to IASetVertexBuffers.

Edited by MJP, 14 January 2013 - 12:16 AM.

### #5 korvax (Members)

Posted 14 January 2013 - 12:54 PM

Hi all, and thanks for your help so far. I'm setting the stride to sizeof(Vertex), just like the examples on MSDN. Shouldn't sizeof calculate the size in bytes of a data structure, or am I missing something here?

void SetVertexBuffer(UINT uIDBuffer)
{
    UINT stride = sizeof(Vertex);
    UINT offset = 0;
    m_VB = m_pResourceManager->GetVertexBuffer(uIDBuffer);
    m_pContext->IASetVertexBuffers( 0, 1, &m_VB->m_pBuffer, &stride, &offset );
}
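For what it's worth, the symptom of a wrong stride is easy to reason about with a small sketch (`vertexByteOffset` is a hypothetical helper, not engine code): the input assembler fetches vertex i starting at byte i * stride, so a stale stride would make every vertex after the first start mid-struct.

```cpp
// The input assembler reads vertex i starting at byte offset i * stride.
inline unsigned vertexByteOffset(unsigned index, unsigned stride)
{
    return index * stride;
}
```

With the correct stride of 32, vertex 1 starts at byte 32, the beginning of the second Vertex. With a stale stride of 20 (the old pos + tex size, before normals were added) it would start at byte 20, in the middle of vertex 0's normal, which scrambles the geometry.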



> Your vertex input structure is declared with float4 for the position, but your input layout (and your intended input vertex structure too) are using a float3 for position. This should probably cause errors or warnings when you bind the vertex shader and the input layout together - have you seen any error messages in the console output window?

> That's actually allowed. The IA will append a 1.0 to a float3 if the shader expects a float4, so it will do the right thing in this case. Personally I'd rather be explicit and add the 1.0 in the shader, but that's just me.
>
> Anyway I'm with Tiago on this one, it's probably an incorrect stride passed to IASetVertexBuffers.

Do I understand this correctly: DX11 works with a float4 instead of a float3 by default and "fills out" the float3?

### #6 cozzie (Members)

Posted 14 January 2013 - 02:03 PM

From what I know it's best most of the time to use float4s, also because of easier multiplications etc., unless there's a specific reason not to. It also helps in distinguishing vectors and points (w=0 being a vector and w=1 being a point). Not sure if this will solve your problem though. Can you test it without the texture and texcoords?

Crealysm game & engine development: http://www.crealysm.com

Looking for a passionate, disciplined and structured producer? PM me

### #7 TiagoCosta (Members)

Posted 14 January 2013 - 03:42 PM

Check in PIX what is happening... You can see which values D3D is using in each vertex, which might help debug this error.

### #8 korvax (Members)

Posted 16 January 2013 - 07:06 AM

> Check in PIX what is happening... You can see which values D3D is using in each vertex which might help debug this error.

I will do this, but in the meantime:

What is the correct way to calculate the .ByteWidth of a buffer? I read somewhere that it should be sizeof(Vertex) * components, but I haven't been able to verify this.

bufferDesc.ByteWidth = sizeof(Vertex) * the number of elements in the vertex? That would be 8 in my case: 3 positions + 2 texcoords + 3 normals?

And is my stride correct, please?


### #9 kauna (Members)

Posted 16 January 2013 - 09:07 AM

The ByteWidth should be sizeof(Vertex) * the number of vertices. In your case sizeof(Vertex) is 32 (3 + 2 + 3 floats * 4 bytes per float).

Enable debug from D3D control panel. It should provide you some insight about the problem.

Cheers!

### #10 TiagoCosta (Members)

Posted 16 January 2013 - 09:23 AM

> > Check in PIX what is happening... You can see which values D3D is using in each vertex which might help debug this error.
>
> I will do this, but in the meantime: what is the correct way to calculate the .ByteWidth of a buffer? I read somewhere that it should be sizeof(Vertex) * components, but I haven't been able to verify this.
>
> bufferDesc.ByteWidth = sizeof(Vertex) * number of elements in the vertex? That would be 8 in my case: 3 positions + 2 texcoords + 3 normals?

ByteWidth is the size in bytes of the buffer, so if the mesh has n vertices, the ByteWidth of the vertex buffer would be sizeof(Vertex) * n.

The same goes for the index buffer: if you're storing the indices in a DWORD array, the ByteWidth of the index buffer is sizeof(DWORD) * numIndices.

> And is my stride correct, please?

Yes it is. The stride is the size in bytes of a single vertex, so sizeof(Vertex).
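In numbers, for the 4-vertex plane from the first post (a minimal standalone sketch; F3/F2 are stand-ins assuming XMFLOAT3/XMFLOAT2 are plain float aggregates, and the helper names are invented for illustration):

```cpp
#include <cstdint>

struct F3 { float x, y, z; };
struct F2 { float x, y; };
struct Vertex { F3 pos; F2 tex; F3 normal; };

// ByteWidth is the total buffer size in bytes, not an element count.
constexpr std::uint32_t vbByteWidth(std::uint32_t vertexCount)
{
    return std::uint32_t(sizeof(Vertex)) * vertexCount;
}

constexpr std::uint32_t ibByteWidth(std::uint32_t indexCount) // DWORD indices
{
    return std::uint32_t(sizeof(std::uint32_t)) * indexCount;
}
```

So the 4-vertex plane gives vbByteWidth(4) = 32 * 4 = 128 bytes, and a 6-index list (two triangles) gives ibByteWidth(6) = 24 bytes.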

Edited by TiagoCosta, 16 January 2013 - 09:23 AM.

### #11 MJP (Moderators)

Posted 16 January 2013 - 01:58 PM

> Enable debug from D3D control panel. It should provide you some insight about the problem.

You don't do it from the control panel in D3D11*; you do it by passing D3D11_CREATE_DEVICE_DEBUG when creating the device.

*Technically you can still do it by forcing the debug layer to be enabled for a given process, but it's way easier and more robust to just do it with code. Plus you probably only want it on for debug builds.

Edited by MJP, 16 January 2013 - 02:03 PM.

### #12 korvax (Members)

Posted 18 January 2013 - 04:33 AM

Hi all, just to give some feedback on this. It all works now. I didn't make any meaningful changes to the code, but it seems VS was building different versions of the libraries; I didn't see that warning before. Not sure why this happened, but I have had this problem before. Anyhow, now that all versions are aligned, it works. Thanks all for the help.
