korvax

Introducing normals makes objects "twisted"

11 posts in this topic

Hi,

I have an irritating problem in my hobby engine which I've had for a while now. I'm not sure what I'm doing wrong and I'm hoping someone can explain what's going on. In short: I'm trying to introduce normals to my small project. Currently I'm not trying to do anything with them, just to display an object correctly with normals in my vertex. Without the "normal" part of the code everything displays fine, but as soon as I have normal "support" in my vertex everything gets twisted.

 

This is what I'm doing with Normals.

 

struct Vertex
{
    XMFLOAT3 pos;
    XMFLOAT2 tex;
    XMFLOAT3 normal;
};

and creating a D3D11_INPUT_ELEMENT_DESC with normal "support":

 

D3D11_INPUT_ELEMENT_DESC layout[] =
  {
      { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
      { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT, 0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0 },
      { "NORMAL", 0,   DXGI_FORMAT_R32G32B32_FLOAT, 0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0 },
  };

and I'm creating the object with normals (and the usual indices as well, not included here). This object displays fine when I delete all the new "normal" code:

UINT fsize =20;
UINT nPlane_Vertices=4;
Vertex plane_vertice[] =
{
    { XMFLOAT3(-1.0f*fsize, 0.0f,  1.0f*fsize), XMFLOAT2(0.0f,       0.0f      ), XMFLOAT3(0.0f, 1.0f, 0.0f) },
    { XMFLOAT3( 1.0f*fsize, 0.0f,  1.0f*fsize), XMFLOAT2(1.0f*fsize, 0.0f      ), XMFLOAT3(0.0f, 1.0f, 0.0f) },
    { XMFLOAT3(-1.0f*fsize, 0.0f, -1.0f*fsize), XMFLOAT2(0.0f,       1.0f*fsize), XMFLOAT3(0.0f, 1.0f, 0.0f) },
    { XMFLOAT3( 1.0f*fsize, 0.0f, -1.0f*fsize), XMFLOAT2(1.0f*fsize, 1.0f*fsize), XMFLOAT3(0.0f, 1.0f, 0.0f) },
};

Last, the shaders. VS:

cbuffer VSConstants : register( b0 )
{
    matrix World;
    matrix View;
    matrix Projection;
}
//--------------------------------------------------------------------------------------
struct VS_INPUT
{
    float4 Pos	: POSITION;
    float2 Tex	: TEXCOORD0;
    float3 Normal	: NORMAL;
};

struct PS_INPUT
{
    float4 Pos    : SV_POSITION;
    float2 Tex    : TEXCOORD0;
    float3 Normal : NORMAL;
};


//--------------------------------------------------------------------------------------
// Vertex Shader
//--------------------------------------------------------------------------------------
PS_INPUT main( VS_INPUT input )
{
    PS_INPUT output = (PS_INPUT)0;
    output.Pos = mul( input.Pos, World );
    output.Pos = mul( output.Pos, View );
    output.Pos = mul( output.Pos, Projection );
    output.Tex = input.Tex;
    output.Normal = mul( input.Normal, World );
    output.Normal = normalize( output.Normal );
    return output;
}

And the PS:

//--------------------------------------------------------------------------------------
// Constant Buffer Variables
//--------------------------------------------------------------------------------------
Texture2D txDiffuse : register( t0 );
SamplerState samLinear : register( s0 );

cbuffer cbPerObject : register( b0 )
{
    float4 diffuse;
    bool isTextured;
}

//--------------------------------------------------------------------------------------

struct PS_INPUT
{
    float4 Pos    : SV_POSITION;
    float2 Tex    : TEXCOORD0;
    float3 Normal : NORMAL;
};


//--------------------------------------------------------------------------------------
// Pixel Shader
//--------------------------------------------------------------------------------------
float4 main( PS_INPUT input ) : SV_Target
{
    if (isTextured == true)
        return txDiffuse.Sample( samLinear, input.Tex );
    else
        return diffuse;
}

 

My feeling is that it might be a variable size mismatch between application space and GPU space in the shader's vertex input, but I'm not sure. Out of ideas right now. Can anyone help, please?

Post the function call that binds the vertex buffer. Are you using the correct stride? The vertex shader is correct, so the problem is most likely on the CPU side... Edited by TiagoCosta
Your vertex input structure is declared with float4 for the position, but your input layout (and your intended input vertex structure too) are using a float3 for position.  This should probably cause errors or warnings when you bind the vertex shader and the input layout together - have you seen any error messages in the console output window?

 

Anyhow, if you are getting some texture coordinate data as a w-coordinate instead of always having 1.0 as your starting w-coordinate, then it is probable that you would get some twisting like you are seeing now. You should change the shader declaration to match the input data, and then use something like the following for your transformation instructions:

output.Pos = mul( float4(input.Pos,1.0f), World );

That will properly expand the float3 input data into a float4 for your mul instruction.

Your vertex input structure is declared with float4 for the position, but your input layout (and your intended input vertex structure too) are using a float3 for position.  This should probably cause errors or warnings when you bind the vertex shader and the input layout together - have you seen any error messages in the console output window?

 

That's actually allowed. The IA will append a 1.0 to a float3 if the shader expects a float4, so it will do the right thing in this case. Personally I'd rather be explicit and add the 1.0 in the shader, but that's just me.

Anyway I'm with Tiago on this one, it's probably an incorrect stride passed to IASetVertexBuffers.

Edited by MJP

Hi all, and thanks for your help so far. I'm setting the stride to sizeof(Vertex), just like the examples on MSDN. Shouldn't sizeof give the size in bytes of a data structure, or am I missing something here?

void SetVertexBuffer( UINT uIDBuffer )
{
    UINT stride = sizeof(Vertex);
    UINT offset = 0;
    m_VB = m_pResourceManager->GetVertexBuffer( uIDBuffer );
    m_pContext->IASetVertexBuffers( 0, 1, &m_VB->m_pBuffer, &stride, &offset );
}

 

Your vertex input structure is declared with float4 for the position, but your input layout (and your intended input vertex structure too) are using a float3 for position.  This should probably cause errors or warnings when you bind the vertex shader and the input layout together - have you seen any error messages in the console output window?

 

That's actually allowed. The IA will append a 1.0 to a float3 if the shader expects a float4, so it will do the right thing in this case. Personally I'd rather be explicit and add the 1.0 in the shader, but that's just me.

Anyway I'm with Tiago on this one, it's probably an incorrect stride passed to IASetVertexBuffers.

 

Do I understand this correctly: DX11 uses a float4 instead of a float3 by default and "fills out" the float3?

From what I know it's best most of the time to use float4s, also because of easier multiplications etc., unless there's a specific reason not to. It also helps in distinguishing vectors and points (w=0 being a vector and w=1 being a point). Not sure if this will solve your problem though. Can you test it out without using the texture and texcoords?
Check in PIX what is happening... You can see which values D3D is using in each vertex which might help debug this error.

I will do this, but in the meantime:

What is the correct way to calculate the .ByteWidth of a buffer? I read somewhere that it should be sizeof(Vertex) * components, but I haven't been able to verify this.

bufferDesc.ByteWidth = sizeof(Vertex) * number of elements in the vertex? That would be 8 in my case: 3 positions + 2 texcoords + 3 normals?

And is my stride correct, please?

The ByteWidth should be sizeof(Vertex) * number of vertices. In your case sizeof(Vertex) is 32 (3+2+3 floats * 4 bytes per float).

 

Enable debug from D3D control panel. It should provide you some insight about the problem.

 

Cheers!



Check in PIX what is happening... You can see which values D3D is using in each vertex which might help debug this error.

I will do this, but in the meantime:
What is the correct way to calculate the .ByteWidth of a buffer? I read somewhere that it should be sizeof(Vertex) * components, but I haven't been able to verify this.
bufferDesc.ByteWidth = sizeof(Vertex) * number of elements in the vertex? That would be 8 in my case: 3 positions + 2 texcoords + 3 normals?
ByteWidth is the size in bytes of the whole buffer, so if the mesh has n vertices, the ByteWidth of the vertex buffer would be sizeof(Vertex)*n.

The same goes for the index buffer, if you're storing the indices in a DWORD array, the ByteWidth of the index buffer is sizeof(DWORD)*numIndices.

And is my stride correct, please?
Yes it is. The stride is the size in bytes of a single vertex, so sizeof(Vertex). Edited by TiagoCosta

Enable debug from D3D control panel. It should provide you some insight about the problem.

 

You don't do it from the control panel in D3D11*, you do it by passing D3D11_CREATE_DEVICE_DEBUG when creating the device.

 

*Technically you can still do it by forcing the debug layer to be enabled for a given process, but it's way easier and more robust to just do it with code. Plus you probably only want it on for debug builds.

Edited by MJP

Hi all, just to give some feedback on this. It all works now. I didn't make any meaningful changes to the code, but it seems VS was building different versions of the libraries; I didn't see that warning before. Not sure why this happened, but I've had this problem before. Anyhow, now that all the versions are aligned it works. Thanks all for the help.

