PlaydohLegs

Please can someone help a beginner out. (Input Assembler - Vertex Shader linkage error)


I have been struggling with this problem for a while and I can't figure it out.

I'm getting the error below a number of times:

 

D3D11 ERROR: ID3D10Device::DrawIndexed: Input Assembler - Vertex Shader linkage error: Signatures between stages are incompatible. Semantic 'TEXCOORD' is defined for mismatched hardware registers between the output stage and input stage. [ EXECUTION ERROR #343: DEVICE_SHADER_LINKAGE_REGISTERINDEX]

 

//Vertex shader input

struct VS_NORMALMAP_INPUT
{
	float3 Pos     : POSITION;
	float3 Normal  : NORMAL;
	float3 Tangent : TANGENT;
	float2 UV      : TEXCOORD0;
};

//Vertex shader output

struct VS_LIGHTING_OUTPUT
{
	float4 ProjPos     : SV_POSITION;  // 2D "projected" position for vertex (required output for vertex shader)
	float3 WorldPos    : POSITION;
	float3 WorldNormal : NORMAL;
	float3 Tangent     : TANGENT;
	float2 UV          : TEXCOORD;
};

//Vertex shader

VS_BASIC_OUTPUT VS_PlainTexture(VS_BASIC_INPUT vIn)
{
	VS_BASIC_OUTPUT vOut;

	float4 modelPos = float4(vIn.Pos, 1.0f);
	float4 worldPos = mul(modelPos, WorldMatrix);
	float4 viewPos  = mul(worldPos, ViewMatrix);
	vOut.ProjPos    = mul(viewPos, ProjMatrix);

	vOut.UV = vIn.UV;

	return vOut;
}
 
//Pixel Shader
float4 ShadowMapTex(VS_LIGHTING_OUTPUT vOut) : SV_Target
{
	float3 modelNormal = normalize(vOut.WorldNormal);
	float3 modelTangent = normalize(vOut.Tangent);

	float3 modelBiTangent = cross(modelNormal, modelTangent);
	float3x3 invTangentMatrix = float3x3(modelTangent, modelBiTangent, modelNormal);

	float3 CameraDir = normalize(CameraPos - vOut.WorldPos.xyz);

	float3x3 invWorldMatrix = transpose((float3x3)WorldMatrix); // Transpose of the rotation part approximates the inverse (assumes no scaling)
	float3 cameraModelDir = normalize(mul(CameraDir, invWorldMatrix));

	float3x3 tangentMatrix = transpose(invTangentMatrix);
	float2 textureOffsetDir = mul(cameraModelDir, tangentMatrix);

	float texDepth = ParallaxDepth * (NormalMap.Sample(Trilinear, vOut.UV).a - 0.5f);

	float2 offsetTexCoord = vOut.UV + texDepth * textureOffsetDir;

	float3 textureNormal = 2.0f * NormalMap.Sample(Trilinear, offsetTexCoord).rgb - 1.0f;

	float3 worldNormal = normalize(mul(mul(textureNormal, invTangentMatrix), (float3x3)WorldMatrix));

	float3 Light1Dir = normalize(LightPos1 - vOut.WorldPos.xyz);
	float Light1Dist = length(LightPos1 - vOut.WorldPos.xyz); // Distance is a scalar, not a float3
	float3 DiffuseLight1 = LightColour1 * max(dot(worldNormal.xyz, Light1Dir), 0) / Light1Dist;
	float3 halfway = normalize(Light1Dir + CameraDir);
	float3 SpecularLight1 = DiffuseLight1 * pow(max(dot(worldNormal.xyz, halfway), 0), SpecularPower);

	float3 Light2Dir = normalize(LightPos2 - vOut.WorldPos.xyz);
	float Light2Dist = length(LightPos2 - vOut.WorldPos.xyz);
	float3 DiffuseLight2 = LightColour2 * max(dot(worldNormal.xyz, Light2Dir), 0) / Light2Dist;
	halfway = normalize(Light2Dir + CameraDir);
	float3 SpecularLight2 = DiffuseLight2 * pow(max(dot(worldNormal.xyz, halfway), 0), SpecularPower);

	// Declare the spotlight contributions up front so they are defined even when the pixel is outside the cone
	float3 DiffuseLight3 = 0.0f;
	float3 SpecularLight3 = 0.0f;

	float4 SpotlightViewPos = mul(float4(vOut.WorldPos, 1.0f), SpotlightViewMatrix);
	float4 SpotlightProjPos = mul(SpotlightViewPos, SpotlightProjMatrix);

	float3 SpotlightDir = normalize(SpotlightPos - vOut.WorldPos.xyz);

	if (dot(SpotlightFacing, -SpotlightDir) > SpotlightCosAngle) //**** This condition needs to be written as the first exercise to get spotlights working
	{
		float2 shadowUV = 0.5f * SpotlightProjPos.xy / SpotlightProjPos.w + float2(0.5f, 0.5f);
		shadowUV.y = 1.0f - shadowUV.y;

		float depthFromLight = SpotlightProjPos.z / SpotlightProjPos.w;// - DepthAdjust; //*** Adjustment so polygons don't shadow themselves

		if (depthFromLight < ShadowMap1.Sample(PointClamp, shadowUV).r)
		{
			float SpotlightDist = length(SpotlightPos - vOut.WorldPos.xyz);
			DiffuseLight3 = SpotlightColour * max(dot(worldNormal.xyz, SpotlightDir), 0) / SpotlightDist;
			float3 halfway3 = normalize(SpotlightDir + CameraDir);
			SpecularLight3 = DiffuseLight3 * pow(max(dot(worldNormal.xyz, halfway3), 0), SpecularPower);
		}
	}

	float3 DiffuseLight = AmbientColour + DiffuseLight1 + DiffuseLight2 + DiffuseLight3;
	float3 SpecularLight = SpecularLight1 + SpecularLight2 + SpecularLight3;

	float4 DiffuseMaterial = DiffuseMap.Sample(Trilinear, offsetTexCoord);
	float3 SpecularMaterial = DiffuseMaterial.a;

	float4 combinedColour;
	combinedColour.rgb = DiffuseMaterial.rgb * DiffuseLight + SpecularMaterial * SpecularLight;
	combinedColour.a = 1.0f; // No alpha processing in this shader, so just set it to 1

	return combinedColour;
}

 

Any advice to help diagnose this problem would be much appreciated.

I apologize if I provided too much information.

You need to post your vertex layout code from the CPU side; that's where the issue is likely to be.

Edit: On my mobile atm, but it looks like you use TEXCOORD0 for the input and just TEXCOORD for the output; have you tried matching these up? I don't know which exists without seeing the CPU code.

Also, do you by any chance go to UCLAN? This code looks familiar.


I removed the 0 at the end and the problem still persists. Yes, I am a student at UCLAN! Did you study there previously?

 

Here is the code on the CPU side:

bool CModel::Load( const string& fileName, ID3D10EffectTechnique* exampleTechnique, bool tangents /*= false*/ ) // The commented out bit is the default parameter (can't write it here, only in the declaration)
{
	// Release any existing geometry in this object
	ReleaseResources();

	// Use CImportXFile class (from another application) to load the given file. The import code is wrapped in the namespace 'gen'
	gen::CImportXFile mesh;
	if (mesh.ImportFile( fileName.c_str() ) != gen::kSuccess)
	{
		return false;
	}

	// Get first sub-mesh from loaded file
	gen::SSubMesh subMesh;
	if (mesh.GetSubMesh( 0, &subMesh, tangents ) != gen::kSuccess)
	{
		return false;
	}


	// Create vertex element list & layout. We need a vertex layout to say what data we have per vertex in this model (e.g. position, normal, uv, etc.)
	// In previous projects the element list was a manually typed in array as we knew what data we would provide. However, as we can load models with
	// different vertex data this time we need flexible code. The array is built up one element at a time: ask the import class if it loaded normals, 
	// if so then add a normal line to the array, then ask if it loaded UVS...etc
	unsigned int numElts = 0;
	unsigned int offset = 0;
	// Position is always required
	m_VertexElts[numElts].SemanticName = "POSITION";   // Semantic in HLSL (what is this data for)
	m_VertexElts[numElts].SemanticIndex = 0;           // Index to add to semantic (a count for this kind of data, when using multiple of the same type, e.g. TEXCOORD0, TEXCOORD1)
	m_VertexElts[numElts].Format = DXGI_FORMAT_R32G32B32_FLOAT; // Type of data - this one will be a float3 in the shader. Most data communicated as though it were colours
	m_VertexElts[numElts].AlignedByteOffset = offset;  // Offset of element from start of vertex data (e.g. if we have position (float3), uv (float2) then normal, the normal's offset is 5 floats = 5*4 = 20)
	m_VertexElts[numElts].InputSlot = 0;               // For when using multiple vertex buffers (e.g. instancing - an advanced topic)
	m_VertexElts[numElts].InputSlotClass = D3D10_INPUT_PER_VERTEX_DATA; // Use this value for most cases (only changed for instancing)
	m_VertexElts[numElts].InstanceDataStepRate = 0;                     // --"--
	offset += 12;
	++numElts;
	// Repeat for each kind of vertex data
	if (subMesh.hasNormals)
	{
		m_VertexElts[numElts].SemanticName = "NORMAL";
		m_VertexElts[numElts].SemanticIndex = 0;
		m_VertexElts[numElts].Format = DXGI_FORMAT_R32G32B32_FLOAT;
		m_VertexElts[numElts].AlignedByteOffset = offset;
		m_VertexElts[numElts].InputSlot = 0;
		m_VertexElts[numElts].InputSlotClass = D3D10_INPUT_PER_VERTEX_DATA;
		m_VertexElts[numElts].InstanceDataStepRate = 0;
		offset += 12;
		++numElts;
	}
	if (subMesh.hasTangents)
	{
		m_VertexElts[numElts].SemanticName = "TANGENT";
		m_VertexElts[numElts].SemanticIndex = 0;
		m_VertexElts[numElts].Format = DXGI_FORMAT_R32G32B32_FLOAT;
		m_VertexElts[numElts].AlignedByteOffset = offset;
		m_VertexElts[numElts].InputSlot = 0;
		m_VertexElts[numElts].InputSlotClass = D3D10_INPUT_PER_VERTEX_DATA;
		m_VertexElts[numElts].InstanceDataStepRate = 0;
		offset += 12;
		++numElts;
	}
	if (subMesh.hasTextureCoords)
	{
		m_VertexElts[numElts].SemanticName = "TEXCOORD";
		m_VertexElts[numElts].SemanticIndex = 0;
		m_VertexElts[numElts].Format = DXGI_FORMAT_R32G32_FLOAT;
		m_VertexElts[numElts].AlignedByteOffset = offset;
		m_VertexElts[numElts].InputSlot = 0;
		m_VertexElts[numElts].InputSlotClass = D3D10_INPUT_PER_VERTEX_DATA;
		m_VertexElts[numElts].InstanceDataStepRate = 0;
		offset += 8;
		++numElts;
	}
	if (subMesh.hasVertexColours)
	{
		m_VertexElts[numElts].SemanticName = "COLOR";
		m_VertexElts[numElts].SemanticIndex = 0;
		m_VertexElts[numElts].Format = DXGI_FORMAT_R8G8B8A8_UNORM; // A RGBA colour with 1 byte (0-255) per component
		m_VertexElts[numElts].AlignedByteOffset = offset;
		m_VertexElts[numElts].InputSlot = 0;
		m_VertexElts[numElts].InputSlotClass = D3D10_INPUT_PER_VERTEX_DATA;
		m_VertexElts[numElts].InstanceDataStepRate = 0;
		offset += 4;
		++numElts;
	}
	m_VertexSize = offset;

	// Given the vertex element list, pass it to DirectX to create a vertex layout. We also need to pass an example of a technique that will
	// render this model. We will only be able to render this model with techniques that have the same vertex input as the example we use here
	D3D10_PASS_DESC PassDesc;
	exampleTechnique->GetPassByIndex( 0 )->GetDesc( &PassDesc );
	Device->CreateInputLayout( m_VertexElts, numElts, PassDesc.pIAInputSignature, PassDesc.IAInputSignatureSize, &m_VertexLayout );


	// Create the vertex buffer and fill it with the loaded vertex data
	m_NumVertices = subMesh.numVertices;
	D3D10_BUFFER_DESC bufferDesc;
	bufferDesc.BindFlags = D3D10_BIND_VERTEX_BUFFER;
	bufferDesc.Usage = D3D10_USAGE_DEFAULT; // Not a dynamic buffer
	bufferDesc.ByteWidth = m_NumVertices * m_VertexSize; // Buffer size
	bufferDesc.CPUAccessFlags = 0;   // Indicates that CPU won't access this buffer at all after creation
	bufferDesc.MiscFlags = 0;
	D3D10_SUBRESOURCE_DATA initData; // Initial data
	initData.pSysMem = subMesh.vertices;   
	if (FAILED( Device->CreateBuffer( &bufferDesc, &initData, &m_VertexBuffer )))
	{
		return false;
	}


	// Create the index buffer - assuming 2-byte (WORD) index data
	m_NumIndices = static_cast<unsigned int>(subMesh.numFaces) * 3;
	bufferDesc.BindFlags = D3D10_BIND_INDEX_BUFFER;
	bufferDesc.Usage = D3D10_USAGE_DEFAULT;
	bufferDesc.ByteWidth = m_NumIndices * sizeof(WORD);
	bufferDesc.CPUAccessFlags = 0;
	bufferDesc.MiscFlags = 0;
	initData.pSysMem = subMesh.faces;   
	if (FAILED( Device->CreateBuffer( &bufferDesc, &initData, &m_IndexBuffer )))
	{
		return false;
	}

	m_HasGeometry = true;
	return true;
}
It is part of a model class which loads geometry from .x files.
Edited by PlaydohLegs


If your subMesh returns false in at least one of the conditions, the input layout will not match what's defined in your HLSL code.

Edit: The easiest thing would be to stick with one input layout and make all vertex buffers match it.

Edited by BloodyEpi


I ran the debugger through the load method for all of the models and the input layout matches the vertex shader input structure for all the models (even in the same order).

 

Could it be the data I'm passing to the vertex shader?

Another thing: subMesh returns false every time for hasVertexColours. Although I never pass any colour to the shader, could this be the culprit?

 

I'll make one input layout and I'll let you know what happens.


The vertex shader is outputting a VS_BASIC_OUTPUT while the pixel shader is expecting a VS_LIGHTING_OUTPUT as input.  The VS_BASIC_OUTPUT structure is not shown, but if it differs from VS_LIGHTING_OUTPUT that could be the problem.
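As a sketch of the safest pattern (struct and function names here are hypothetical, not taken from the posted code): D3D10/11 links stages by semantic name, index, and register, so the simplest way to guarantee a match is to use one shared struct as both the vertex shader's return type and the pixel shader's parameter:

```hlsl
// Hypothetical example: a single shared struct means the VS output and
// PS input signatures cannot drift apart.
struct VS_OUTPUT
{
	float4 ProjPos  : SV_POSITION;
	float3 WorldPos : POSITION;
	float2 UV       : TEXCOORD0;
};

VS_OUTPUT VS_Example(VS_BASIC_INPUT vIn)      // vertex shader writes this layout
{
	VS_OUTPUT vOut;
	// ... fill in ProjPos, WorldPos and UV ...
	return vOut;
}

float4 PS_Example(VS_OUTPUT pIn) : SV_Target  // pixel shader reads the same layout
{
	// ... lighting code ...
	return float4(1.0f, 1.0f, 1.0f, 1.0f);
}
```

If the two stages use different structs (as in the posted VS_BASIC_OUTPUT / VS_LIGHTING_OUTPUT pair), TEXCOORD lands in different registers on each side, which is exactly what the DEVICE_SHADER_LINKAGE_REGISTERINDEX error complains about.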


You need to post all pertinent parts of the shader. As lunkhound mentions, it may be that your structure definitions for vertex output and pixel shader input don't match up. However, there's no way to tell from what you posted. As posted, your shader code isn't compilable - VS_BASIC_xxx structures aren't defined, variables World-, View-, and ProjMatrix aren't to be found, etc.


Input Structures:  

struct VS_BASIC_INPUT
{
	float3 Pos    : POSITION;
	float3 Normal : NORMAL;
	float2 UV     : TEXCOORD0;
};

struct VS_NORMALMAP_INPUT
{
	float3 Pos     : POSITION;
	float3 Normal  : NORMAL;
	float3 Tangent : TANGENT;
	float2 UV      : TEXCOORD0;
};

Output Structures:

struct VS_BASIC_OUTPUT
{
    float4 ProjPos : SV_POSITION;
    float2 UV      : TEXCOORD;
};

struct VS_LIGHTING_OUTPUT
{
	float4 ProjPos       : SV_POSITION;  // 2D "projected" position for vertex (required output for vertex shader)
	float3 WorldPos		 : POSITION;
	float3 WorldNormal	 : NORMAL;
	float3 Tangent		 : TANGENT;
	float2 UV            : TEXCOORD0;
};

struct VS_NORMALMAP_OUTPUT
{
	float4 ProjPos      : SV_POSITION;
	float3 WorldPos     : POSITION;
	float3 ModelNormal  : NORMAL;
	float3 ModelTangent : TANGENT;
	float2 UV           : TEXCOORD0;
}; 

Vertex Shaders:

VS_BASIC_OUTPUT BasicTransform( VS_BASIC_INPUT vIn )
{
	VS_BASIC_OUTPUT vOut;
	
	// Use world matrix passed from C++ to transform the input model vertex position into world space
	float4 modelPos = float4(vIn.Pos, 1.0f); // Promote to 1x4 so we can multiply by 4x4 matrix, put 1.0 in 4th element for a point (0.0 for a vector)
	float4 worldPos = mul( modelPos, WorldMatrix );
	float4 viewPos  = mul( worldPos, ViewMatrix );
	vOut.ProjPos    = mul( viewPos,  ProjMatrix );
	
	// Pass texture coordinates (UVs) on to the pixel shader
	vOut.UV = vIn.UV;

	return vOut;
}

//Basic Requirement No.1
VS_BASIC_OUTPUT VS_PlainTexture(VS_BASIC_INPUT vIn)
{
	VS_BASIC_OUTPUT vOut;

	// Use world matrix passed from C++ to transform the input model vertex position into world space
	float4 modelPos = float4(vIn.Pos, 1.0f); // Promote to 1x4 so we can multiply by 4x4 matrix, put 1.0 in 4th element for a point (0.0 for a vector)
	float4 worldPos = mul(modelPos, WorldMatrix);
	float4 viewPos = mul(worldPos, ViewMatrix);
	vOut.ProjPos = mul(viewPos, ProjMatrix);

	// Pass texture coordinates (UVs) on to the pixel shader
	vOut.UV = vIn.UV;

	return vOut;
}

//Basic Requirement No.2
VS_BASIC_OUTPUT VS_WiggleTexture(VS_BASIC_INPUT vIn)
{
	VS_BASIC_OUTPUT vOut;

	// Use world matrix passed from C++ to transform the input model vertex position into world space
	float4 modelPos = float4(vIn.Pos, 1.0f); // Promote to 1x4 so we can multiply by 4x4 matrix, put 1.0 in 4th element for a point (0.0 for a vector)
	float4 worldPos = mul(modelPos, WorldMatrix);
	float4 viewPos = mul(worldPos, ViewMatrix);
	vOut.ProjPos = mul(viewPos, ProjMatrix);

	// Pass texture coordinates (UVs) on to the pixel shader
	vOut.UV = vIn.UV;

	float SinY = sin(vOut.UV.y * radians(360.0f) + Wiggle);
	vOut.UV.x += 0.1f * SinY;

	float SinX = sin(vOut.UV.x * radians(360.0f) + Wiggle);
	vOut.UV.y += 0.1f * SinX;

	return vOut;
}

//Basic Requirement No.3
VS_LIGHTING_OUTPUT VS_DiffuseAndSpecular(VS_BASIC_INPUT vIn)
{
	VS_LIGHTING_OUTPUT vOut;

	// Transform vertices from model into world space and then into 2D
	float4 modelPos = float4(vIn.Pos, 1.0f);
	float4 worldPos = mul(modelPos, WorldMatrix);

	// Use camera matrices to further transform the vertex from world space into view space (camera's point of view) and finally into 2D "projection" space for rendering
	float4 viewPos = mul(worldPos, ViewMatrix);
	vOut.ProjPos = mul(viewPos, ProjMatrix);

	// Transform the vertex normal from model space into world space (almost same as first lines of code above)
	float4 modelNormal = float4(vIn.Normal, 0.0f); 
	float4 worldNormal = mul(modelNormal, WorldMatrix);

	worldNormal = normalize(worldNormal);

	// Pass data unused by vertex shader to remainder of pipeline
	vOut.WorldPos = (float3)worldPos;
	vOut.WorldNormal = (float3)worldNormal;

	// Pass texture coordinates (UVs) on to the pixel shader, the vertex shader doesn't need them
	vOut.UV = vIn.UV;

	return vOut;
}

//Advanced requirement No.1
VS_NORMALMAP_OUTPUT NormalMapTransform(VS_NORMALMAP_INPUT vIn)
{
	VS_NORMALMAP_OUTPUT vOut;

	// Use world matrix passed from C++ to transform the input model vertex position into world space
	float4 modelPos = float4(vIn.Pos, 1.0f); // Promote to 1x4 so we can multiply by 4x4 matrix, put 1.0 in 4th element for a point (0.0 for a vector)
	float4 worldPos = mul(modelPos, WorldMatrix);
	vOut.WorldPos = worldPos.xyz;

	// Use camera matrices to further transform the vertex from world space into view space (camera's point of view) and finally into 2D "projection" space for rendering
	float4 viewPos = mul(worldPos, ViewMatrix);
	vOut.ProjPos = mul(viewPos, ProjMatrix);

	// Just send the model's normal and tangent untransformed (in model space). The pixel shader will do the matrix work on normals
	vOut.ModelNormal = vIn.Normal;
	vOut.ModelTangent = vIn.Tangent;

	// Pass texture coordinates (UVs) on to the pixel shader, the vertex shader doesn't need them
	vOut.UV = vIn.UV;

	return vOut;
}

//Advance requirement No.2
VS_LIGHTING_OUTPUT LightingTransformTex(VS_NORMALMAP_INPUT vIn)
{
	VS_LIGHTING_OUTPUT vOut;

	// Use world matrix passed from C++ to transform the input model vertex position into world space
	float4 modelPos = float4(vIn.Pos, 1.0f); // Promote to 1x4 so we can multiply by 4x4 matrix, put 1.0 in 4th element for a point (0.0 for a vector)
	float4 worldPos = mul(modelPos, WorldMatrix);
	vOut.WorldPos = worldPos.xyz;

	// Use camera matrices to further transform the vertex from world space into view space (camera's point of view) and finally into 2D "projection" space for rendering
	float4 viewPos = mul(worldPos, ViewMatrix);
	vOut.ProjPos = mul(viewPos, ProjMatrix);

	// Transform the vertex normal from model space into world space (almost same as first lines of code above)
	float4 modelNormal = float4(vIn.Normal, 0.0f); // Set 4th element to 0.0 this time as normals are vectors
	vOut.Tangent = vIn.Tangent;
	vOut.WorldNormal = mul(modelNormal, WorldMatrix).xyz;

	// Pass texture coordinates (UVs) on to the pixel shader, the vertex shader doesn't need them
	vOut.UV = vIn.UV;

	return vOut;
}

I think I have posted everything you wanted but if I have missed something let me know.

Edited by PlaydohLegs


Input layout creation would fail if it didn't match your vertex shader input signature. But the error happens at a draw call, so maybe you just forgot to set the correct layout or technique. Is there other drawing going on besides your mesh?

Using your debugger should also reveal WHEN that failing draw call happens exactly (which draw call is it?).


unbird's post <-- this

 

Also, FYI, you can make your life easier, and avoid potential errors, by using D3D10_APPEND_ALIGNED_ELEMENT for the AlignedByteOffset in your input layout code. I.e., you can avoid updating the size of the offset for each element. If you ever change (e.g.) R32G32B32 to R32G32B32A32 in some element and forget to (also) change the size of the offset, you'll end up with a bug that may be very difficult to find.

 

From the docs: "Use D3D10_APPEND_ALIGNED_ELEMENT for convenience to define the current element directly after the previous one, including any packing if necessary."
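To illustrate what the flag does, here is a minimal standalone sketch. The constant value and the "place each element right after the previous one" resolution mirror the documented behaviour, but `Element` and `ResolveOffsets` are made-up helpers for illustration, not D3D types:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Stand-in for D3D10_APPEND_ALIGNED_ELEMENT (its real value is 0xffffffff),
// which means "place this element directly after the previous one".
constexpr uint32_t kAppendAligned = 0xffffffff;

struct Element
{
    uint32_t alignedByteOffset; // either an explicit offset or kAppendAligned
    uint32_t sizeInBytes;       // size of this element's format, e.g. 12 for R32G32B32_FLOAT
};

// Resolve offsets the way the runtime would: an append-aligned element
// starts exactly where the previous element ended.
std::vector<uint32_t> ResolveOffsets(const std::vector<Element>& elts)
{
    std::vector<uint32_t> offsets;
    uint32_t next = 0;
    for (const Element& e : elts)
    {
        const uint32_t off =
            (e.alignedByteOffset == kAppendAligned) ? next : e.alignedByteOffset;
        offsets.push_back(off);
        next = off + e.sizeInBytes; // the following element starts here
    }
    return offsets;
}
```

With the layout from the posted Load() code (float3 position, normal, tangent, then float2 UV), this resolves to offsets 0, 12, 24, 36 and a 44-byte vertex, matching the hand-maintained `offset +=` bookkeeping — except the runtime now does the bookkeeping for you, so changing one element's format can no longer silently corrupt the offsets that follow it.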

Edited by Buckeye

