3DModelerMan

DX11 Vertex elements besides position don't work


I'm trying to add a normal element to my vertex structure, but it's always (0.0, 0.0, 0.0) by the time it reaches the shader. I tried using it in a lighting shader and it wasn't working, so I displayed the normals as colors instead and got solid black. I've also tried adding texcoords before, and they were 0.0 as well, because everything only sampled the corner of the image. I use the code below to describe my vertex format. It isn't raw D3D11 because it goes through a wrapper around the D3D11 input layouts, but I checked in the debugger at the point where the D3D11 input layout is created and it's EXACTLY the same as the input layout description from Tutorial 4, and my vertex structure is the exact same size as well.

std::vector<graphics::SInputElementDesc> elements;
elements.push_back(graphics::SInputElementDesc("POSITION", graphics::EIF_VEC_3));
elements.push_back(graphics::SInputElementDesc("COLOR", graphics::EIF_VEC_4));
graphics::IInputLayout* layout = gfxMod->getDriver()->createInputLayout(elements, "test_input_layout");
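
For reference, this is roughly the raw D3D11 description I'd expect the wrapper to generate for those two elements, matching Tutorial 4 (a sketch of the expected output, not code from my engine):

D3D11_INPUT_ELEMENT_DESC expected[] =
{
	{ "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT,    0,  0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
	{ "COLOR",    0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, 12, D3D11_INPUT_PER_VERTEX_DATA, 0 },
};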

This is my vertex structure:

class Vert
{
public:

	Vert(glm::vec3 pos, glm::vec4 norm)
		:m_pos(pos), m_norm(norm)
	{}

	glm::vec3 m_pos;
	glm::vec4 m_norm;
};
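
As a quick sanity check on the size (assuming glm::vec3 and glm::vec4 are the usual tightly packed float types, so 12 + 16 bytes with no padding):

static_assert(sizeof(Vert) == 7 * sizeof(float), "Vert should be a 12-byte position followed by a 16-byte vec4");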

This is my mesh data:

std::vector<Vert> vertices;
vertices.push_back(Vert(glm::vec3( -1.0f, 1.0f, -1.0f ), glm::vec4( 0.0f, 0.0f, 1.0f, 1.0f )));
vertices.push_back(Vert(glm::vec3( 1.0f, 1.0f, -1.0f ), glm::vec4( 0.0f, 1.0f, 0.0f, 1.0f )));
vertices.push_back(Vert(glm::vec3( 1.0f, 1.0f, 1.0f ), glm::vec4( 0.0f, 1.0f, 1.0f, 1.0f )));
vertices.push_back(Vert(glm::vec3( -1.0f, 1.0f, 1.0f ), glm::vec4( 1.0f, 0.0f, 0.0f, 1.0f )));
vertices.push_back(Vert(glm::vec3( -1.0f, -1.0f, -1.0f ), glm::vec4( 1.0f, 0.0f, 1.0f, 1.0f )));
vertices.push_back(Vert(glm::vec3( 1.0f, -1.0f, -1.0f ), glm::vec4( 1.0f, 1.0f, 0.0f, 1.0f )));
vertices.push_back(Vert(glm::vec3( 1.0f, -1.0f, 1.0f ), glm::vec4( 1.0f, 1.0f, 1.0f, 1.0f )));
vertices.push_back(Vert(glm::vec3( -1.0f, -1.0f, 1.0f ), glm::vec4( 0.0f, 0.0f, 0.0f, 1.0f )));

std::vector<unsigned int> indices =
{
	3, 1, 0,
	2, 1, 3,
	0, 5, 4,
	1, 5, 0,
	3, 4, 7,
	0, 4, 3,
	1, 6, 5,
	2, 6, 1,
	2, 7, 6,
	3, 7, 2,
	6, 4, 5,
	7, 4, 6
};

mesh->init(&vertices[0], vertices.size(), layout, &indices[0], indices.size());

The mesh->init method looks like this:

bool CDX11MeshBuffer::init(void* vertices, unsigned int numVertices, IInputLayout* inputLayout, unsigned int* indices, unsigned int numIndices)
{
	//Create vertex buffer
	{
		D3D11_BUFFER_DESC bd;
		ZeroMemory( &bd, sizeof(bd) );
		bd.Usage = D3D11_USAGE_DEFAULT;
		bd.ByteWidth = inputLayout->getInputLayoutSize() * numVertices;
		bd.BindFlags = D3D11_BIND_VERTEX_BUFFER;
		bd.CPUAccessFlags = 0;
		bd.MiscFlags = 0;
		D3D11_SUBRESOURCE_DATA InitData;
		ZeroMemory( &InitData, sizeof(InitData) );
		InitData.pSysMem = vertices;
		if( FAILED( m_device->CreateBuffer( &bd, &InitData, &m_vertexBuffer ) ) )
			return false;
	}

	//Create index buffer
	{
		D3D11_BUFFER_DESC bd;
		ZeroMemory( &bd, sizeof(bd) );
		bd.Usage = D3D11_USAGE_DEFAULT;
		bd.ByteWidth = sizeof( unsigned int ) * numIndices;
		bd.BindFlags = D3D11_BIND_INDEX_BUFFER;
		bd.CPUAccessFlags = 0;
		bd.MiscFlags = 0;

		D3D11_SUBRESOURCE_DATA InitData;
		ZeroMemory( &InitData, sizeof(InitData) );
		InitData.pSysMem = indices;

		// Create the buffer with the device.
		if( FAILED( m_device->CreateBuffer( &bd, &InitData, &m_indexBuffer ) ) )
			return false;
	}

	//Store input layout
	m_inputLayout = inputLayout;

	//Store vertex and index count
	m_numVertices = numVertices;
	m_numIndices = numIndices;

	return true;
}
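
For context, this is roughly how the buffers end up being used at draw time. It's a sketch of the driver's draw path rather than the exact code, and m_context and getD3DLayout() are stand-in names:

UINT stride = m_inputLayout->getInputLayoutSize(); // full vertex size (28 bytes here)
UINT offset = 0;
m_context->IASetInputLayout( m_inputLayout->getD3DLayout() ); // hypothetical accessor for the ID3D11InputLayout
m_context->IASetVertexBuffers( 0, 1, &m_vertexBuffer, &stride, &offset );
m_context->IASetIndexBuffer( m_indexBuffer, DXGI_FORMAT_R32_UINT, 0 );
m_context->IASetPrimitiveTopology( D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST );
m_context->DrawIndexed( m_numIndices, 0, 0 );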

Now the weird thing is that position works perfectly. My meshes render on screen just fine; I simply can't get any data other than position into the shader. This code should produce the EXACT same results as Tutorial 4, but the colors never make it to the shader. The shader code is below if you want to see it:

//===================================================================================
//Default basic VS
//===================================================================================
const std::string CShaderManager::g_default_basic_vs = 
"cbuffer ConstantBuffer : register( b0 )\n"
"{\n"
"	matrix World;\n"
"	matrix ViewProj;\n"
"}\n\n"

"struct VS_OUTPUT\n"
"{\n"
"	float4 Pos : SV_POSITION;\n"
"	float4 Color : COLOR0;\n"
"};\n\n"

"VS_OUTPUT VS( float4 Pos : POSITION, float4 Color : COLOR )\n"
"{\n"
"	VS_OUTPUT output = (VS_OUTPUT)0;\n"
"	output.Pos = mul( Pos, World );\n"
"	output.Pos = mul( output.Pos, ViewProj );\n"
"	output.Color = Color;\n"
"	return output;\n"
"}";

//===================================================================================
//Default textured PS
//===================================================================================
const std::string CShaderManager::g_default_textured_ps = 
"struct VS_OUTPUT\n"
"{\n"
"	float4 Pos : SV_POSITION;\n"
"	float4 Color : COLOR0;\n"
"};\n\n"

"float4 PS( VS_OUTPUT input ) : SV_Target\n"
"{\n"
"	return input.Color;\n"
"}";

The shaders compile without any errors, and this is the same shader the mesh has been rendering just fine with.

 


The createInputLayout method just "news up" a CDX11InputLayout object and calls its init function.

bool IInputLayout::init(std::vector<SInputElementDesc>& elements)
{
	unsigned int lastOffset = 0;

	for ( unsigned int i=0; i<elements.size(); ++i )
	{
		elements[i].m_offset = lastOffset;

		size_t elementSz = getElementSize(elements[i].m_format);

		lastOffset += elementSz;
		elements[i].m_size = elementSz;
	}
	
	m_elements = elements;
	return true;
}
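
// For the POSITION + COLOR layout above (assuming getElementSize returns
// 12 for EIF_VEC_3 and 16 for EIF_VEC_4) this works out to:
//   POSITION: offset  0, size 12
//   COLOR:    offset 12, size 16
// so the full vertex stride is 28 bytes, matching sizeof(Vert).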
bool CDX11InputLayout::init(std::vector<SInputElementDesc>& elements)
{
	IInputLayout::init(elements);

	std::vector<D3D11_INPUT_ELEMENT_DESC> d3dElements;
	for ( unsigned int i=0; i<elements.size(); ++i )
	{
		D3D11_INPUT_ELEMENT_DESC element;
		ZeroMemory(&element, sizeof(element));

		element.SemanticName = elements[i].m_name.c_str();
		element.SemanticIndex = elements[i].m_index;
		element.InputSlotClass = D3D11_INPUT_PER_VERTEX_DATA;
		element.Format = icicleFormatToD3DFormat(elements[i].m_format);
		element.AlignedByteOffset = elements[i].m_offset;
		m_inputLayoutSz += elements[i].m_size;

		d3dElements.push_back(element);
	}

	HRESULT hr = m_device->CreateInputLayout(&d3dElements[0], d3dElements.size(), m_driver->m_defaultVSblob->GetBufferPointer(), m_driver->m_defaultVSblob->GetBufferSize(), &m_layout );
	if( FAILED(hr) )
		return false;

	return true;
}

I looked in NVIDIA Nsight and it says my vertex shader only has an input tied to the POSITION semantic and an output tied to the SV_POSITION semantic, even though I clearly have an input and an output tied to the COLOR semantic, and I've tried other semantic names before. Strangely, though, Nsight does show a COLOR input semantic on my pixel shader.

 

EDIT:

When I look at the shader source in Nsight, all the code that uses the COLOR semantic has been stripped out of the vertex shader. The COLOR code stays in the pixel shader.

 

EDIT:

Also of note is the conversion function for input formats.

DXGI_FORMAT icicleFormatToD3DFormat(E_INPUT_FORMAT format)
{
	if ( format == EIF_VEC_4 )
	{
		return DXGI_FORMAT_R32G32B32A32_FLOAT;
	}
	else if ( format == EIF_VEC_3 )
	{
		return DXGI_FORMAT_R32G32B32_FLOAT;
	}
	else if ( format == EIF_VEC_2 )
	{
		return DXGI_FORMAT_R32G32_FLOAT;
	}
	else if ( format == EIF_FLOAT )
	{
		return DXGI_FORMAT_R32_FLOAT;
	}

	return DXGI_FORMAT_R32G32B32_FLOAT;
}
Edited by 3DModelerMan


Can you show the code for graphics::SInputElementDesc(...)?

 

Cheers!

Edited by kauna


I vaguely remember something about the semantic name: it needs to be either a statically or const-allocated string. When it's dynamic and the string the pointer refers to disappears before the layout is properly created, D3D fails to find the actual name.

 

I use strings defined in code for these things at the moment, but my memory on this is vague; I wrote that code a long time ago and generally don't need to revisit it.
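
Roughly the failure mode I mean, as a hypothetical sketch (not your code):

D3D11_INPUT_ELEMENT_DESC element = {};
{
	std::string semantic = "COLOR";
	element.SemanticName = semantic.c_str(); // pointer into 'semantic'
} // 'semantic' is destroyed here
// By the time CreateInputLayout reads element.SemanticName it points at freed memory,
// and the semantic can't be matched against the shader signature.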


@kauna

It's pretty much just a basic struct.

struct SInputElementDesc
{
public:

	SInputElementDesc()
		:m_name(""), m_index(0)
	{}

	SInputElementDesc(const std::string& name, E_INPUT_FORMAT format, unsigned int idx=0)
		:m_name(name), m_index(idx), m_format(format)
	{}

	std::string m_name;///< The semantic name of the input element

	unsigned int m_index;///< The index of the input element

	E_INPUT_FORMAT m_format;///<The format of this input element
			
	unsigned int m_offset;///< Offset from the start of the vertex structure. Set by the input layout: read only.

	unsigned int m_size;///< Size of the input element. Set by the input layout: read only.
};

@NightCreature83

createInputLayout returns S_OK though, and Nsight shows the COLOR semantic in the input layout, just not in the vertex shader input. I've also opened up the vertex shader in Nsight and it looks like the COLOR semantic was compiled out or something. This is how I compile the vertex shader.

IVertexShader* CDX11GraphicsDriver::compileVertexShader(const std::string& shaderSource, const std::string& name)
{
	ID3DBlob* VSblob;
	ID3DBlob* errorBlob;

	HRESULT hr = D3DCompile(&shaderSource[0], shaderSource.length(), name.c_str(), NULL, NULL, "VS", "vs_5_0", D3DCOMPILE_ENABLE_STRICTNESS, NULL, &VSblob, &errorBlob);
	if ( FAILED(hr) )
	{
		std::string error = std::string((char*)errorBlob->GetBufferPointer());//TODO: print this out
		errorBlob->Release();

		return NULL;
	}

	ID3D11VertexShader* VS = NULL;

	hr = m_device->CreateVertexShader(m_defaultVSblob->GetBufferPointer(), m_defaultVSblob->GetBufferSize(), NULL, &VS);
	if ( FAILED(hr) )
		return NULL;

	return new CDX11VertexShader(VS);
}

Pass the debug flags to the shader and device initialisation, then go into the DirectX control panel and activate the debug runtime for your application. That way you should get proper symbols in your shaders, which allows for more extensive debugging.

//Shader Flags
#if defined( DEBUG ) || defined( _DEBUG )
DWORD shaderCompilerFlags = D3DCOMPILE_ENABLE_STRICTNESS | D3DCOMPILE_PACK_MATRIX_ROW_MAJOR | D3DCOMPILE_SKIP_OPTIMIZATION | D3DCOMPILE_DEBUG;
#else
DWORD shaderCompilerFlags = D3DCOMPILE_ENABLE_STRICTNESS | D3DCOMPILE_PACK_MATRIX_ROW_MAJOR;
#endif

And the device creation flags:

    DWORD deviceCreationFlags = D3D11_CREATE_DEVICE_SINGLETHREADED;
#ifdef _DEBUG
    deviceCreationFlags |= D3D11_CREATE_DEVICE_DEBUG ;//| D3D11_CREATE_DEVICE_DEBUGGABLE;
#endif
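
Something along these lines at device creation (a sketch; adapter and feature-level selection left out):

ID3D11Device* device = nullptr;
ID3D11DeviceContext* context = nullptr;
HRESULT hr = D3D11CreateDevice(
	nullptr,                  // default adapter
	D3D_DRIVER_TYPE_HARDWARE,
	nullptr,
	deviceCreationFlags,      // D3D11_CREATE_DEVICE_DEBUG is in here for debug builds
	nullptr, 0,               // default feature levels
	D3D11_SDK_VERSION,
	&device,
	nullptr,
	&context );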

And here is my shader creation function. I load everything from file, which is what the getShaderBuffer function does for me.

//-----------------------------------------------------------------------------
//! @brief   TODO enter a description
//! @remark
//-----------------------------------------------------------------------------
bool VertexShader::createShader(const DeviceManager& deviceManager)
{
    size_t length = 0;
    char* shaderCodeBuffer = getShaderBuffer(m_fileName, length);
    if (shaderCodeBuffer)
    {
        std::string profileName = "";
        getProfileName(deviceManager, eVertexShader, profileName);
        ID3DBlob* errorBlob;
        HRESULT hr = D3DCompile(shaderCodeBuffer, length, m_fileName.c_str(), 0, D3D_COMPILE_STANDARD_FILE_INCLUDE, m_entryPoint.c_str(), profileName.c_str(), shaderCompilerFlags, 0, &m_vertexShaderBlob, &errorBlob);
        if (FAILED(hr))
        {
            MSG_TRACE_CHANNEL("VertexShader_ERROR", "Failed to compile vertex shader with error code: 0x%x(%s)", hr, D3DDebugHelperFunctions::D3DErrorCodeToString(hr));
            MSG_TRACE_CHANNEL("VertexShader_ERROR", "Failed to compile vertex shader: %s, with errors: \n%s", m_fileName.c_str(), (errorBlob != nullptr ? (char*)errorBlob->GetBufferPointer() : "<errorBlob pointer is nullptr>"));
            delete [] shaderCodeBuffer;
            return false;
        }
        
        hr = deviceManager.getDevice()->CreateVertexShader(m_vertexShaderBlob->GetBufferPointer(), m_vertexShaderBlob->GetBufferSize(), 0, &m_shader);
        if (FAILED(hr))
        {
            MSG_TRACE_CHANNEL("VERTEXSHADER_ERROR", "Failed to create vertex shader: 0x%x(%s)", hr, D3DDebugHelperFunctions::D3DErrorCodeToString(hr));
            delete [] shaderCodeBuffer;
            return false;
        }

        delete []  shaderCodeBuffer;
        return true;
    }

    return false;
}
Edited by NightCreature83


Okay, I set everything up with the debug layer and fixed a handful of warnings, one of which was that the primitive topology was never set. I'm using indexed primitives, but I set it anyway; no change at all. I'm still having the same problem, except that now I'm getting

 

D3D11 ERROR: ID3D11DeviceContext::DrawIndexed: Vertex Shader - Pixel Shader linkage error: Signatures between stages are incompatible. The input stage requires Semantic/Index (COLOR,0) as input, but it is not provided by the output stage. [ EXECUTION ERROR #342: DEVICE_SHADER_LINKAGE_SEMANTICNAME_NOT_FOUND]

 

on every frame. But the error doesn't seem to tell me anything about the cause, and Google doesn't pull up much about it either. Obviously the signatures between stages are incompatible, but it doesn't tell me WHY my vertex shader strips out the COLOR code while the pixel shader keeps it.


There's another weird thing I just discovered: if I add the COLOR semantic to the shader I use to create input layouts (I keep one super-simple shader's VS blob around so I can create input layouts without needing to load a real shader first), then it works. I don't want to add a new element to that input-layout-creation shader, though, because I want my input layouts to be loaded with my meshes; each of my mesh buffers stores a pointer to its input layout. Input layout creation succeeds even without the COLOR semantic in the throwaway shader, but rendering doesn't work. I also know the throwaway shader isn't the one being used, because I never set it as the vertex shader.
