Retrieving Vertex Declaration from Effects Framework

Hi guys, I have a technical question. Is there a way to programmatically generate a Vertex Declaration from an Effect of the Effects Framework (in either DX9 or DX10)? I've been searching the net, but there isn't much information on this question. Thanks! [Edited by - feal87 on October 24, 2009 11:56:11 AM]
No, not that I know of. But if you'd like, you can assign whatever custom attributes you want to techniques and passes, which you could use to provide information about the vertex inputs.
Quote:Original post by MJP
No, not that I know of. But if you'd like, you can assign whatever custom attributes you want to techniques and passes, which you could use to provide information about the vertex inputs.


Mhn... thanks for the clarification, MJP. I'll follow another route to solve my problem then. :)
Excuse my curiosity, but why would you want to do that?
Yes, you can do this: you can find out what inputs a vertex shader in an Effect requires and then build a vertex declaration from that. It takes a bit of work on your part, but it's pretty straightforward.

For DirectX9, use D3DXGetShaderInputSemantics(), which will give you an array of the inputs that a vertex shader requires.

For DirectX10, use GetInputSignatureElementDesc(), which will get you the same sort of information.
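For instance, on the DX10 side a minimal sketch using the shader reflection interface might look like this. It skips system-value inputs and assumes every component is float32 (it only looks at the component mask); a real version would also have to switch on ComponentType:

#include <d3d10.h>
#include <d3d10shader.h>
#include <vector>

// Sketch: build an input layout from a compiled vertex shader blob.
HRESULT CreateLayoutFromShader(ID3D10Device* device,
                               const void* bytecode, SIZE_T length,
                               ID3D10InputLayout** layout)
{
    ID3D10ShaderReflection* reflector = NULL;
    HRESULT hr = D3D10ReflectShader(bytecode, length, &reflector);
    if (FAILED(hr))
        return hr;

    D3D10_SHADER_DESC shaderDesc;
    reflector->GetDesc(&shaderDesc);

    std::vector<D3D10_INPUT_ELEMENT_DESC> elements;
    for (UINT i = 0; i < shaderDesc.InputParameters; ++i)
    {
        D3D10_SIGNATURE_PARAMETER_DESC param;
        reflector->GetInputParameterDesc(i, &param);

        // Skip system-generated values like SV_VertexID.
        if (param.SystemValueType != D3D10_NAME_UNDEFINED)
            continue;

        D3D10_INPUT_ELEMENT_DESC elem;
        elem.SemanticName = param.SemanticName;
        elem.SemanticIndex = param.SemanticIndex;
        elem.InputSlot = 0;
        elem.AlignedByteOffset = D3D10_APPEND_ALIGNED_ELEMENT;
        elem.InputSlotClass = D3D10_INPUT_PER_VERTEX_DATA;
        elem.InstanceDataStepRate = 0;

        // Pick a format from the component mask (float32 assumed).
        if (param.Mask == 1)      elem.Format = DXGI_FORMAT_R32_FLOAT;
        else if (param.Mask <= 3) elem.Format = DXGI_FORMAT_R32G32_FLOAT;
        else if (param.Mask <= 7) elem.Format = DXGI_FORMAT_R32G32B32_FLOAT;
        else                      elem.Format = DXGI_FORMAT_R32G32B32A32_FLOAT;

        elements.push_back(elem);
    }

    // SemanticName points into the reflector's data, so create the
    // layout before releasing it.
    hr = device->CreateInputLayout(&elements[0], (UINT)elements.size(),
                                   bytecode, length, layout);
    reflector->Release();
    return hr;
}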
Quote:Original post by snake5
Excuse my curiosity, but why would you want to do that?


To remove the hassle of creating a vertex declaration manually for each shader I use.
Anyway, thanks andur, I'll take a look at the functions you posted. :)
I'm also interested in this. It would solve a lot of problems. I've found some scattered information on the internet; if someone could expand on it or give an example of how to use this, that would be great.

array<ShaderSemantic>^ ShaderBytecode::GetInputSemantics()
{
    UINT count = 0;
    const DWORD* function = (const DWORD*) m_Pointer->GetBufferPointer();

    HRESULT hr = D3DXGetShaderInputSemantics( function, NULL, &count );
    GraphicsException::CheckHResult( hr );
    if( FAILED( hr ) )
        return nullptr;

    array<ShaderSemantic>^ inputs = gcnew array<ShaderSemantic>( count );
    pin_ptr<ShaderSemantic> pinnedInputs = &inputs[0];

    hr = D3DXGetShaderInputSemantics( function, (D3DXSEMANTIC*) pinnedInputs, &count );
    GraphicsException::CheckHResult( hr );
    if( FAILED( hr ) )
        return nullptr;

    return inputs;
}

and

hr = D3DXCompileShaderFromFileA( "pixelshader.txt", NULL, NULL, "PS", "ps_2_0", dwShaderFlags,
                                 &g_pCompiledPSBuffer, &g_pCompiledPSErrorBuffer, &g_pPSConstantTable );
hr = D3DXGetShaderInputSemantics( (DWORD*) g_pCompiledPSBuffer->GetBufferPointer(), NULL, &count );


I just found those examples; they're not my code. But it would be great to figure this out. Thanks.
I've looked at the functions. In DX10/11 there's no problem determining the vertex declaration (the functions even return the type of the elements), but in DX9 it's a little tricky because the function doesn't give you the type of each element, only the semantic and little else...
In straight C++, the above code might look something like:

void getInputSemantics(const ID3DXBuffer *shaderCode, std::vector<D3DXSEMANTIC> &inputs)
{
    const DWORD* function = (const DWORD*) shaderCode->GetBufferPointer();
    UINT count;
    if (FAILED(D3DXGetShaderInputSemantics( function, NULL, &count )))
        return; // Error
    inputs.resize(count);
    if (FAILED(D3DXGetShaderInputSemantics( function, &inputs[0], &count )))
        return; // Error
}
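And since D3DXSEMANTIC only gives you the usage and usage index, a DX9 version has to get the element types from somewhere else. A rough sketch of building the declaration follows; the type chosen for each usage is purely an assumption and has to match how your vertex data is actually packed:

// Rough sketch: build a D3D9 vertex declaration from the semantics.
void buildDeclaration(const std::vector<D3DXSEMANTIC> &inputs,
                      std::vector<D3DVERTEXELEMENT9> &decl)
{
    WORD offset = 0;
    for (size_t i = 0; i < inputs.size(); ++i)
    {
        D3DVERTEXELEMENT9 elem;
        elem.Stream = 0;
        elem.Offset = offset;
        elem.Method = D3DDECLMETHOD_DEFAULT;
        elem.Usage = (BYTE) inputs[i].Usage;
        elem.UsageIndex = (BYTE) inputs[i].UsageIndex;

        // Guess a type from the usage (assumption, not reported by the shader).
        switch (inputs[i].Usage)
        {
        case D3DDECLUSAGE_POSITION:
        case D3DDECLUSAGE_NORMAL:
            elem.Type = (BYTE) D3DDECLTYPE_FLOAT3;   offset += 12; break;
        case D3DDECLUSAGE_TEXCOORD:
            elem.Type = (BYTE) D3DDECLTYPE_FLOAT2;   offset += 8;  break;
        case D3DDECLUSAGE_COLOR:
            elem.Type = (BYTE) D3DDECLTYPE_D3DCOLOR; offset += 4;  break;
        default:
            elem.Type = (BYTE) D3DDECLTYPE_FLOAT4;   offset += 16; break;
        }
        decl.push_back(elem);
    }

    D3DVERTEXELEMENT9 end = D3DDECL_END();
    decl.push_back(end);
}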
Quote:Original post by feal87
Quote:Original post by snake5
Excuse my curiosity, but why would you want to do that?


To remove the hassle of creating a vertex declaration manually for each shader I use.
Anyway, thanks andur, I'll take a look at the functions you posted. :)


Well, usually the vertex declaration is determined by your vertex data, and the shaders are written to fit that, not the other way around. This is because vertex data is usually packed in certain ways to improve performance or reduce the memory footprint.
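For instance, a typical packed DX9 declaration might store the normal as UBYTE4N and the color as D3DCOLOR to shrink the vertex (this layout is just illustrative), while the vertex shader still declares plain float3/float4 inputs, so the shader's input signature alone can't tell you which packed types were used:

// Illustrative packed layout: 4-byte normal and color, 28-byte vertex.
D3DVERTEXELEMENT9 packedDecl[] =
{
    { 0,  0, D3DDECLTYPE_FLOAT3,   D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 0 },
    { 0, 12, D3DDECLTYPE_UBYTE4N,  D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_NORMAL,   0 },
    { 0, 16, D3DDECLTYPE_D3DCOLOR, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_COLOR,    0 },
    { 0, 20, D3DDECLTYPE_FLOAT2,   D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 0 },
    D3DDECL_END()
};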
