SpaceCowboy850

Automagically determining vertex shader input


Recommended Posts

I'm trying to programmatically determine what the vertex shader inputs are in D3DXEffects9. I've found the handy function D3DXGetShaderInputSemantics, which is great and gets me nearly all the way there: it provides a D3DXSEMANTIC array containing a usage and a usage index for each input. The usage can be used to determine whether the input is a position, normal, UV, etc.

The problem is that knowing this isn't enough, I think. For instance, your position could be a float3 or a float4, and the same goes for normals. I know all of this gets expanded to four 32-bit floats on the way to the video card (or at least I think that's the case), so right now, regardless of the declared size of the variable, I'm registering each input as 16 bytes. What I'd like to do, though, is figure out exactly how much space each input actually uses (so a float3 would be 12 bytes, a float4 16 bytes, etc.) and base my sizes on that. Does anyone know of a way to do this other than adding annotations somewhere?
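For reference, the conservative fallback described above can be sketched like this. The `Semantic` struct is a minimal stand-in for the real D3DXSEMANTIC from d3dx9shader.h (so the sketch compiles without the SDK), and `ConservativeStride` is a hypothetical helper name:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Minimal mirror of D3DXSEMANTIC (Usage / UsageIndex) so this sketch is
// self-contained; the real struct comes from d3dx9shader.h.
struct Semantic {
    unsigned Usage;       // e.g. D3DDECLUSAGE_POSITION (0), _NORMAL (3), _TEXCOORD (5)
    unsigned UsageIndex;  // e.g. 1 for TEXCOORD1
};

// The conservative layout from the post: the shader only reports usage,
// not width, so treat every input as a full float4 (16 bytes).
size_t ConservativeStride(const std::vector<Semantic>& inputs) {
    return inputs.size() * 16;
}
```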

Good question! It's been quite a while since I did any D3D9/FX9 work and v10 is quite a bit better in this regard. As a suggestion, I wonder if you can use D3DXDisassembleShader() and scan the output for dcl_usage instructions? A bit fugly, but at least it should allow you to map a dimensionality to the information you already have...
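A rough sketch of that scanning approach might look like the following. The parsing helper is hypothetical; in real code the text would come from the ID3DXBuffer filled in by D3DXDisassembleShader(). Note that, as pointed out later in the thread, the dcl_* lines carry the usage and usage index but still don't reveal the float3-vs-float4 width:

```cpp
#include <cassert>
#include <sstream>
#include <string>
#include <vector>

// One parsed input declaration: e.g. "dcl_texcoord1 v2" ->
// usage "texcoord", usageIndex 1, register "v2".
struct DclInput {
    std::string usage;
    int usageIndex;
    std::string reg;
};

// Scan disassembly text (the kind D3DXDisassembleShader() produces)
// for dcl_* instructions.
std::vector<DclInput> ParseDclInstructions(const std::string& asmText) {
    std::vector<DclInput> result;
    std::istringstream lines(asmText);
    std::string line;
    while (std::getline(lines, line)) {
        // Trim leading whitespace, skip non-dcl lines.
        size_t start = line.find_first_not_of(" \t");
        if (start == std::string::npos) continue;
        line = line.substr(start);
        if (line.compare(0, 4, "dcl_") != 0) continue;

        // Split "dcl_usageN vM" into opcode and operand.
        size_t space = line.find(' ');
        if (space == std::string::npos) continue;
        std::string opcode  = line.substr(4, space - 4);  // e.g. "texcoord1"
        std::string operand = line.substr(space + 1);     // e.g. "v2"
        size_t end = operand.find_last_not_of(" \t\r");
        if (end != std::string::npos) operand.erase(end + 1);

        // Peel a trailing usage index off the opcode, if present.
        DclInput dcl;
        size_t digits = opcode.find_first_of("0123456789");
        if (digits != std::string::npos) {
            dcl.usage = opcode.substr(0, digits);
            dcl.usageIndex = std::stoi(opcode.substr(digits));
        } else {
            dcl.usage = opcode;
            dcl.usageIndex = 0;
        }
        dcl.reg = operand;
        result.push_back(dcl);
    }
    return result;
}
```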

hth
Jack

You can't determine the size of an individual element of the vertex stream from the vertex shader. As you mention, the stream elements are automatically "expanded" as described in the documentation for D3DDECLTYPE. Unfortunately, the only thing you can do with D3DXGetShaderInputSemantics is validate that the vertex declaration you're setting provides exactly what the shader expects.
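To make the expansion concrete, here's a sketch. The enum values mirror a subset of D3DDECLTYPE from d3d9types.h, the expansion comments follow the D3DDECLTYPE documentation, and `StreamSizeOf` is a made-up helper; the point is that the stream size is a property of the declaration, not of the shader input semantic:

```cpp
#include <cassert>
#include <cstddef>

// Mirrors a few D3DDECLTYPE values from d3d9types.h (illustrative subset).
enum DeclType {
    DECLTYPE_FLOAT1   = 0,  // expands to (x, 0, 0, 1) in the input register
    DECLTYPE_FLOAT2   = 1,  // expands to (x, y, 0, 1)
    DECLTYPE_FLOAT3   = 2,  // expands to (x, y, z, 1)
    DECLTYPE_FLOAT4   = 3,  // passed through as (x, y, z, w)
    DECLTYPE_D3DCOLOR = 4,  // 4 bytes, expands to normalized (r, g, b, a)
    DECLTYPE_SHORT2   = 6,  // expands to (x, y, 0, 1)
};

// Bytes the element occupies in the vertex stream. A POSITION semantic
// could be fed from a FLOAT3 or a FLOAT4 element -- same shader, different
// stream sizes, which is why the shader alone can't tell you.
size_t StreamSizeOf(DeclType t) {
    switch (t) {
        case DECLTYPE_FLOAT1:   return 4;
        case DECLTYPE_FLOAT2:   return 8;
        case DECLTYPE_FLOAT3:   return 12;
        case DECLTYPE_FLOAT4:   return 16;
        case DECLTYPE_D3DCOLOR: return 4;
        case DECLTYPE_SHORT2:   return 4;
    }
    return 0;
}
```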

When you think about it, this is actually kind of handy, because with the same vertex shader you can use different vertex declarations (to compress data and such). Though I think this is no longer the case in D3D10 (I'm a noob with 10).

FYI: parsing dcl_usage in the disassembly will give you pretty much the same result as D3DXGetShaderInputSemantics (it parses the bytecode tokens exactly like the disassemble shader function does).

Thanks for the help. I think I'm just going to leave it like it is for now, and we might just go for annotations if we determine we really need to know about this data.
