Drakex

Is ID3DXSkinInfo useful?


Recommended Posts

I've been looking into implementing animated meshes in my engine, preferably using the built-in animation capabilities of D3DX. After a lot of reading and looking at examples, I'm slowly getting my head around the rather, erm, obtuse animation functionality in D3DX. One thing that has always kind of eluded me is the exact purpose of ID3DXSkinInfo. I think I've kind of figured it out - it holds the skinning info for a mesh and its bones in a generic internal format which can be converted to something usable with either fixed-function skinning or with vertex shaders.

I've already made the decision to pass over fixed-function skinning entirely - I'm under the impression that it's completely unsupported on nVidia hardware. So I've decided to use vertex shaders instead. The skinning examples that come with DirectX show how to do skinning in a few different ways, one of which is with a vertex shader. For example, look at the Managed SimpleAnimation sample. In the process of preparing the mesh to be drawn, the sample uses the function SkinInformation.ConvertToIndexedBlendedMesh(). Eerily, in the (much better-written) C++ DirectX docs for the equivalent function, ID3DXSkinInfo::ConvertToIndexedBlendedMesh, there is this comment at the bottom: "This method does not run on hardware that does not support fixed-function vertex blending."

Wait a second - is it telling me that ID3DXSkinInfo can't really be used for anything on nVidia cards, which don't support fixed-function skinning? What am I supposed to do, then? Does this function in fact run correctly on nVidia hardware, meaning that this comment in the documentation is wrong? Or maybe this badly-worded comment is supposed to mean "this function does not run in hardware on hardware that does not support fixed-function vertex blending". Who knows?

As far as I can tell, the only two functions that make the entire ID3DXSkinInfo interface useful are ConvertToBlendedMesh and ConvertToIndexedBlendedMesh, and apparently they don't function on nVidia hardware. As I don't have an nVidia card, I have no way of knowing whether or not this is true, although I sincerely hope that it isn't.
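For reference, the call in question looks roughly like this on the C++ side. This is just a sketch based on the documentation, not code from the sample - pSkinInfo, pMesh and pAdjacency are placeholder names for things you'd get back from D3DXLoadMeshFromX, and the palette size of 26 is an arbitrary choice you'd tune to your shader's constant register budget:

DWORD        maxVertexInfl   = 0;
DWORD        numBoneCombos   = 0;
LPD3DXBUFFER pBoneComboTable = NULL;
LPD3DXMESH   pSkinnedMesh    = NULL;

HRESULT hr = pSkinInfo->ConvertToIndexedBlendedMesh(
    pMesh,                                   // source mesh loaded from the .x file
    D3DXMESH_MANAGED,                        // options for the output mesh
    26,                                      // paletteSize: bone matrices available per draw call
    (DWORD*)pAdjacency->GetBufferPointer(),  // input adjacency
    NULL,                                    // output adjacency (not needed here)
    NULL,                                    // face remap (not needed here)
    NULL,                                    // vertex remap (not needed here)
    &maxVertexInfl,                          // max bone influences per vertex in the result
    &numBoneCombos,                          // number of entries in the bone combination table
    &pBoneComboTable,                        // D3DXBONECOMBINATION table, used at draw time
    &pSkinnedMesh);                          // the indexed-blended output mesh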

Let me see if I can help here a bit. I am miles away from my source code right now.

I use ID3DXSkinInfo for performing software skinning in an application that does not need high performance. The skin info system works great for me because I have a collection of bone matrices and a collection of vertex weighting information. Using the skin info made getting it working a short process, and since it runs in software, it runs on all PCs. I use UpdateSkinnedMesh to do this.
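A rough sketch of that software path (names like pSkinInfo, pSourceMesh, pWorkingMesh and boneMatrices are placeholders, not code from my project):

void* pSrc = NULL;
void* pDst = NULL;
pSourceMesh->LockVertexBuffer(D3DLOCK_READONLY, &pSrc);
pWorkingMesh->LockVertexBuffer(0, &pDst);

// boneMatrices[i] = boneOffset[i] * currentBoneWorld[i], one entry per bone in the skin info.
// The second argument (inverse-transpose matrices for normals) may be NULL.
pSkinInfo->UpdateSkinnedMesh(boneMatrices, NULL, pSrc, pDst);

pWorkingMesh->UnlockVertexBuffer();
pSourceMesh->UnlockVertexBuffer();
// pWorkingMesh is then drawn like any ordinary, non-blended mesh.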

Realistically, writing your own vertex shader for skinning is not that difficult a task. We did that for everything that is performance-based. You will ultimately have to do this if you want to do anything beyond the fixed-function capabilities with skinning.
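To show what I mean, this is essentially all a four-influence indexed skinning shader has to compute, written out here in C++ as a reference (the real thing is only a handful of shader instructions; the names here are made up for illustration):

// bonePalette is the matrix array you would upload to vertex shader constants each frame.
D3DXVECTOR3 SkinPosition(const D3DXVECTOR3& pos,
                         const float weights[4], const BYTE indices[4],
                         const D3DXMATRIX bonePalette[])
{
    D3DXVECTOR3 result(0.0f, 0.0f, 0.0f);
    for (int i = 0; i < 4; ++i)                  // four influences, same limit as the FF path
    {
        D3DXVECTOR3 blended;
        D3DXVec3TransformCoord(&blended, &pos, &bonePalette[indices[i]]);
        result += blended * weights[i];          // weighted sum of bone-transformed positions
    }
    return result;
}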

Of course, in all cases we do not use the D3DX animation system but our own, so we cannot leverage existing D3DX drawing code.

I am not surprised that the functions require fixed-function vertex blending - that is what they depend on. The code samples have a limit of four unique matrices per weighting set, which is consistent with FF blending implementations. I do find it odd that the convert functions need the FF implementation, but that might be due to how they lay out the data in the resulting mesh - optimized for the FF implementation.

Why not just look at the sample and see if it runs on your hardware? The skinning sample has a number of different options for how it renders the data, and it will give you performance information as you go, so you can determine whether it runs in hardware or not. You also might want to double-check that the sample does transform and lighting in hardware instead of software, since that could invalidate your testing.
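For example, a quick caps check along these lines (a sketch only; pDevice is whatever device you created) will tell you what the card actually exposes:

D3DCAPS9 caps;
pDevice->GetDeviceCaps(&caps);

DWORD ffBlendMatrices = caps.MaxVertexBlendMatrices;     // only 2 (or 0) on the nVidia parts in question
DWORD ffPaletteIndex  = caps.MaxVertexBlendMatrixIndex;  // > 0 means FF indexed blending exists
bool  hasVS1x = caps.VertexShaderVersion >= D3DVS_VERSION(1, 1);

// If hasVS1x is true the skinning shader can run in hardware; otherwise you fall back
// to software vertex processing and the same shader runs on the CPU.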

If the sample using vertex shaders and the convert functions works, then go for it. If not, I don't think writing the code to create valid subsets that match the bone matrix palette is too difficult. It can be a tedious process, though, and we always write a tool to do it as part of the conversion process so it is only done once.

Nonetheless, best of luck.

- S

Quote:
If the sample using vertex shaders and the convert functions works, then go for it.


But that's the thing - it works for me, since I have a card that supports fixed-function vertex blending (a Radeon 9500). I don't know if it'll work on an nVidia card. If I can find that out, I'll be happy!

I don't know about the fixed-function blending. I use a vertex shader for skinning and I also use ID3DXSkinInfo::ConvertToIndexedBlendedMesh, and it runs great (GeForce 6). I don't think it's being emulated in software.

Quote:
I also use ID3DXSkinInfo::ConvertToIndexedBlendedMesh and it runs great (GeForce 6)


Yay! I'll be using that then.

I use the ID3DXSkinInfo stuff when working with D3DXMesh data that is animated via a skeleton (using the ConvertToIndexedBlendedMesh() call), and I use a vertex shader for rendering, with software vertex processing (and the same shader running in software) on non-shader cards.

In most of my test cases, using the software shader was faster than using the FFP. It really only drops significantly below the FFP when you have a very low-end CPU (pre-Athlon AMD, or P2-or-lower Intel), and the reason is most likely attributable to batch counts more than anything else: since the shader could use more matrices per batch, it usually required fewer batches.

Of course, on non-shader nVidia cards you only get two-matrix blending (without indices), which is basically worthless. So indexed blending via a software shader is almost always a win, primarily because significantly fewer batches are needed.
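Sketching the indexed-palette draw loop I'm describing (everything here - numBoneCombos, paletteSize, boneOffsets, boneWorld, firstBoneRegister - is a placeholder for data you'd already have from ConvertToIndexedBlendedMesh and your frame hierarchy, so treat it as an outline rather than drop-in code):

const UINT firstBoneRegister = 4;   // wherever your shader expects the palette to start (assumption)

D3DXBONECOMBINATION* pCombos =
    (D3DXBONECOMBINATION*)pBoneComboTable->GetBufferPointer();

for (DWORD i = 0; i < numBoneCombos; ++i)
{
    for (DWORD slot = 0; slot < paletteSize; ++slot)
    {
        DWORD bone = pCombos[i].BoneId[slot];
        if (bone == UINT_MAX)                    // unused palette slot for this subset
            continue;

        D3DXMATRIX m = boneOffsets[bone] * boneWorld[bone];
        D3DXMatrixTranspose(&m, &m);             // shader constants are usually expected transposed
        pDevice->SetVertexShaderConstantF(firstBoneRegister + slot * 4, (float*)&m, 4);
    }
    pSkinnedMesh->DrawSubset(i);                 // material for this subset is materials[pCombos[i].AttribId]
}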

Yeah, that's what I'm planning to do. A SW vertex shader seems to run just as fast as software skinning on non-shader cards, so I can run the shader in HW on cards that support it and in SW on cards that don't. I don't have to change very much in my engine at all, and it works nicely.
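Something like this is what I have in mind for the HW/SW switch (a sketch under my assumptions - hasVS1x would come from a caps check like the one earlier in the thread):

DWORD behavior = D3DCREATE_MIXED_VERTEXPROCESSING;
pD3D->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hWnd,
                   behavior, &presentParams, &pDevice);

if (!hasVS1x)
    pDevice->SetSoftwareVertexProcessing(TRUE);  // same shader, now run on the CPU

// In the software case the mesh's vertex buffers also need D3DUSAGE_SOFTWAREPROCESSING;
// everything else renders exactly the same way as on shader-capable hardware.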
