Icebone1000

DX11: Is the X file format deprecated?

I'm trying to figure out which file format to use to start practicing skeletal animation, but I can't find any good tips on this.

I want to avoid COLLADA because it looks way too complex, and I kind of dislike XML stuff.

So I started looking at the .X file format. But the current DX SDK created its own file format (sdkmesh) just for its samples. Why would they do that if they already have their own X format? It's like saying "X isn't even good enough for an SDK sample."

Since I'm using DX11, I'd like to avoid that kind of stuff, like ID3DX10Mesh (.X is still mentioned in the DX10 docs).

Where can I find a list of file formats that support skeletal animation? And which ones are you used to using?

Sorry for my English.

Yes, the X file format has been deprecated for quite some time now. As someone suffering from this deprecation, I highly recommend you avoid it and move on to something else like FBX.

DirectX was using .sdkmesh for a while during the transition from 9 to 10 to 11, but I'm not sure if .X is officially deprecated.

It's not "deprecated" - it's just a format specification, after all. Collada is better supported, though, as is FBX, even though it's a closed format. Search for a ready-made loader library, such as the FBX SDK (for FBX, naturally) or Assimp (for Collada, X and 20 others) - that saves you a lot of hassle and gets you the data you need for skeletal experiments.
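For instance, a minimal Assimp load looks roughly like this (just a sketch; the file name "model.fbx" and the post-processing flags are placeholders you would adjust):

#include <assimp/Importer.hpp>   // Assimp C++ importer
#include <assimp/scene.h>        // aiScene, aiMesh, aiBone
#include <assimp/postprocess.h>  // post-processing flags
#include <cstdio>

// Sketch: import a file and list the skeletal data Assimp found.
int main()
{
    Assimp::Importer importer;
    const aiScene* scene = importer.ReadFile("model.fbx",
        aiProcess_Triangulate | aiProcess_JoinIdenticalVertices);
    if (!scene)
    {
        std::printf("Import failed: %s\n", importer.GetErrorString());
        return 1;
    }

    for (unsigned int m = 0; m < scene->mNumMeshes; ++m)
    {
        const aiMesh* mesh = scene->mMeshes[m];
        std::printf("mesh %u: %u vertices, %u bones\n",
                    m, mesh->mNumVertices, mesh->mNumBones);
        for (unsigned int b = 0; b < mesh->mNumBones; ++b)
            std::printf("  bone %s: %u weights\n",
                        mesh->mBones[b]->mName.data,
                        mesh->mBones[b]->mNumWeights);
    }
    return 0;
}

The same code works unchanged for .dae (Collada) or .x input, which is the whole point of using a loader library.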

Quote:
Original post by _meds
as someone suffering from this deprecation


Just curious, how are you suffering from it?

Just curious..
Since this is a game development forum, especially a DX subforum, I was expecting more feedback about mesh formats that support animation, but I don't find much either when I use the search or when I look on Google. I was hoping for some kind of best alternative that everyone agrees on... why isn't there one? How do people do this? I mean, it's a must-do task for anyone who works on games.

Thanks for mentioning those loader libs, I guess I'll start from there. I'm thinking of FBX...

Quote:
Original post by Icebone1000
I was hoping for some kind of best alternative that everyone agrees on... why isn't there one?


Because everyone needs different things, different priorities, etc.

I'm getting 661 warnings compiling the FBX SDK. Can I hide just the warnings that come from those header files? I have only one warning in my "own" app, so those will make my life difficult x_x


"Because everyone needs different things, different priorities, etc."
Yeah, but if you're talking about games, a file that holds a mesh and rigged animation is a common need across every app.

Quote:
Original post by Icebone1000
Yeah, but if you're talking about games, a file that holds a mesh and rigged animation is a common need across every app.
Not necessarily - for instance, I need creased subdivision surfaces rather than polygonal meshes (and many modern games follow this trend).

Quote:
Original post by Icebone1000
"Because everyone needs different things, different priorities, etc.."
Yeah but, if you talking about games, a file that holds a mesh and rigging animation is a commom need between every app.
Alright, but should it hold both skeletal and morph-based animation? What types of materials does it support? Is it Y-up or Z-up? What are its base units? Are all polygons triangulated? Are all triangles stripified? Is collision volume information stored, and what types of collision volumes are supported?

You can have a catchall file format which supports every feature you might want and records each representation decision you've made -- that's what FBX and COLLADA are for -- but then that format is useless as the one your game engine actually loads, because it supports things your engine doesn't. It's also probably a lot slower to load, because you can't copy it directly into your application's structures.

This is why you have one (standardized) format for content export, and one (custom) format for engine import.
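To make that concrete, a purely hypothetical engine-side format might bake all of those decisions in up front (this layout is invented for illustration, not any particular engine's):

#include <cstdint>

// Hypothetical engine format: every representation decision (triangulated,
// Y-up, meters, one fixed vertex layout) was already made by the offline
// tools, so the runtime never has to ask those questions.
struct MeshFileHeader
{
    uint32_t magic;        // file identifier, to reject anything else
    uint32_t vertexCount;  // vertices follow the header directly
    uint32_t indexCount;   // 32-bit indices follow the vertices
    uint32_t boneCount;    // bind-pose matrices follow the indices
};

struct Vertex              // the engine's single, fixed vertex layout
{
    float   position[3];   // Y-up, meters, triangulated geometry only
    float   normal[3];
    float   uv[2];
    uint8_t boneIndex[4];  // up to four bone influences per vertex
    float   boneWeight[4];
};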

Quote:
Original post by Icebone1000
I was hoping for some kind of best alternative that everyone agrees on... why isn't there one? How do people do this? I mean, it's a must-do task for anyone who works on games.


The 'best' final format is a binary dump of your game's vertex data, in the correct order, ready to be loaded and rendered directly.

COLLADA however ISN'T a final format; it's a graphics interchange format which really shouldn't be used for final game assets (although I get the impression that some people use it that way).
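To illustrate, the offline 'export' step can be little more than a raw dump (a sketch; the Vertex layout here is a placeholder for whatever your engine renders from):

#include <cstdint>
#include <cstdio>
#include <vector>

struct Vertex { float position[3], normal[3], uv[2]; };  // engine's layout

// Sketch: dump the processed, render-ready vertex array to disk.
// The file is a tiny count header followed by the raw vertex blob.
bool DumpMesh(const char* path, const std::vector<Vertex>& verts)
{
    FILE* file = std::fopen(path, "wb");
    if (!file)
        return false;

    uint32_t count = static_cast<uint32_t>(verts.size());
    std::fwrite(&count, sizeof(count), 1, file);
    if (count > 0)
        std::fwrite(&verts[0], sizeof(Vertex), count, file);

    std::fclose(file);
    return true;
}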

Most larger game engines store their content in a format that can be directly loaded into memory (for instance by directly serializing their C++ data structures). It's often crucial for achieving short load times, particularly on consoles.

Formats like X/Collada/FBX are considered "interchange formats" because they're generic and have an open specification (or freely available importer/exporter tools, in the case of FBX). This allows them to be used for exporting from one content creation tool and importing into another. Alternatively, many game studios will initially export from their content creation app into one of these formats, and then their offline toolchain will process the data and produce a custom format suitable for runtime loading. In some cases it can be perfectly feasible, or even practical, to load these formats directly at runtime, which generally applies to smaller games on the PC. It really depends on your needs.

[Edited by - MJP on October 2, 2010 7:12:18 PM]
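For illustration, loading such a blob (the hypothetical format from the dump sketch above, not any real engine's) can then be one bulk read handed straight to D3D11, with no parsing or per-element conversion; error handling is omitted:

#include <d3d11.h>
#include <cstdint>
#include <cstdio>
#include <vector>

struct Vertex { float position[3], normal[3], uv[2]; };  // same layout as on disk

// Sketch: one bulk read, then the data goes straight into an
// immutable D3D11 vertex buffer.
ID3D11Buffer* LoadMesh(ID3D11Device* device, const char* path)
{
    FILE* file = std::fopen(path, "rb");
    if (!file)
        return NULL;

    uint32_t count = 0;
    std::fread(&count, sizeof(count), 1, file);
    if (count == 0) { std::fclose(file); return NULL; }

    std::vector<Vertex> verts(count);
    std::fread(&verts[0], sizeof(Vertex), count, file);  // one bulk read
    std::fclose(file);

    D3D11_BUFFER_DESC desc = {};
    desc.ByteWidth = static_cast<UINT>(count * sizeof(Vertex));
    desc.Usage = D3D11_USAGE_IMMUTABLE;
    desc.BindFlags = D3D11_BIND_VERTEX_BUFFER;

    D3D11_SUBRESOURCE_DATA init = {};
    init.pSysMem = &verts[0];

    ID3D11Buffer* buffer = NULL;
    device->CreateBuffer(&desc, &init, &buffer);
    return buffer;
}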

Thanks for the insight. Reading the FBX SDK docs (and a bit of a book focused on 3D meshes) I already got the idea you're describing; the thing is, I didn't know which format to start from (I was looking for some kind of OBJ but with animation).
Looks like FBX and COLLADA really are the way to go.


BTW, about the warnings I'm getting, I'm using this:

// Silence warnings coming from the FBX SDK headers only:
#pragma warning( push )            // save the current warning state
#pragma warning( disable : 4100 )  // C4100: unreferenced formal parameter
#pragma warning( disable : 4512 )  // C4512: assignment operator could not be generated
#include <fbxsdk.h>
#pragma warning( pop )             // restore warnings for my own code

Pretty cool

