
DX11 Alternate model format for DX9?


Recommended Posts

I'm wondering - what model format is typically used in DirectX 9 (besides the .x format)? From what I've been reading, the support for .x is pretty awful, and Microsoft is even dropping support for the format in DX11. The animator on a project I'm working on can't even get one of his animated models to export to .x, and even when he could on previous versions of the model, there were always issues with them. Yuck! So, I'm guessing that most projects use something besides .x. What alternate formats are commonly used? This is a commercial project, so the format/library used to load them would have to be something non-restrictive for commercial products. Any help would be greatly appreciated. Thanks!

I would suggest Collada (search Google for its specification) or just rolling your own format.

DirectX .x files are still fine, I think... after all, it's just a file format. The built-in exporters for DirectX models in most modelling applications are awful, though. For 3ds Max I suggest getting the kwXPort plugin; for Maya, CVXporter does a good job.

Other formats might still be interesting, though, especially if their exporters are maintained. I therefore recommend Collada or FBX. Both come with well-maintained exporters for various 3D modelling packages, and both offer an SDK to read them into your engine. On the loading side, the Collada SDK is close to unusable in my opinion; the FBX SDK seems better.

That said, there's a generic model loader library called the Open Asset Import Library (Assimp) for which I wrote several loaders. It reads 20+ file formats, including DirectX .x, Collada .dae, Milkshape .ms3d, Wavefront .obj, 3ds Max .3ds and a lot more... FBX support is in the works at the moment. This library might save you the hassle of writing your own loaders.
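For reference, loading a file through the library boils down to a few calls. This is just a minimal sketch; the header paths and flag names below are taken from current Assimp releases and may differ slightly in older versions:

#include <assimp/Importer.hpp>   // C++ importer interface
#include <assimp/scene.h>        // aiScene, aiMesh
#include <assimp/postprocess.h>  // post-processing flags
#include <cstdio>

bool ImportModel(const char* path)
{
    Assimp::Importer importer;

    // Let Assimp triangulate everything and merge duplicate vertices for us.
    const aiScene* scene = importer.ReadFile(path,
        aiProcess_Triangulate | aiProcess_JoinIdenticalVertices);
    if (!scene)
    {
        std::printf("Import failed: %s\n", importer.GetErrorString());
        return false;
    }

    for (unsigned int m = 0; m < scene->mNumMeshes; ++m)
    {
        const aiMesh* mesh = scene->mMeshes[m];
        // mesh->mVertices, mesh->mNormals and mesh->mFaces hold the data
        // you would copy into your own vertex/index buffers here.
        std::printf("Mesh %u: %u vertices, %u faces\n",
                    m, mesh->mNumVertices, mesh->mNumFaces);
    }
    return true;  // the scene is freed when 'importer' goes out of scope
}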

I would consider inventing your own custom format that contains the data you need, in the layout you need it. It can be little more than a binary dump of your vertex and index buffers, and any other fields will be little more than the fields of your model object.

If you do that, your models will be very fast and easy to load, and you won't have to include any complex and large API in your actual game code. I wouldn't feel happy including large third-party APIs directly in my game code.

You'll of course have to write a tool to convert your models and run it on all of them as part of your build, but at least then your converter can load lots of different kinds of files without bloating and slowing your engine, and it doesn't matter if your tool gets "bloated" by including several different model APIs.
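To make that concrete, here's a rough sketch of such a dump. The Vertex and ModelHeader layouts are purely hypothetical examples; the point is that writing the file is just a header plus a few fwrite calls:

#include <cstdio>
#include <vector>

// Hypothetical engine-specific vertex layout - adjust to whatever your shaders need.
struct Vertex { float px, py, pz; float nx, ny, nz; float u, v; };

// Hypothetical file header; extend it with whatever else your model object stores.
struct ModelHeader
{
    unsigned int magic;        // e.g. 'MDL1', to sanity-check the file on load
    unsigned int vertexCount;
    unsigned int indexCount;
};

// Assumes non-empty vertex/index vectors.
bool SaveModel(const char* path,
               const std::vector<Vertex>& vertices,
               const std::vector<unsigned int>& indices)
{
    FILE* f = std::fopen(path, "wb");
    if (!f) return false;

    ModelHeader header;
    header.magic = 0x314C444D;  // "MDL1" as little-endian bytes
    header.vertexCount = (unsigned int)vertices.size();
    header.indexCount = (unsigned int)indices.size();

    std::fwrite(&header, sizeof(header), 1, f);
    std::fwrite(&vertices[0], sizeof(Vertex), vertices.size(), f);
    std::fwrite(&indices[0], sizeof(unsigned int), indices.size(), f);
    std::fclose(f);
    return true;
}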

Quote:
Original post by jbb
I would consider inventing your own custom format that contains the data you need, in the layout you need it. It can be little more than a binary dump of your vertex and index buffers, and any other fields will be little more than the fields of your model object.

If you do that, your models will be very fast and easy to load, and you won't have to include any complex and large API in your actual game code. I wouldn't feel happy including large third-party APIs directly in my game code.

You'll of course have to write a tool to convert your models and run it on all of them as part of your build, but at least then your converter can load lots of different kinds of files without bloating and slowing your engine, and it doesn't matter if your tool gets "bloated" by including several different model APIs.


For the final game I would also recommend a custom binary format that loads in the blink of an eye and can be streamed quickly.
But that's only for the final game (or tech demo). Using Assimp for a converter tool is a very attractive solution, as it supports many formats that 3D packages can export, and you don't have to implement and maintain your own exporters. Assimp is very sweet and hands a nice, uniform data structure back to the application. I don't know how fast it actually is at parsing different models, but I'd guess that loading a binary dump straight from disk into memory buffers will be faster. Also, when I compile it in release mode without Boost, the static lib is about 14 MB. That concerns me a bit when it comes to including it in my application.
Maybe I'm doing something completely wrong with the compilation process? Sorry for biasing the topic toward Assimp, but it would be nice if Schrompf (or someone else) could answer.

Assimp is quite slow. The reasons for this are that most 3D file formats do not contain plain vertex data only, but a lot of additional data, and mostly in a very non-straightforward layout. It takes time to parse text-based files, and it takes time to reorganise data and calculate derived data. The other reason for the relative slowness is the post-processing applied to the imported 3D scene. Things such as calculating tangent data for your vertices, triangulating polygons or creating index buffers also cost time.

Therefore I also suggest inventing your own file format for quick loading, just like everybody else here did. Assimp is not intended for loading assets at game startup. You can do this, but it will take time. A binary format specifically designed to hold the data you need in an engine-specific way will load much faster. Assimp is intended to be an importer library - read files once when your graphics artist delivers them, then save them in your custom file format for quick everyday loading.
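The pay-off at load time is that reading becomes little more than a couple of freads straight into your buffers. A rough sketch, reusing the hypothetical Vertex and ModelHeader structs from the custom-format sketch a few posts up:

#include <cstdio>
#include <vector>

// Counterpart to the SaveModel() sketch: load the binary dump at startup.
bool LoadModelBinary(const char* path,
                     std::vector<Vertex>& vertices,
                     std::vector<unsigned int>& indices)
{
    FILE* f = std::fopen(path, "rb");
    if (!f) return false;

    ModelHeader header;
    if (std::fread(&header, sizeof(header), 1, f) != 1 ||
        header.magic != 0x314C444D)   // reject files the converter didn't write
    {
        std::fclose(f);
        return false;
    }

    vertices.resize(header.vertexCount);
    indices.resize(header.indexCount);
    std::fread(&vertices[0], sizeof(Vertex), vertices.size(), f);
    std::fread(&indices[0], sizeof(unsigned int), indices.size(), f);
    std::fclose(f);

    // From here the data can be copied straight into D3D vertex/index buffers.
    return true;
}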

Concerning the size of the static library: I never actually looked at how large my Assimp build got. It's just a temporary file, after all... if you link statically. In my case the final executable got larger by only a few hundred KB, and that's what counts for me. I guess the lib size is due to the heavy use of templates, but I never checked.

Yeah, a static lib is basically a concatenation of the object files, and I guess it gets pretty fat (duplicated code?) when templates are involved. But in the final link into the exe (or DLL), most of it is eliminated, so no big worries. :)

I see. So basically what you guys are saying is that I should take something like Assimp, use that to load the files, convert them to my own format, and then use that new format to load into the game?

Is there a tutorial or something on that? Thinking about loading all of the animations, hierarchical data and skin information is going to make my head explode.

I'm also using Assimp as a temporary Collada loader. The only problem is that the documentation is poor and the Assimp viewer source is a bit too complex to search through just for one example call. Using Assimp requires patience. It took me a while to get an overview of how to, for example, load the materials using that ::Get().
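For anyone else stuck on the same thing, the basic pattern is roughly this (a sketch based on the documentation; exact key macros and method names may differ between Assimp versions):

#include <assimp/scene.h>     // aiScene, aiMaterial, aiString
#include <assimp/material.h>  // AI_MATKEY_* macros

// Read a few common properties from one material of an imported scene.
void ReadMaterial(const aiScene* scene, unsigned int materialIndex)
{
    const aiMaterial* mat = scene->mMaterials[materialIndex];

    aiColor3D diffuse(1.0f, 1.0f, 1.0f);
    mat->Get(AI_MATKEY_COLOR_DIFFUSE, diffuse);   // the macro expands to key/type/index

    aiString texPath;   // first diffuse texture, usually a path relative to the model file
    if (mat->GetTexture(aiTextureType_DIFFUSE, 0, &texPath) == AI_SUCCESS)
    {
        // hand texPath over to your texture loader here
    }
}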

[Edited by - nvtroll on March 22, 2010 12:02:36 PM]

Concerning getting started with Assimp: there's the official documentation, which explains the basic usage at http://assimp.sourceforge.net/lib_html/usage.html and how to interpret the imported data at http://assimp.sourceforge.net/lib_html/data.html. There's also a little tutorial at http://wasaproject.net16.net/?p=175.

In my defense: I, too, don't like the material system. I would have preferred a flat structure, for easier access and cleaner debuggability. But that wasn't my decision, and by now my spare time is far too scarce to implement an easier access path.

So, can Assimp export in all of those formats, too? Would it be possible to load a Collada file, and then export to a .x DirectX file? I'm really not interested in making my own file format or anything - I just want our modelers to be able to make the .x files the engine uses.

I'm new to Assimp, but I'm afraid it can't be used to export .x.
You could, however, lock and fill your LPD3DXMESH buffers with the vertices, indices and other data provided by Assimp, and then call D3DXSaveMeshToX to save it.
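Something along these lines, roughly - an untested sketch of the D3DX calls involved, assuming a non-Unicode build and skipping materials; the XVertex layout is just a placeholder:

#include <d3dx9.h>
#include <vector>
#include <cstring>

// Placeholder vertex layout matching the FVF below.
struct XVertex { D3DXVECTOR3 pos; D3DXVECTOR3 normal; float u, v; };
const DWORD XVERTEX_FVF = D3DFVF_XYZ | D3DFVF_NORMAL | D3DFVF_TEX1;

// Build an ID3DXMesh from raw data (e.g. copied out of Assimp) and save it as .x.
HRESULT SaveAsXFile(IDirect3DDevice9* device, const char* filename,
                    const XVertex* vertices, DWORD numVertices,
                    const WORD* indices, DWORD numFaces)
{
    LPD3DXMESH mesh = NULL;
    HRESULT hr = D3DXCreateMeshFVF(numFaces, numVertices, D3DXMESH_MANAGED,
                                   XVERTEX_FVF, device, &mesh);
    if (FAILED(hr)) return hr;

    void* data = NULL;
    mesh->LockVertexBuffer(0, &data);
    memcpy(data, vertices, numVertices * sizeof(XVertex));
    mesh->UnlockVertexBuffer();

    mesh->LockIndexBuffer(0, &data);
    memcpy(data, indices, numFaces * 3 * sizeof(WORD));   // 16-bit indices by default
    mesh->UnlockIndexBuffer();

    // D3DXSaveMeshToX also wants adjacency information.
    std::vector<DWORD> adjacency(numFaces * 3);
    mesh->GenerateAdjacency(0.0f, &adjacency[0]);

    hr = D3DXSaveMeshToX(filename, mesh, &adjacency[0],
                         NULL, NULL, 0,                    // no materials in this sketch
                         D3DXF_FILEFORMAT_TEXT);
    mesh->Release();
    return hr;
}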

But then how would I save the hierarchy, bone, animation and skin information? There's a lot more to a mesh than just vertices, indices, normals, etc.

Quote:
Original post by solenoidz
This could help, I think.
http://www.toymaker.info/Games/html/x_file_saving.html


Ah, excellent! That's such a great site!

Well, awesome. So, the modelers can export to Collada or another well-supported format, I can load them using Assimp, and then save them to .x using the info from that site. I think I should be all set then!

Thanks everyone!
