
DX11 Loading images in DirectX 11 to use as textures and sprites



So I'm working on an indie project for my startup, and I'm building the graphics system for the game. I'm trying to get textures loaded and displayed on the screen, but with the removal of the d3dx11.h header and all of the useful stuff it brought with it, I'm forced to figure out another way to load images into textures. I have looked into DirectXTK as a possibility, but I would really like to learn how to do this stuff on my own. The problem I'm running into is figuring out how to load various image file formats into memory to place into an ID3D11Texture2D. Does anybody know of any good resource material on the topic? I've been reading a lot of different sites about it, and half of them talk about the old d3dx11 header and everything in there, while the other half just talk about using DirectXTK.

 

In case the question comes up: the reason I'm trying to stick with DirectX 11 for this project is that we are hoping to get approved in the next few months to also develop it for Xbox One, so I wanted to stick to an API that will work on both PC and console.

 

Any help on this topic would be greatly appreciated, and if you need more info please let me know.

You can use DirectXTex, a loader that can load DDS and other graphics files, or you can use WIC. WIC is Windows' own image loading library, and it's pluggable, so if Windows can load a format, then so can you through WIC.

In one of my games I used WIC to load sprites and textures from PNG files.

This isn't the most efficient way to do it; you're better off converting to DDS, or to a format that closely matches the in-memory layout of the image, as part of your build process, since that means less parsing when loading assets. But PNG is friendly to modding, and that's why I chose it.
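If it helps, the WIC path boils down to roughly this (a trimmed-down sketch, most error handling omitted, and it assumes COM has already been initialised with CoInitialize/CoInitializeEx):

#include <wincodec.h>     // WIC (link against windowscodecs.lib)
#include <d3d11.h>
#include <wrl/client.h>
#include <vector>

using Microsoft::WRL::ComPtr;

// Sketch: decode an image file with WIC, convert it to 32-bit RGBA,
// and upload it as an immutable ID3D11Texture2D plus SRV.
HRESULT LoadImageTexture(ID3D11Device* device, const wchar_t* path,
                         ID3D11Texture2D** outTex, ID3D11ShaderResourceView** outSrv)
{
    ComPtr<IWICImagingFactory> factory;
    HRESULT hr = CoCreateInstance(CLSID_WICImagingFactory, nullptr,
                                  CLSCTX_INPROC_SERVER, IID_PPV_ARGS(&factory));
    if (FAILED(hr)) return hr;

    ComPtr<IWICBitmapDecoder> decoder;
    hr = factory->CreateDecoderFromFilename(path, nullptr, GENERIC_READ,
                                            WICDecodeMetadataCacheOnDemand, &decoder);
    if (FAILED(hr)) return hr;

    ComPtr<IWICBitmapFrameDecode> frame;
    hr = decoder->GetFrame(0, &frame);
    if (FAILED(hr)) return hr;

    // Convert whatever the file stores to 32bpp RGBA so it maps to DXGI_FORMAT_R8G8B8A8_UNORM.
    ComPtr<IWICFormatConverter> converter;
    factory->CreateFormatConverter(&converter);
    hr = converter->Initialize(frame.Get(), GUID_WICPixelFormat32bppRGBA,
                               WICBitmapDitherTypeNone, nullptr, 0.0,
                               WICBitmapPaletteTypeCustom);
    if (FAILED(hr)) return hr;

    UINT width = 0, height = 0;
    converter->GetSize(&width, &height);

    std::vector<BYTE> pixels(width * height * 4);
    converter->CopyPixels(nullptr, width * 4,
                          static_cast<UINT>(pixels.size()), pixels.data());

    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width            = width;
    desc.Height           = height;
    desc.MipLevels        = 1;
    desc.ArraySize        = 1;
    desc.Format           = DXGI_FORMAT_R8G8B8A8_UNORM;
    desc.SampleDesc.Count = 1;
    desc.Usage            = D3D11_USAGE_IMMUTABLE;
    desc.BindFlags        = D3D11_BIND_SHADER_RESOURCE;

    D3D11_SUBRESOURCE_DATA init = {};
    init.pSysMem     = pixels.data();
    init.SysMemPitch = width * 4;

    hr = device->CreateTexture2D(&desc, &init, outTex);
    if (FAILED(hr)) return hr;
    return device->CreateShaderResourceView(*outTex, nullptr, outSrv);
}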

Have fun!


I've used WIC successfully as well. It's not 'optimal', but it's still a solid route, and very easy, I might add.


The link to ChuckW's blog post above is terribly out of date, I'm afraid. What you actually want is this:

http://blogs.msdn.com/b/chuckw/archive/2015/08/05/where-is-the-directx-sdk-2015-edition.aspx

Which leads to DirectXTex:

https://github.com/Microsoft/DirectXTex

Which is a wonderfully simple library to use. Copy WICTextureLoader and DDSTextureLoader into your codebase and enjoy single function call texture loading.

 

P.S. Call CoInitialize near the beginning of your main function, or WIC won't work.
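In code it ends up being about this much (a sketch; the device pointer and the file names are placeholders for whatever your engine uses):

#include <d3d11.h>
#include <objbase.h>              // CoInitializeEx
#include <wrl/client.h>
#include "WICTextureLoader.h"     // copied from DirectXTex / DirectXTK
#include "DDSTextureLoader.h"

using Microsoft::WRL::ComPtr;

HRESULT LoadTextures(ID3D11Device* device,
                     ComPtr<ID3D11ShaderResourceView>& spriteSrv,
                     ComPtr<ID3D11ShaderResourceView>& tilesetSrv)
{
    // Do this once, early in main(); WIC won't work without it.
    CoInitializeEx(nullptr, COINIT_MULTITHREADED);

    // PNG / JPEG / BMP etc. go through the WIC loader...
    HRESULT hr = DirectX::CreateWICTextureFromFile(
        device, L"assets/player.png", nullptr, spriteSrv.GetAddressOf());
    if (FAILED(hr)) return hr;

    // ...while .dds files use the dedicated DDS loader (no WIC involved).
    return DirectX::CreateDDSTextureFromFile(
        device, L"assets/tileset.dds", nullptr, tilesetSrv.GetAddressOf());
}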

Edited by Promit


Wow, thank you everybody for the responses. When I said various image formats, it was just because I was unsure what I was going to use; I will probably use DDS for my textures, but I wasn't 100% certain. The more I research things, the more unsure I am about what I want to do. I have read more about DirectXTK and do see a major benefit to using it in a project, since it is supported on both PC and console.

 

The part I'm trying to wrap my head around is that, from what I can tell, you don't get any control over resource management and memory management when using it. I'm just trying to figure out whether that will be a problem. I was using a resource management system built around a template class, with a stack holding the actual resources, and a separate resource manager for each type of resource.
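Roughly, each manager looks something like this (very stripped down, the names are made up and the actual loading code is omitted):

#include <memory>
#include <stack>
#include <string>
#include <unordered_map>

// One of these exists per resource type (textures, sounds, meshes, ...).
// Resources are held on a stack so everything loaded for a "world" can be
// popped off again when the player leaves it.
template <typename TResource>
class ResourceManager
{
public:
    // Return a cached resource, or load it and remember it.
    std::shared_ptr<TResource> Acquire(const std::string& name)
    {
        auto it = _lookup.find(name);
        if (it != _lookup.end())
            return it->second;

        auto resource = std::make_shared<TResource>(/* load from disk here */);
        _resources.push(resource);
        _lookup[name] = resource;
        return resource;
    }

    // Drop the most recently loaded resource (e.g. while unloading a level).
    void PopLast()
    {
        if (_resources.empty())
            return;
        // A real version would also erase the matching _lookup entry.
        _resources.pop();
    }

private:
    std::stack<std::shared_ptr<TResource>>                       _resources;
    std::unordered_map<std::string, std::shared_ptr<TResource>>  _lookup;
};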

 

I'm not opposed to changing this system, or other parts of my code, to make it better and more robust; I'm just trying to make sure I'm making the correct choice. The game my company is trying to release is going to be a platformer, mainly side-scrolling with some in-and-out-of-the-screen movement, if that makes sense. It will be similar to Mega Man in the sense that you go to a "world" on the game map, that section loads, and the player can explore it.


resource management and memory management

 

If you use the DirectXTex library you need to be cool with COM (you're already using DirectX, right?) and WIC calls. If that's a deal breaker, you can roll your own DDS loader; WIC is only really useful for loading more complex formats like JPEG or PNG. The downside to DDS files is simply that they're large and don't trivially compress that well, but I doubt this is a pressing matter for you.
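If you do roll your own, the parsing isn't too bad for the simple cases. Something like this covers a single DXT5/BC3 surface with no mip chain, no cube maps and no DX10 extended header (just a sketch of the shape of it):

#include <cstdint>
#include <fstream>
#include <vector>
#include <d3d11.h>

#pragma pack(push, 1)
struct DDSPixelFormat {
    uint32_t size, flags, fourCC, rgbBitCount;
    uint32_t rMask, gMask, bMask, aMask;
};
struct DDSHeader {
    uint32_t size, flags, height, width, pitchOrLinearSize, depth, mipCount;
    uint32_t reserved1[11];
    DDSPixelFormat pf;
    uint32_t caps, caps2, caps3, caps4, reserved2;
};
#pragma pack(pop)

HRESULT LoadDxt5Dds(ID3D11Device* device, const char* path, ID3D11Texture2D** outTex)
{
    std::ifstream file(path, std::ios::binary);
    if (!file) return E_FAIL;

    uint32_t magic = 0;
    DDSHeader header = {};
    file.read(reinterpret_cast<char*>(&magic), sizeof(magic));
    file.read(reinterpret_cast<char*>(&header), sizeof(header));
    if (magic != 0x20534444 /* "DDS " */ || header.size != 124) return E_FAIL;
    if (header.pf.fourCC != 0x35545844 /* "DXT5" */) return E_FAIL;

    // BC3/DXT5 stores 16 bytes per 4x4 block.
    const uint32_t blocksWide = (header.width  + 3) / 4;
    const uint32_t blocksHigh = (header.height + 3) / 4;
    std::vector<char> data(blocksWide * blocksHigh * 16);
    file.read(data.data(), static_cast<std::streamsize>(data.size()));

    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width            = header.width;
    desc.Height           = header.height;
    desc.MipLevels        = 1;
    desc.ArraySize        = 1;
    desc.Format           = DXGI_FORMAT_BC3_UNORM;
    desc.SampleDesc.Count = 1;
    desc.Usage            = D3D11_USAGE_IMMUTABLE;
    desc.BindFlags        = D3D11_BIND_SHADER_RESOURCE;

    D3D11_SUBRESOURCE_DATA init = {};
    init.pSysMem     = data.data();
    init.SysMemPitch = blocksWide * 16;   // pitch is per row of blocks for BC formats

    return device->CreateTexture2D(&desc, &init, outTex);
}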

Edited by Dingleberry


Thanks again for the responses. I have decided to use DirectXTK in my project, and I'm learning how to make its various systems work in my engine. I'm very happy to say that I'm making some great progress on my game. The more I've looked into DirectXTK's resource management, the more I like it.

 

Thanks for everybody's help deciding what to do.


The downside to DDS files is simply that they're large and don't trivially compress that well
DDS textures will compress quite nicely if you use the right compression format for them. What's more, most graphics cards these days support the block-compression formats natively, and all of them are supported by DirectX, so you can send the card compressed texture data and it decompresses it on the fly for you, saving graphics RAM and memory bandwidth. There are many options for compressing DDS textures, some lossy and some not, and some supporting alpha channels of varying depths and some not; both choices affect the compression ratio.

Check it out, you'll be glad you did. I was able to get a DDS down from several megabytes to about the same size as a 24-bit compressed PNG of the same dimensions (1024x1024)... It's well worth it.
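For the offline conversion you can either use the texconv tool that ships with DirectXTex or call the library directly; roughly like this (a sketch against the current DirectXTex API, and WIC again needs COM initialised first):

#include <DirectXTex.h>    // DirectXTex library
using namespace DirectX;

// Sketch: load a PNG, block-compress it to BC3 (full alpha, 4:1),
// and save the result as a .dds. BC1 would give 8:1 with 1-bit alpha.
HRESULT ConvertPngToBc3Dds(const wchar_t* inPath, const wchar_t* outPath)
{
    ScratchImage source;
    HRESULT hr = LoadFromWICFile(inPath, WIC_FLAGS_NONE, nullptr, source);
    if (FAILED(hr)) return hr;

    ScratchImage compressed;
    hr = Compress(source.GetImages(), source.GetImageCount(), source.GetMetadata(),
                  DXGI_FORMAT_BC3_UNORM, TEX_COMPRESS_DEFAULT, TEX_THRESHOLD_DEFAULT,
                  compressed);
    if (FAILED(hr)) return hr;

    return SaveToDDSFile(compressed.GetImages(), compressed.GetImageCount(),
                         compressed.GetMetadata(), DDS_FLAGS_NONE, outPath);
}

At runtime the resulting DDS can be handed straight to the DDS loader and stays block-compressed in video memory.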



DDS textures will compress quite nicely if

 

This is true, but it's not trivial. Out-of-the-box 8:1 compression is usually fine for the GPU footprint, but isn't that great for file size. Jon Olick wrote about compressing DDS files a few years ago, though it might be overkill compared to just zipping a DDS.

 

http://www.jonolick.com/home/dxt-compression-part-4-entropy

 

Also don't forget that if you're using a block-compressed format, you usually can't accurately represent greys. Sometimes that's fine, but don't let the errors compound.
