[SlimDX] DX11 - Load cube texture from DDS file

Hi! Is there a way to load a cube texture from a DDS file in SlimDX (with Direct3D 11)? At the moment I'm forced to use a self-modified SlimDX build that creates a ShaderResourceView using D3DX11CreateShaderResourceViewFromFile from the unmanaged API. Now I'm wondering whether there's an "official" way to do this (that I just haven't found), or whether this functionality might be integrated into the next official release? Thanks! Fabian

ShaderResourceView.FromFile was added in a recent commit to the repository. The alternative is to load the texture separately and then create the resource view from that.

Cube textures in D3D10 and D3D11 are supported through the more generic "texture arrays". Loading your image as an array of 2D textures will allow you to achieve the same result.
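For example, with a build that includes that commit, loading and binding becomes a two-liner. A minimal sketch, assuming the new helper; "environment.dds" is a placeholder file name:

Device device;

...

// FromFile reads the DDS metadata, creates the underlying resource
// (a 6-slice Texture2D array for a cube map) and returns a view over it.
ShaderResourceView cubeView = ShaderResourceView.FromFile(device, "environment.dds");
device.ImmediateContext.PixelShader.SetShaderResource(cubeView, 0);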

I am having the same problem. I can't figure out how I would go about loading a DDS file into a texture array. Could you elaborate a little more on this topic? Would you perhaps be willing to post a code snippet demonstrating how to load a texture cube from a DDS file and pass it to the pixel shader? I have to say, the transition from DX9 is not a cakewalk...

Loading a DDS cube map from file using the API, or loading it yourself? If from the API:

Device device;
String fileName;

...

// Ask the loader to treat the DDS file as a cube map. Note that loadInfo
// must actually be passed to FromFile, otherwise the flag has no effect.
ImageLoadInformation loadInfo = new ImageLoadInformation();
loadInfo.OptionFlags = ResourceOptionFlags.TextureCube;

Texture2D tex2D = Texture2D.FromFile(device, fileName, loadInfo);
ShaderResourceView srView = new ShaderResourceView(device, tex2D);


Notice, just like Mike said, cube maps are really just Texture2D arrays (there's no special texture cube object), where the array holds six textures, one per face. If you're loading the DDS file yourself, you can create an empty cube texture like so:


Texture2DDescription descTex = new Texture2DDescription();
descTex.ArraySize = 6;                                   // one array slice per cube face
descTex.Width = 512;
descTex.Height = 512;
descTex.Usage = ResourceUsage.Default;
descTex.CpuAccessFlags = CpuAccessFlags.None;
descTex.Format = Format.R8G8B8A8_UNorm;
descTex.SampleDescription = new SampleDescription(1, 0); // no multisampling
descTex.MipLevels = 1;
descTex.BindFlags = BindFlags.ShaderResource;
descTex.OptionFlags = ResourceOptionFlags.TextureCube;   // marks the array as a cube map

Texture2D texture2D = new Texture2D(device, descTex);
ShaderResourceView shaderResourceView = new ShaderResourceView(device, texture2D);
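Once the empty cube texture exists, you still have to upload the six faces you parsed out of the DDS file yourself. A minimal sketch, assuming faceData is a hypothetical byte[][] with six entries, each holding 512x512 32-bit RGBA texels (in the order +X, -X, +Y, -Y, +Z, -Z):

for (int face = 0; face < 6; face++)
{
    using (DataStream stream = new DataStream(faceData[face], true, false))
    {
        // Row pitch = width * 4 bytes per texel; slice pitch is unused for 2D data.
        DataBox box = new DataBox(512 * 4, 0, stream);

        // Each face is its own subresource: mip 0 of array slice 'face', 1 mip level.
        int subresource = Resource.CalculateSubresourceIndex(0, face, 1);
        device.ImmediateContext.UpdateSubresource(box, texture2D, subresource);
    }
}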

Thanks!

I've managed to render my scene, but there's one thing I'm still trying to figure out - is there a way to bind the texture to the shader using the shader variable name? For now, I've just typed:

Device device;
String texturePath;

...

ImageLoadInformation loadInfo = new ImageLoadInformation();
loadInfo.OptionFlags = ResourceOptionFlags.TextureCube;

// loadInfo has to be passed along, or the file won't be loaded as a cube map.
Texture2D texture = Texture2D.FromFile(device, texturePath, loadInfo);
ShaderResourceView textureResourceView = new ShaderResourceView(device, texture);

device.ImmediateContext.PixelShader.SetShaderResource(textureResourceView, 0);

I've found someone else's solution here: [slimDX] textures on the simpletriangle, but the code seems kind of messy, with the shaders being compiled twice...

You pretty much have two options:

1. Use the effects framework (write FX files rather than individual vertex/pixel HLSL fragments). Then you can query an EffectResourceVariable from the Effect by index, by name, or by semantic: if you have "DiffuseMap" as your texture resource name, myEffect.GetVariableByName("DiffuseMap") returns an EffectVariable, and calling AsResource() casts it to an EffectResourceVariable, which has a SetResource(ShaderResourceView) method (see the sketch after the note below). I'd strongly advise reviewing the official SlimDX docs, as they would help in following/understanding this progression.

Note: The effects framework is built on top of the regular shader API; you can choose not to use it and set your shaders and their variables manually (or even write a replacement for the effects framework), just like what you're doing.
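A minimal sketch of option 1; "myEffect.fx" and "DiffuseMap" are placeholder names:

using (ShaderBytecode bytecode = ShaderBytecode.CompileFromFile("myEffect.fx", "fx_5_0", ShaderFlags.None, EffectFlags.None))
{
    Effect effect = new Effect(device, bytecode);

    // Look the variable up by its HLSL name and bind the cube map view to it.
    EffectResourceVariable diffuseMap = effect.GetVariableByName("DiffuseMap").AsResource();
    diffuseMap.SetResource(textureResourceView);

    // Applying a pass commits the shaders and their bound resources to the pipeline.
    effect.GetTechniqueByIndex(0).GetPassByIndex(0).Apply(device.ImmediateContext);
}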

2. Use the shader reflection classes from the SlimDX.D3DCompiler namespace. You can create a ShaderReflection object from your shader's bytecode, which allows you to query information about the shader. What you're looking for is the shaderReflection.GetResourceBindingDescription(int) method: loop over each resource (the shader reflection description has a bound-resource count), and each resource binding description has a name you can use (see the sketch after the edit below).

The shader reflection API is pretty powerful, albeit in my opinion a bit cumbersome to navigate if you haven't touched it before. But it can be a useful tool, as you can query all sorts of things about your variables and shaders. Although I use the effects framework, I do use shader reflection to build an input layout when I load my shaders, for example.

Edit: I should also mention that with option #2 you will still be setting the shader resource views via an index in the *ShaderWrapper classes. But you can use what you learn from the shader reflection queries in a custom setup that associates those indices with resource names. This is why using the effects framework is easier: it does this sort of bookkeeping for you automatically.
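A minimal sketch of option 2, assuming pixelShaderBytecode is the ShaderBytecode your pixel shader was compiled from; "gCubeMap" is a placeholder for your HLSL resource name:

using (ShaderReflection reflection = new ShaderReflection(pixelShaderBytecode))
{
    for (int i = 0; i < reflection.Description.BoundResources; i++)
    {
        InputBindingDescription binding = reflection.GetResourceBindingDescription(i);

        // Match by HLSL name, then bind at the slot the compiler assigned.
        if (binding.Name == "gCubeMap")
            device.ImmediateContext.PixelShader.SetShaderResource(textureResourceView, binding.BindPoint);
    }
}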
