DX11 Auto Generate MipMap Not Working


Recommended Posts

I want to use SampleLevel to sample different mipmap levels manually, but it's not working, as the image below shows:

[Attached image: Untitled.png]

As the picture shows, it always uses the level 0 texture!

 

Here is my code for loading a texture. I use the DirectX texture loading tools (CreateDDSTextureFromFileEx), and I'm using DX11:

HR(DirectX::CreateDDSTextureFromFileEx(
    AngelCore::AngelSubSystemResources::GraphicDeviceResources::Device.Get(),
    path.c_str(),
    MAXSIZE_T,                                              // maxsize
    D3D11_USAGE_DEFAULT,                                    // usage
    D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE,  // bindFlags
    0,                                                      // cpuAccessFlags
    D3D11_RESOURCE_MISC_GENERATE_MIPS,                      // miscFlags
    false,                                                  // forceSRGB
    m_textureResource.GetAddressOf(),
    m_textureView.GetAddressOf()));

I also create my sampler state with the options below:

Filter         = D3D11_FILTER_ANISOTROPIC
AddressU/V/W   = D3D11_TEXTURE_ADDRESS_WRAP
ComparisonFunc = D3D11_COMPARISON_ALWAYS
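
For reference, a complete D3D11_SAMPLER_DESC with those options might look like the sketch below (MaxAnisotropy is an illustrative choice). One thing worth double-checking is MinLOD/MaxLOD, since the LOD passed to SampleLevel is clamped to that range; with a zero-initialized desc, MaxLOD is 0 and only level 0 would ever be sampled.

// Sketch of a full sampler description for the options listed above.
D3D11_SAMPLER_DESC sd = {};
sd.Filter         = D3D11_FILTER_ANISOTROPIC;
sd.AddressU       = D3D11_TEXTURE_ADDRESS_WRAP;
sd.AddressV       = D3D11_TEXTURE_ADDRESS_WRAP;
sd.AddressW       = D3D11_TEXTURE_ADDRESS_WRAP;
sd.ComparisonFunc = D3D11_COMPARISON_ALWAYS;
sd.MaxAnisotropy  = 16;
sd.MinLOD         = 0.0f;
sd.MaxLOD         = D3D11_FLOAT32_MAX;   // do not clamp away the mip chain

Microsoft::WRL::ComPtr<ID3D11SamplerState> sampler;
HR(AngelCore::AngelSubSystemResources::GraphicDeviceResources::Device.Get()
       ->CreateSamplerState(&sd, sampler.GetAddressOf()));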

What is the problem?


I can't see what parameters you're using when you create the texture and shader resource view since you're using a helper function. Are you creating the texture with a full mipmap chain, or are you creating it with only 1 mip level? Are you creating the shader resource view with a full mipmap chain?
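
One quick way to check, assuming m_textureResource is the ComPtr<ID3D11Resource> from your snippet and the file is a 2D texture, is to query the resource after creation:

// Sketch: inspect how many mip levels the loader actually created.
Microsoft::WRL::ComPtr<ID3D11Texture2D> tex2D;
HR(m_textureResource.As(&tex2D));   // QueryInterface to the 2D type

D3D11_TEXTURE2D_DESC desc = {};
tex2D->GetDesc(&desc);
// desc.MipLevels == 1 means there is no chain to sample from;
// a full chain for a 512x512 texture would report 10 levels.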



This function doesn't take any of those parameters explicitly; it just returns the shader resource view.

I also tried the D3DX11 function, but nothing changed.


What format is your texture? The documentation page for ID3D11DeviceContext::GenerateMips notes a list of supported formats, and also notes that:

 

For all other unsupported formats, GenerateMips will silently fail.

 

Assuming that you are actually using a supported format, then you should review the documentation for the helper function you're using - presumably DirectXTK's CreateDDSTextureFromFileEx (although you really should have said so). Note the discussion around the maxsize parameter, how it interacts with mip levels, and ensure that the behaviour you're requesting is the behaviour you actually want. Also note that you must call the overload that takes your context as a parameter if you want it to generate mips for you.
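
For reference, the raw D3D11 path that the helper roughly wraps looks like the sketch below (device and context are your ID3D11Device and immediate context; width, height, pixels and rowPitch stand in for your image data). All of these conditions must hold, or GenerateMips silently does nothing:

// Sketch of manual auto-mip generation: both bind flags, the
// GENERATE_MIPS misc flag, a full mip chain, and a supported format.
D3D11_TEXTURE2D_DESC td = {};
td.Width      = width;
td.Height     = height;
td.MipLevels  = 0;                             // 0 = allocate the full chain
td.ArraySize  = 1;
td.Format     = DXGI_FORMAT_R8G8B8A8_UNORM;    // a GenerateMips-capable format
td.SampleDesc = { 1, 0 };
td.Usage      = D3D11_USAGE_DEFAULT;
td.BindFlags  = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
td.MiscFlags  = D3D11_RESOURCE_MISC_GENERATE_MIPS;

Microsoft::WRL::ComPtr<ID3D11Texture2D> tex;
HR(device->CreateTexture2D(&td, nullptr, tex.GetAddressOf()));

// Upload only the top level, then let the GPU fill in the rest.
context->UpdateSubresource(tex.Get(), 0, nullptr, pixels, rowPitch, 0);

D3D11_SHADER_RESOURCE_VIEW_DESC srvd = {};
srvd.Format                    = td.Format;
srvd.ViewDimension             = D3D11_SRV_DIMENSION_TEXTURE2D;
srvd.Texture2D.MostDetailedMip = 0;
srvd.Texture2D.MipLevels       = UINT(-1);     // expose the whole chain

Microsoft::WRL::ComPtr<ID3D11ShaderResourceView> srv;
HR(device->CreateShaderResourceView(tex.Get(), &srvd, srv.GetAddressOf()));

context->GenerateMips(srv.Get());              // no-ops on unsupported formats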

 

Finally, since you're loading a DDS file, why are you generating mips at runtime anyway?  DDS is capable of storing pregenerated mips, so you could consider that instead.



OK, now I use this function:

HR(DirectX::CreateDDSTextureFromFileEx(
    AngelCore::AngelSubSystemResources::GraphicDeviceResources::Device.Get(),
    AngelCore::AngelSubSystemResources::GraphicDeviceResources::DeviceContext.Get(),
    path.c_str(),
    D3D10_FLOAT32_MAX,                                      // maxsize
    D3D11_USAGE_DEFAULT,
    D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE,
    0,                                                      // cpuAccessFlags
    D3D11_RESOURCE_MISC_GENERATE_MIPS,
    false,                                                  // forceSRGB
    m_textureResource.GetAddressOf(),
    m_textureView.GetAddressOf()));

But nothing's changed!

How am I supposed to generate static mipmaps?


Now you're passing a float to a size_t parameter - do you have warnings turned off?

 

Please read the documentation - it's legal to specify 0 for maxsize, which is probably the behaviour you actually want:

 

If '0', then if the attempt to create the Direct3D 11 resource fails and there are mipmaps present, it will retry assuming a maxsize based on the device's current feature level.

 

Again, note from the documentation (with my emphasis):

 

If a d3dContext is given to these functions, they will attempt to use the auto-generation of mipmaps features in the Direct3D 11 API if supported for the pixel format.

 

You really need to check the pixel format of your DDS file and confirm that it is a format for which automatic mipmap generation is supported.
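
Putting that together, a corrected call would look something like the sketch below (your wrapper names kept as-is). Afterwards, re-query the texture description as suggested earlier; MipLevels should now be greater than 1.

HR(DirectX::CreateDDSTextureFromFileEx(
    AngelCore::AngelSubSystemResources::GraphicDeviceResources::Device.Get(),
    AngelCore::AngelSubSystemResources::GraphicDeviceResources::DeviceContext.Get(),
    path.c_str(),
    0,                                             // maxsize: 0, not a float constant
    D3D11_USAGE_DEFAULT,
    D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE,
    0,                                             // cpuAccessFlags
    D3D11_RESOURCE_MISC_GENERATE_MIPS,
    false,                                         // forceSRGB
    m_textureResource.GetAddressOf(),
    m_textureView.GetAddressOf()));
// If the DDS stores a format GenerateMips doesn't support (e.g. a
// block-compressed one), pregenerate the mips offline in the file instead.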


