hupsilardee

DX11 Mip map initialisation


So my adventures in D3D11 continue. In this project I am also trying not to depend on D3DX11, so I am creating textures programmatically. However, I am struggling to understand how mip map generation works. Here's how I create a texture (minus error checking etc.):

[code]

DWORD* loaded_data; // loaded using FreeImage

uint width = FreeImage_GetWidth(image32);
uint height = FreeImage_GetHeight(image32);
uint mips = GetNumMipLevels(width, height);

D3D11_TEXTURE2D_DESC Desc;
Desc.Width = width;
Desc.Height = height;
Desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
Desc.MipLevels = mips; // auto-generates the whole chain
Desc.MiscFlags = 0;
Desc.ArraySize = 1;
Desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
Desc.CPUAccessFlags = 0;
Desc.SampleDesc.Count = 1;
Desc.SampleDesc.Quality = 0;
Desc.Usage = D3D11_USAGE_IMMUTABLE;

D3D11_SUBRESOURCE_DATA* initialData = new D3D11_SUBRESOURCE_DATA[mips];

for (uint i = 0; i < mips; i++)
{
    uint mip = (uint)powf(2.f, (float)i);
    initialData[i].pSysMem = loaded_data;
    initialData[i].SysMemPitch = width * 4;
    initialData[i].SysMemSlicePitch = width * height * 4;
}

pDevice->pDevice->CreateTexture2D(&Desc, initialData, &pTexture);
delete[] initialData;

D3D11_SHADER_RESOURCE_VIEW_DESC viewDesc;
ZeroMemory(&viewDesc, sizeof(D3D11_SHADER_RESOURCE_VIEW_DESC));
viewDesc.Format = Desc.Format;
viewDesc.Texture2D.MipLevels = -1;
viewDesc.Texture2D.MostDetailedMip = 0;
viewDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;

pDevice->pDevice->CreateShaderResourceView(pTexture, &viewDesc, &pView);

return S_OK;

[/code]
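
For reference, GetNumMipLevels just counts how many times the dimensions can be halved before reaching 1x1; a sketch of the usual definition:

[code]
uint GetNumMipLevels(uint width, uint height)
{
    uint levels = 1;
    while (width > 1 || height > 1)
    {
        width = (width > 1) ? width / 2 : 1;
        height = (height > 1) ? height / 2 : 1;
        levels++;
    }
    return levels;
}
[/code]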

However, I don't think this is generating the mip maps correctly, or they aren't being used. Look at the left of the cube in this screenshot:

[url="http://imageshack.us/photo/my-images/209/32138660.png/"][img]http://img209.imageshack.us/img209/6316/32138660.png[/img][/url]

I don't [i]think[/i] the sampler is at fault, but just in case, it is set up like this:
[code]
D3D11_SAMPLER_DESC desc;
desc.AddressU = D3D11_TEXTURE_ADDRESS_WRAP;
desc.AddressV = desc.AddressU;
desc.AddressW = desc.AddressU;
desc.BorderColor[0] = desc.BorderColor[1] = desc.BorderColor[2] = desc.BorderColor[3] = 1.0f;
desc.ComparisonFunc = D3D11_COMPARISON_NEVER;
desc.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR;
desc.MaxAnisotropy = 0;
desc.MaxLOD = 0;
desc.MinLOD = 0;
desc.MipLODBias = 0.0f;
[/code]

So can anyone see what I'm doing wrong?

MJP
The way you're trying to do things would require that you have the mipmap data already generated when you create the texture. There is no "automatic" mip generation for IMMUTABLE textures, since they require that you pass [i]all[/i] of the required data at creation time. What your code is actually doing is passing the top-level mip data to all of the lower-resolution mip levels, which isn't right. If you do have all of the mips generated, then each D3D11_SUBRESOURCE_DATA needs to point to the data for its own mip level. You would also need to set the proper pitch for each mip level, which gets smaller with each successive level.
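
Something like this, as a sketch (mipData[i] is assumed to already hold the generated pixels for level i):

[code]
// One D3D11_SUBRESOURCE_DATA per mip level, each pointing at that
// level's own data with that level's own pitch.
D3D11_SUBRESOURCE_DATA* initialData = new D3D11_SUBRESOURCE_DATA[mips];
uint mipWidth = width;
uint mipHeight = height;
for (uint i = 0; i < mips; i++)
{
    initialData[i].pSysMem = mipData[i];                        // this level's data, not level 0's
    initialData[i].SysMemPitch = mipWidth * 4;                  // 4 bytes per R8G8B8A8 texel
    initialData[i].SysMemSlicePitch = mipWidth * mipHeight * 4; // only used for 3D textures
    mipWidth = (mipWidth > 1) ? mipWidth / 2 : 1;               // each level is half the previous,
    mipHeight = (mipHeight > 1) ? mipHeight / 2 : 1;            // clamped to at least 1 texel
}
[/code]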

If you don't actually have the mips generated and you want them to be generated by the hardware, you need to set it up differently. Here's an example of how to do it, taken from the WICTextureLoader sample from DirectXTex:

[code]
// Create texture
D3D11_TEXTURE2D_DESC desc;
desc.Width = twidth;
desc.Height = theight;
desc.MipLevels = (autogen) ? 0 : 1;
desc.ArraySize = 1;
desc.Format = format;
desc.SampleDesc.Count = 1;
desc.SampleDesc.Quality = 0;
desc.Usage = D3D11_USAGE_DEFAULT;
desc.BindFlags = (autogen) ? (D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_RENDER_TARGET) : (D3D11_BIND_SHADER_RESOURCE);
desc.CPUAccessFlags = 0;
desc.MiscFlags = (autogen) ? D3D11_RESOURCE_MISC_GENERATE_MIPS : 0;

D3D11_SUBRESOURCE_DATA initData;
initData.pSysMem = temp.get();
initData.SysMemPitch = static_cast<UINT>( rowPitch );
initData.SysMemSlicePitch = static_cast<UINT>( imageSize );

ID3D11Texture2D* tex = nullptr;
hr = d3dDevice->CreateTexture2D( &desc, (autogen) ? nullptr : &initData, &tex );
if ( SUCCEEDED(hr) && tex != 0 )
{
    if (textureView != 0)
    {
        D3D11_SHADER_RESOURCE_VIEW_DESC SRVDesc;
        memset( &SRVDesc, 0, sizeof( SRVDesc ) );
        SRVDesc.Format = format;
        SRVDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
        SRVDesc.Texture2D.MipLevels = (autogen) ? -1 : 1;

        hr = d3dDevice->CreateShaderResourceView( tex, &SRVDesc, textureView );
        if ( FAILED(hr) )
        {
            tex->Release();
            return hr;
        }

        if ( autogen )
        {
            assert( d3dContext != 0 );
            d3dContext->UpdateSubresource( tex, 0, nullptr, temp.get(), static_cast<UINT>(rowPitch), static_cast<UINT>(imageSize) );
            d3dContext->GenerateMips( *textureView );
        }
    }
}
[/code]

hupsilardee
Sorry to bump such an old thread. Thanks for the help MJP; it has been solved as you said. FreeImage has image resize functions which made it very quick and easy, and they support a variety of filters as well, which is great.
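
For anyone finding this later, here's a sketch of that approach (not my exact code; image32 is the 32-bit source image as before):

[code]
// Build the mip chain on the CPU with FreeImage_Rescale before
// creating the IMMUTABLE texture.
uint mipWidth = width;
uint mipHeight = height;
for (uint i = 0; i < mips; i++)
{
    FIBITMAP* mipImage = FreeImage_Rescale(image32, mipWidth, mipHeight, FILTER_BOX);
    initialData[i].pSysMem = FreeImage_GetBits(mipImage);
    initialData[i].SysMemPitch = FreeImage_GetPitch(mipImage); // width * 4 for 32-bit images
    initialData[i].SysMemSlicePitch = mipWidth * mipHeight * 4;
    mipWidth = (mipWidth > 1) ? mipWidth / 2 : 1;
    mipHeight = (mipHeight > 1) ? mipHeight / 2 : 1;
    // Keep each mipImage alive until CreateTexture2D returns,
    // then FreeImage_Unload it.
}
[/code]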

I have another question about this method though. It appears that you are marking the texture for default usage and as a potential render target. Surely there is a performance hit for this?

(I thought that if the texture is IMMUTABLE, on some cards it goes into a different texture memory that the hardware can access faster. If not, then this is doubly excellent: not only is texture generation likely to be much faster due to GPU mip map generation, but all textures can be render targets, so I could have super fast bullet holes and decals!)

I am also bumping to point out that the sampler is in fact incorrectly set up. As the MinLOD and MaxLOD members are both set to 0, the sampler is forced to use only the most detailed mip, which of course is wrong. The correct sampler state is as above, but with:

[code]
desc.MaxLOD = D3D11_FLOAT32_MAX; // or simply FLT_MAX
desc.MinLOD = 0;
[/code]

MJP
It's possible that using default usage and render target binding could cause different behavior for a texture, but that's completely dependent on the hardware and the driver. Your best bet for GPU performance is to use IMMUTABLE, since that guarantees that the driver will put the texture in the best possible memory location, with the best possible memory layout, for that use case.
