Hello all
For my project i have developed my own texture format and im currently writing a program that converts png images into that format including their precalculated mip map layers. I thought id use d3d11 to calculate the mipmaps since ive been using them mipmaps created by the engine itself so far for the textures and just read the actual data from the texture. In order to do so ive first created a texture with the appropriate flags and bindings to generate mipmaps and then copied it to a texture which can be read from the CPU. I then use squish to convert these layers into (right now statically) dxt1.
In code this means:
std::vector<uint8> img = createImage(file, w, h);
/* snippet removed: getting layer count -> it works */
D3D11_TEXTURE2D_DESC texDesc = { 0 };
texDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_RENDER_TARGET;
texDesc.CPUAccessFlags = 0;
texDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
texDesc.MiscFlags = D3D11_RESOURCE_MISC_GENERATE_MIPS;
/* removed obvious like array size, usage, and so on, it all works */
ID3D11Texture2D* mipTexture = nullptr;
massert(SUCCEEDED(gImageDevice->CreateTexture2D(&texDesc, nullptr, &mipTexture)));
gImageCtx->UpdateSubresource(mipTexture, 0, nullptr, img.data(), w * 4, 0);
ID3D11ShaderResourceView* srv = nullptr;
/* snippet removed, obvious SRV creation, same mip levels, same format */
massert(SUCCEEDED(gImageDevice->CreateShaderResourceView(mipTexture, &srvd, &srv)));
gImageCtx->GenerateMips(srv);
texDesc.BindFlags = 0;
texDesc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
texDesc.MiscFlags = 0;
texDesc.Usage = D3D11_USAGE_STAGING;
ID3D11Texture2D* cpuTexture = nullptr;
massert(SUCCEEDED(gImageDevice->CreateTexture2D(&texDesc, nullptr, &cpuTexture)));
//gImageCtx->CopyResource(cpuTexture, mipTexture);
for (uint32 i = 0; i < numLayers; ++i) {
gImageCtx->CopySubresourceRegion(cpuTexture, i, 0, 0, 0, mipTexture, i, nullptr);
}
/* snippet removed, opening the file (binary) and writing the header */
for (uint32 i = 0; i < numLayers; ++i) {
D3D11_MAPPED_SUBRESOURCE resource;
massert(SUCCEEDED(gImageCtx->Map(cpuTexture, i, D3D11_MAP_READ, 0, &resource)));
uint32 cw = std::max<uint32>(w >> i, 1);
uint32 ch = std::max<uint32>(h >> i, 1);
std::vector<uint8> layerData(cw * ch * 4);
memcpy(layerData.data(), resource.pData, layerData.size());
gImageCtx->Unmap(cpuTexture, i);
auto compSize = squish::GetStorageRequirements(cw, ch, squish::kDxt1);
std::vector<uint8> outData(compSize);
squish::CompressImage(img.data(), cw, ch, outData.data(), squish::kDxt1);
os.write((const char*) outData.data(), outData.size());
}
While this works fine for the first layer i have some problems with subsequent mip levels. For the first layer see:
(RGBA vs BGRA aka D3D11 vs Chromium)
Now for example the second layer already looks bad, see here:
Layer 1:
Layer 2:
Layer 3:
and so on
As you can see im not happy with how stuff looks after layer 1. This also is visible when im using said texture it looks very bad:
Am i doing something wrong or is that just.... uhm... the way d3d creates mip levels? Are there good alternatives to d3d to create the mipmaps?
Any help or hints are much appreciated. I wish you a nice evening (or whatever time of the day applies to you ;))
Plerion