
Creating readable mipmaps in D3D11


2 replies to this topic

#1 Plerion   Members   -  Reputation: 368


Posted 05 August 2014 - 04:05 PM

Hello all

 

For my project I have developed my own texture format, and I'm currently writing a program that converts PNG images into that format, including their precalculated mipmap layers. I thought I'd use D3D11 to calculate the mipmaps, since so far I've been using the mipmaps created by the engine itself for the textures and just reading the actual data back from the texture. To do so, I first create a texture with the appropriate flags and bindings to generate mipmaps, then copy it to a texture that can be read from the CPU. I then use squish to convert these layers to (right now statically) DXT1.

 

In code this means:

	std::vector<uint8> img = createImage(file, w, h);
	/* snippet removed: getting layer count -> it works */

	D3D11_TEXTURE2D_DESC texDesc = { 0 };
	texDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_RENDER_TARGET;
	texDesc.CPUAccessFlags = 0;
	texDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
	texDesc.MiscFlags = D3D11_RESOURCE_MISC_GENERATE_MIPS;
	/* removed obvious like array size, usage, and so on, it all works */

	ID3D11Texture2D* mipTexture = nullptr;
	
	massert(SUCCEEDED(gImageDevice->CreateTexture2D(&texDesc, nullptr, &mipTexture)));
	gImageCtx->UpdateSubresource(mipTexture, 0, nullptr, img.data(), w * 4, 0);
	ID3D11ShaderResourceView* srv = nullptr;
	/* snippet removed, obvious SRV creation, same mip levels, same format */
	massert(SUCCEEDED(gImageDevice->CreateShaderResourceView(mipTexture, &srvd, &srv)));
	gImageCtx->GenerateMips(srv);

	texDesc.BindFlags = 0;
	texDesc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
	texDesc.MiscFlags = 0;
	texDesc.Usage = D3D11_USAGE_STAGING;

	ID3D11Texture2D* cpuTexture = nullptr;
	massert(SUCCEEDED(gImageDevice->CreateTexture2D(&texDesc, nullptr, &cpuTexture)));

	//gImageCtx->CopyResource(cpuTexture, mipTexture);
	for (uint32 i = 0; i < numLayers; ++i) {
		gImageCtx->CopySubresourceRegion(cpuTexture, i, 0, 0, 0, mipTexture, i, nullptr);
	}
	/* snippet removed, opening the file (binary) and writing the header */

	for (uint32 i = 0; i < numLayers; ++i) {
		D3D11_MAPPED_SUBRESOURCE resource;
		massert(SUCCEEDED(gImageCtx->Map(cpuTexture, i, D3D11_MAP_READ, 0, &resource)));
		uint32 cw = std::max<uint32>(w >> i, 1);
		uint32 ch = std::max<uint32>(h >> i, 1);

		std::vector<uint8> layerData(cw * ch * 4);
		memcpy(layerData.data(), resource.pData, layerData.size());
		gImageCtx->Unmap(cpuTexture, i);

		auto compSize = squish::GetStorageRequirements(cw, ch, squish::kDxt1);
		std::vector<uint8> outData(compSize);
		squish::CompressImage(layerData.data(), cw, ch, outData.data(), squish::kDxt1);
		os.write((const char*) outData.data(), outData.size());
	}
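For completeness, the layer-count snippet I removed above just computes the size of a full mip chain, i.e. 1 + floor(log2(max(w, h))). A minimal sketch of that computation (the helper name is made up here, not part of my actual code):

```cpp
#include <algorithm>
#include <cstdint>

// Number of mip levels in a full chain for a w x h texture:
// keep halving the larger dimension until it reaches 1.
uint32_t mipLevelCount(uint32_t w, uint32_t h) {
    uint32_t levels = 1;
    uint32_t size = std::max(w, h);
    while (size > 1) {
        size >>= 1;
        ++levels;
    }
    return levels;
}
```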

While this works fine for the first layer, I have some problems with subsequent mip levels. For the first layer, see:

NICXbtD.png

(RGBA vs BGRA aka D3D11 vs Chromium)

 

Now, for example, the second layer already looks bad; see here:

Layer 1:

S08AmeK.png

 

Layer 2:

L5elb4B.png

 

Layer 3:

xLPwQDg.png

 

and so on

 

As you can see, I'm not happy with how things look after layer 1. This is also visible when I use said texture; it looks very bad:

hHFU0NK.png

 

Am I doing something wrong, or is that just... uhm... the way D3D creates mip levels? Are there good alternatives to D3D for creating the mipmaps?

 

Any help or hints are much appreciated. I wish you a nice evening (or whatever time of the day applies to you ;))

Plerion



#2 MJP   Moderators   -  Reputation: 11438


Posted 05 August 2014 - 08:22 PM

You need to use the RowPitch member of D3D11_MAPPED_SUBRESOURCE when reading your mapped staging texture. Staging textures can have their width padded in order to accommodate hardware requirements, so you need to take it into account in your code. Typically what you'll do is read the data one row at a time in a loop. For each iteration you'll memcpy a single row of unpadded texture data, and then increment your source pointer by the pitch size.
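In code, that row-by-row copy could look roughly like this: a plain helper, independent of D3D11, whose name is made up for illustration. It copies rowBytes bytes per row from a pitched source into a tightly packed destination, advancing the source pointer by the pitch each iteration.

```cpp
#include <cstdint>
#include <cstring>

// Copy `rows` rows of `rowBytes` bytes each from a pitched source buffer
// (rowPitch bytes between row starts, with rowPitch >= rowBytes) into a
// tightly packed destination -- as one would do after Map() on a staging
// texture, using D3D11_MAPPED_SUBRESOURCE::RowPitch as the pitch.
void copyPitchedRows(uint8_t* dst, const uint8_t* src,
                     uint32_t rowPitch, uint32_t rowBytes, uint32_t rows) {
    for (uint32_t y = 0; y < rows; ++y) {
        std::memcpy(dst + y * rowBytes, src + y * rowPitch, rowBytes);
    }
}
```

Applied to the loop in the original post, this would replace the single memcpy with something like `copyPitchedRows(layerData.data(), static_cast<const uint8_t*>(resource.pData), resource.RowPitch, cw * 4, ch);`.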



#3 Plerion   Members   -  Reputation: 368


Posted 06 August 2014 - 10:32 AM

Hello MJP

 

Indeed, I forgot about that, but in this case it was not the source of the problem. Even with the correct row pitch the output still came out wrong. It turns out that squish converted the first layer to S3TC correctly but then produced wrong conversions for the subsequent layers. I switched to libtxc_dxtn and got it all working now!
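For reference, the sizes that squish::GetStorageRequirements returns for kDxt1 follow from DXT1/BC1 packing each 4x4 pixel block into 8 bytes, so the compressed size is ceil(w/4) * ceil(h/4) * 8. A sketch of that arithmetic (the helper name is made up here):

```cpp
#include <cstdint>

// DXT1 (BC1) packs each 4x4 pixel block into 8 bytes; dimensions are
// rounded up to the next multiple of 4 blocks.
uint32_t dxt1StorageBytes(uint32_t w, uint32_t h) {
    uint32_t blocksX = (w + 3) / 4;
    uint32_t blocksY = (h + 3) / 4;
    return blocksX * blocksY * 8;
}
```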

 

Greetings

Plerion





