D3D11 texture image data from memory


Hello,

For some tests I used D3DX11CreateShaderResourceViewFromFile and everything worked fine.

In the future I want to pack all my images into a container format and preload that file.

Now I want to preload all my images with stb_image.h and use them in D3D11.

MSDN says I can use D3DX11CreateShaderResourceViewFromMemory to do this.

What is the proper way to create a texture from an unsigned char* and use it?

Loading image data:


// Decode the image file into raw 32-bit RGBA pixels
object.imageData = stbi_load(imagePath.c_str(), &object.width, &object.height, &object.bit, STBI_rgb_alpha);

Creating texture from image data:


result = D3DX11CreateShaderResourceViewFromMemory(m_pDevice,
			(void*)object.imageData,  // pSrcData
			0,                        // SrcDataSize
			NULL,                     // pLoadInfo
			NULL,                     // pPump
			&object.texture,          // ppSRView
			NULL);                    // pHResult

I have some problems with the image data size.

I tried various calculations, but even when result is S_OK it crashes.


Passing '0' for the size (3rd argument) doesn't sound like what you want to be doing.

Care to supply a call stack and relevant error for the crash or should we guess what it was?

Adam Miles - Principal Software Development Engineer - Microsoft Xbox Advanced Technology Group

D3DX11CreateShaderResourceViewFromMemory expects that the data you give it comes from an image file, such as a JPEG, DDS, or PNG file. stbi_load parses an image file and gives you back the raw pixel data that was decoded from it.

To use that raw data to initialize a texture, call ID3D11Device::CreateTexture2D and pass the raw image data through a D3D11_SUBRESOURCE_DATA structure supplied as the "pInitialData" parameter. For a 2D texture, set pSysMem to the image data pointer that you get back from stbi_load, and set SysMemPitch to the size of a pixel times the width of your texture. In your case you're loading 8-bit RGBA data, which is 4 bytes per pixel, so you should set it to "object.width * 4".
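
A minimal sketch of that approach (assuming the object fields from the snippet above, a valid m_pDevice, and no error checking):


// Describe a 2D texture matching the raw RGBA pixels returned by stbi_load
D3D11_TEXTURE2D_DESC desc = {};
desc.Width = object.width;
desc.Height = object.height;
desc.MipLevels = 1;
desc.ArraySize = 1;
desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;   // STBI_rgb_alpha gives 8-bit RGBA
desc.SampleDesc.Count = 1;
desc.Usage = D3D11_USAGE_IMMUTABLE;
desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;

// Point pInitialData at the decoded pixels
D3D11_SUBRESOURCE_DATA initData = {};
initData.pSysMem = object.imageData;
initData.SysMemPitch = object.width * 4;    // 4 bytes per pixel

ID3D11Texture2D* texture = nullptr;
HRESULT hr = m_pDevice->CreateTexture2D(&desc, &initData, &texture);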

D3DX11CreateShaderResourceViewFromMemory expects that the data you give it comes from an image file, such as a JPEG, DDS, or PNG file.

I read about that, but I thought stbi_load would NOT decode it.

Now that I think about it, it makes sense.

But if I want to pack my images in the future, I need to read the raw file data without decoding it.

If I read the PNG images with the WinAPI instead of stbi_load and then use D3DX11CreateShaderResourceViewFromMemory, it should work?

If I read the PNG images with the WinAPI instead of stbi_load and then use D3DX11CreateShaderResourceViewFromMemory, it should work?


Yes. You can use OpenFile and ReadFile to load the contents of a file into memory, and then pass that to D3DX11CreateShaderResourceViewFromMemory.
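
A rough sketch of what that could look like (assumes <windows.h> and <vector>, the same m_pDevice and object.texture as in the earlier snippet, and omits error handling):


// Read the whole PNG file into memory with the Win32 API,
// then let D3DX decode it and create the SRV in one call
HANDLE file = CreateFileA(imagePath.c_str(), GENERIC_READ, FILE_SHARE_READ,
			NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
DWORD fileSize = GetFileSize(file, NULL);
std::vector<unsigned char> fileData(fileSize);
DWORD bytesRead = 0;
ReadFile(file, fileData.data(), fileSize, &bytesRead, NULL);
CloseHandle(file);

result = D3DX11CreateShaderResourceViewFromMemory(m_pDevice,
			fileData.data(),
			fileSize,         // actual size of the file in memory
			NULL,
			NULL,
			&object.texture,
			NULL);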

I should point out that many games do not store their textures using image file formats such as JPEG and PNG. While these formats are good for reducing the size of the image on disk, they can be somewhat expensive to decode. They also don't let you pre-generate mipmaps or compress to GPU-readable block compression formats, which many games do in order to save performance and memory. As a result, games will often use their own custom file format, or will use the DDS format. DDS can store compressed data with mipmaps, and it can also store texture arrays, cubemaps, and 3D textures.

What is the proper way to create a texture from an unsigned char* and use it?
Don't use the D3DX library. Create a texture resource, using your unsigned char* as the "initial data", and then make an SRV for that resource.
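
For the second step, a hedged sketch of making the SRV, continuing from the CreateTexture2D sketch earlier in the thread (passing NULL instead of a desc would also work and inherit the format from the resource):


// Create the SRV for the texture created with CreateTexture2D above
D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc = {};
srvDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
srvDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
srvDesc.Texture2D.MipLevels = 1;

ID3D11ShaderResourceView* srv = nullptr;
HRESULT hr = m_pDevice->CreateShaderResourceView(texture, &srvDesc, &srv);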

