MysteryX

Converting Texture Format on GPU


I'm processing video frame data via DX9 and a series of HLSL pixel shaders. The processing itself is done in the D3DFMT_A16B16G16R16 or D3DFMT_A16B16G16R16F format. Getting the result back in D3DFMT_X8R8G8B8 works: I simply change the format of the last RenderTarget in the chain.

 

For input textures, however, changing the pixel format to D3DFMT_X8R8G8B8 distorts the image during processing. Here is the code that creates the textures; for input textures, "memoryTexture=true" and "isSystemMemory=false".

 

Is there a way to do the conversion of 8-bit-per-channel input textures on the GPU before doing 16-bit-per-channel processing?

HRESULT D3D9RenderImpl::CreateInputTexture(int index, int clipIndex, int width, int height, bool memoryTexture, bool isSystemMemory) {
	if (index < 0 || index >= maxTextures)
		return E_FAIL;

	InputTexture* Obj = &m_InputTextures[index];
	Obj->ClipIndex = clipIndex;
	Obj->Width = width;
	Obj->Height = height;

	if (memoryTexture && !isSystemMemory) {
		// Offscreen surface that receives the input frame data (m_formatIn).
		HR(m_pDevice->CreateOffscreenPlainSurface(width, height, m_formatIn, D3DPOOL_DEFAULT, &Obj->Memory, NULL));
	}
	else if (isSystemMemory) {
		// System-memory surface used to read the result back (m_formatOut).
		HR(m_pDevice->CreateOffscreenPlainSurface(width, height, m_formatOut, D3DPOOL_SYSTEMMEM, &Obj->Memory, NULL));
	}
	if (!isSystemMemory) {
		// Render-target texture the pixel shaders work on (m_format, 16 bits per channel).
		HR(m_pDevice->CreateTexture(width, height, 1, D3DUSAGE_RENDERTARGET, m_format, D3DPOOL_DEFAULT, &Obj->Texture, NULL));
		HR(Obj->Texture->GetSurfaceLevel(0, &Obj->Surface));
	}

	return S_OK;
}
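
For reference, here is roughly what I mean by changing the format of the last RenderTarget in the chain: the intermediate targets stay 16-bit and only the final one is 8-bit. This is a simplified sketch, not my actual code, and the variable names are just placeholders.

// Intermediate passes render into 16-bit-per-channel float targets.
IDirect3DTexture9* intermediateTarget = NULL;
HR(m_pDevice->CreateTexture(width, height, 1, D3DUSAGE_RENDERTARGET,
	D3DFMT_A16B16G16R16F, D3DPOOL_DEFAULT, &intermediateTarget, NULL));

// Only the last RenderTarget in the chain is created as 8-bit per channel,
// so the final shader output is written out as D3DFMT_X8R8G8B8.
IDirect3DTexture9* finalTarget = NULL;
HR(m_pDevice->CreateTexture(width, height, 1, D3DUSAGE_RENDERTARGET,
	D3DFMT_X8R8G8B8, D3DPOOL_DEFAULT, &finalTarget, NULL));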

ExErvus

You are distorting the final image if you change your format to D3DFMT_X8R8G8B8 because your shader assumes D3DFMT_A16B16G16R16. In other words, you are performing pixel operations on a color format that represents the color in half the amount of memory. Bad juju mon.

 

The solution might be to edit the texture bytes directly and convert them to your desired format before running them through the shader. My question, though, is why do you want to convert the format BEFORE you run it through your pipeline if you are already successful in converting AFTER you run it through your pipeline?
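
Something like this rough sketch is what I have in mind. I'm assuming both surfaces are lockable (for example D3DPOOL_SYSTEMMEM), the same size, and that the 16-bit target is D3DFMT_A16B16G16R16, so treat it as an untested sketch rather than drop-in code.

// Rough sketch: expand an X8R8G8B8 surface into an A16B16G16R16 surface on the CPU.
HRESULT ConvertTo16Bit(IDirect3DSurface9* src8, IDirect3DSurface9* dst16, int width, int height)
{
	D3DLOCKED_RECT srcRect, dstRect;
	HR(src8->LockRect(&srcRect, NULL, D3DLOCK_READONLY));
	HR(dst16->LockRect(&dstRect, NULL, 0));

	for (int y = 0; y < height; y++) {
		const BYTE* s = (const BYTE*)srcRect.pBits + y * srcRect.Pitch;
		USHORT* d = (USHORT*)((BYTE*)dstRect.pBits + y * dstRect.Pitch);
		for (int x = 0; x < width; x++) {
			// X8R8G8B8 is laid out as B, G, R, X in memory; A16B16G16R16 as R, G, B, A.
			// Multiplying by 257 replicates each 8-bit value into 16 bits (0xFF -> 0xFFFF).
			d[x * 4 + 0] = s[x * 4 + 2] * 257; // R
			d[x * 4 + 1] = s[x * 4 + 1] * 257; // G
			d[x * 4 + 2] = s[x * 4 + 0] * 257; // B
			d[x * 4 + 3] = 0xFFFF;             // A
		}
	}

	dst16->UnlockRect();
	src8->UnlockRect();
	return S_OK;
}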

 

Edit: The only thing I can think of right now, if you are trying to blend two textures (a watermark or whatever you want to do), is to make a separate shader. It doesn't make sense to have a single shader that expects one surface format and then try to force the user to convert all of their textures before using it. Make a new shader for each specific scenario.

Edited by ExErvus

MysteryX

I found the problem. The parameters configuring the shaders were wrong when I passed 8-bit data: the width was twice as large as it should have been.

 

Copying between surfaces of different formats does work; the conversion is done automatically. It's working now.
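
For reference, a simplified sketch of the kind of copy I mean, using the surfaces from the code above. StretchRect is one D3D9 call that performs the format conversion when the destination is a render-target surface.

// The 8-bit input frame sits in the offscreen plain surface (Obj->Memory,
// D3DFMT_X8R8G8B8). Copying it into the 16-bit render-target surface
// (Obj->Surface) converts the pixel format on the GPU.
HR(m_pDevice->StretchRect(Obj->Memory, NULL, Obj->Surface, NULL, D3DTEXF_NONE));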
