yk_cadcg

[dx10,gpgpu] why can't CreateTexture2D(RENDER_TARGET &&DXGI_FORMAT_R32G32B32_UINT)?



Hi, for GPGPU I want to use a pixel shader to read integers, process them, and write integers back out to a render target. Reading ints is fine, since creating a texture with an integer format succeeds, but creating a render target with DXGI_FORMAT_R32G32B32_UINT fails. Why? My card is an 8800GTX.

P.S. I have now tested DXGI_FORMAT_R32G32B32A32_UINT and it works! Why do 4 channels work when 3 channels don't? Isn't D3D10 supposed to be "all caps supported"?
	D3D10_TEXTURE2D_DESC descTex;
	ZeroMemory(&descTex, sizeof(descTex));
	descTex.Width = texW;
	descTex.Height = texH;
	descTex.ArraySize = 1;
	descTex.MipLevels = 1;
	descTex.SampleDesc.Count = 1;
	descTex.SampleDesc.Quality = 0;
	descTex.Format = DXGI_FORMAT_R32G32B32_UINT;	// fails! only DXGI_FORMAT_R32G32B32A32_FLOAT works here
	descTex.Usage = D3D10_USAGE_DEFAULT;
	descTex.BindFlags = D3D10_BIND_RENDER_TARGET;	// bind as render target
	descTex.CPUAccessFlags = 0;
	descTex.MiscFlags = 0;
	ID3D10Texture2D* pTex = NULL;
	HRESULT hr = g_pd3dDevice->CreateTexture2D(&descTex, NULL, &pTex);

P.S. I set a placeholder format when first creating the device:
	UINT createDeviceFlags = 0;
#ifdef _DEBUG
	createDeviceFlags |= D3D10_CREATE_DEVICE_DEBUG;
#endif
	DXGI_SWAP_CHAIN_DESC sd;
	ZeroMemory( &sd, sizeof(sd) );
	sd.BufferCount = 1;
	sd.BufferDesc.Width = 256;
	sd.BufferDesc.Height = 256;
	sd.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;	// placeholder; never actually used (see below)
	sd.BufferDesc.RefreshRate.Numerator = 60;
	sd.BufferDesc.RefreshRate.Denominator = 1;
	sd.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
	sd.OutputWindow = g_hWnd;
	sd.SampleDesc.Count = 1;
	sd.SampleDesc.Quality = 0;
	sd.Windowed = TRUE;
	V_RETURN(D3D10CreateDeviceAndSwapChain(NULL, D3D10_DRIVER_TYPE_HARDWARE, NULL, createDeviceFlags, D3D10_SDK_VERSION, &sd, &g_pSwapChain, &g_pd3dDevice));
With sd.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM; I never meant to actually use this format, since I render to a much bigger texture (4k*4k) which has nothing to do with the swap chain. Also, I only render *once* and never enter the message loop. Is that a problem? Thanks a lot!

[Edited by - yk_cadcg on May 13, 2007 1:00:42 AM]

Well, there isn't a caps structure any more, but there is still a CheckFormatSupport method. GPUs only need to support a set of usages for each format (the list is not in the SDK); anything beyond that is optional and needs to be checked with CheckFormatSupport.
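For illustration, a minimal sketch of that check (assuming the g_pd3dDevice pointer from the original post):

	UINT support = 0;
	HRESULT hr = g_pd3dDevice->CheckFormatSupport(DXGI_FORMAT_R32G32B32_UINT, &support);
	if (SUCCEEDED(hr) && (support & D3D10_FORMAT_SUPPORT_RENDER_TARGET))
	{
		// this format may be bound as a render target on this hardware
	}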

If you don't need the back buffer you can create it with any valid format and size. Alternatively, you can create a device without a swap chain or back buffers at all.
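A sketch of the swap-chain-less path, reusing the createDeviceFlags variable from your snippet:

	// No DXGI_SWAP_CHAIN_DESC needed; render to your own offscreen textures.
	ID3D10Device* pDevice = NULL;
	HRESULT hr = D3D10CreateDevice(NULL, D3D10_DRIVER_TYPE_HARDWARE, NULL,
	                               createDeviceFlags, D3D10_SDK_VERSION, &pDevice);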

Quote:
Original post by Demirug
GPUs only need to support a set of usages for each format (the list is not in the SDK)
I've asked them a few times to put the list in the SDK, but it doesn't seem to have happened yet...

The format you're asking for is marked as "optional" in the spec, which would explain why you're not getting anywhere with it. Try adding an alpha channel - with DXGI_FORMAT_R32G32B32A32_UINT you get guaranteed render-target support.

I don't have time to list all the options, but basically NO _TYPELESS formats can be used as render targets, nor can any of the depth/stencil formats (R32G8X24_TYPELESS and R24G8_TYPELESS groups). Everything else is marked as "required".
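For example, a sketch of the check-then-fall-back pattern, reusing descTex and g_pd3dDevice from the original post:

	// Try the 3-channel format; fall back to the 4-channel one,
	// which has guaranteed render-target support.
	UINT support = 0;
	descTex.Format = DXGI_FORMAT_R32G32B32_UINT;
	if (FAILED(g_pd3dDevice->CheckFormatSupport(descTex.Format, &support)) ||
	    !(support & D3D10_FORMAT_SUPPORT_RENDER_TARGET))
	{
		descTex.Format = DXGI_FORMAT_R32G32B32A32_UINT;	// "required" format
	}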

hth
Jack
