How to initialize a D3DFMT_A16B16G16R16F texture?

I use the following code to initialize a D3DFMT_A8R8G8B8 texture:

D3DCOLOR temp;
for( int j = 0; j < TEX_PREC; j++ )
{
    for( int i = 0; i < TEX_PREC; i++ )
    {
        temp = density( i, j );
        // d3dlr is the D3DLOCKED_RECT from LockRect(); each A8R8G8B8 texel is 4 bytes
        (*(DWORD*)( (BYTE*)d3dlr.pBits + (DWORD)d3dlr.Pitch * j + i * 4 )) = temp;
    }
}

I tried to change it to a D3DFMT_A16B16G16R16F version, but it seems that there is no 16-bit floating-point data type for CPU code. Could anyone tell me how to do it? Thanks a lot.

[Edited by - supersonicstar on March 15, 2006 10:31:29 AM]
Read up on D3DXFillTexture; it should do what you're looking for.
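A rough sketch of how that could look (just an illustration, not code from this thread; pd3dDevice is whatever device pointer you have, and density() is assumed here to return a single float, whereas the first post had it returning a D3DCOLOR):

#include <d3dx9.h>

// Fill callback: D3DX calls this once per texel. You write 32-bit floats to
// pOut and D3DX converts them to the texture's format (FP16 in this case).
VOID WINAPI FillDensity( D3DXVECTOR4* pOut, const D3DXVECTOR2* pTexCoord,
                         const D3DXVECTOR2* pTexelSize, LPVOID pData )
{
    // Texel coordinates arrive in [0,1]; scale up to the integer indices
    // used by the density() function from the first post.
    int i = (int)( pTexCoord->x * TEX_PREC );
    int j = (int)( pTexCoord->y * TEX_PREC );
    *pOut = D3DXVECTOR4( density( i, j ), 0.0f, 0.0f, 1.0f );
}

// Create the FP16 texture in a lockable pool and let D3DX fill it.
IDirect3DTexture9* pTex = NULL;
pd3dDevice->CreateTexture( TEX_PREC, TEX_PREC, 1, 0, D3DFMT_A16B16G16R16F,
                           D3DPOOL_MANAGED, &pTex, NULL );
D3DXFillTexture( pTex, FillDensity, NULL );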
Another option: if you've set the texture up as a render target, you could render a screen-aligned quad and use the pixel shader to initialize all the data to whatever value you want.
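A rough sketch of that idea (sketch only; pDevice, pFP16Texture, the 256x256 size and the constant being written are all placeholder assumptions):

#include <d3dx9.h>

// Pre-transformed full-screen quad, so no vertex shader is needed.
struct QuadVertex { float x, y, z, rhw; };
const float W = 256.0f, H = 256.0f;                 // render-target size
QuadVertex quad[4] =
{
    { -0.5f,     -0.5f,     0.0f, 1.0f },
    {  W - 0.5f, -0.5f,     0.0f, 1.0f },
    { -0.5f,      H - 0.5f, 0.0f, 1.0f },
    {  W - 0.5f,  H - 0.5f, 0.0f, 1.0f },
};

// Trivial pixel shader that writes one constant value to every texel.
const char* psSrc =
    "float4 main() : COLOR { return float4( 100.0, 5.0, 1.7, 0.5 ); }";
LPD3DXBUFFER pCode = NULL;
D3DXCompileShader( psSrc, (UINT)strlen( psSrc ), NULL, NULL,
                   "main", "ps_2_0", 0, &pCode, NULL, NULL );
IDirect3DPixelShader9* pPS = NULL;
pDevice->CreatePixelShader( (DWORD*)pCode->GetBufferPointer(), &pPS );

// Point the device at the FP16 texture's surface and draw the quad.
IDirect3DSurface9* pRT = NULL;
pFP16Texture->GetSurfaceLevel( 0, &pRT );
pDevice->SetRenderTarget( 0, pRT );
pDevice->BeginScene();
pDevice->SetFVF( D3DFVF_XYZRHW );
pDevice->SetPixelShader( pPS );
pDevice->DrawPrimitiveUP( D3DPT_TRIANGLESTRIP, 2, quad, sizeof(QuadVertex) );
pDevice->EndScene();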

neneboricua
Thank you, Scoob Droolins and neneboricua19, very helpful!

The second method may have a limitation when it comes to volume textures. It seems the Quadro FX 4000 GPU is able to render to a 3D texture (google: Technical Brief: NVIDIA HPDR Technology site:nvidia.com), but according to the DirectX docs (see D3DUSAGE), a volume texture cannot be set as a render target.

Now I'm facing the following problem: I have 256 slices of D3DFMT_A16B16G16R16F 256x256 2D textures (I set them as render targets, and they store the rendering results), and I would like to combine these 2D textures into a single 3D texture. I haven't figured out a good solution yet.

Anyone could give me some suggestions?
Quote:Original post by supersonicstar
But it seems that there is no 16-bit floating-point data type for CPU code. Could anyone tell me how to do it? Thanks a lot.
Yes there is - D3DXFLOAT16 [smile]
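For example, a minimal sketch of writing one FP16 texel on the CPU (assuming d3dlr is the D3DLOCKED_RECT from locking the A16B16G16R16F texture, and i, j are the loop indices from the first post; the values here are arbitrary):

#include <d3dx9.h>

// Each A16B16G16R16F texel is four D3DXFLOAT16 values, 8 bytes in total.
// D3DXFloat32To16Array converts regular floats to the 16-bit half format.
float rgba[4] = { 1.0f, 0.5f, 0.25f, 1.0f };
D3DXFLOAT16 texel[4];
D3DXFloat32To16Array( texel, rgba, 4 );

memcpy( (BYTE*)d3dlr.pBits + j * d3dlr.Pitch + i * 4 * sizeof(D3DXFLOAT16),
        texel, sizeof(texel) );

// Reading the data back into regular floats works the other way round:
float check[4];
D3DXFloat16To32Array( check, texel, 4 );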

You might also be interested in this thread. It's a slightly different problem, but it's still related to reading/writing/manipulating FP16 data.

Quote:Now I'm facing the following problem: I have 256 slices of D3DFMT_A16B16G16R16F 256x256 2D textures (I set them as render targets, and they store the rendering results), and I would like to combine these 2D textures into a single 3D texture. I haven't figured out a good solution yet.
So, if I understand you correctly, you want to convert an IDirect3DTexture9[256] into a single IDirect3DVolumeTexture9?

Jack


Thank you, I will read that thread. :-)
Yes, I want to convert an IDirect3DTexture9[256] into a single IDirect3DVolumeTexture9.
Quote:Original post by supersonicstar
Yes, I want to convert an IDirect3DTexture9[256] into a single IDirect3DVolumeTexture9.
Okay, have you considered the simplest possibility? Create it with IDirect3DDevice9::CreateVolumeTexture(), use IDirect3DVolumeTexture9::LockBox() to manually compose the final volume texture, and finally D3DXSaveVolumeToFile() if you want it stored for later.
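Something along these lines, for example (just a sketch; pDevice, pSliceRT[] and the file name are placeholder assumptions, and each slice is read back to system memory with GetRenderTargetData first):

#include <d3dx9.h>

const UINT DIM = 256;

// Lockable destination volume (SYSTEMMEM so LockBox is allowed).
IDirect3DVolumeTexture9* pVolume = NULL;
pDevice->CreateVolumeTexture( DIM, DIM, DIM, 1, 0, D3DFMT_A16B16G16R16F,
                              D3DPOOL_SYSTEMMEM, &pVolume, NULL );

// System-memory surface used to read back each render-target slice.
IDirect3DSurface9* pSysSurf = NULL;
pDevice->CreateOffscreenPlainSurface( DIM, DIM, D3DFMT_A16B16G16R16F,
                                      D3DPOOL_SYSTEMMEM, &pSysSurf, NULL );

D3DLOCKED_BOX box;
pVolume->LockBox( 0, &box, NULL, 0 );
for( UINT z = 0; z < DIM; ++z )
{
    // Pull slice z back from the GPU.
    IDirect3DSurface9* pRT = NULL;
    pSliceRT[z]->GetSurfaceLevel( 0, &pRT );
    pDevice->GetRenderTargetData( pRT, pSysSurf );
    pRT->Release();

    // Copy it row by row into slice z of the volume (8 bytes per FP16 texel).
    D3DLOCKED_RECT rect;
    pSysSurf->LockRect( &rect, NULL, D3DLOCK_READONLY );
    for( UINT y = 0; y < DIM; ++y )
        memcpy( (BYTE*)box.pBits + z * box.SlicePitch + y * box.RowPitch,
                (BYTE*)rect.pBits + y * rect.Pitch,
                DIM * 8 );
    pSysSurf->UnlockRect();
}
pVolume->UnlockBox( 0 );

// Optionally store it for later.
IDirect3DVolume9* pVol0 = NULL;
pVolume->GetVolumeLevel( 0, &pVol0 );
D3DXSaveVolumeToFile( TEXT("volume.dds"), D3DXIFF_DDS, pVol0, NULL, NULL );
pVol0->Release();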

Probably won't be too speedy (especially as you're pulling data from (ex) render-targets), but you didn't mention anything about requiring it to fit a particular performance profile [grin]

hth
Jack


Thanks a lot, it works fine! Speed is not a problem here (it's a one-off post-process of 256 slices of rendering results).

I changed it to a 16-bit floating-point version:
float x1, y1, z1;
x1 = -( float(VOXEL_PREC) - 1 ) / 2;
y1 = -( float(VOXEL_PREC) - 1 ) / 2;
z1 = -( float(VOXEL_PREC) - 1 ) / 2;

D3DXVECTOR4_16F temp;
for( int k = 0; k < LAYER_NUM; k++ )
{
    for( int y = 0; y < VOXEL_PREC; y++ )
    {
        for( int x = 0; x < VOXEL_PREC; x++ )
        {
            temp = density( x1, y1, z1, 1 );
            // pLockedVolume is the D3DLOCKED_BOX from LockBox();
            // each texel is one D3DXVECTOR4_16F (8 bytes).
            DWORD overall_offset = pLockedVolume.SlicePitch * k + pLockedVolume.RowPitch * y;
            (*(D3DXVECTOR4_16F*)( (BYTE*)pLockedVolume.pBits + overall_offset + x * sizeof(D3DXVECTOR4_16F) )) = temp;
            x1++;
        }
        x1 = -( float(VOXEL_PREC) - 1 ) / 2;
        y1++;
    }
    y1 = -( float(VOXEL_PREC) - 1 ) / 2;
    z1++;
}


Now there are still two things I don’t understand:

(1) It seems that the range of D3DXFLOAT16 is 0-1. For example, if density(x1,y1,z1,1) returns D3DXVECTOR4_16F(100.0f, 5.0f, 1.7f, 0.5f), the value gets clamped to D3DXVECTOR4_16F(1.0f, 1.0f, 1.0f, 0.5f) when it is written to the texture. After I save the volume texture to a file and check the value of each pixel, I find that 1.0f corresponds to 255, 0.5f to 128, and so on.
I also tried the D3DFMT_A32B32G32R32F texture format (with the corresponding D3DXVECTOR4); the same thing happened.

(2) According to the DX docs,

typedef struct D3DXFLOAT16 {
WORD Value;
} D3DXFLOAT16, *LPD3DXFLOAT16;

I’m confused whether D3DXFLOAT16 is a 16-bit floating-point type or a 16-bit unsigned integer type.

Anyone know why? Thanks.

[Edited by - supersonicstar on March 16, 2006 6:45:06 AM]

