Rendering to 64-bit textures using DirectX

Hi, I am trying to render to a 64-bit floating-point texture, and later on to read the values back from the texture, but I can't seem to get it to work.

What I am trying to do is render a number of lines to a texture, setting the color of each line to 1 in the red channel, and then use additive blending to get a kind of density map where the value represents the number of lines crossing (two lines crossing giving a red value of 2, and so on). Using a 32-bit integer texture this works fine, but I can't get it to work at all with 64-bit textures (which I need, since the 8 bits of the red channel are not enough).

What I get stuck on is that there doesn't seem to be any predefined color format for 64 bits, and I do not know how to represent the color channels in a way that makes it possible to read the value of a separate color channel later on.

Below is a very simple example (two lines crossing each other) of my code for creating the lines, rendering to the texture (using RenderToSurface), and reading from the texture, all using 32-bit textures. I am using C# and Managed DirectX.
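For reference, the additive blending I describe is set up with render states along these lines (a sketch of my setup; this part is not shown in the code below):

//Additive blending: each line drawn adds its color to what is already in the target
_device.RenderState.AlphaBlendEnable = true;
_device.RenderState.SourceBlend = Blend.One;
_device.RenderState.DestinationBlend = Blend.One;
_device.RenderState.BlendOperation = BlendOperation.Add;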

private void SetUpTextureAndRenderSurface()
{
   _rtsHelper = new RenderToSurface(_device, _textureWidth, _textureHeight, Format.A8R8G8B8, true, DepthFormat.D16);
   _renderTexture = new Texture(_device, _textureWidth, _textureHeight, 1, Usage.RenderTarget, Format.A8R8G8B8, Pool.Default);

   _renderSurface = _renderTexture.GetSurfaceLevel(0);
}

private void SetUpVertices()
{
   _vertices = new CustomVertex.TransformedColored[4];
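   //Line 1: horizontal, across the middle of the texture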
   _vertices[0] = new CustomVertex.TransformedColored(0f, height * 0.5f, 0f, 0f, Color.FromArgb(255, 0, 0).ToArgb());
   _vertices[1] = new CustomVertex.TransformedColored((float)width, height * 0.5f, 0f, 0f, Color.FromArgb(255, 0, 0).ToArgb());
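   //Line 2: diagonal from the top-left towards the bottom-right corner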
   _vertices[2] = new CustomVertex.TransformedColored(0f, 0f, 0f, 0f, Color.FromArgb(255, 0, 0).ToArgb());
   _vertices[3] = new CustomVertex.TransformedColored((float)width, height * 1f, 0f, 0f, Color.FromArgb(255, 0, 0).ToArgb());
}

private void SetUpBuffer()
{
   _vertexBuffer = new VertexBuffer(typeof(CustomVertex.TransformedColored), 4, _device, Usage.Dynamic | Usage.WriteOnly, CustomVertex.TransformedColored.Format, Pool.Default);
   _vertexBuffer.SetData(_vertices, 0, LockFlags.None);
}


private void RenderTexture(int orderNumber)
{
   //Render lines to texture
   _rtsHelper.BeginScene(_renderSurface);
   _device.Clear(ClearFlags.Target, Color.Black, 1.0f, 0);

   _device.SetStreamSource(0, _vertexBuffer, 0);
   _device.VertexFormat = CustomVertex.TransformedColored.Format;
           
   for (int line = 0; line < 2; line++)
   {
       _device.DrawPrimitives(PrimitiveType.LineStrip, (line * 2), 1);
   }

   _rtsHelper.EndScene(Filter.None);
   _device.Present();

   //Load texture into surface that can be locked and read from
   Surface fdbck = _device.CreateOffscreenPlainSurface(_textureWidth, _textureHeight, Format.A8R8G8B8, Pool.Scratch);

   SurfaceLoader.FromSurface(fdbck, _renderSurface, Filter.None, 0);

   //Lock texture and store values in array
   uint[,] data2 = (uint[,])fdbck.LockRectangle(typeof(uint), LockFlags.None, new int[] { _textureHeight, _textureWidth });
   int[,] values = new int[_textureWidth, _textureHeight];

   for (int j = 0; j < _textureHeight; j++)
   {
      for (int i = 0; i < _textureWidth; i++)
      {
          values[i, j] = Color.FromArgb((int)data2[j, i]).R; 
       }
   }
}

Has anyone got any idea how to do this using a 64-bit floating-point texture (A16B16G16R16F)? I have been trying to solve this on my own for a couple of days now and don't seem to be getting any closer to a solution.
Depends on the graphics board. Only the most recent boards (GeForce 8000 series and later, Radeon 3000 series and later, I think) support alpha blending on floating-point buffers. The GeForce 6 might support filtering of 16-bit floating-point buffers, though; I don't know for sure. If your graphics card is older than this, alpha blending might not be supported. Try using a 16-bit fixed-point format such as A16B16G16R16 (without the F). To my knowledge, alpha blending and filtering of these formats is supported on all SM2-capable hardware.
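Something like this, just swapping the format in your setup code (untested, but it should be a drop-in change):

_rtsHelper = new RenderToSurface(_device, _textureWidth, _textureHeight, Format.A16B16G16R16, true, DepthFormat.D16);
_renderTexture = new Texture(_device, _textureWidth, _textureHeight, 1, Usage.RenderTarget, Format.A16B16G16R16, Pool.Default);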
Thanks for your answer.

I've got an nVidia GeForce 7800 GTX graphics card, and from what I can understand it does support alpha blending for 64-bit floating-point textures (but I might be wrong).

Anyhow, I have been trying to use 64-bit integer textures as well and can't get those to work either. My main problem (whether I use floats or integers) is that I do not know how to represent a color using 16 bits per channel, and as a result I do not know how to interpret the values when reading from the texture (I need to get the blended value of the red channel).
Assuming you have the necessary hardware, you can read a 16F texture by casting to D3DXFLOAT16, which can be initialized with a float to set it, or cast to a float to read it.

You can use D3DXFloat16To32Array to convert more data at once too.
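If the managed wrappers make D3DXFLOAT16 awkward to reach, the format is simple enough to decode by hand: 1 sign bit, 5 exponent bits, 10 mantissa bits. A sketch of a manual conversion (untested here, but it follows the standard half-float layout):

static float HalfToFloat(ushort h)
{
    int sign = (h >> 15) & 0x1;
    int exp = (h >> 10) & 0x1F;
    int mant = h & 0x3FF;
    int bits;

    if (exp == 0)
    {
        if (mant == 0)
        {
            bits = sign << 31; //signed zero
        }
        else
        {
            //Denormal: shift left until the implicit leading bit appears
            while ((mant & 0x400) == 0) { mant <<= 1; exp--; }
            exp++;
            mant &= 0x3FF;
            bits = (sign << 31) | ((exp + 112) << 23) | (mant << 13);
        }
    }
    else if (exp == 31)
    {
        bits = (sign << 31) | 0x7F800000 | (mant << 13); //infinity or NaN
    }
    else
    {
        //Normal number: rebias the exponent from 15 to 127 (difference of 112)
        bits = (sign << 31) | ((exp + 112) << 23) | (mant << 13);
    }

    return System.BitConverter.ToSingle(System.BitConverter.GetBytes(bits), 0);
}

You would lock the surface as ushort data and run each red-channel value through something like this.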
6 and 7 series both support alpha-blending and filtering of fp16 textures. They don't support multisampling, though.
Using D3DXFLOAT16 seems like a promising approach; however, I get a rather strange error when trying to use it: the compiler complains that "Microsoft.DirectX.PrivateImplementationDetails.D3DXFLOAT16" exists in both Direct3D and Direct3DX (and I need to use both of them).

Am I using the wrong namespace for D3DXFLOAT16? Or am I maybe using the wrong version of DirectX (9.0c)?
Quote: Original post by MJP
6 and 7 series both support alpha-blending and filtering of fp16 textures. They don't support multisampling, though.
Are you sure of this? I'm not on my dev machine right now, but I was pretty sure one of the ATI or Nvidia chipsets in that generation did one or the other but not both...

Either way, you should write your code to use IDirect3D9::CheckDeviceFormat() with Usage = D3DUSAGE_QUERY_POSTPIXELSHADER_BLENDING to be sure.
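In MDX the equivalent goes through the Manager class; something like this (written from memory, so treat it as a sketch):

bool canBlendFp16 = Manager.CheckDeviceFormat(
    0, //default adapter
    DeviceType.Hardware,
    Manager.Adapters[0].CurrentDisplayMode.Format,
    Usage.QueryPostPixelShaderBlending,
    ResourceType.Textures,
    Format.A16B16G16R16F);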

Quote: I get an error message that "Microsoft.DirectX.PrivateImplementationDetails.D3DXFLOAT16" exists in both Direct3D and Direct3DX (I need to use both of them).
Strange. I must admit to being (very) rusty on MDX coding, but from general .NET principles you should be able to get away with a using alias, or by fully qualifying your declaration to the one you actually want...
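If the fully qualified name is itself ambiguous across the two assemblies, C#'s extern alias feature is the heavier hammer. A sketch (the assembly file name here is an assumption from memory):

//Compile with an aliased reference, e.g. /r:D3DX=Microsoft.DirectX.Direct3DX.dll
extern alias D3DX;

//Then pin the type to that assembly with a using alias
using Float16 = D3DX::Microsoft.DirectX.PrivateImplementationDetails.D3DXFLOAT16;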


hth
Jack


Quote: Original post by jollyjeffers
Are you sure of this? I'm not on my dev machine right now, but I was pretty sure one of the ATI or Nvidia chipsets in that generation did one or the other but not both...


Yup. The ATI X1000 series could multisample and blend fp16 surfaces, but couldn't filter them. This old Nvidia programming guide lists the supported texture formats for the 6 and 7 series.
I don't know much about the hardware involved here, but beware that switching from an 8-bit integer to a 16-bit float will effectively only give you 11 bits before you start losing precision and increments of one start being dropped (fp16 has a 10-bit mantissa plus an implicit leading bit, so integers are only exactly representable up to 2^11 = 2048).
I suppose you could double that range by initializing the buffer to -2048, or minimize the loss of precision by using multiple buffers and performing the additions in some clever order. But surely there's some way to use 16-bit integer channels, or at least write a shader that treats the floats as such?
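To make the -2048 idea concrete: if the target is cleared to -2048 and each line adds 1, the count you want at read-back is the stored value plus 2048. A sketch (rawHalfValue is a hypothetical ushort read from the locked surface, decoded with a half-to-float helper like the one posted above):

//Hypothetical read-back for a target biased to start at -2048
float stored = HalfToFloat(rawHalfValue);
int lineCount = (int)stored + 2048;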
Quote: Original post by MJP
Yup. The ATI X1000-series could multisample and blend fp16, but couldn't filter them.
Ah, probably the case I was thinking of - I distinctly remember having code paths that manually implemented each of these. The blending was needed for SM2 Radeons, and I forget what the filtering was for, but I guess that would've also been ATI hardware [smile]


Cheers,
Jack


