SaraJ

Rendering to 64-bit textures using DirectX


Hi, I am trying to render to a 64-bit floating point texture and later read the values back from it, but I can't seem to get it to work.

What I am trying to do is render a number of lines to a texture, setting the color of each line to 1 in the red channel and then using additive blending to get a kind of density map, where the value represents the number of lines crossing (two lines crossing give a red value of 2, and so on). Using a 32-bit integer texture this works fine, but I can't get it to work at all with 64-bit textures (which I need, since the 8 bits of the red channel are not enough). What I get stuck on is that there doesn't seem to be any predefined color format for 64 bits, and I do not know how to represent the color channels in a way that makes it possible to read the value of a single color channel later on.

Below is a very simple example (two lines crossing each other) of my code for creating the lines, rendering to the texture (using RenderToSurface) and reading from the texture, all using 32-bit textures. I am using C# and DirectX.
private void SetUpTextureAndRenderSurface()
{
   _rtsHelper = new RenderToSurface(_device, _textureWidth, _textureHeight, Format.A8R8G8B8, true, DepthFormat.D16);
   _renderTexture = new Texture(_device, _textureWidth, _textureHeight, 1, Usage.RenderTarget, Format.A8R8G8B8, Pool.Default);

   _renderSurface = _renderTexture.GetSurfaceLevel(0);
}

private void SetUpVertices()
{
   _vertices = new CustomVertex.TransformedColored[4];
   _vertices[0] = new CustomVertex.TransformedColored((float)0, height * 0.5f, 0f, 0f, Color.FromArgb(255, 0, 0).ToArgb());
   _vertices[1] = new CustomVertex.TransformedColored((float)width, height * 0.5f, 0f, 0f, Color.FromArgb(255, 0, 0).ToArgb());
   _vertices[2] = new CustomVertex.TransformedColored((float)0, height * 0f, 0f, 0f, Color.FromArgb(255, 0, 0).ToArgb());
   _vertices[3] = new CustomVertex.TransformedColored((float)width, height * 1f, 0f, 0f, Color.FromArgb(255, 0, 0).ToArgb());
}

private void SetUpBuffer()
{
   _vertexBuffer = new VertexBuffer(typeof(CustomVertex.TransformedColored), 4, _device, Usage.Dynamic | Usage.WriteOnly, CustomVertex.TransformedColored.Format, Pool.Default);
   _vertexBuffer.SetData(_vertices, 0, LockFlags.None);
}


private void RenderTexture(int orderNumber)
{
   //Render lines to texture
   _rtsHelper.BeginScene(_renderSurface);
   _device.Clear(ClearFlags.Target, Color.Black, 1.0f, 0);

   _device.SetStreamSource(0, _vertexBuffer, 0);
   _device.VertexFormat = CustomVertex.TransformedColored.Format;
           
   for (int line = 0; line < 2; line++)
   {
       _device.DrawPrimitives(PrimitiveType.LineStrip, (line * 2), 1);
   }

   _rtsHelper.EndScene(Filter.None);
   _device.Present();

   //Load texture into surface that can be locked and read from
   Surface fdbck = _device.CreateOffscreenPlainSurface(_textureWidth, _textureHeight, Format.A8R8G8B8, Pool.Scratch);

   SurfaceLoader.FromSurface(fdbck, _renderSurface, Filter.None, 0);

   //Lock texture and store values in array
   uint[,] data2 = (uint[,])fdbck.LockRectangle(typeof(uint), LockFlags.None, new int[] { _textureHeight, _textureWidth });
   int[,] values = new int[_textureWidth, _textureHeight];

   for (int j = 0; j < _textureHeight; j++)
   {
      for (int i = 0; i < _textureWidth; i++)
      {
          values[i, j] = Color.FromArgb((int)data2[j, i]).R; 
       }
   }
}
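For reference, the additive blending itself is just a couple of render states that I set before drawing (not shown above); roughly like this:

//Additive blending: source and destination colors are summed,
//so overlapping lines accumulate in the red channel
_device.RenderState.AlphaBlendEnable = true;
_device.RenderState.SourceBlend = Blend.One;
_device.RenderState.DestinationBlend = Blend.One;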

Anyone got any idea how to do this using a 64-bit floating point texture (A16B16G16R16F)? I have been trying to solve this on my own for a couple of days now, and don't seem to get any closer to a solution.

Depends on the graphics board. Only the most recent boards (GeForce 8000 series and later, Radeon 3000 series and later, I think) support alpha blending on floating point buffers. The GeForce 6 might support filtering on 16-bit floating point buffers, though; I don't know for sure. If your graphics card is older than this, alpha blending might not be supported. Try using a 16-bit fixed point format such as A16B16G16R16 (without the F). To my knowledge, alpha blending and filtering of these formats is supported on every SM2-capable card.
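If I read your code right, that would only mean swapping the format in your setup code, something along these lines (untested):

//Same setup as in the first post, but with a 16-bit-per-channel fixed point render target instead of A8R8G8B8
_rtsHelper = new RenderToSurface(_device, _textureWidth, _textureHeight, Format.A16B16G16R16, true, DepthFormat.D16);
_renderTexture = new Texture(_device, _textureWidth, _textureHeight, 1, Usage.RenderTarget, Format.A16B16G16R16, Pool.Default);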

Thanks for your answer.

I've got an nVidia GeForce 7800 GTX graphics board, and from what I can understand it does support alpha blending for 64-bit floating point textures (but I might be wrong).

Anyhow, I have been trying to use 64-bit integer textures as well and can't get those to work either. My main problem (whether I use floats or integers) is that I do not know how to represent color using 16 bits per channel, and as a result I do not know how to interpret the values when reading from the texture (I need to get the blended value of the red channel).

Using D3DXFLOAT16 seems like a promising approach; however, I run into a rather strange error when trying to use it. I get an error message saying that "Microsoft.DirectX.PrivateImplementationDetails.D3DXFLOAT16" exists in both Direct3D and Direct3DX (and I need to reference both of them).

Am I using the wrong namespace for D3DXFLOAT16? Or am I maybe using the wrong version of DirectX (9.0c)?

Quote:
Original post by MJP
6 and 7 series both support alpha-blending and filtering of fp16 textures. They don't support multisampling, though.
Are you sure of this? I'm not on my dev machine right now, but I was pretty sure one of the ATI or Nvidia chipsets in that generation did one or the other but not both...

Either way, you should write your code to use IDirect3D9::CheckDeviceFormat() with Usage = D3DUSAGE_QUERY_POSTPIXELSHADER_BLENDING to be sure.
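I'm rusty on the MDX side, but the managed equivalent should look roughly like this (untested, so treat the exact parameters as a guess):

//Check whether the fp16 format can be used as a render target at all,
//and whether post-pixel-shader (alpha) blending is supported on it
bool canRenderTarget = Manager.CheckDeviceFormat(Manager.Adapters.Default.Adapter, DeviceType.Hardware, Manager.Adapters.Default.CurrentDisplayMode.Format, Usage.RenderTarget, ResourceType.Textures, Format.A16B16G16R16F);
bool canBlend = Manager.CheckDeviceFormat(Manager.Adapters.Default.Adapter, DeviceType.Hardware, Manager.Adapters.Default.CurrentDisplayMode.Format, Usage.QueryPostPixelShaderBlending, ResourceType.Textures, Format.A16B16G16R16F);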

Quote:
I get an error message saying that "Microsoft.DirectX.PrivateImplementationDetails.D3DXFLOAT16" exists in both Direct3D and Direct3DX (and I need to reference both of them).
Strange. I must admit to being (very) rusty on MDX coding but from general .NET principles you should be able to get away with a using alias or by fully qualifying your declaration to the one you actually want...


hth
Jack

Quote:
Original post by jollyjeffers
Are you sure of this? I'm not on my dev machine right now, but I was pretty sure one of the ATI or Nvidia chipsets in that generation did one or the other but not both...


Yup. The ATI X1000-series could multisample and blend fp16, but couldn't filter them. This old Nvidia programming guide has the supported texture formats for the 6 and 7-series.

I don't know much about the hardware involved here, but beware that switching from an 8-bit integer to a 16-bit float will effectively only give you 11 bits before you start losing precision and increments of one start being dropped.
I suppose you could double that range by initializing the buffer to -2048, or minimize the loss of precision by using multiple buffers and performing the additions in some clever order. But surely there's some way to use 16-bit integer channels, or at least write a shader which treats the floats as such?
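To spell out where those 11 bits come from: a half float stores 10 mantissa bits, so the spacing between representable values in the range [2^n, 2^(n+1)) is 2^(n-10), which grows past 1 once you reach 2^11 = 2048. A quick C# illustration of just that arithmetic (not an actual fp16 conversion):

//Spacing between adjacent representable fp16 values in each power-of-two range.
//Once the spacing exceeds 1, adding one more line can be rounded away.
for (int n = 9; n <= 12; n++)
{
    double spacing = Math.Pow(2, n - 10);
    Console.WriteLine("[{0}, {1}): spacing {2}", 1 << n, 1 << (n + 1), spacing);
}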

Quote:
Original post by MJP
Yup. The ATI X1000-series could multisample and blend fp16, but couldn't filter them.
Ah, probably the case I was thinking of - I distinctly remember having code paths that manually implemented each of these. The blending was needed for SM2 Radeons and I forget what the filtering was for, but I guess that would've also been ATI hardware [smile]


Cheers,
Jack

I don't think the hardware is the problem anyway; I am quite sure my graphics card supports blending for floating point textures, and looking at the images I render, the blending appears to work. My problem is that I need to control the numeric values that are the "input" to the texture when rendering to it, and to be able to get values back from the texture in a way I can interpret.

Quote:
Original post by jollyjeffers

Quote:
I get an error message saying that "Microsoft.DirectX.PrivateImplementationDetails.D3DXFLOAT16" exists in both Direct3D and Direct3DX (and I need to reference both of them).
Strange. I must admit to being (very) rusty on MDX coding but from general .NET principles you should be able to get away with a using alias or by fully qualifying your declaration to the one you actually want...



The strangest thing is that I get that error even though I use the fully qualified declaration...


Quote:
Original post by implicit
I suppose you could double that range by initializing the buffer to -2048, or minimize the loss of precision by using multiple buffers and performing the additions in some clever order. But surely there's some way to use 16-bit integer channels, or at least write a shader which treats the floats as such?


One would think so... There doesn't seem to be any predefined color or vertex format for anything but 32-bit integers, so I guess I have to try to define my own colour format. The -2048 to +2048 range seems to be part of the answer to how to do that.

Quote:
Original post by jollyjeffers

Either way, you should write your code to use IDirect3D9::CheckDeviceFormat() with Usage = D3DUSAGE_QUERY_POSTPIXELSHADER_BLENDING to be sure.


I am trying to use CheckDeviceFormat to find out what kinds of textures my graphics card supports blending on, but I can't get it to work (it returns false even for Usage.AutoGenerateMipMap on an A8R8G8B8 texture).

This is what I write:


bool test = Manager.CheckDeviceFormat(0, DeviceType.Hardware, Format.A8R8G8B8, Usage.AutoGenerateMipMap, ResourceType.Textures, Format.A8R8G8B8);



What values should I use for the different parameters to get it right?

Or could it be that I am creating the device in the wrong way?


PresentParameters presentParams = new PresentParameters();
presentParams.Windowed = true;
presentParams.SwapEffect = SwapEffect.Discard;

_device = new Device(0, DeviceType.Hardware, this, CreateFlags.SoftwareVertexProcessing, presentParams);


CheckDeviceFormat doesn't return a bool. In C++ it returns an HRESULT, and I presume it's a similar type in C#. 0 is a success code for HRESULTs (although you should use the SUCCEEDED and FAILED macros, or their equivalent, rather than testing for 0, since there are about two billion other success codes).

Quote:
Original post by Evil Steve
CheckDeviceFormat doesn't return a bool. In C++ it returns an HRESULT, I presume it's a similar type in C#.


Well, actually, in C# it does return a bool (http://msdn.microsoft.com/en-us/library/bb323684.aspx). The use of DirectX is (unfortunately) not always identical across languages :)

Anyhow, if anyone else has the same problem, I managed to get CheckDeviceFormat to work using the following parameters:

bool test = Manager.CheckDeviceFormat(Manager.Adapters.Default.Adapter, DeviceType.Hardware, presentParams.BackBufferFormat, Usage.QueryPostPixelShaderBlending, ResourceType.Textures, Format.A16B16G16R16F);



And to give an answer to the discussion on whether or not the 7-series supports floating point blending: yes, it does support blending on 64-bit floating point textures (A16B16G16R16F in DirectX), but it does not support blending on 64-bit integer textures.

So I am kind of back where I began (with a slightly better defined problem). I need to create a colour that is a 64-bit floating point value, and I need to create a vertex format of my own that supports colour as 64-bit floating point values. I have looked at the vertex format from drunkenhyena (http://www.drunkenhyena.com/cgi-bin/view_net_article.pl?chapter=2;article=24), but just replacing their Int32 colour with a 64-bit colour seems to cause an overflow. Does anyone know how to do this?

To define the colour question more precisely: if I were to create a 32-bit integer ARGB colour and a 64-bit integer ABGR colour, they would be defined as

UInt32 colour32 = a * 256 * 256 * 256 + r * 256 * 256 + g * 256 + b;
UInt64 colour64 = a * 65536 * 65536 * 65536 + b * 65536 * 65536 + g * 65536 + r;



How do I define a 64bit floating point colour in a way that it is within the range of a 64 bit texture?



(Sorry for the long post, just want to avoid misunderstandings of what my problem is)

Quote:

To define the colour question more precisely: if I were to create a 32-bit integer ARGB colour and a 64-bit integer ABGR colour, they would be defined as
*** Source Snippet Removed ***

How do I define a 64bit floating point colour in a way that it is within the range of a 64 bit texture?


Don't know what the math rules are for C#, but in C++ integer math is done in 32 bits by default. You therefore need to explicitly cast your operands to 64-bit *before* starting to process them, like this:


UInt64 colour64 = UInt64( a) * 65536 * 65536 * 65536 + UInt64( b) * 65536 * 65536 + UInt64( g) * 65536 + UInt64( r);



This should avoid an overflow. If not, try marking the literals as 64-bit as well. In C++ this is done by appending type suffixes: a 64-bit unsigned integer literal would read as "65536ull", for example ("u" for unsigned, "ll" for long long).

I don't exactly understand what your actual question is. First the topic was render target formats, and now you're at vertex declarations? If it's just about finding a back buffer format that you can read back to the CPU and process there, have a look at the D3DXFLOAT16 class and the associated conversion functions. Using those you should be able to read the contents of the back buffer and convert them to whatever type you need. If you want to achieve some sort of render-to-vertex-buffer, you're mostly out of luck on DX9; rendering to vertex buffers is only possible via vendor-specific hacks there. Using fp16 in vertex declarations is possible, though, via D3DDECLTYPE_FLOAT16_4 and the like.
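For the readback, if the D3DXFLOAT16 ambiguity keeps fighting you, you can also decode the 16-bit halves by hand. Below is a rough, untested sketch; it assumes that LockRectangle will hand back a three-dimensional ushort array the same way it hands back the two-dimensional uint array in your first post, that the offscreen surface is created with Format.A16B16G16R16F, and that the in-memory channel order of that format is R, G, B, A:

//Manual half (1 sign bit, 5 exponent bits, 10 mantissa bits) to float conversion
private float HalfToFloat(ushort h)
{
    int sign = (h >> 15) & 0x1;
    int exponent = (h >> 10) & 0x1F;
    int mantissa = h & 0x3FF;
    float s = (sign == 1) ? -1f : 1f;

    if (exponent == 0)     //zero or denormal
        return s * mantissa * (float)Math.Pow(2, -24);
    if (exponent == 31)    //infinity or NaN
        return (mantissa == 0) ? s * float.PositiveInfinity : float.NaN;

    return s * (1f + mantissa / 1024f) * (float)Math.Pow(2, exponent - 15);
}

//The offscreen surface has to be created in the fp16 format as well before copying into it
Surface fdbck = _device.CreateOffscreenPlainSurface(_textureWidth, _textureHeight, Format.A16B16G16R16F, Pool.Scratch);
SurfaceLoader.FromSurface(fdbck, _renderSurface, Filter.None, 0);

//Four ushorts per texel, assumed memory order R, G, B, A; channel 0 is red
ushort[,,] raw = (ushort[,,])fdbck.LockRectangle(typeof(ushort), LockFlags.ReadOnly, new int[] { _textureHeight, _textureWidth, 4 });
float[,] red = new float[_textureWidth, _textureHeight];

for (int j = 0; j < _textureHeight; j++)
{
    for (int i = 0; i < _textureWidth; i++)
    {
        red[i, j] = HalfToFloat(raw[j, i, 0]);
    }
}
fdbck.UnlockRectangle();

On the vertex side, the managed counterpart of D3DDECLTYPE_FLOAT16_4 should be DeclarationType.Float16Four, used in a custom vertex declaration instead of the FVF-based CustomVertex types (and only if the card reports support for fp16 declaration types). Again just an untested sketch; the offsets assume a four-float pre-transformed position followed by the colour:

//Pre-transformed position (x, y, z, rhw) followed by a colour made of four 16-bit floats
VertexElement[] elements = new VertexElement[]
{
    new VertexElement(0, 0, DeclarationType.Float4, DeclarationMethod.Default, DeclarationUsage.PositionTransformed, 0),
    new VertexElement(0, 16, DeclarationType.Float16Four, DeclarationMethod.Default, DeclarationUsage.Color, 0),
    VertexElement.VertexDeclarationEnd
};
_device.VertexDeclaration = new VertexDeclaration(_device, elements);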

Sorry that the actual question has been hidden behind sidetracks, such as whether or not my graphics board supports blending on floating point textures (that was never the question :)), and by me being unclear about what I want to know (and probably also because I have since found out a little more about where the actual problem lies).

What I am trying to do is render a number of lines to a texture. The lines are drawn using DrawPrimitives, which means I have to define an array of vertices, hence the need for a vertex format.

I need to render to a texture with higher precision than 8 bits per colour channel, and it has to be a texture format for which blending is supported on my graphics card. That is why I need to render to a 64-bit floating point texture.

Furthermore, I need to be able to read values back from the texture after the rendering has been done.

I am not sure exactly where my incorrect results come from, but what I do know is that I need to define a colour as a 64-bit float value where I can control the value of each colour channel, and I don't know how to do this.

Secondly, since I use DrawPrimitives I need to define a vertex format that handles 64-bit floating point colours instead of 32-bit integer colours (which, unfortunately, is what all the predefined vertex formats use). Using a vertex format where colour is defined as a 32-bit value seems to make the texture behave as if it were a 32-bit texture (which causes overflow between the colour channels when defining colours using 64 bits).

I hope this clears up a bit more what I need help with :)
