How to fill a VolumeTexture with data?

Hi, I have some trouble filling a volume texture (3D texture) with values. This is how I try to create the texture now:

//Create volume texture, 16-bit integer for each channel (RGBA)
volumeTexture = new VolumeTexture(device, 32, 32, 32, 1, Usage.Dynamic, Format.A16B16G16R16, Pool.Default);

short[, ,] volumeData = new short[32, 32, 32];
for (int x = 0; x < 32; x++)
{
     for (int y = 0; y < 32; y++)
     {
          for (int z = 0; z < 32; z++)
          {
               volumeData[x, y, z] = (short)125;
          }
     }
}

GraphicsStream stream = volumeTexture.LockBox(0, LockFlags.None);
stream.Write(volumeData);
volumeTexture.UnlockBox(0);

I get no compile errors or runtime errors, just a greyish result that doesn't change even if I put other values into the texture at creation. In the end I want the texture values to be in floating-point format, but at the moment I'm not sure this is the correct method to use... Mostly trial and error right now. I'm beginning to get a bit annoyed at the lack of Managed DirectX documentation. =)
No ideas?
Have you checked that there aren't any warnings from the debug-output?

What size is short in MDX? Your pixel format is 64 bits wide, but all the short types I've come across tend to be 16 or 32 bits wide, meaning that you could be leaving rubbish in memory...
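(For reference: A16B16G16R16 is 16 bits × 4 channels = 8 bytes per texel, so a 32×32×32 volume needs 32³ × 8 = 262,144 bytes, while a short[32, 32, 32] only holds 32³ × 2 = 65,536 bytes, i.e. a quarter of the box.)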

What do you get if you lock the resource and pull the pixel data back out - is it the same as what you're putting in?

How are you rendering/verifying that you get a "greyish result"?

Jack

Jack Hoxley [ Forum FAQ | Revised FAQ | MVP Profile | Developer Journal ]

Format talk removed...

Shorts in C# are 16 bits, but I doubt you want to be using them for your color components. Using Format.A8R8G8B8 allows you to use the 0...255 range for all color components, which is probably enough. For more info on the formats, I'd have to go look into some books, since I'm not too sure about them. Just let me know if you need more info.

Regardless of your texture format, I think you're assigning the wrong values for filling your texture. If I am right, you're assigning a value of 125 to every texel in the volume texture (not per color component)... I don't think that this is what you want. Typically you will want something more like this:

volumeTexture = new VolumeTexture(device, 32, 32, 32, 1, Usage.Dynamic, Format.A8R8G8B8, Pool.Default);

int[, ,] volumeData = new int[32, 32, 32];
for (int x = 0; x < 32; x++)
{
     for (int y = 0; y < 32; y++)
     {
          for (int z = 0; z < 32; z++)
          {
               // assign packed ARGB values for each texel
               volumeData[x, y, z] = 125 << 24 | 125 << 16 | 125 << 8 | 125;
          }
     }
}


I'm also having some doubts about whether the GraphicsStream is smart enough to correctly fill the volume texture from the 3D array, but I think the above should get you started :)
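If it turns out the stream doesn't handle the 3D array the way you'd expect, you can always flatten it yourself before writing so the memory layout is unambiguous. A rough sketch, assuming the volume is laid out slice by slice (z), then row by row (y), with x varying fastest, and that the Write overload that takes an array is happy with a one-dimensional one:

//Flatten the 32x32x32 array of packed ARGB ints into a 1D array
int[] flat = new int[32 * 32 * 32];
int i = 0;
for (int z = 0; z < 32; z++)
     for (int y = 0; y < 32; y++)
          for (int x = 0; x < 32; x++)
               flat[i++] = volumeData[x, y, z];

GraphicsStream stream = volumeTexture.LockBox(0, LockFlags.None);
stream.Write(flat);
volumeTexture.UnlockBox(0);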

Another edit: Why would you want to use 64-bit-wide formats anyway? I have to admit I haven't gotten around to working with floating-point textures yet, but other than HDRI I haven't seen many practical applications, especially not in combination with volume textures... Additive blending perhaps?

Just curious :)
Rim van Wersch [ MDXInfo ] [ XNAInfo ] [ YouTube ] - Do yourself a favor and bookmark this excellent free online D3D/shader book!
Thanks remigius and jollyjeffers for your responses, finally got something working here. =)

//Set dimensions
int width = 32;
int height = 32;
int depth = 32;
int channels = 4;

//Create texture data
byte[] volumeData = new byte[width * height * depth * channels];
for (int x = 0; x < width; x++)
{
     for (int y = 0; y < height; y++)
     {
          for (int z = 0; z < depth; z++)
          {
               int offset = x * height * depth * channels + y * depth * channels + z * channels;
               //Blue channel
               volumeData[offset + 0] = 0;
               //Green channel
               volumeData[offset + 1] = 0;
               //Red channel
               volumeData[offset + 2] = 255;
               //Alpha channel
               volumeData[offset + 3] = 255;
          }
     }
}

//Write data into texture
GraphicsStream stream = volumeTexture.LockBox(0, LockFlags.None);
stream.Write(volumeData, 0, width * height * depth * channels);
volumeTexture.UnlockBox(0);


I'm converting an application we did in C++ and OpenGL that visualizes nebulas (in our case the Orion Nebula) from volume data. The data we use needs a higher bit depth than the usual 8 bits to be rendered realistically.

So at the moment I'm only halfway there... I still need to figure out how to write floating-point values.
Quote:Original post by Bucky32
still need to figure out how to write floating-point values.

I just had a look through the MDX documentation and I couldn't find an equivalent... but under native/C++ we have:

D3DXFloat32To16Array() and D3DXFloat16To32Array() as well as the D3DXFLOAT16 structure.

If you can find the equivalents for MDX then you should be set for writing floating-point values to your volume texture [smile]
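If nothing turns up, the 16-bit halves can also be packed by hand. Here's a minimal sketch of a float-to-half conversion (just bit twiddling, not an MDX call; it only handles normal values, truncates rather than rounds, and glosses over NaN/infinity/denormal cases):

//Pack a 32-bit float into the 16-bit half format used by A16B16G16R16F
static ushort FloatToHalf(float value)
{
     int bits = BitConverter.ToInt32(BitConverter.GetBytes(value), 0);
     int sign = (bits >> 16) & 0x8000;                //move the sign bit into place
     int exponent = ((bits >> 23) & 0xFF) - 127 + 15; //re-bias the exponent (127 -> 15)
     int mantissa = (bits >> 13) & 0x03FF;            //keep the top 10 mantissa bits

     if (exponent <= 0)
          return (ushort)sign;                        //underflow -> signed zero
     if (exponent >= 31)
          return (ushort)(sign | 0x7C00);             //overflow -> signed infinity

     return (ushort)(sign | (exponent << 10) | mantissa);
}

Each returned ushort would then replace the raw shorts from your first snippet, e.g. converted with BitConverter.GetBytes into the byte array before the LockBox/Write/UnlockBox calls.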

hth
Jack

Jack Hoxley [ Forum FAQ | Revised FAQ | MVP Profile | Developer Journal ]

I had the impression that the float encoding is quite application-specific. From what I gather from the HDR sample in the SDK, that seems pretty much true, since the floats are encoded for a range between 0.0f and an arbitrary maximum. The documentation accompanying the sample has the following to say:

Quote:
RGB16
Using a 16-bit per channel integer format, 65536 discrete values are available to store per-channel data. This encoding is a simple linear distribution of those 65536 values between 0.0f and an arbitrary maximum value (100.0f for this sample). The alpha channel is unused. Here is the formula for decoding the floating-point value from an encoded RGB16 color:

decoded.rgb = encoded.rgb * max_value

RGBE8
This encoding is more sophisticated than RGB16 and allows for a far greater range of color data by using a logarithmic distribution. Each channel stores the mantissa of the color component, and the alpha channel stores a shared exponent. The added flexibility of this encoding comes at the cost of extra computation. Here is the formula for decoding the floating-point value from an encoded RGBE8 color:

decoded.rgb = encoded.rgb * 2^encoded.a


I don't know if this specifically applies to HDR lighting, but since all applications must eventually output non-floating-point color values from their pixel shaders when rendering to the screen, it seems completely up to you how to use the bits available, using whatever encoding you wish. As far as I can tell, the methods proposed for encoding are only agreed upon by convention, not because they're fixed standards...
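Just to make those formulas concrete, here's a tiny sketch of the decode step for both encodings, assuming the encoded channels have already been sampled as plain floats (rgb holding the scaled values or mantissas, a holding the shared exponent):

//RGB16: decoded.rgb = encoded.rgb * max_value
static float DecodeRgb16(float channel, float maxValue)
{
     return channel * maxValue;
}

//RGBE8: decoded.rgb = encoded.rgb * 2^encoded.a
static float[] DecodeRgbe8(float r, float g, float b, float a)
{
     float scale = (float)Math.Pow(2.0, a);
     return new float[] { r * scale, g * scale, b * scale };
}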

If I'm wrong though, I'd be happy to learn more about it :)

Edited: I just noticed that there are also a lot of formats with an F postfix, indicating that they're 'natural' (or rather natively supported) floating-point surfaces. Maybe if you used such a format, you could write float values directly into the GraphicsStream?
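For what it's worth, with such an F format you could probably build the data as a float array and convert it to bytes in one go, instead of encoding anything. A sketch, assuming the surface is tightly packed with the channels stored in R, G, B, A order in memory (worth double-checking):

//One float per channel, four channels per texel, 32x32x32 texels
float[] texels = new float[32 * 32 * 32 * 4];
for (int i = 0; i < texels.Length; i += 4)
{
     texels[i + 0] = 1.0f; //red
     texels[i + 1] = 0.0f; //green
     texels[i + 2] = 0.0f; //blue
     texels[i + 3] = 1.0f; //alpha
}

//Reinterpret the float array as raw bytes (4 bytes per float)
byte[] raw = new byte[texels.Length * 4];
Buffer.BlockCopy(texels, 0, raw, 0, raw.Length);

The raw array could then be written with the same LockBox/Write/UnlockBox calls as before.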
Rim van Wersch [ MDXInfo ] [ XNAInfo ] [ YouTube ] - Do yourself a favor and bookmark this excellent free online D3D/shader book!
I guess you could use fewer bytes to represent the colors in this application using those encoding methods. Perhaps that would even be the smart thing to do. =) Using real 64-bit-wide floating-point values with volume data isn't very memory efficient.
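(To put numbers on it: even this small 32×32×32 volume takes 32³ × 16 bytes = 512 KB in A32B32G32R32F, versus 128 KB with a 4-byte-per-texel encoding like RGBE8.)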

Using BitConverter, I have now managed to store the color values in an A32B32G32R32F texture, which I think will be sufficient precision for what I'm trying to do.

It's a bit messy... but it works. =)

//Set volume texture dimensions
int width = 32;
int height = 32;
int depth = 32;
int channels = 4;
int bytesPerChannel = 4;

//Create volume texture, 32-bit float for each channel (RGBA)
volumeTexture = new VolumeTexture(device, width, height, depth, 1, Usage.Dynamic, Format.A32B32G32R32F, Pool.Default);

//Create texture data
byte[] volumeData = new byte[width * height * depth * channels * bytesPerChannel];
for (int x = 0; x < width; x++)
{
    for (int y = 0; y < height; y++)
    {
        for (int z = 0; z < depth; z++)
        {
            int texelOffset = x * height * depth * channels * bytesPerChannel
                            + y * depth * channels * bytesPerChannel
                            + z * channels * bytesPerChannel;
            for (int byteIndex = 0; byteIndex < bytesPerChannel; byteIndex++)
            {
                //Red channel
                volumeData[texelOffset + 0 * bytesPerChannel + byteIndex] = BitConverter.GetBytes(1.0f)[byteIndex];
                //Green channel
                volumeData[texelOffset + 1 * bytesPerChannel + byteIndex] = BitConverter.GetBytes(0.0f)[byteIndex];
                //Blue channel
                volumeData[texelOffset + 2 * bytesPerChannel + byteIndex] = BitConverter.GetBytes(0.0f)[byteIndex];
                //Alpha channel
                volumeData[texelOffset + 3 * bytesPerChannel + byteIndex] = BitConverter.GetBytes(1.0f)[byteIndex];
            }
        }
    }
}

//Write data into texture
GraphicsStream stream = volumeTexture.LockBox(0, LockFlags.None);
stream.Write(volumeData, 0, width * height * depth * channels * bytesPerChannel);
volumeTexture.UnlockBox(0);
