SharpDX and R32_Float Texture Format

3 comments, last by empodecles 8 years, 11 months ago

Hello,

I am creating a R32_Float Texture/RenderTarget which I am writing to in a shader.

For debugging purposes I was hoping to render this texture to the screen but I can't seem to find any method to do this.

I tried creating a Direct2D bitmap, but I get a "pixel format not supported" exception.

The end result is I want to be able to read from the texture on the CPU side after it has been created on the GPU side.

thanks

P.

(texture description creation)


    var desc = new SharpDX.Direct3D11.Texture2DDescription()
    {
        ArraySize = 1,
        BindFlags = SharpDX.Direct3D11.BindFlags.RenderTarget | SharpDX.Direct3D11.BindFlags.ShaderResource,
        CpuAccessFlags = SharpDX.Direct3D11.CpuAccessFlags.None,
        Format = SharpDX.DXGI.Format.R32_Float,
        Width = width,
        Height = height,
        MipLevels = 1,
        OptionFlags = SharpDX.Direct3D11.ResourceOptionFlags.None,
        SampleDescription = new SharpDX.DXGI.SampleDescription(1, 0),
        Usage = SharpDX.Direct3D11.ResourceUsage.Default
    };
Rendering textured primitives is basic graphics hardware functionality, and though the pipeline is designed for 3D, 2D rendering works fine and is "easy" — quoted because with D3D11 you won't find a single convenience method; you still have to set up the whole pipeline yourself. For simple debug output I wouldn't go through Direct2D, and as you've seen, it doesn't work that easily anyway.

You're halfway there: The bind flag will allow you to create a ShaderResourceView, which you then feed to the pixel shader. The pixel shader samples the texture and simply returns the value (*).
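To make that concrete, here's a minimal sketch of creating the SRV and binding it for the pixel shader stage. This is untested, and names like `device`, `context`, and `texture` are assumed to come from your existing setup code:

```csharp
// Create a view over the R32_Float texture so the pixel shader can sample it.
var srv = new SharpDX.Direct3D11.ShaderResourceView(device, texture);

// Point sampling is usually what you want for inspecting raw data.
var samplerDesc = SharpDX.Direct3D11.SamplerStateDescription.Default();
samplerDesc.Filter = SharpDX.Direct3D11.Filter.MinMagMipPoint;
var sampler = new SharpDX.Direct3D11.SamplerState(device, samplerDesc);

// Bind to slots t0 and s0, matching the registers declared in the shader.
context.PixelShader.SetShaderResource(0, srv);
context.PixelShader.SetSampler(0, sampler);
```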

The "mesh" you render could be a simple quad, the vertex shader uses a simple orthographic projection to place it. There are even ways to omit vertex and index buffers for this (search for "full screen quad" or "full screen triangle").
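A sketch of the buffer-less "full screen triangle" variant, using SV_VertexID to synthesize the three vertices (draw it with `context.Draw(3, 0)`, triangle list topology, no input layout or vertex buffer bound):

```hlsl
// Assumed bindings: t0 = the R32_Float texture, s0 = a point sampler.
Texture2D<float> gTex : register(t0);
SamplerState gSamp : register(s0);

struct VSOut { float4 pos : SV_Position; float2 uv : TEXCOORD0; };

VSOut VSMain(uint id : SV_VertexID)
{
    VSOut o;
    // Generates UVs (0,0), (2,0), (0,2): one oversized triangle covering the screen.
    o.uv = float2((id << 1) & 2, id & 2);
    o.pos = float4(o.uv * float2(2, -2) + float2(-1, 1), 0, 1);
    return o;
}

float4 PSMain(VSOut i) : SV_Target
{
    // Single channel: replicate to greyscale.
    float v = gTex.Sample(gSamp, i.uv);
    return float4(v, v, v, 1);
}
```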

Or give SharpDX's SpriteBatch a shot (in the Toolkit namespace) for convenience. IIRC it even allows tinting (*).

Further notes:
  • If you're not familiar with this, grab the simplest tutorial on texturing and 2D rendering (there are Rastertek translations for SharpDX around).
  • (*) Be aware of the range of your floats. It's best to normalize the output to [0..1], otherwise expect output as "interesting" as a clear (not very helpful for debugging). And since this is one channel only, convert it to greyscale.
  • Take care to unbind a resource first if you want to bind it elsewhere; the API disallows potential (so-called) read-write hazards. Enable and watch the D3D debug output.
Reading back to CPU, well, that's even more involved.

You need to create a second texture tagged with ResourceUsage.Staging and CpuAccessFlags.Read, no bind flags. Copy your original texture with DeviceContext.CopyResource to the latter, then use DeviceContext.MapSubresource with MapType.Read. Be aware of DataBox.RowPitch. After reading from the data stream, unmap.
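Those steps could look roughly like the following. Again a sketch, not tested, with `device`, `context`, `texture`, `width`, and `height` assumed from the setup above; the row-by-row copy is there because RowPitch can be larger than `width * sizeof(float)`:

```csharp
var stagingDesc = new SharpDX.Direct3D11.Texture2DDescription
{
    ArraySize = 1,
    BindFlags = SharpDX.Direct3D11.BindFlags.None,
    CpuAccessFlags = SharpDX.Direct3D11.CpuAccessFlags.Read,
    Format = SharpDX.DXGI.Format.R32_Float,
    Width = width,
    Height = height,
    MipLevels = 1,
    OptionFlags = SharpDX.Direct3D11.ResourceOptionFlags.None,
    SampleDescription = new SharpDX.DXGI.SampleDescription(1, 0),
    Usage = SharpDX.Direct3D11.ResourceUsage.Staging
};

using (var staging = new SharpDX.Direct3D11.Texture2D(device, stagingDesc))
{
    // GPU-side copy, then map for CPU reading.
    context.CopyResource(texture, staging);
    var box = context.MapSubresource(staging, 0,
        SharpDX.Direct3D11.MapMode.Read, SharpDX.Direct3D11.MapFlags.None);
    try
    {
        var data = new float[width * height];
        for (int y = 0; y < height; y++)
        {
            // Copy one row at a time; rows may be padded to RowPitch bytes.
            System.Runtime.InteropServices.Marshal.Copy(
                box.DataPointer + y * box.RowPitch, data, y * width, width);
        }
        // ... use data ...
    }
    finally
    {
        context.UnmapSubresource(staging, 0);
    }
}
```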

Rendering the texture to the screen for debugging purposes was simple with a pixel format of R8G8B8A8... I was hoping to do it with R32_Float.

But your second point was the one I ultimately need! I need the data from the shaders in a float[] for other calculations...

Eventually I will try tackling compute shaders, but I'm on a time crunch and want to stick with what I know right now!

Thanks for the pointer on copying the resource for CPU access. I did that before with something else and completely forgot about that method.

cheers

P

You're welcome. I'd still recommend writing a debug pixel shader:
  • Reading back data from the GPU is slow and if you can avoid it, you should.
  • You can visualize your data better: e.g. log scale, tagging NaNs; or if I want to look at floats which can be negative too, I usually output negatives in red and positives in green.
  • It's a good exercise in post-processing, or any processing for that matter. If the data processing isn't too complicated, a pixel shader is usually easier to write than a compute shader.
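A debug pixel shader along those lines might look like this (a hedged sketch; note that `isnan` can be optimized away if the shader is compiled with fast-math-style flags, and the t0/s0 bindings are assumed):

```hlsl
Texture2D<float> gTex : register(t0);
SamplerState gSamp : register(s0);

float4 PSDebug(float4 pos : SV_Position, float2 uv : TEXCOORD0) : SV_Target
{
    float v = gTex.Sample(gSamp, uv);
    if (isnan(v))
        return float4(1, 0, 1, 1);   // magenta marks NaNs
    float m = saturate(abs(v));      // clamp magnitude to [0..1]
    return v < 0 ? float4(m, 0, 0, 1)   // negative -> red
                 : float4(0, m, 0, 1);  // positive -> green
}
```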

I switched over to using a standard RGBA pixel format to test things out and make sure my pixel shader was doing what I envisioned. And it seems to.

Finding that the CopyResource to the CPU staging texture is having unexpected results, though, when I map the data to a float[], byte[] or Color[] array...

Still working on debugging that, and seeing if it's something wrong I did. I haven't been able to figure out how to view the copied texture; I get an error when I try to render it or save it to file. But the CopyResource and the Read map of the data seem to go pretty quickly. I'm only working with a 512x512 texture at the moment (and that will probably be enough).

I have to do a bunch of (pretty complicated...) physics processing on the data afterwards. Up to this stage, getting the data "manually" was the bottleneck. The pixel shader seems to generate the data I need very fast! It's now getting access to that data on the CPU side that I'm working through.

This topic is closed to new replies.
