# [SlimDX, DX11] Read From Texture Into Array

## Recommended Posts

I'm sure I'm just being stupid, but I really can't figure out how to read the contents of a texture into an array using SlimDX and DirectX 11. In DirectX 9 it was easy: simply call LockRectangle and read from the resulting DataRectangle. I understand that in DirectX 11 I need to call MapSubresource to get a DataBox and then use that. However, I can't figure out what I should be passing as the parameters of MapSubresource, and once I've done that I don't know what to do with the resulting DataBox. I'm currently doing the following to load an R32F DDS texture, and the values in my array come out completely wrong; it's as if it's only reading every 4th row of my texture. Note: renderSystem.MapSubresource passes its parameters directly to Device.MapSubresource().
```csharp
ImageLoadInformation info = new ImageLoadInformation()
{
    BindFlags = BindFlags.None,
    FilterFlags = FilterFlags.None,
    Format = SlimDX.DXGI.Format.R32_Float,
    MipFilterFlags = FilterFlags.None,
    OptionFlags = ResourceOptionFlags.None,
    Usage = ResourceUsage.Staging,
};

float[] data = new float[2048 * 2048];

DataBox box = renderSystem.MapSubresource(tex, 0,
    tex.Description.Width * tex.Description.Height * sizeof(float),
    MapMode.Read, MapFlags.None);
```
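
For completeness, my best guess at the read itself looks like this (renderSystem.UnmapSubresource here stands in for my wrapper's equivalent of the device unmap call):

```csharp
// Copy the mapped texels into the float array, then release the mapping.
box.Data.ReadRange(data, 0, 2048 * 2048);
renderSystem.UnmapSubresource(tex, 0);
```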


Am I doing something obviously stupid? (Probably)

##### Share on other sites
Hmm, your parameters seem to be OK. Maybe the "float" is problematic, try "int". Or the texture is somehow compressed (4x4 blocks) and you got the raw data?

##### Share on other sites
ReadRange is a templated method that is supposed to take a data type as the template parameter. Without looking at the source, I would guess that the default type is Byte.

Try:

```csharp
ReadRange<float>(data, 0, 2048 * 2048);
```

##### Share on other sites
Also, try filling in all the fields of the image info structure (mip levels, width, and height).

##### Share on other sites
Another thing: check whether the pitch is really what you think it is (drivers can sometimes add extra padding at the end of each scanline). If it doesn't match width * sizeof(float), you have to copy row by row, as in the sketch below.
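
A rough sketch of the pitch-aware copy (box, data, and tex are from the original post):

```csharp
// Read one row of floats at a time, skipping any padding at the end of each row.
int width = tex.Description.Width;    // 2048
int height = tex.Description.Height;  // 2048

for (int y = 0; y < height; y++)
{
    // RowPitch is in bytes and may be larger than width * sizeof(float).
    box.Data.Position = (long)y * box.RowPitch;
    box.Data.ReadRange(data, y * width, width);
}
```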

##### Share on other sites
Quote:
 Original post by Pyrogame: Hmm, your parameters seem to be OK. Maybe the "float" is problematic, try "int". Or the texture is somehow compressed (4x4 blocks) and you got the raw data?

I've not had a chance to try this yet, but it's a floating-point texture, so reading it as ints seems completely wrong.

Quote:
 Original post by Nik02: ReadRange is a templated method that is supposed to take a data type as the template parameter. Without looking at the source, I would guess that the default type is Byte. Try: ReadRange<float>(data, 0, 2048 * 2048);

It's a generic method (I know, same difference, but there is a distinction), and it picks up from the data parameter that it should be outputting floats quite happily.

Quote:
 Original post by Nik02: Also, try filling in all the fields of the image info structure (mip levels, width, and height).

I'll try this later on, although they seem to be picked up fine by the Texture2D.FromFile method; the Description structure for the texture is as I would expect after loading.

Quote:
 Original post by feal87: Another thing: check whether the pitch is really what you think it is (drivers can sometimes add extra padding at the end of each scanline).

No matter what I do, the pitch comes out as 8192, which, since it's a 2048x2048 texture and sizeof(float) is 4, seems right to me.

---

I'm surprised there isn't a sample out there for this somewhere, or that no one has done it already.

### Similar Content

• Hi,
I have read a lot about binding a constant buffer to a shader, but something is still unclear to me. E.g. when performing:

```
vertexshader.setConstantbuffer(buffer, slot)
```

is the buffer bound a. to the given slot of the vertex shader stage, or b. to the VertexShader that is currently set as the active VertexShader?
Is it possible to bind a constant buffer to a vertex shader, e.g. VS_A, and keep this binding even after the active VertexShader has changed?
I mean, I want to bind constantbuffer_A to VS_A and constantbuffer_B to VS_B, and then only use updateSubresource, without a setConstantBuffer call every time.

Look at this example:

```
perform drawcall    (buffer_A is used)

perform drawcall    (buffer_B is used)
perform drawcall    (now which buffer is used???)
```

I ask this question because I have made a custom render engine and want to keep updateSubresource and setConstantBuffer calls to a minimum; see the sketch below.
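
In D3D11 the binding belongs to the slot (option a): constant buffers are set on the device context's vertex shader stage, not on the shader object, so the binding survives shader changes. A minimal SlimDX sketch of the semantics (vsA, vsB, bufferA, and vertexCount are placeholder objects):

```csharp
// Constant buffer bindings live on the context's VS stage, not on the shader.
context.VertexShader.Set(vsA);
context.VertexShader.SetConstantBuffer(bufferA, 0); // binds slot 0 of the VS stage

context.Draw(vertexCount, 0);   // vsA reads bufferA from slot 0

context.VertexShader.Set(vsB);  // changing the shader does NOT touch the binding
context.Draw(vertexCount, 0);   // vsB also sees bufferA in slot 0

// Per-shader "sticky" bindings don't exist: either rebind on every shader
// switch, or give each shader its own slot and bind both buffers once.
```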

• I've got a quick question about buffers in DirectX 11. If I bind a buffer using a command like IASetVertexBuffers, IASetIndexBuffer, VSSetConstantBuffers, or PSSetConstantBuffers, and then later update that bound buffer's data using Map/Unmap or any of the other update commands:
Do I need to rebind the buffer again for my update to take effect? And if I don't rebind, is that really bad, as in a performance hit? My thought process is that if the buffer is already bound, why would I need to rebind it? I'm using the same buffer; it just has different data.
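
No rebind is needed: the slot holds a reference to the buffer object, not a snapshot of its data, so Map/Unmap and UpdateSubresource changes are picked up automatically, and leaving the buffer bound costs nothing. A small SlimDX sketch (constantBuffer, newConstants, and vertexCount are placeholders; this assumes a Dynamic buffer with CPU write access and SlimDX's buffer overload of MapSubresource):

```csharp
// Bind once; the slot references the buffer object itself.
context.VertexShader.SetConstantBuffer(constantBuffer, 0);

// Each frame: rewrite the buffer's contents in place.
DataBox box = context.MapSubresource(constantBuffer, MapMode.WriteDiscard, MapFlags.None);
box.Data.Write(newConstants);                 // write the new data into the mapping
context.UnmapSubresource(constantBuffer, 0);

// No rebind needed: the next draw call reads the updated contents.
context.Draw(vertexCount, 0);
```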

• I am really stuck on something that should be very simple in DirectX 11.
1. I can draw lines using PC (position, color) vertices and a simple shader just fine.
2. I can draw 3D triangles using PCN (position, color, normal) vertices just fine (even transparency and SpecularBlinnPhong shaders).

However, if I'm using my 3D shader, how can I draw my PC lines in the same scene?

If I change my lines to PCN and pass them to the 3D shader along with my triangles, the lighting screws them all up. I only want the lighting for the 3D triangles, with no SpecularBlinnPhong lighting for the lines (just PC).
I'm sure this is because, once the lines are PCN, there isn't really a correct "normal" for them.
I assume I somehow need to draw the 3D triangles using one shader and then "switch" to another shader to draw the lines? But I have no clue how to use two different shaders in the same scene. And then are the lines just drawn on top of the triangles, or vice versa (maybe draw-order dependent)?
I must be missing something really basic, so if anyone can point me in the right direction (or link to an example showing the use of multiple shaders) it would be REALLY appreciated.

I'm also more than happy to post my simple test code if that helps as well!
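
Using two shaders in one scene is just a matter of changing pipeline state between draw calls; nothing special is needed. A rough SlimDX sketch (all layout, shader, and count names are placeholders; per-pass vertex buffer binding is omitted):

```csharp
// Pass 1: lit triangles with the PCN layout and the lighting shaders.
context.InputAssembler.InputLayout = pcnLayout;
context.InputAssembler.PrimitiveTopology = PrimitiveTopology.TriangleList;
context.VertexShader.Set(litVertexShader);
context.PixelShader.Set(litPixelShader);
context.Draw(triangleVertexCount, 0);

// Pass 2: unlit lines with the PC layout and the simple color shaders.
context.InputAssembler.InputLayout = pcLayout;
context.InputAssembler.PrimitiveTopology = PrimitiveTopology.LineList;
context.VertexShader.Set(lineVertexShader);
context.PixelShader.Set(linePixelShader);
context.Draw(lineVertexCount, 0);

// With a depth buffer bound and depth testing on, draw order between the two
// passes doesn't matter for opaque geometry; the depth test sorts per pixel.
```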

• By Reitano
Hi,
I am writing a linear allocator of per-frame constants using the DirectX 11.1 API. My plan is to replace the traditional constant allocation strategy, where most of the work is done by the driver behind my back, with a manual one inspired by the DirectX 12 and Vulkan APIs.
In brief, the allocator maintains a list of 64K pages, each page owns a constant buffer managed as a ring buffer. Each page has a history of the N previous frames. At the beginning of a new frame, the allocator retires the frames that have been processed by the GPU and frees up the corresponding space in each page. I use DirectX 11 queries for detecting when a frame is complete and the ID3D11DeviceContext1::VS/PSSetConstantBuffers1 methods for binding constant buffers with an offset.
The new allocator appears to be working but I am not 100% confident it is actually correct. In particular:
1) It relies on queries, which I am not too familiar with. Are they 100% reliable?
2) it maps/unmaps the constant buffer of each page at the beginning of a new frame and then writes the mapped memory as the frame is built. In pseudo code:
```
BeginFrame:
    page.data = device.Map(page.buffer)
    device.Unmap(page.buffer)

RenderFrame:
    Alloc(size, initData)
        ...
        memcpy(page.data + page.start, initData, size)
    Alloc(size, initData)
        ...
        memcpy(page.data + page.start, initData, size)
```

(Note: calling Unmap at the end of a frame prevents binding the mapped constant buffers and triggers an error in the debug layer.)
Is this valid?
3) I don't fully understand how many frames I should keep in the history. My intuition says it should equal the maximum latency reported by IDXGIDevice1::GetMaximumFrameLatency, which is 3 on my machine. But while this value works fine in a unit test, in a more complex demo I need to manually set it to 5, otherwise the allocator starts overwriting previous frames that have not completed yet. Shouldn't the swap chain's Present method block the CPU in this case?
4) Should I expect this approach to be more efficient than the one managed by the driver ? I don't have meaningful profile data yet.
Is anybody familiar with the approach described above who can answer my questions and discuss the pros and cons of this technique based on their experience?
For reference, I've uploaded the (WIP) allocator code at https://paste.ofcode.org/Bq98ujP6zaAuKyjv4X7HSv. Feel free to adapt it to your engine, and please let me know if you spot any mistakes.
Thanks
Stefano Lanza
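
On question 1: an event query is the standard D3D11 fence, and its result can be trusted once the runtime reports the data ready. A rough SlimDX sketch of the fencing pattern (device and context are placeholders, one query per in-flight frame; I'm assuming SlimDX's End/IsDataAvailable wrappers over ID3D11DeviceContext::End/GetData):

```csharp
// One event query per in-flight frame acts as a GPU fence.
var fence = new Query(device, new QueryDescription { Type = QueryType.Event });

// End of frame N, after all of the frame's work has been submitted:
context.End(fence);

// Start of a later frame: the event is signalled once the GPU has executed
// everything submitted before End, so frame N's page space should be
// recycled only after this reports true.
bool frameNComplete = context.IsDataAvailable(fence);
```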

• Hey all. I've been working with compute shaders lately, and was hoping to build out some libraries to reuse code. As a prerequisite for my current project, I needed to sort a big array of data in my compute shader, so I was going to implement quicksort as a library function. My implementation was going to use an inout array to apply the changes to the referenced array.

I spent half the day yesterday debugging in Visual Studio before I realized that the solution, while it worked INSIDE the function, reverted to the original state after returning from the function.

My hack fix was just to inline the code, but this is not a great solution for the future.  Any ideas? I've considered just returning an array of ints that represents the sorted indices.
