GPU to CPU / HLSL to C++ communication


Hello guys, I'm wondering whether somebody knows if, in DirectX 9, it is possible to read variables that are changed by a pixel shader back into a C++ program. Passing variables from C++ to a shader is quite easy: you can use the ID3DXConstantTable interface for that. But I cannot find any way to establish HLSL -> CPU communication.

Short Answer: You can't read back variables from HLSL to C++.

Long Answer: An HLSL shader can't "really" change the value of its variables. When a shader modifies a global variable, it is actually modifying a value stored in a different, temporary register. As its name implies, the value of this register is reset each time the shader runs, so it never persists past a single invocation of the shader. This means the problem isn't reading the value back; it's actually changing it in the first place!

If you've looked around at the effect framework, you'll notice it has methods to get values from the constant table. These read back the values you set previously, but as I said earlier, those values can't be modified by the GPU, so reading them back is of little use.

Hope this helps.

The typical way to communicate from the GPU to the CPU is to use the channels available to the GPU program you're authoring. For a pixel shader, these are the outputs -- color, primarily. You simply write the information you want as a color.

On the CPU, you know the shader wants to communicate something back to you. You create a texture as a render target, and render some dummy geometry (for example, a single full-screen quad) with your shader bound. This will populate the render target with the results of the pixel shader; you can then lock the render target and read its data back.

This is not efficient or particularly fun.

So, as I understand it, GPU-to-CPU communication using DirectX is far too complicated to use efficiently, and any small change in the communication scheme will likely require a lot of work. That's what I feared.
If you do any general processing on a graphics card, there's no way around NVIDIA's CUDA framework or ATI's Close To Metal, right?

Thanks guys. This information was very helpful to me and saved me a lot of time.

[Edited by - chubaca on December 6, 2007 5:10:25 AM]

Quote:
Original post by chubaca
If you do any general processing on a graphics card, there's no way around NVIDIA's CUDA framework or ATI's Close To Metal, right?


I don't know about CTM. However, CUDA is certainly better than shaders for GPGPU.

Remember that CUDA is only available on G80+ chipsets, so it wouldn't be the best solution if you're aiming for a wide audience for your software. If it's just for research purposes, or if you have a farm of G80s and want to process a lot of data, then it's perfect.

Quote:
Original post by chubaca
any small changes in the way of communication will likely need a lot of work to be done.
I suppose you're overestimating this. Reading back a few buffers isn't particularly hard, and that part of the API isn't going to change tomorrow.

Quote:
Original post by jpetrie
... You create a texture as a render target, and render some dummy geometry (for example, a single full-screen quad) with your shader bound. This will populate the render target with the results of the pixel shader; you can then lock the render target and read its data back.
This is not efficient or particularly fun.


Sorry guys, but I have to bother you once again.
To be sure that I have understood it correctly, a few more questions about GPU-to-CPU communication via textures.

I have a texture with image data in it, and I run a shader over that data. To communicate the results back, I need to extend my texture with extra space for the return data, and into that extra space the shader writes the data to be returned, encoded as a color value. Is this right so far?

For the shader, a color channel is a value between 0.0 and 1.0.
Additionally, the upper-left corner of the texture is located at (0.0, 0.0) and the lower-right corner at (1.0, 1.0).
To read the right value in my C++ program, I have to make sure the position is the same for C++ as for the shader, so I have to calculate the position in the shader. If I have a texture that is 150 pixels wide for the C++ program, and I want to write into the third horizontal pixel, in my shader I have to calculate the position, which for this example is:
outputdata.x = ((1.0 / 150) * 3)
If I want to write, for example, 42 into that pixel, I need to write the value ((1.0 / 255) * 42) at the position ((1.0 / 150) * 3).
What do I have to do to retrieve the data? Does it have something to do with render-to-texture?


[Edited by - chubaca on December 7, 2007 11:47:17 AM]
