reading vertex shader output
Is there a way to read the output of a vertex shader?
I'm using HLSL with Direct3D 9,
and I want to be able to read the x, y, z values of a vertex after applying the custom transformation I'm doing inside the shader.
At first I thought I could use the constants table to do this, but constants are CONSTANTS... there's no way to get their values back in the C++ code, although it is possible to set them for the shader itself.
Is there any way to do that?
If I could, I could move many computations that don't necessarily involve rendering onto the GPU... perhaps collision detection or something, by using a special pass which wouldn't actually render anything.
Any ideas?
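For context, here is a minimal sketch of the situation being described (the constant names and the transform are made up for illustration): a vertex shader whose transformed position you would want to read back on the CPU.

```hlsl
// Hypothetical sketch - constant names and the transform are illustrative only.
float4x4 WorldViewProj   : register(c0);
float4x4 CustomTransform : register(c4);  // the "custom transformation" in question

struct VS_OUTPUT
{
    float4 Position : POSITION;  // consumed by the rasterizer - not readable by the CPU
};

VS_OUTPUT main(float4 pos : POSITION)
{
    VS_OUTPUT output;
    float4 transformed = mul(pos, CustomTransform);   // the values we'd like to read back
    output.Position = mul(transformed, WorldViewProj);
    return output;
}
```

The POSITION output feeds straight into primitive assembly, which is why there is no direct way to get it back on the C++ side.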
The only way I believe you can do this is to change the render target to a texture and apply the vertex position to the texture as a colour using a pixel shader.
Dave
It is possible to do it, but if I remember correctly it is horribly slow and not suited for real-time.
Until you get onto D3D10 there isn't really an easy way to do this. ATI's "Render To Vertex Buffer" (R2VB) might be of interest - it's similar to what Dave suggested.
Failing that, try looking into IDirect3DDevice9::ProcessVertices() - I've not used it myself, but the description seems to suggest it'll work. Although I'm pretty sure it's CPU based, so performance won't be amazing [wink]
hth
Jack
You should think of the vertex processing part of the [hardware] graphics pipeline as being write-only; that's what it's designed for, so there isn't really a way to tap into the output from a vertex shader.
The output of a vertex shader automatically feeds into the primitive assembly stage and then through to the pixel shader/pixel processing - that flow is almost all one-way (with the exception of things like vertex texturing from a previously rendered target).
As soon as you require a way to read back processed vertex data from the GPU, you make serialization and stalls much more likely (e.g. the CPU must stall to wait for the GPU to finish processing when it needs to read back those results).
I would only consider attempting to read back processed data in a general purpose GPU (GPGPU) scenario where interactivity isn't required and latency isn't a problem.
I wouldn't ever advise attempting that kind of thing in a normal game. Additionally, graphics mesh data is rarely ideal for use with collision detection/response - it contains things you don't need (e.g. texture coordinates and colours); it doesn't contain things you probably do need (e.g. face normals); it's also constructed in ways designed for visual quality rather than optimal collision use (repeated vertices due to texture boundaries; concavities etc). Keep your collisions separate from graphics, you'll save yourself a lot of pain!
What /might/ work would be to output the transformed positions via the pixel shader as colours (say to a set of 2x2 screen aligned quads) to a render target texture [possibly double/multi buffered to avoid stalls], then lock that texture to read it with the CPU when you need results back.
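As a rough sketch of that idea (assuming a floating-point render target such as D3DFMT_A32B32G32R32F so the positions aren't clamped to [0,1]; all names are illustrative, not a definitive implementation):

```hlsl
// Hypothetical sketch: render tiny screen-aligned quads to a float render target,
// encoding each transformed vertex position as the output colour.
float4x4 CustomTransform : register(c0);

struct VS_OUT
{
    float4 Position : POSITION;   // where the quad lands on the render target
    float4 WorldPos : TEXCOORD0;  // the value we actually want to read back
};

VS_OUT PosVS(float4 pos : POSITION, float4 targetPos : TEXCOORD0)
{
    VS_OUT o;
    o.Position = targetPos;                  // pre-computed spot on the target
    o.WorldPos = mul(pos, CustomTransform);  // the transformed position
    return o;
}

float4 PosPS(VS_OUT i) : COLOR0
{
    return i.WorldPos;  // the position comes out as the pixel's colour
}
```

On the CPU side you would then copy the render target to a lockable system-memory surface (e.g. via IDirect3DDevice9::GetRenderTargetData) and read the positions out of the locked bits - ideally a frame or two later, double-buffered, to avoid stalling the pipeline.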
Quote:Original post by Dave
The only way I believe you can do this is to change the render target to a texture and apply the vertex position to the texture as a colour using a pixel shader.
I just happened to write a "stream output" class recently based on this idea. I'll see if I can put the code online somewhere.
BTW, check out RapidMind for something that might allow easily writing general parallel code for the GPU without using any 3D syntax. I have no idea what it will cost, but they just bugged me today (I saw their demo at GDC), and it does look interesting.
If you're using Direct3D9, and you don't mind using software vertex processing, there's an easy way to do this on the CPU. The device has a method called ProcessVertices() that executes the software vertex processing on a particular vertex buffer.
JB