seaephpea

Returning values from vertex shaders (for use on the CPU, not a fragment shader)


Hi,

Is there any way of accessing variables defined in a vertex shader from the main program body? There are various libraries that use the GPU to do numerical calculations, so I presume it must be possible; however, the NeHe article on GLSL says uniform variables are read-only, and I'm not sure how else you'd do it.

Ideally I would like to be able to do the following: render a strip of several triangles (starting at the origin) using a vertex shader that:

- Uses the vertex coords more as "u and v" parameters to create new vertex vectors according to some (non-linear) function (so perhaps the strip of triangles will be mapped to a torus).
- Stores these new vertex vectors in a VBO.
- Stores these new vertex vectors in a way accessible to the CPU.
- Transforms the new vertex vectors in the standard way, and sets gl_Position to the result.

Then on future frames:

- My (CPU-based) physics engine would consider the vertices generated by the GPU as an object.
- The generated VBO would be rendered with the standard fixed-function pipeline.

Although this may sound a tad silly a project, I do have good reasons for wanting to do the dynamic object creation on the GPU rather than the CPU...

Any help would be much appreciated,

Thanks,

Tom

----
Edit: Made it a bit clearer that I wasn't creating new vertices. I understand where the confusion came from.

[Edited by - seaephpea on May 22, 2005 4:09:47 PM]
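The (u, v)-to-torus remapping described above could be sketched as a GLSL 1.10-era vertex shader. This is only an illustration of the idea; the uniform names R and r (major and minor torus radii) are made up for the example:

```glsl
// Sketch: treat the incoming vertex's x/y as (u, v) parameters and
// place the vertex on a torus. R and r are illustrative uniforms,
// not names from any particular library.
uniform float R; // major radius
uniform float r; // minor radius

void main()
{
    float u = gl_Vertex.x; // expected to run 0..2*pi across the strip
    float v = gl_Vertex.y;

    vec4 p;
    p.x = (R + r * cos(v)) * cos(u);
    p.y = (R + r * cos(v)) * sin(u);
    p.z = r * sin(v);
    p.w = 1.0;

    // Transform the remapped position in the standard way.
    gl_Position = gl_ModelViewProjectionMatrix * p;
}
```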

Sorry, it's not possible to do that with the current hardware. You can't use vertex programs to create new vertices. The libraries that use GPU for calculations probably calculate the value of a function with the texcoords used as input points for the function parameters and write the result into the pixels of the framebuffer.
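The scheme described above — interpolated texture coordinates as the function's input points, results written to framebuffer pixels — looks roughly like this as a GLSL fragment shader over a full-screen quad. A sketch only; f(u, v) here is an arbitrary example function, not one from any real library:

```glsl
// Sketch of the GPGPU idiom described above: draw a full-screen quad,
// let the interpolated texcoords span the function's input domain,
// and write f(u, v) into the corresponding framebuffer pixel.
void main()
{
    float u = gl_TexCoord[0].s;
    float v = gl_TexCoord[0].t;

    // Arbitrary example function of (u, v).
    float result = sin(u) * cos(v);

    // Pack the result into the colour output; with a floating-point
    // render target it is stored at full precision.
    gl_FragColor = vec4(result, 0.0, 0.0, 1.0);
}
```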

I don't actually need to create new vertices. I'm effectively giving the vertex shader a stock of vertices to work with, and it's using two coords of their position data much like texture coords. It just alters their position in a non-linear way.

I got the impression this was possible??

Tom

----
Edit: By non-linear, I just meant not the result of a linear transformation, i.e. multiplying by a matrix. So something like gl_Position = vec4(sin blah, blah, blah).

Ah, I misunderstood. Of course it's possible to alter the vertex position based on a function, but the vertex shader would need to do that every time you render the object. You can't store the new positions of the vertices in a VBO or in RAM so you can use them for physics or whatever.

-EDIT: A clarification: if by "non-linear" you mean that the calculation of one vertex needs info from previous vertices, then no. You can't do that for the reasons I mentioned above (no way to store vertices produced by the VP).

No vertex can directly affect the processing of any other vertex.

You'll have to find another way.

Quote:
Original post by mikeman
You can't store the new positions of the vertices in a VBO or in RAM so you can use them for physics or whatever.


Damn. Well, there goes that idea.

If I can't do that, I need a way of compiling functions (for the CPU) at runtime, where the same function code will also work on the GPU.

Can Sh do this? Its FAQ was a bit unclear.

Thanks,

Tom

Quote:
Original post by Promit
No vertex can directly affect the processing of any other vertex.

Not entirely true, if you use multiple passes. You can render a certain result to a texture in pass 1, and reuse these results (by vertex shader texture accesses) on a different vertex on a subsequent pass. This concept can be extended as needed. You can also directly render to a vertex array, in order to achieve similar functionality.

These results can most definitely be read back by the CPU (texture, framebuffer, or VA memory), but the performance will be pretty bad, at least on AGP.
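Reading such results back on the CPU comes down to a glReadPixels call on the bound (floating-point) render target. A minimal C sketch, assuming a w-by-h float RGBA target is currently bound; the function name and parameters are placeholders for illustration:

```c
/* Sketch: read a w x h floating-point RGBA render target back to the
 * CPU. Assumes a float-format framebuffer is currently bound; w and h
 * are placeholders. */
#include <GL/gl.h>
#include <stdlib.h>

float *read_back_results(int w, int h)
{
    float *data = malloc((size_t)w * h * 4 * sizeof(float));
    if (!data)
        return NULL;

    /* Stalls the pipeline and transfers over the bus -- this is the
     * slow part mentioned above, especially on AGP. */
    glReadPixels(0, 0, w, h, GL_RGBA, GL_FLOAT, data);
    return data;
}
```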

Quote:

You can render a certain result to a texture in pass 1, and reuse these results (by vertex shader texture accesses) on a different vertex on a subsequent pass.


I don't really get this. Let's say I supply a vertex v(x,y,z), and it is transformed through a vertex shader to v'(x',y',z'). I can write the v' triad to a texture? How?

Quote:
Original post by mikeman

I don't really get this. Let's say I supply a vertex v(x,y,z), and is transformed through a vertex shader to v'(x',y',z'). I can write the v' triad to a texture? How?

Copy (x',y',z') to a colour result register, pass it on to the fragment stage (not altering it in any way), and bind a floating-point texture target. Any vertex in any subsequent pass can now index the result of any other vertex from a previous pass by using dependent texture access, either in the vertex shader or in the pixel shader.

Read up on deferred shading; it is actually based on the principle of writing transformed vertex positions to a texture for later usage.
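The pass-1 mechanism described above could be sketched in GLSL like this. One deviation from the description, flagged here: the sketch passes v' through a plain varying rather than the colour result register, since colour varyings can be clamped to [0,1] on this era of hardware:

```glsl
// Pass 1 vertex shader (sketch): compute the transformed position v'
// and hand it to the fragment stage through a varying. A plain
// varying is used instead of gl_FrontColor to avoid possible [0,1]
// colour clamping.
varying vec4 vprime;

void main()
{
    vprime = gl_ModelViewMatrix * gl_Vertex;

    // Still emit a clip-space position, so that this vertex's result
    // lands on the texel meant to store it.
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
```

The matching fragment shader then just writes it out unchanged — `gl_FragColor = vprime;` — into the floating-point target, and a subsequent pass can fetch any vertex's result with a texture lookup.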

Quote:
Original post by Yann L
You can render a certain result to a texture in pass 1, and reuse these results (by vertex shader texture accesses) on a different vertex on a subsequent pass. This concept can be extended as needed. You can also directly render to a vertex array, in order to achieve similar functionality.

These results can most definitely be read back by the CPU (texture, framebuffer, or VA memory), but the performance will be pretty bad, at least on AGP.


Ahh, that sounds like it'd do exactly what I need. It would only have to do the second pass once a second or something, so the delay might well be acceptable.

I'm sure I can find out how to render to a texture, and I've read about vertex shader texture access, so the only bits I'm not quite sure where to look for info on are the rendering to a vertex array bit, and the reading the results back by the CPU bit.

Any links?? (Sorry to be a big noob...)

Thanks loads for the info.

Tom

----
Edit: I think you pointed me in the right direction while I was writing this message. Thanks.
