Grumple

Sharing transformed vertices in subsequent render stages?


Hello,

 

I am working on a problem where I want to render 3D objects in pseudo-2D by transforming to NDC coordinates in my vertex shader. The models I'm drawing have numerous components rendered in separate stages, but all components of a given model share the same single point of origin.

 

This all works fine, but the vertex shader for each stage of the model render redundantly transforms from Cartesian xyz to NDC coordinates before doing its real work. Instead, I'd like to perform an initial conversion stage that populates a buffer of NDC coordinates, so that all subsequent vertex shaders can simply accept the NDC coordinate as input. I'd also like to avoid doing this on the CPU, as I may have many thousands of model instances to work with.
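To illustrate, each stage's vertex shader currently begins with something like the following sketch (the uniform and attribute names here are illustrative, not from my actual code):

```glsl
// Sketch of the redundant work: every stage's vertex shader starts by
// transforming the shared Cartesian position to NDC before its own work.
#version 330 core
uniform mat4 uModelViewProjection;   // illustrative name
in vec3 aPosition;                   // Cartesian model-space position

void main()
{
    vec4 clip = uModelViewProjection * vec4(aPosition, 1.0);
    vec3 ndc  = clip.xyz / clip.w;   // perspective divide to NDC
    // ... per-stage work continues from `ndc` ...
    gl_Position = vec4(ndc, 1.0);
}
```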

 

So, with an input buffer containing thousands of Cartesian positions and an equal-sized output buffer to receive the transformed NDC coordinates, what are my best options for performing the work on the GPU? Is this something I need to look to OpenCL for?

 

Being fairly unfamiliar with OpenCL, I was thinking of setting things up so that the first component rendered for my models would 'know' it is first, have its vertex shader do the standard transform to NDC, and somehow write the results back to an 'NDC coord buffer'. All subsequent vertex shaders for the various model components would then use the NDC coord buffer as input, skipping the redundant conversion effort.

 

Is this reasonable?

Look into transform feedback. It does almost exactly what you describe - it lets you run an input through just a vertex shader and record the results into another buffer. That buffer can then be used as input to another pass.
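A minimal sketch of such a transform-only pass (assuming OpenGL 3.x core; the identifier names are illustrative):

```glsl
// Transform-only vertex shader: converts Cartesian positions to NDC and
// records them via transform feedback. No fragments are produced because
// rasterization is discarded on the application side.
#version 330 core
uniform mat4 uModelViewProjection;   // illustrative name
in vec3 aPosition;
out vec4 vNdcPosition;               // varying captured by transform feedback

void main()
{
    vec4 clip = uModelViewProjection * vec4(aPosition, 1.0);
    vNdcPosition = vec4(clip.xyz / clip.w, 1.0);
}
```

On the application side you would register the captured varying with glTransformFeedbackVaryings (before linking the program), bind the destination buffer with glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, ...), and wrap the draw in glBeginTransformFeedback/glEndTransformFeedback with GL_RASTERIZER_DISCARD enabled, since no pixels are needed for this pass.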

I'm not sure if you'll see much benefit, though. Transforming vertices to NDC space is not expensive. Generally you want to cut down on all the other unnecessary stuff in your shaders for different passes (e.g., don't bother calculating normal information if that pass doesn't need it).

There's also a lot you can do to avoid needing so many stages of redundant vertex transformation. For instance, you can render the geometry once and then write material IDs and properties to another set of buffers and do passes over those (whether or not this is faster will depend on a lot of factors, so as always, measure and see).

Multiple render targets, or a single large-format render target (F32 or F16) with values encoded into it, can serve a similar purpose while keeping the benefit of a constant vertex buffer. You would then run all stages in the pixel function, outputting to your G-buffer setup, and issue only one draw call (performing all stages of the transformed geometry in the pixel function). This could suit your situation of constant-transform geometry with multiple raster-stage outputs.
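For instance, the single geometry pass might write to several targets at once, something like this sketch (output names and packing are illustrative; later "stages" would then run as full-screen passes reading these textures):

```glsl
// Sketch of one geometry pass writing to multiple render targets
// (a small G-buffer). Requires a framebuffer with two color attachments.
#version 330 core
in vec3 vNormal;
flat in int vMaterialId;

layout(location = 0) out vec4 outNormal;    // e.g. an RGBA16F attachment
layout(location = 1) out vec4 outMaterial;  // packed material properties

void main()
{
    outNormal   = vec4(normalize(vNormal) * 0.5 + 0.5, 1.0);
    outMaterial = vec4(float(vMaterialId) / 255.0, 0.0, 0.0, 1.0);
}
```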


Thanks a lot for this reply... I knew about the old/deprecated fixed-function feedback system, but didn't realize there was an official replacement for the shader world. I'll do some more reading before diving in, but it looks to be a great solution.

 

I know the transformation is relatively cheap, but in my current implementation it happens for six render stages per model, with potentially thousands of models. I'm also going to be doing something similar for label rendering, but there I'll need to generate the NDC coord buffer and potentially read it back for de-clutter processing on the CPU. Having a shader stage that just populates an NDC coord buffer for readback/post-processing would be awesome.
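For the readback case, I'm picturing something like this sketch (assumes a current OpenGL 3.x context, and that `ndcBuffer` and `vertexCount` come from the transform feedback pass; names are illustrative):

```c
/* Map the NDC buffer filled on the GPU so the CPU de-clutter pass can
   read it. Note this synchronizes with the GPU, so the pipeline may
   stall; double-buffering could hide that. */
glBindBuffer(GL_ARRAY_BUFFER, ndcBuffer);
const float *ndc = (const float *)glMapBufferRange(
    GL_ARRAY_BUFFER, 0,
    vertexCount * 4 * sizeof(float),   /* one vec4 per vertex */
    GL_MAP_READ_BIT);
/* ... run de-clutter over ndc[0 .. 4*vertexCount) ... */
glUnmapBuffer(GL_ARRAY_BUFFER);
```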

 


There's also a lot you can do to avoid needing so many stages of redundant vertex transformation. For instance, you can render the geometry once and then write material IDs and properties to another set of buffers and do passes over those (whether or not this is faster will depend on a lot of factors, so as always, measure and see).

 

Sorry, but I don't quite follow you here... can you describe it a bit more, or link some reading material?

 

Thanks again!
