CUDA and OpenGL interop



#1 Tom Mulgrew   Members   -  Reputation: 190


Posted 26 June 2014 - 03:52 AM

I could use some advice...

 

I'm working on a somewhat experimental graphics engine which combines polygon rendering (OpenGL) with ray-casting (implemented in CUDA).

The polygons are always in front of the ray-cast content, so what I want to do is render the polygons first, then ray-cast only the pixels that haven't been drawn.

 

My original plan was to create a framebuffer containing colour and depth buffers, render the polygon content into it, then pass both buffers to the CUDA kernel (using CUDA/OpenGL interop). The kernel could then use the depth buffer to determine which pixels still need to be rendered.

The problem is that CUDA/OpenGL interop doesn't appear to support accessing depth buffers.
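
Roughly, the colour-buffer half of that pipeline would look like the sketch below (the texture name, helper names and GL_RGBA8 format are placeholders rather than actual engine code, and error checking is omitted); registering the depth attachment the same way is the step that doesn't seem to be supported:

#include <GL/gl.h>
#include <cuda_runtime.h>
#include <cuda_gl_interop.h>

cudaGraphicsResource_t colorRes = 0;

// Register the FBO's colour texture with CUDA once, after creating it.
void registerColorBuffer(GLuint colorTex)
{
    cudaGraphicsGLRegisterImage(&colorRes, colorTex, GL_TEXTURE_2D,
                                cudaGraphicsRegisterFlagsSurfaceLoadStore);
    // Attempting the same call on a GL_DEPTH_COMPONENT texture is what fails.
}

// Each frame, after the OpenGL polygon pass: map the texture and wrap it in
// a surface object that the ray-cast kernel can read and write.
void runRayCastPass()
{
    cudaGraphicsMapResources(1, &colorRes, 0);

    cudaArray_t colorArray = 0;
    cudaGraphicsSubResourceGetMappedArray(&colorArray, colorRes, 0, 0);

    cudaResourceDesc desc = {};
    desc.resType = cudaResourceTypeArray;
    desc.res.array.array = colorArray;

    cudaSurfaceObject_t colorSurf = 0;
    cudaCreateSurfaceObject(&colorSurf, &desc);

    // ... launch the ray-cast kernel with colorSurf here ...

    cudaDestroySurfaceObject(colorSurf);
    cudaGraphicsUnmapResources(1, &colorRes, 0);
}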

 

Plan B was to use a stencil buffer, but a bit of Googling suggests stencil buffers aren't supported by CUDA/OpenGL interop either (and also that you should always create a combined depth+stencil buffer, which sounds even less likely to be supported).

 

So plan C is to initialise the colour buffer to a special "un-drawn" value (possibly using the alpha channel). This would mean the kernel has to both read from and write to the colour buffer, though, which feels a bit dirty and may have a performance impact.
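
As a rough sketch of what that kernel might look like, assuming the colour buffer is cleared with alpha = 0 before the polygon pass, polygons write alpha = 255, and the buffer is mapped as a CUDA surface object (rayCastPixel is just a placeholder name for the actual ray-casting code):

// Placeholder for the real ray-casting routine (hypothetical name).
__device__ uchar4 rayCastPixel(int x, int y)
{
    return make_uchar4(0, 0, 0, 255);  // real code would trace a ray here
}

// Ray-cast only the pixels the polygon pass never touched (alpha still 0),
// reading and writing the colour buffer through a surface object.
__global__ void rayCastUndrawn(cudaSurfaceObject_t colorSurf,
                               int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height)
        return;

    uchar4 pixel;
    surf2Dread(&pixel, colorSurf, x * (int)sizeof(uchar4), y);

    if (pixel.w == 0)                      // still the "un-drawn" value
    {
        uchar4 result = rayCastPixel(x, y);
        result.w = 255;                    // mark as drawn
        surf2Dwrite(result, colorSurf, x * (int)sizeof(uchar4), y);
    }
}

Launched over the whole framebuffer (e.g. 16x16 blocks), every pixel pays for one surface read, but only the un-drawn pixels do the extra write, so the cost over writing to a separate output buffer is roughly one read per pixel.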

 

 

Is there a simpler/cleaner solution to this? Or should I just go with plan C?


