TC_nz

CUDA and OpenGL interop


I could use some advice...

I'm working on a somewhat experimental graphics engine which combines polygon rendering (OpenGL) with ray-casting (implemented in CUDA).

The polygons are always in front of the ray-cast content, so what I want to do is render the polygons first, then ray-cast only the pixels that haven't been drawn.

My original plan was to create a framebuffer with colour and depth attachments, render the polygon content into it, then hand both attachments to the CUDA kernel using CUDA/OpenGL interop. The kernel could then use the depth buffer to determine which pixels still need to be rendered.
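For context, here's roughly what I had in mind for the colour attachment (a minimal sketch with error checking omitted; colourTex stands in for whatever GL_TEXTURE_2D is backing the FBO's colour attachment):

```cpp
#include <cuda_runtime.h>
#include <cuda_gl_interop.h> // needs the GL headers included before it

cudaGraphicsResource* colourRes = nullptr;

// One-time setup: register the FBO's colour texture with CUDA.
void registerColourBuffer(GLuint colourTex)
{
    cudaGraphicsGLRegisterImage(&colourRes, colourTex, GL_TEXTURE_2D,
                                cudaGraphicsRegisterFlagsSurfaceLoadStore);
}

// Each frame, after the polygon pass: map the texture and wrap it
// in a surface object the kernel can read and write.
void raycastPass(int width, int height)
{
    cudaGraphicsMapResources(1, &colourRes);

    cudaArray_t colourArray;
    cudaGraphicsSubResourceGetMappedArray(&colourArray, colourRes, 0, 0);

    cudaResourceDesc desc = {};
    desc.resType = cudaResourceTypeArray;
    desc.res.array.array = colourArray;

    cudaSurfaceObject_t colourSurf;
    cudaCreateSurfaceObject(&colourSurf, &desc);

    // ... launch the ray-casting kernel with colourSurf here ...

    cudaDestroySurfaceObject(colourSurf);
    cudaGraphicsUnmapResources(1, &colourRes);
}
```

The plan was to register the depth attachment the same way, which is where it falls over.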

The problem is that CUDA/OpenGL interop doesn't appear to support depth buffers at all: as far as I can tell, cudaGraphicsGLRegisterImage only accepts colour formats, so there's no way to register the depth attachment.

Plan B was to use a stencil buffer instead, but a bit of Googling suggests stencil buffers aren't supported by CUDA/OpenGL interop either (and also that you should really be using a combined depth+stencil attachment anyway, which sounds even less likely to be supported).

So plan C is to initialise the colour buffer to a special "un-drawn" value (probably in the alpha channel). This means the kernel has to both read from and write to the colour buffer, though, which feels a bit dirty and may have a performance impact.
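To make that concrete, here's a minimal sketch of the kernel I'm imagining, assuming the colour buffer is cleared to alpha = 0 before the polygon pass and the polygons always write alpha = 255 (castRay is a hypothetical stand-in for the actual ray-caster):

```cpp
#include <cuda_runtime.h>

// Hypothetical stand-in for the real ray-casting routine.
__device__ uchar4 castRay(int x, int y)
{
    return make_uchar4(0, 0, 0, 255); // placeholder result
}

// Ray-cast only the pixels the polygon pass left untouched.
__global__ void raycastUndrawn(cudaSurfaceObject_t colour, int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height)
        return;

    uchar4 texel;
    surf2Dread(&texel, colour, x * sizeof(uchar4), y); // x offset is in bytes

    // Alpha still at the clear value => no polygon covered this pixel.
    if (texel.w == 0)
        surf2Dwrite(castRay(x, y), colour, x * sizeof(uchar4), y);
}
```

Every thread pays for the surf2Dread even where polygons have already been drawn, which is the overhead I'm worried about.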

Is there a simpler/cleaner solution to this, or should I just go with plan C?
