CUDA and OpenGL interop


I could use some advice...

I'm working on a somewhat experimental graphics engine which combines polygon rendering (OpenGL) with ray-casting (implemented in CUDA).

The polygons are always in front of the ray-cast content, so what I want to do is render the polygons first, then ray-cast only the pixels that haven't been drawn.

My original plan was to create a framebuffer object with colour and depth attachments, render the polygon content into it, then pass both buffers to the CUDA kernel (via CUDA/OpenGL interop). The kernel could then use the depth buffer to determine which pixels still need to be rendered.

The problem is that CUDA doesn't appear to support accessing depth buffers.
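For reference, here's roughly what Plan A looks like in code. This is a minimal sketch, not working code: colourTex and depthTex are placeholder names for the FBO's colour and depth texture attachments, and the interesting part is that the second registration call (for the depth texture) is the one that fails, since depth formats aren't among the formats the interop API accepts.

```cpp
#include <GL/gl.h>
#include <cuda_runtime.h>
#include <cuda_gl_interop.h>

// Sketch of Plan A: register the FBO attachments with CUDA so a kernel can use them.
void registerFramebufferWithCuda(GLuint colourTex, GLuint depthTex)
{
    cudaGraphicsResource_t colourRes = nullptr;
    cudaGraphicsResource_t depthRes  = nullptr;

    // The colour attachment (e.g. GL_RGBA8) registers without trouble.
    cudaGraphicsGLRegisterImage(&colourRes, colourTex, GL_TEXTURE_2D,
                                cudaGraphicsRegisterFlagsSurfaceLoadStore);

    // The depth attachment is where Plan A falls over: GL_DEPTH_COMPONENT
    // textures aren't a supported interop format, so this call fails instead
    // of giving the kernel access to the depth buffer.
    cudaError_t err = cudaGraphicsGLRegisterImage(&depthRes, depthTex, GL_TEXTURE_2D,
                                                  cudaGraphicsRegisterFlagsReadOnly);
    if (err != cudaSuccess)
    {
        // Expected: depth buffers can't be mapped this way.
    }

    // Per frame (for the colour buffer, which did register): map the resource
    // and fetch the underlying cudaArray so the kernel can read/write it.
    cudaGraphicsMapResources(1, &colourRes, 0);
    cudaArray_t colourArray = nullptr;
    cudaGraphicsSubResourceGetMappedArray(&colourArray, colourRes, 0, 0);
    // ... launch the ray-casting kernel against colourArray ...
    cudaGraphicsUnmapResources(1, &colourRes, 0);
}
```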

Plan B was to use a stencil buffer, but a bit of Googling suggests they aren't supported by CUDA/OpenGL interop either (and also that you should always create a depth+stencil buffer, which sounds even less likely to be supported).

So plan C is to initialise the colour buffer to a special "un-drawn" value (possibly using the alpha channel). This would mean the kernel has to read from and write to the colour buffer though, which feels a bit dirty, and may have a performance impact.
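Something like this is what I have in mind for the Plan C kernel. It's a rough sketch under a couple of assumptions: the colour buffer was registered with cudaGraphicsRegisterFlagsSurfaceLoadStore and wrapped in a surface object, alpha == 0 is the "un-drawn" sentinel the polygon pass never writes, and castRay() is just a placeholder for the actual ray-casting routine.

```cuda
#include <cuda_runtime.h>

// Placeholder for the real ray-casting routine.
__device__ uchar4 castRay(int x, int y)
{
    return make_uchar4(0, 0, 0, 255);
}

// Plan C: read each pixel of the colour buffer and ray-cast only the ones
// the polygon pass never touched (alpha == 0).
__global__ void raycastUndrawnPixels(cudaSurfaceObject_t colour, int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height)
        return;

    // Read-modify-write on the colour buffer: this is the extra read that
    // Plans A and B would have avoided.
    uchar4 pixel;
    surf2Dread(&pixel, colour, x * sizeof(uchar4), y);

    if (pixel.w == 0)  // polygon pass never drew this pixel
    {
        surf2Dwrite(castRay(x, y), colour, x * sizeof(uchar4), y);
    }
}
```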

Is there a simpler/cleaner solution to this? Or should I just go with C?
