
OpenGL Hardware/Software rasterizer vs Ray-tracing


Hi people :),

 
I saw the High-Performance Graphics presentation "High-Performance Software Rasterization on GPUs" and I was very impressed by the work/analysis/comparison. It looks amazing.
 
 
My background is CUDA; two years ago I started learning OpenGL to develop the 3D interface of EMM-Check, a field-of-view analysis program that checks whether a vehicle fulfills a specific standard or not. Essentially, you load a vehicle (or individual parts), move it as a whole or piece by piece, add mirrors/cameras, analyze the fields of view and the shadows from the driver's point of view, etc.
 
We deal with some transparent elements (mainly the fields of view, but the vehicles themselves may be transparent too), so I wrote a rough algorithm that sorts the elements to be rendered on the fly (at primitive level, a kind of painter's algorithm). Of course there are cases in which it easily fails, although it is good enough for most cases.
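To give an idea, conceptually the sorting does something like the sketch below (a simplified version, not our actual code; the real one works on our own mesh structures and the names here are made up):

#include <algorithm>
#include <vector>

struct Vec3 { float x, y, z; };

struct Triangle {
    Vec3 v0, v1, v2;
    // ... material / object data would go here
};

// Eye-space depth of the triangle centroid, used as the sort key.
// 'view' is a column-major 4x4 view matrix (OpenGL convention).
static float centroidEyeZ(const Triangle& t, const float view[16])
{
    const float cx = (t.v0.x + t.v1.x + t.v2.x) / 3.0f;
    const float cy = (t.v0.y + t.v1.y + t.v2.y) / 3.0f;
    const float cz = (t.v0.z + t.v1.z + t.v2.z) / 3.0f;
    // Third row of the view matrix gives eye-space z (negative in front of the camera).
    return view[2] * cx + view[6] * cy + view[10] * cz + view[14];
}

// A kind of painter's algorithm: draw transparent primitives back to front
// (most negative eye-space z, i.e. farthest, first). It breaks for
// intersecting or cyclically overlapping triangles, which is exactly my problem.
void sortBackToFront(std::vector<Triangle>& transparent, const float view[16])
{
    std::sort(transparent.begin(), transparent.end(),
              [&](const Triangle& a, const Triangle& b) {
                  return centroidEyeZ(a, view) < centroidEyeZ(b, view);
              });
}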
 
For this reason I started googling and found many techniques, like (dual) depth peeling, A/R/K/F-buffers, etc.
 
But it looks like all of them suffer at high resolutions and/or with a large number of triangles.
 
Since we also deal with millions of triangles (up to roughly 10 million), I was looking for something else, and I ended up at software renderers: compared to hardware ones they offer full programmability, but they are slower.
 
So I wonder whether it might be possible to implement something hybrid, that is, use the hardware renderer for the opaque elements and the software one for the transparent elements, then combine the two results.
 
Or maybe a simple ray-tracing algorithm in CUDA/OpenCL (no complex visual effects required, just position, color and simple lighting) would be much simpler from this point of view, and would also give us a lot of freedom/flexibility in the future?
 
I did not find anything on the net regarding this... is there maybe some particular obstacle?
 
I would like to hear every single thought/tip/idea/suggestion that you have regarding this.
 
 
PS: I also found "Single Pass Depth Peeling via CUDA Rasterizer" by Liu, but the solution from the first paper seems far faster.


I feel like implementing a GPU (or software) raytracer just to solve OIT is a bad idea. GPU raytracers are extremely powerful, but they have drawbacks, as you saw. I would have initially recommended trying to composite a hybrid approach, but at that point it makes more sense to jump straight to a full raytracing solution. You saw the performance figures in those papers; that is a big price to pay for flexibility.

The solutions to OIT are classic and well-known, but more importantly by contrast they are simpler and faster. You're right, on-the-fly sorting is prone to problems, especially with cyclic geometry, and it doesn't scale well. I suggest you reexamine the depth peeling algorithms, in particular.

You can do this in two passes, if you're careful, by using MRT. In the first pass, render opaque fragments normally into buffer 0. For semi-transparent fragments, do single-pass depth peeling, packing premultiplied RGB and depth into RGBA floating point render targets (buffers 1 to n). For the second pass, just blend the color buffers 1 to n onto buffer 0.
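The compositing step (the second pass) would look roughly like this on the host side. This is just a sketch: drawFullScreenTexturedQuad() is a placeholder for whatever full-screen pass you already have that outputs the bound layer texture unchanged, and the layer textures are assumed to hold premultiplied RGBA, ordered back to front.

#include <GL/glew.h>   // or whatever GL loader you use
#include <vector>

// Hypothetical helper: draws a full-screen quad with a trivial shader
// that outputs texture(layerTex, uv) unchanged.
void drawFullScreenTexturedQuad(GLuint layerTex);

// Second pass of the scheme described above: blend the semi-transparent
// layers (buffers 1 to n, premultiplied RGBA) onto the opaque buffer 0.
void compositeLayers(GLuint opaqueFbo, const std::vector<GLuint>& layersBackToFront)
{
    glBindFramebuffer(GL_FRAMEBUFFER, opaqueFbo);   // render on top of buffer 0
    glDisable(GL_DEPTH_TEST);
    glEnable(GL_BLEND);
    // "over" operator for premultiplied alpha: dst = src + (1 - src.a) * dst
    glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);

    for (GLuint layerTex : layersBackToFront)
        drawFullScreenTexturedQuad(layerTex);

    glDisable(GL_BLEND);
    glEnable(GL_DEPTH_TEST);
}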

Exactly how the alpha channel is stored (premultiplied or not) may be the subject of some consternation, but that general idea would definitely work. Note that for single-pass depth peeling you'll need shader mutexes. Here's where I made a GLSL fragment program that does single-pass depth peeling.


I feel like implementing a GPU (or software) raytracer just to solve OIT is a bad idea.

 

I had the same feeling.

 


 

I decided to follow your suggestion and go on with depth peeling. Currently I am trying to implement a simple version of it, that is, just the original algorithm (no dual peeling) and without occlusion queries.
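Just to check that I understood the algorithm, the loop I have in mind looks roughly like the sketch below. The helper names and resources are made up, and the actual peel test would live in a fragment shader that discards fragments at or in front of the previous layer's depth.

#include <GL/glew.h>   // or whatever GL loader you use

// Hypothetical resources/helpers for this sketch:
GLuint peelFbo[2];        // two FBOs, each with its own depth texture attached
GLuint peelDepthTex[2];   // the depth textures used for the peel test
GLuint layerColorTex[8];  // one color texture per peeled layer (numLayers <= 8 here)
void drawTransparentGeometry();   // issues the draw calls
void clearDepthTexture(GLuint);   // clears a depth texture to 0.0 (the near plane)

// Classic depth peeling, no dual peeling, no occlusion queries:
// just a fixed number of passes. Each pass keeps the nearest fragment
// that lies strictly behind the depth captured in the previous pass.
void peelLayers(int numLayers)
{
    // Make sure the first pass discards nothing: its "previous" depth is 0.0.
    clearDepthTexture(peelDepthTex[1]);

    for (int i = 0; i < numLayers; ++i)
    {
        const int curr = i % 2;        // depth written during this pass
        const int prev = 1 - curr;     // depth peeled in the previous pass

        glBindFramebuffer(GL_FRAMEBUFFER, peelFbo[curr]);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, layerColorTex[i], 0);
        glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        // The previous pass's depth is bound as a texture (not attached to the
        // current FBO, so there is no feedback loop). The fragment shader
        // discards fragments with gl_FragCoord.z <= that depth, and the normal
        // depth test then keeps the nearest *remaining* fragment.
        glActiveTexture(GL_TEXTURE1);
        glBindTexture(GL_TEXTURE_2D, peelDepthTex[prev]);

        glEnable(GL_DEPTH_TEST);
        glDisable(GL_BLEND);
        drawTransparentGeometry();
    }
    // layerColorTex[0..numLayers-1] now hold the layers front to back,
    // ready to be composited (e.g. back to front as described above).
}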

 

I just wonder if I can implement it without shaders. Is that possible? I ask because I have only a rough knowledge of shaders, and this would require taking a break to dig deeper into them before moving on.
