
Hardware/Software rasterizer vs Ray-tracing


Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

2 replies to this topic

#1 elect   Members   -  Reputation: 115


Posted 23 May 2013 - 03:04 AM

Hi people :),

 
I saw the presentation "High-Performance Software Rasterization on GPUs" at High-Performance Graphics and I was very impressed by the work/analysis/comparison; it looks amazing.
 
 
My background was CUDA; then I started learning OpenGL two years ago to develop the 3D interface of EMM-Check, a field-of-view analysis program that checks whether a vehicle fulfills a specific standard or not. Essentially you load a vehicle (or different parts of it), then you can move it as a whole or piece by piece, add mirrors/cameras, analyze the fields of view and the shadows from the driver's point of view, etc.
 
We are dealing with some transparent elements (mainly the fields of view, but the vehicles themselves may be transparent too), so I wrote a rough algorithm to sort the elements to be rendered on the fly (at primitive level, a kind of painter's algorithm). Of course there are cases in which it easily fails, although it is enough for most of them.
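For context, the primitive-level sort described above reduces to something like the following CPU sketch (a minimal illustration under assumed names and conventions, not the actual EMM-Check code):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

struct Vec3 { float x, y, z; };

// One transparent primitive in view space (camera at the origin, looking down -z).
// "id" is a hypothetical handle back into the scene data.
struct Tri { Vec3 a, b, c; int id; };

// Depth key: centroid z in view space. Farther from the camera = more negative z,
// so back-to-front order means ascending z.
static float centroidZ(const Tri& t) {
    return (t.a.z + t.b.z + t.c.z) / 3.0f;
}

// Painter's algorithm at primitive level: sort back-to-front, then draw in order.
// As noted above, this fails for intersecting or cyclically overlapping triangles.
void sortBackToFront(std::vector<Tri>& tris) {
    std::stable_sort(tris.begin(), tris.end(),
                     [](const Tri& a, const Tri& b) { return centroidZ(a) < centroidZ(b); });
}
```

After the sort, the triangles are simply submitted to the renderer in vector order.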
 
For this reason I started googling and found many techniques, like (dual) depth peeling, the A/R/K/F-buffer, etc.
 
But it looks like all of them suffer at high resolutions and/or with large numbers of triangles.
 
Since we also deal with millions of triangles (up to 10 million, more or less), I was looking for something else and ended up at software renderers: compared to the hardware ones, they offer free programmability, but they are slower.
 
So I wonder if it might be possible to implement something hybrid, that is, use the hardware renderer for the opaque elements and the software one for the transparent elements, and then combine the two results.
 
Or maybe a simple ray-tracing algorithm in CUDA/OpenCL (no complex visual effects required, just position, color and simple lighting) might be much simpler from this point of view, and also give us a lot of freedom/flexibility in the future?
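To illustrate why such a ray tracer handles transparency naturally: each ray can collect every surface it crosses and composite them in distance order, so no global sorting or peeling is needed. A minimal CPU sketch of that core (Möller–Trumbore intersection plus per-ray hit sorting; all names hypothetical, no lighting included):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <optional>
#include <utility>
#include <vector>

struct Vec3 { float x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct Tri { Vec3 a, b, c; float rgba[4]; }; // premultiplied color + alpha

// Moller-Trumbore ray/triangle test; returns the hit distance t, or nothing.
std::optional<float> intersect(Vec3 orig, Vec3 dir, const Tri& tri) {
    Vec3 e1 = sub(tri.b, tri.a), e2 = sub(tri.c, tri.a);
    Vec3 p = cross(dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < 1e-8f) return std::nullopt; // ray parallel to triangle
    float inv = 1.0f / det;
    Vec3 s = sub(orig, tri.a);
    float u = dot(s, p) * inv;
    if (u < 0.0f || u > 1.0f) return std::nullopt;
    Vec3 q = cross(s, e1);
    float v = dot(dir, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return std::nullopt;
    float t = dot(e2, q) * inv;
    if (t <= 1e-6f) return std::nullopt;
    return t;
}

// Transparency comes "for free": gather every hit along the ray, sort by distance,
// then composite front-to-back with the premultiplied "over" operator.
void shadeRay(Vec3 orig, Vec3 dir, const std::vector<Tri>& tris, float out[4]) {
    std::vector<std::pair<float, const Tri*>> hits;
    for (const Tri& t : tris)
        if (auto d = intersect(orig, dir, t)) hits.push_back({*d, &t});
    std::sort(hits.begin(), hits.end(),
              [](const auto& a, const auto& b) { return a.first < b.first; });
    out[0] = out[1] = out[2] = out[3] = 0.0f;
    for (const auto& h : hits) {
        float transmit = 1.0f - out[3]; // light not yet absorbed by nearer layers
        for (int i = 0; i < 4; ++i)
            out[i] += transmit * h.second->rgba[i];
    }
}
```

Simple lighting would just scale each triangle's rgba before compositing; a real CUDA/OpenCL version would also need an acceleration structure (BVH/grid) to cope with millions of triangles.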
 
I did not find anything on the net regarding this... is there maybe some particular obstacle?
 
I would like to hear every single thought/tip/idea/suggestion that you have regarding this.
 
 
PS: I also found "Single Pass Depth Peeling via CUDA Rasterizer" by Liu, but the solution from the first paper seems far faster.

Edited by elect, 23 May 2013 - 05:08 AM.



#2 Geometrian   Crossbones+   -  Reputation: 1432


Posted 23 May 2013 - 08:41 AM

I feel like implementing a GPU (or software) raytracer just to solve OIT is a bad idea. GPU raytracers are extremely powerful, but they have drawbacks as you saw. I would have initially recommended trying to composite together a hybrid approach, but at that point it makes more sense just to jump into a full raytracing solution. You saw the performance figures in those papers; that's a big step for flexibility.

The solutions to OIT are classic and well-known, but more importantly by contrast they are simpler and faster. You're right, on-the-fly sorting is prone to problems, especially with cyclic geometry, and it doesn't scale well. I suggest you reexamine the depth peeling algorithms, in particular.

You can do this in two passes, if you're careful, by using MRT. In the first pass, render opaque fragments normally into buffer 0. For semi-transparent fragments, do single-pass depth peeling, packing premultiplied RGB and depth into RGBA floating point render targets (buffers 1 to n). For the second pass, just blend the color buffers 1 to n onto buffer 0.

Exactly how the alpha channel is stored (premultiplied or not) may be the subject of some consternation, but that general idea would definitely work. Note that for single-pass depth peeling, you'll need shader mutexes. Here's where I made a GLSL fragment program that does single-pass depth peeling.
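To make the second pass concrete, here is a minimal per-pixel CPU sketch of the resolve step under the premultiplied-alpha assumption (names and buffer layout are assumptions, not the actual shader code):

```cpp
#include <array>
#include <cassert>
#include <cmath>
#include <vector>

using RGBA = std::array<float, 4>; // premultiplied color, as suggested above

// Pass 2 of the MRT scheme, per pixel: buffers 1..n hold the peeled transparent
// layers, layer 1 nearest the camera. Blend them back-to-front over the opaque
// buffer 0 with the premultiplied "over" operator: dst = src + (1 - src.a) * dst.
RGBA resolvePixel(RGBA opaque, const std::vector<RGBA>& layers) {
    RGBA dst = opaque;
    for (auto it = layers.rbegin(); it != layers.rend(); ++it) { // farthest first
        const RGBA& src = *it;
        for (int i = 0; i < 4; ++i)
            dst[i] = src[i] + (1.0f - src[3]) * dst[i];
    }
    return dst;
}
```

On the GPU this is just n fullscreen blend passes with `glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA)`, which is why premultiplied storage keeps the resolve so cheap.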


And a Unix user said rm -rf *.* and all was null and void...|There's no place like 127.0.0.1|The Application "Programmer" has unexpectedly quit. An error of type A.M. has occurred.

#3 elect   Members   -  Reputation: 115


Posted 10 June 2013 - 06:18 AM

I feel like implementing a GPU (or software) raytracer just to solve OIT is a bad idea.

 

I had the same feeling.

 

GPU raytracers are extremely powerful, but they have drawbacks as you saw. I would have initially recommended trying to composite together a hybrid approach, but at that point it makes more sense just to jump into a full raytracing solution. You saw the performance figures in those papers; that's a big step for flexibility.

The solutions to OIT are classic and well-known, but more importantly by contrast they are simpler and faster. You're right, on-the-fly sorting is prone to problems, especially with cyclic geometry, and it doesn't scale well. I suggest you reexamine the depth peeling algorithms, in particular.

You can do this in two passes, if you're careful, by using MRT. In the first pass, render opaque fragments normally into buffer 0. For semi-transparent fragments, do single-pass depth peeling, packing premultiplied RGB and depth into RGBA floating point render targets (buffers 1 to n). For the second pass, just blend the color buffers 1 to n onto buffer 0.

Exactly how the alpha channel is stored (premultiplied or not) may be the subject of some consternation, but that general idea would definitely work. Note that for single-pass depth peeling, you'll need shader mutexes. Here's where I made a GLSL fragment program that does single-pass depth peeling.

 

I decided to follow your suggestion and go on with depth peeling. Currently I am trying to implement a simple version of it, that is, just the original algorithm (not the dual version) without occlusion queries.

 

I just wonder if I can implement it without shaders. Is that possible? I ask because I have only a rough knowledge of them, and this would require a break to dig deeper into shaders before moving on.
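Whatever mechanism supplies the second depth test, the peeling loop itself reduces, per pixel, to repeatedly keeping the nearest fragment strictly behind the previously peeled depth. A minimal CPU sketch of that logic (hypothetical names; smaller depth = nearer the camera):

```cpp
#include <cassert>
#include <limits>
#include <vector>

struct Frag { float depth; float rgba[4]; }; // all fragments covering one pixel

// Pass k keeps the nearest fragment strictly behind the depth captured in
// pass k-1 -- this comparison against "prev" is the second depth test that
// the shader (or equivalent hardware path) has to provide.
std::vector<Frag> peel(const std::vector<Frag>& frags, int maxLayers) {
    std::vector<Frag> layers;
    float prev = -std::numeric_limits<float>::infinity();
    for (int k = 0; k < maxLayers; ++k) {
        const Frag* best = nullptr;
        for (const Frag& f : frags)                 // "render the scene again"
            if (f.depth > prev && (!best || f.depth < best->depth))
                best = &f;                          // nearest fragment behind prev
        if (!best) break;                           // an occlusion query would stop here
        layers.push_back(*best);
        prev = best->depth;
    }
    return layers;
}
```

The returned layers are in front-to-back order, ready to be composited as in the earlier reply.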








