
elect

Member Since 22 May 2013
Offline Last Active Jun 27 2013 01:33 AM

Topics I've Started

Can depth peeling be implemented without any shader?

10 June 2013 - 05:53 AM

I don't know shaders, but I have a basic idea of how they work.

 

I need to implement depth peeling, so I would like to know whether I should first dive deeper into the shader world, or whether it could be implemented without shaders, just by using glDepthFunc smartly.
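To make the question concrete, here is a hedged CPU sketch of what one depth-peeling pass does per pixel (all names and data are invented for illustration; this is not OpenGL code). The point it tries to show: every pass needs *two* depth comparisons per fragment, the ordinary "nearest wins" test plus a test against the depth peeled in the previous pass, and glDepthFunc only gives you one of them.

```python
def peel_layers(fragments, width, height, max_layers=4):
    """Illustrative depth peeling on the CPU.
    fragments: list of (x, y, depth, color); smaller depth = closer.
    Returns one (depth, color) buffer pair per peeled layer, front to back."""
    layers = []
    prev_depth = [[-1.0] * width for _ in range(height)]  # nothing peeled yet
    for _ in range(max_layers):
        depth = [[float("inf")] * width for _ in range(height)]
        color = [[None] * width for _ in range(height)]
        hit = False
        for x, y, z, c in fragments:
            # Second depth test: discard fragments already peeled in earlier passes.
            if z <= prev_depth[y][x]:
                continue
            # Ordinary depth test: keep the nearest of the remaining fragments.
            if z < depth[y][x]:
                depth[y][x] = z
                color[y][x] = c
                hit = True
        if not hit:
            break  # no fragments left at any pixel
        layers.append((depth, color))
        prev_depth = depth
    return layers
```

That second test against `prev_depth` is exactly what the fixed-function pipeline lacks; as far as I know, pre-shader implementations emulated it with depth textures and the alpha test rather than with glDepthFunc alone.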

 


Hardware/Software rasterizer vs Ray-tracing

23 May 2013 - 03:04 AM

Hi people :),

 
I saw the High-Performance Graphics presentation "High-Performance Software Rasterization on GPUs" and I was very impressed by the work/analysis/comparison. It looks amazing.
 
 
My background is CUDA; I started learning OpenGL two years ago to develop the 3D interface of EMM-Check, a field-of-view analysis program that checks whether a vehicle meets a specific standard. Essentially you load a vehicle (or its different parts), move it as a whole or part by part, add mirrors/cameras, analyze the driver's point of view and shadows, etc.
 
We deal with some transparent elements (mainly the fields of view, but the vehicles themselves may be transparent too), so I wrote a rough algorithm that sorts the elements to be rendered on the fly (at primitive level, a kind of painter's algorithm). Of course there are cases in which it easily fails, although it is good enough for most of them.
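The kind of primitive-level sort I mean can be sketched like this (a minimal illustration with invented names, not the actual EMM-Check code): triangles are drawn back to front, ordered by the depth of their centroid along the view axis.

```python
def painters_order(triangles):
    """Illustrative painter's-algorithm sort.
    triangles: list of three (x, y, z) vertices; larger z = farther from the camera.
    Returns triangle indices in back-to-front draw order."""
    def centroid_z(tri):
        # Average z of the three vertices approximates the triangle's depth.
        return sum(v[2] for v in tri) / 3.0
    return sorted(range(len(triangles)),
                  key=lambda i: centroid_z(triangles[i]),
                  reverse=True)
```

This is also where it fails: intersecting triangles, or triangles that overlap cyclically, have no correct back-to-front order by centroid depth, which matches the failure cases I mentioned above.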
 
For this reason I started googling and found many techniques, like (dual) depth peeling, the A/R/K/F-buffer, etc.
 
But it looks like all of them suffer at high resolutions and/or with a large number of triangles.
 
Since we also deal with millions of triangles (up to 10 million, more or less), I was looking for something else and ended up at software renderers: compared to the hardware ones they offer free programmability, but they are slower.
 
So I wonder if it might be possible to implement something hybrid: use the hardware renderer for the opaque elements and the software one for the transparent elements, then combine the two results.
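The combining step I have in mind could look roughly like this for a single pixel (a hedged sketch with invented names, assuming the hardware pass yields an opaque color and depth, and the software pass yields the transparent fragments): fragments behind the opaque surface are discarded, and the rest are blended back to front over the opaque color.

```python
def composite_pixel(opaque_color, opaque_depth, transparent_frags):
    """Illustrative hybrid compositing for one pixel.
    opaque_color: (r, g, b) from the hardware pass.
    opaque_depth: depth from the hardware pass; smaller depth = closer.
    transparent_frags: list of (depth, (r, g, b), alpha) from the software pass."""
    # Depth-test the transparent fragments against the opaque depth buffer.
    visible = [f for f in transparent_frags if f[0] < opaque_depth]
    # Blend back to front with the standard "over" operator.
    color = opaque_color
    for depth, (r, g, b), a in sorted(visible, key=lambda f: f[0], reverse=True):
        color = tuple(a * src + (1.0 - a) * dst
                      for src, dst in zip((r, g, b), color))
    return color
```

In a real implementation the hardware pass would have to export its depth buffer to the software renderer (or vice versa), which is probably where the interesting obstacles are.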
 
Or maybe a simple ray-tracing algorithm in CUDA/OpenCL (no complex visual effects required, just position, color and simple lighting) might be much simpler from this point of view and also give us a lot of freedom/flexibility in the future?
 
I did not find anything on the net regarding this... is there some particular obstacle?
 
I would like to hear every single thought/tip/idea/suggestion that you have regarding this.
 
 
PS: I also found "Single Pass Depth Peeling via CUDA Rasterizer" by Liu, but the solution from the first paper seems far faster.
