Hybrid Ray Tracing Feasibility

11 comments, last by Krypt0n 11 years, 9 months ago
Do you think it is feasible to render a complex scene containing both volumetric (SVO-based) models and polygonal models via ray tracing at interactive frame rates (~30 fps)? And if it is, would you recommend something like ARB GPU assembly to speed up the calculations? I was going to implement a system where the workload is split between two CPU cores and the GPU: core 0 would handle general-purpose calculations and everything else the OS needs, core 1 would be a dedicated rendering core, and the GPU would also assist core 1 with rendering. Please tell me if this is a foolish idea and/or if I am confused. I would rather not use OpenCL, however.


Can you make a real-time polygonal ray-tracer? Yes.
Can you make a real-time SVO-based ray-tracer? I'm not sure if anyone has yet... John Carmack seemed to think so.

I was pondering writing an SVO ray-tracer/renderer for complicated objects in my rasterizing polygonal renderer, but I never got around to it.
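
For what it's worth, the core primitive of any SVO traversal is a ray/box intersection that you apply to each child node as you descend the octree. Below is a minimal GLSL sketch of the standard slab test; the function and parameter names are my own, and a robust version would also need to handle zero ray-direction components.

// Ray vs. axis-aligned box (slab test). invRayDir = 1.0 / rayDir, precomputed.
bool intersectAABB(vec3 rayOrigin, vec3 invRayDir,
                   vec3 boxMin, vec3 boxMax,
                   out float tNear, out float tFar)
{
    // Intersect the ray with the three pairs of axis-aligned planes.
    vec3 t0 = (boxMin - rayOrigin) * invRayDir;
    vec3 t1 = (boxMax - rayOrigin) * invRayDir;
    vec3 tLo = min(t0, t1);
    vec3 tHi = max(t0, t1);
    tNear = max(max(tLo.x, tLo.y), tLo.z);   // latest entry across the three slabs
    tFar  = min(min(tHi.x, tHi.y), tHi.z);   // earliest exit
    // Hit if the entry/exit interval is non-empty and not entirely behind the ray.
    return tNear <= tFar && tFar >= 0.0;
}
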
I don't see why not, assuming you implement it well enough and run it on a pretty powerful GPU. Splitting the work between the CPU and GPU is possible but tricky: you have to be very careful not to stall either processor, since you get maximum efficiency when they're both working in parallel. However, this is difficult due to the long latencies you can experience with GPU processing.

Either way, you definitely do not want to use ARB assembly; that stuff is way old and has been superseded by GLSL. For something like this you'll want maximum programmability and flexibility, which means using the latest profiles and features available in either OpenGL 4 or D3D11.
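
To illustrate the kind of control flow a ray tracer needs (and that full GLSL gives you), here is a minimal fragment-shader sketch that shoots one ray per pixel and sphere-traces a hard-coded distance field. The uniform names and the toy scene are illustrative assumptions, not anyone's actual renderer; the dynamic loop with early exits is exactly the sort of thing old ARB programs struggle with.

#version 330 core
// One ray per pixel, sphere-traced against a toy scene (a unit sphere at the origin).
uniform vec2 uResolution;   // viewport size in pixels (assumed uniform name)
uniform vec3 uCameraPos;    // camera position in world space (assumed uniform name)

out vec4 fragColor;

// Signed distance to the scene; a real renderer would traverse an SVO or mesh BVH here.
float sceneSDF(vec3 p)
{
    return length(p) - 1.0;
}

void main()
{
    // Build a simple pinhole ray from the pixel coordinate.
    vec2 uv = (gl_FragCoord.xy / uResolution) * 2.0 - 1.0;
    vec3 rayDir = normalize(vec3(uv, -1.5));
    float t = 0.0;
    bool hit = false;

    // Sphere tracing: advance by the distance to the nearest surface each step.
    for (int i = 0; i < 128; ++i)
    {
        float d = sceneSDF(uCameraPos + rayDir * t);
        if (d < 0.001) { hit = true; break; }   // close enough: call it a hit
        t += d;
        if (t > 100.0) break;                   // ray left the scene
    }

    fragColor = hit ? vec4(1.0, 0.5, 0.2, 1.0)   // flat shading where we hit
                    : vec4(0.0, 0.0, 0.0, 1.0);  // background
}
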
If the instructions in the ARB assembly language are all I need, would there really be that much of a difference in execution speed and/or compatibility? Here is an ARB instruction reference: http://www.rendergui...om/gpuguide.pdf . Would I be able to write the tracer without using a high-level shading language like GLSL? I would like to use ARB assembly and avoid GLSL, but if it is absolutely necessary, I definitely won't sacrifice performance. I was just wondering why ARB isn't preferable.



Premature optimization. You're deciding that something will be faster, and enough faster to make a difference, before you even know where the major slowdowns are.

Make it work first, then maybe worry about speed.
Both the ARB assembly language and GLSL are compiled at run time by your GPU driver into the GPU's actual native instructions, so the only optimisation you're likely to gain is a little less parsing of your shader code during compilation.
I'd thoroughly test your hypothesis (that ARB asm will reduce shading times) before throwing out a much more modern shading language for that old one.

Regarding the latency problem mentioned above when splitting rendering work across the CPU and GPU: it's best if you can know where the camera/objects will be in two frames' time, and use the CPU to render those frames while you're submitting the GPU work for the current frame. This is pretty simple to do, but it may require you to increase your input latency by buffering user input.
My understanding was that ARB assembly didn't even support most of the newer GL 4.0 features, and at this point it's mostly an artifact of older days before GLSL (as opposed to assembly in D3D, which is still used as an output format by HLSL). I'm not an OpenGL expert, so someone please correct me if I'm wrong.
I guess what I'll do is just use Cg. But my main concern is: will ray-traced graphics still be able to maintain interactive frame rates at 1920x1080 during a full game, or should I just use normal polygon projection methods and abandon the idea?


That depends on the art and the ray-tracing quality.
Quite often you can combine both: rasterize the scene the usual way to get a G-buffer, then ray trace all marked pixels/fragments for reflections and the like.
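
To make that concrete, here is a minimal GLSL sketch of such a full-screen pass. The G-buffer layout, the texture/uniform names, and the "mark" convention (stored in the albedo alpha channel) are all assumptions for illustration, and the trace itself is stubbed out; a real implementation would march the SVO inside traceReflection.

#version 330 core
// Hybrid pass: the scene was rasterized into a G-buffer first; this pass only
// ray traces the fragments that were marked during rasterization.
uniform sampler2D uGBufferAlbedo;   // rgb = albedo, a = "trace me" mark (assumed layout)
uniform sampler2D uGBufferNormal;   // xyz = world-space normal
uniform sampler2D uGBufferPosition; // xyz = world-space position
uniform vec3      uCameraPos;

in  vec2 vUV;       // full-screen triangle/quad UV from the vertex shader
out vec4 fragColor;

// Stub for the real volume/SVO trace; returns a simple sky gradient in this sketch.
vec3 traceReflection(vec3 origin, vec3 dir)
{
    return mix(vec3(0.2, 0.3, 0.5), vec3(0.8), 0.5 + 0.5 * dir.y);
}

void main()
{
    vec4 albedoMark = texture(uGBufferAlbedo, vUV);
    vec3 baseColor  = albedoMark.rgb;

    // Unmarked pixels keep their rasterized shading; no rays are spent on them.
    if (albedoMark.a < 0.5)
    {
        fragColor = vec4(baseColor, 1.0);
        return;
    }

    // Marked pixels: rebuild the view ray from the G-buffer and trace a reflection.
    vec3 P = texture(uGBufferPosition, vUV).xyz;
    vec3 N = normalize(texture(uGBufferNormal, vUV).xyz);
    vec3 V = normalize(P - uCameraPos);            // camera -> surface direction
    vec3 R = reflect(V, N);                        // mirror reflection direction

    vec3 reflection = traceReflection(P + N * 0.01, R);   // offset avoids self-hits
    fragColor = vec4(mix(baseColor, reflection, 0.5), 1.0);
}
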
Then do you think I could have a ray tracer written only in Cg?


This topic is closed to new replies.
