ray trace

Started by
10 comments, last by InvalidPointer 11 years, 9 months ago
Hi !

What do you think about structuring a ray tracing engine as:

ray depth 0 - rasterization

ray depth 1 - ray tracing

?

And furthermore, I have some ideas about optimizations for ray depth 1.

If you're interested, I'll write them up.

This is (AFAIK) how nVidia's OptiX technology works; the advantages of rasterization for first-bounce rays are pretty well-documented.
clb: At the end of 2012, the positions of jupiter, saturn, mercury, and deimos are aligned so as to cause a denormalized flush-to-zero bug when computing earth's gravitational force, slinging it to the sun.
I don't know. It's nice for now, as it takes advantage of the very fast rasterization hardware already present in GPUs, but once (and if) special-purpose ray-tracing hardware is developed, this step should become unnecessary. It's also quite redundant on any hardware other than graphics cards. I suppose it's good to have, but the ray-tracing problem has pretty much been solved in theory; all we need now is cheap and efficient ray-tracing hardware so that it can actually compete on equal grounds with real-time rasterization.

That said, the one advantage of rasterization is that the memory access pattern is completely predictable (which is partly why it is so fast on special-purpose hardware), whereas in ray-tracing it is rather random, so it makes sense to use rasterization on high-latency/high-throughput memory hardware like GDDR5.

Another is that the performance of rasterization does not depend too much on resolution, but it does depend on scene complexity linearly (it's the opposite for ray-tracing). As scenes get more and more complex rasterization may become prohibitively expensive even with rasterization hardware, forcing a switch to fully ray-traced graphics (but I doubt it, as user resolution increases considerably faster than scene complexity).

A drawback of rasterization is that it cannot represent implicit surfaces without triangulating them explicitly first (and hence paying the storage cost). Ray tracing can, although depending on the surface's complexity the analytical intersection formula may be too expensive to compute anyway; in practice this works best for spheres, cylinders, tori, and other simple shapes.
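For those simple shapes the analytic intersection really is cheap. As an illustration (not code from this thread), a ray/sphere test reduces to solving a quadratic in the ray parameter t:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 {
    double x, y, z;
    Vec3 operator-(const Vec3& b) const { return {x - b.x, y - b.y, z - b.z}; }
};

double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Analytic ray/sphere intersection: returns the nearest hit distance t >= 0,
// or -1.0 on a miss.  Assumes dir is normalized.
double raySphere(const Vec3& orig, const Vec3& dir,
                 const Vec3& center, double radius) {
    Vec3 oc = orig - center;
    double b = dot(oc, dir);                  // half of the usual 'b' term
    double c = dot(oc, oc) - radius * radius;
    double disc = b * b - c;                  // quarter discriminant
    if (disc < 0.0) return -1.0;              // ray misses the sphere
    double s = std::sqrt(disc);
    double t = -b - s;                        // nearest root first
    if (t < 0.0) t = -b + s;                  // ray origin inside the sphere
    return (t < 0.0) ? -1.0 : t;              // sphere entirely behind origin
}
```

For a torus the same approach yields a quartic, which is where the "too expensive anyway" point starts to bite.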

[quote]And furthermore, I have some ideas about optimizations for ray depth 1.[/quote]
Don't hesitate to post them, I'm sure there are many people on this board interested (including me)!

“If I understand the standard right it is legal and safe to do this but the resulting value could be anything.”

Bacterius, OK. For ray depth 1:

1) Polygons on screen are divided into small squares (around 16 pixels each, for example).

2) All vertices of all polygons are traced using a SAH BVH.

3.1) For each subpolygon, find the deepest BVH node that contains all of the subpolygon's vertices.

3.2) Trace the remaining points of each subpolygon starting from that BVH node instead of the root.
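Steps 3.1 and 3.2 amount to finding, per subpolygon, the deepest BVH node whose bounds still contain every traced vertex, and restarting traversal there instead of at the root. A minimal CPU sketch of the step-3.1 search, with a hypothetical node layout (the poster's actual code is CUDA):

```cpp
#include <array>
#include <cassert>
#include <vector>

struct AABB {
    double lo[3], hi[3];
    bool contains(const double* p) const {
        for (int i = 0; i < 3; ++i)
            if (p[i] < lo[i] || p[i] > hi[i]) return false;
        return true;
    }
};

// A binary BVH node; leaves have null children.
struct Node {
    AABB box;
    const Node* left = nullptr;
    const Node* right = nullptr;
};

// Step 3.1: descend from the root as long as a single child still contains
// every vertex of the subpolygon.  The returned node is where step 3.2 can
// restart traversal for the remaining points, skipping the upper tree levels.
const Node* deepestCommonNode(const Node* root,
                              const std::vector<std::array<double, 3>>& pts) {
    const Node* n = root;
    while (n->left && n->right) {
        auto allInside = [&](const Node* child) {
            for (const auto& p : pts)
                if (!child->box.contains(p.data())) return false;
            return true;
        };
        if (allInside(n->left))       n = n->left;
        else if (allInside(n->right)) n = n->right;
        else break;  // vertices straddle both children: stop here
    }
    return n;
}
```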

That's the theory. )) But I have a working example for ray depth 0 that uses triangles instead of BVH nodes, and I get a speedup of around 7.5x (for camera distance 20.5).

In detail:
1) Ray trace every fourth pixel horizontally and vertically ([1280x960] -> [320x240]).
2) If all four traced pixels of a block intersect the same triangle, check the intersection for the other 12 pixels against that triangle only.
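Those two steps can be sketched like this for one 4x4 block; `traceFull` and `testOne` are hypothetical stand-ins for the full scene trace and the single-triangle test (the real version is a CUDA kernel):

```cpp
#include <cassert>
#include <functional>
#include <vector>

// Hypothetical hit record: which triangle a primary ray struck (-1 = miss).
struct Hit { int triangle = -1; };

// Sketch of the block optimization for one 4x4 pixel tile.
// traceFull: trace a primary ray through the whole scene (expensive path).
// testOne:   intersect one ray against a single known triangle (cheap path).
void traceTile(int x0, int y0,
               const std::function<Hit(int, int)>& traceFull,
               const std::function<bool(int, int, int)>& testOne,
               std::vector<std::vector<Hit>>& out) {
    // 1) Trace only the four corner pixels of the tile.
    Hit c00 = traceFull(x0,     y0);
    Hit c30 = traceFull(x0 + 3, y0);
    Hit c03 = traceFull(x0,     y0 + 3);
    Hit c33 = traceFull(x0 + 3, y0 + 3);
    out[y0][x0]         = c00;  out[y0][x0 + 3]     = c30;
    out[y0 + 3][x0]     = c03;  out[y0 + 3][x0 + 3] = c33;

    bool coherent = c00.triangle >= 0 &&
                    c00.triangle == c30.triangle &&
                    c00.triangle == c03.triangle &&
                    c00.triangle == c33.triangle;

    // 2) Fill in the remaining 12 pixels.
    for (int y = y0; y < y0 + 4; ++y)
        for (int x = x0; x < x0 + 4; ++x) {
            if ((x == x0 || x == x0 + 3) && (y == y0 || y == y0 + 3))
                continue;  // corners already traced above
            if (coherent && testOne(x, y, c00.triangle))
                out[y][x] = c00;              // cheap single-triangle hit
            else
                out[y][x] = traceFull(x, y);  // fall back to the full trace
        }
}
```

In the coherent case each tile costs four full traces plus twelve single-triangle tests, which is where a speedup on the order of the reported 7.5x could plausibly come from.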

Base ray tracer, CUDA (press the download button):
http://www.gamedev.ru/files/?id=71921

Optimized version, as I described in detail above, CUDA (press the download button):
http://www.gamedev.ru/files/?id=78149
(may have bugs on some devices)

I also have an OpenCL version, but I haven't completed the optimizations for it yet.


InvalidPointer, where can I read about rasterization before ray tracing for GPGPU?

I've only seen http://research.nvid...terization-gpus

I haven't seen any optimizations for ray depth 0 in OptiX ver 2.1.1.


[quote]
Hi !

What do you think about structuring a ray tracing engine as:

ray depth 0 - rasterization
ray depth 1 - ray tracing

?
[/quote]



I've worked extensively on that approach in the past, and I can tell you it works well. Rasterizing the scene for depth = 0 gives you a nice performance improvement because it allows you to skip one tracing step. However, depth = 0 is the easiest and least expensive step because the rays are coherent. For depth > 0, the rays can get highly incoherent and will be the bottleneck of the rendering, so the advantage of using rasterization for the first step may become a lesser optimization.
Of course, all this depends on what kind of rendering complexity you're aiming at. In my case, I used ray tracing to complement the GPU rendered scene with a single bounce of ray-traced reflections which worked pretty well and fast. If you're interested, check my master thesis for details: http://voltaico.net/serenity.html
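To illustrate the idea of such a hybrid (a generic sketch, not code from the thesis): the rasterizer's G-buffer already gives you the depth-0 hit position and normal per pixel, so the depth-1 reflection ray direction is just the view direction mirrored about the stored normal:

```cpp
#include <cassert>

struct Vec3 { double x, y, z; };

Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 operator*(double s, Vec3 v) { return {s * v.x, s * v.y, s * v.z}; }
double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// The rasterizer computed the depth-0 hit for us: the G-buffer stores the
// hit position and surface normal per pixel.  The depth-1 reflection ray is
// the view direction mirrored about that normal, so no root-to-leaf trace
// is needed for the first bounce at all.
Vec3 reflectionDir(Vec3 viewDir /* unit, camera -> surface */,
                   Vec3 normal  /* unit surface normal */) {
    return viewDir - 2.0 * dot(viewDir, normal) * normal;
}
```

Only the ray starting at the G-buffer position along this direction then goes through the (incoherent, expensive) tracing step.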
jcabeleira, how do you combine rasterization and ray tracing?
Do you run the rasterization on the GPGPU?
No reflections are visible in your screenshots.
What do you think about the depth-1 optimization I described?
Might I suggest using beam tracing if you're going to go this route. Microsoft actually has a paper on it. That's the non-linear version. The linear version is still complex. Tends to outperform raytracing solutions.
Excuse my ignorance, does depth 0 mean areas where no ray hits, e.g. the skybox? Or does it mean the first hit for each pixel and then we continue raytracing from there?

[quote]
Bacterius, OK. For ray depth 1:

1) Polygons on screen are divided into small squares (around 16 pixels each, for example).

2) All vertices of all polygons are traced using a SAH BVH.

3.1) For each subpolygon, find the deepest BVH node that contains all of the subpolygon's vertices.

3.2) Trace the remaining points of each subpolygon starting from that BVH node instead of the root.
...
[/quote]


You can't assume that if I trace a ray A in the general direction/origin of a bunch of other rays B, I only have to test intersection vs objects close to the first intersection of rays B.
I do something like that: I use the usual DX11 pipeline to rasterize the scene, I also create a voxel space, and then (using DirectCompute) I ray trace through the voxel space to create ambient occlusion:
http://twitpic.com/8iohd5/full
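A minimal CPU sketch of that kind of voxel-space occlusion trace (hypothetical uniform grid with fixed-step marching; the real version runs in a DirectCompute shader and would use proper DDA traversal and many hemisphere sample directions):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Assumed uniform n x n x n occupancy grid built by voxelizing the scene.
struct Grid {
    int n;                    // grid resolution per axis
    std::vector<bool> solid;  // occupancy, indexed x + n*(y + n*z)
    bool at(int x, int y, int z) const {
        if (x < 0 || y < 0 || z < 0 || x >= n || y >= n || z >= n)
            return false;     // outside the grid counts as empty
        return solid[x + n * (y + n * z)];
    }
};

// Returns 1.0 if the ray from (ox,oy,oz) along (dx,dy,dz) hits a solid
// voxel within maxSteps unit steps (occluded), else 0.0 (sky visible).
double occlusion(const Grid& g, double ox, double oy, double oz,
                 double dx, double dy, double dz, int maxSteps) {
    for (int i = 1; i <= maxSteps; ++i) {
        int x = (int)std::floor(ox + dx * i);
        int y = (int)std::floor(oy + dy * i);
        int z = (int)std::floor(oz + dz * i);
        if (g.at(x, y, z)) return 1.0;
    }
    return 0.0;  // ray escaped: this direction contributes ambient light
}
```

Averaging this over a hemisphere of directions per pixel gives the AO term.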

I have been working for some months on a more advanced version, using tracing for the whole lighting solution as well as for transparent objects and volumetric effects. But as jcabeleira said, the first hit is the simplest and fastest pass. Besides that, it is also quite important for anti-aliasing: you either have to render at a very high resolution and trace sub-pixel rays, or you end up with aliasing. A related problem is that HDR rendering can produce neighbouring pixels with very different magnitudes of light intensity; without the high AA that you can really only get through ray tracing, you will see aliasing. You can of course try the hacky way of separating tonemapping and resolve before gamma correction, but that kind of defeats the purpose of using ray tracing (high-quality, correct images).

There are some games that trace in screen space through the G-buffer for some local reflections and more advanced occlusion/shadow checks.

This topic is closed to new replies.
