coordinating raytracing with rasterization

Started by
8 comments, last by Bacterius 10 years, 2 months ago

I am tinkering with my own basic raytracing code, and I got the idea that mixing raytracing with rasterization could be a good idea.

There is a problem, though, of coordinating the raytracing of a figure (I mainly use spheres and triangles) with the rasterization of the same figure - coordinating them so that both give pixel-area-coherent results: raytracing an object and rasterizing it should cover exactly the same pixel area.

Is it hard to rasterize a projection (of a triangle or a sphere) that covers exactly the same "pixel area" as raytracing that figure? Could someone tell me what this projection formula would be?

(When raytracing, the rays I shoot come from a simple geometric setup: for example, I assume my eye is 1 meter from the screen, and the screen is 40 cm wide and 30 cm tall. That gives me the coordinates of each pixel - the upper-left one would be just (-0.20, -0.15, +1.0) - and the ray direction is simply (-0.20, -0.15, +1.0) normalized. What kind of rasterization projection would be ideally coherent with those rays?)

Thanks for any answers.

PS. Casting single vertices is probably easy (the way I do it in my rasterizer is stretching x/y by the proportion of the object-to-eye / screen-to-eye distances, a very basic projection) - but do triangle edges stay straight when raytracing a triangle? (I have no triangles in my raytracing code, so I cannot check, but I do have spheres, and I clearly see that the spheres are not round here, especially if I make the screen wide and put the eye close to it; then they come out more like rugby balls or something like that.)
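For reference, here is a minimal sketch of a pinhole setup consistent with the numbers quoted above (the function names and the 400x300 resolution are made up for illustration; this is not the poster's code). The projection that is coherent with those rays is simply the perspective divide onto the plane z = d, i.e. sx = x*d/z, sy = y*d/z, followed by mapping screen metres to pixels. Under such a projection straight triangle edges do stay straight, since perspective maps straight lines to straight lines, while the silhouette of a sphere is in general an ellipse unless the sphere sits on the view axis - which matches the "rugby ball" effect described in the PS.

#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };

const double EYE_TO_SCREEN = 1.0;            // metres, as in the post
const double SCREEN_W = 0.40, SCREEN_H = 0.30;
const int    RES_X = 400, RES_Y = 300;       // made-up example resolution

// Ray for pixel (px, py), built exactly as described above: origin at the eye,
// direction towards the pixel centre on the physical screen. Pixel (0, 0) sits
// near the (-0.20, -0.15, +1.0) corner from the example.
Vec3 primaryRayDir(int px, int py) {
    double sx = -SCREEN_W / 2 + (px + 0.5) * (SCREEN_W / RES_X);
    double sy = -SCREEN_H / 2 + (py + 0.5) * (SCREEN_H / RES_Y);
    double len = std::sqrt(sx*sx + sy*sy + EYE_TO_SCREEN*EYE_TO_SCREEN);
    Vec3 d = { sx / len, sy / len, EYE_TO_SCREEN / len };
    return d;
}

// The rasterization projection that is coherent with those rays: intersect the
// eye->P line with the plane z = EYE_TO_SCREEN (the perspective divide), then
// map screen metres back to pixel coordinates.
void projectVertex(const Vec3& p, double& px, double& py) {
    double sx = p.x * EYE_TO_SCREEN / p.z;
    double sy = p.y * EYE_TO_SCREEN / p.z;
    px = (sx + SCREEN_W / 2) / SCREEN_W * RES_X - 0.5;
    py = (sy + SCREEN_H / 2) / SCREEN_H * RES_Y - 0.5;
}

int main() {
    Vec3 p = { 0.10, 0.05, 2.0 };            // some point in front of the camera
    double px, py;
    projectVertex(p, px, py);
    std::printf("projects to pixel (%.2f, %.2f)\n", px, py);
    return 0;
}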


Raytracing means starting from a point, collecting all light that reaches it, and summing all light contributions to obtain a colour. This is rasterization (the points of interest are sample locations corresponding to pixels), so it isn't particularly clear what you want to add and why it cannot be included in raytracing as light sources or interactions between light and objects, what "pixel areas" are involved, and what "coordination" and "coherence" issues might be a concern.

Omae Wa Mou Shindeiru


I want to rasterize the scene in a first pass (writing at least surface numbers into a pixel grid), then use those surface numbers in the raytracer to avoid the costly search over all the surfaces a given ray could touch.

I have heard about space-division trees, but I haven't read about them yet, so I don't even know how they work. I would like to try this rasterization first step instead - the reason being that I already have a rasterizer I wrote about a year ago. The problem is that it turns out my rasterizer is not rasterizing exactly correctly.
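For what it's worth, a rough sketch of the item-buffer idea described here, with spheres only to keep it short (all names are hypothetical; this is not the poster's code): the rasterizer writes one surface index per pixel, the raytracer first tests the primary ray against that one surface, and falls back to the full search when they disagree (which will happen along silhouettes if the rasterizer is not pixel-exact). Note that this trusts the rasterizer's depth test to have found the closest surface, and that secondary rays get no benefit from it - which is what the replies below point out.

#include <cmath>
#include <vector>

struct Vec3   { double x, y, z; };
struct Ray    { Vec3 o, d; };                // d assumed normalized
struct Sphere { Vec3 c; double r; };

const int RES_X = 400, RES_Y = 300;          // same made-up resolution as above
const int NO_SURFACE = -1;

// Standard ray-sphere intersection: nearest t >= 0, or -1 on a miss.
double raySphere(const Ray& ray, const Sphere& s) {
    Vec3 oc = { ray.o.x - s.c.x, ray.o.y - s.c.y, ray.o.z - s.c.z };
    double b = oc.x*ray.d.x + oc.y*ray.d.y + oc.z*ray.d.z;
    double c = oc.x*oc.x + oc.y*oc.y + oc.z*oc.z - s.r*s.r;
    double disc = b*b - c;
    if (disc < 0) return -1;
    double t = -b - std::sqrt(disc);
    if (t < 0) t = -b + std::sqrt(disc);
    return (t >= 0) ? t : -1;
}

// Primary ray for pixel (px, py): try the surface the rasterizer wrote into the
// ID buffer; if that misses (silhouette pixels, rasterizer errors), fall back
// to the brute-force search. Returns the sphere index or NO_SURFACE.
int tracePrimary(const Ray& ray, int px, int py,
                 const std::vector<int>& idBuffer,      // filled by the rasterizer
                 const std::vector<Sphere>& spheres, double& tOut) {
    int id = idBuffer[py * RES_X + px];
    if (id != NO_SURFACE) {
        double t = raySphere(ray, spheres[id]);
        if (t >= 0) { tOut = t; return id; }
    }
    int best = NO_SURFACE;
    double bestT = 1e30;
    for (int i = 0; i < (int)spheres.size(); ++i) {
        double t = raySphere(ray, spheres[i]);
        if (t >= 0 && t < bestT) { bestT = t; best = i; }
    }
    tOut = bestT;
    return best;
}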

I use only triangles/quads and spheres in both the rasterizer and the raytracer, though in the rasterizer:

For triangles:

1) I only transform the vertices to 2D and then do linear interpolation (on the 2D side), including linear interpolation of z - this is probably wrong - and I am looking for help on how to do it so that it gives exactly the same result as the raytracer.
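On the interpolation question: plain screen-space linear interpolation of z (or of any attribute) does not match what the raytracer computes for the same triangle. What does interpolate linearly in screen space is 1/z, and attribute/z; recovering z and the attribute from those gives exactly the depth of the triangle's plane that the raytracer would find. A minimal sketch along one projected edge (hypothetical names, not the poster's code):

// Perspective-correct interpolation between two projected endpoints.
// z0, z1 are camera-space depths at the endpoints, a0, a1 any per-vertex
// attribute, and t is the interpolation factor measured in screen space.
struct ZAttr { double z, attr; };

ZAttr perspectiveLerp(double z0, double a0, double z1, double a1, double t) {
    double invZ   = (1 - t) / z0 + t / z1;                 // 1/z is linear on screen
    double aOverZ = (1 - t) * (a0 / z0) + t * (a1 / z1);   // so is attr/z
    ZAttr out;
    out.z    = 1.0 / invZ;           // matches the camera-space z the raytracer finds
    out.attr = aOverZ * out.z;       // recover the true (perspective-correct) attribute
    return out;
}

The same recipe extends to the whole triangle: interpolate 1/z and attribute/z with the screen-space barycentric weights, then divide.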

For balls:

2) I transformed only the ball's centre point and scaled down r (and also wrote a purely spherical z addition to the depth buffer). This turns out to be totally wrong compared to the raytracer results, which often give ellipses, not round balls/spheres. I was shocked, and now I am looking for help writing proper (raytracer-output-compatible) rasterization routines for the ball and the triangle/quad.
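For the ball: transforming only the centre and scaling r can never agree with the raytracer, because the perspective image of a sphere is in general an ellipse (only a sphere centred on the view axis projects to a circle). One straightforward way to get pixel-exact agreement is to reuse the raytracer's own ray-sphere test as the coverage and depth test, restricted to a conservative screen-space rectangle. A rough sketch, reusing primaryRayDir/projectVertex and raySphere from the sketches earlier in the thread, and assuming the sphere lies entirely in front of the eye:

#include <algorithm>
#include <cmath>
#include <vector>

// Rasterize a sphere so that covered pixels and depths agree exactly with the
// raytracer, by running the same ray-sphere test per pixel inside a
// conservative bounding rectangle (projected corners of the sphere's AABB).
void rasterizeSphere(const Sphere& s, int id,
                     std::vector<double>& depth, std::vector<int>& idBuf) {
    double minX = 1e30, minY = 1e30, maxX = -1e30, maxY = -1e30;
    for (int i = 0; i < 8; ++i) {
        Vec3 corner = { s.c.x + ((i & 1) ? s.r : -s.r),
                        s.c.y + ((i & 2) ? s.r : -s.r),
                        s.c.z + ((i & 4) ? s.r : -s.r) };
        double px, py;
        projectVertex(corner, px, py);
        minX = std::min(minX, px); maxX = std::max(maxX, px);
        minY = std::min(minY, py); maxY = std::max(maxY, py);
    }
    int x0 = std::max(0, (int)std::floor(minX)), x1 = std::min(RES_X - 1, (int)std::ceil(maxX));
    int y0 = std::max(0, (int)std::floor(minY)), y1 = std::min(RES_Y - 1, (int)std::ceil(maxY));

    // Inside the rectangle, coverage and depth come from the exact intersection,
    // so by construction they cannot disagree with the raytracer's primary rays.
    for (int py = y0; py <= y1; ++py)
        for (int px = x0; px <= x1; ++px) {
            Ray ray = { {0, 0, 0}, primaryRayDir(px, py) };
            double t = raySphere(ray, s);
            if (t < 0) continue;                      // pixel centre not covered
            double z = t * ray.d.z;                   // camera-space depth of the hit
            int idx = py * RES_X + px;
            if (z < depth[idx]) { depth[idx] = z; idBuf[idx] = id; }
        }
}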

If I had that, I could use the rasterizer as a first step in the raytracer :U

Thanks for the help if someone has done something similar...

So you want to recycle simple rasterization of triangles and spheres to build acceleration data structures?

Unfortunately, it doesn't seem to be a valid approach.

  • Even a single straight ray interacts with an arbitrarily large number of surfaces and materials, not one. Fitting this sort of variable and unpredictable data in a simple image-like array is difficult.
  • The raytracer has to support geometric queries for any ray, from any point, in any direction, not only the rays from the camera through a single planar grid of pixels which are merely the first step.

On top of that, your rasterization routines are completely wrong. Not using them is just common sense.

Omae Wa Mou Shindeiru


I know it would only be an optimization for primary rays.

As I said, the advantage of this is that I *almost* have my own rasterizer code already, so using it would be trivial.

* almost: as I said, I have no proper ball-rasterizing routine, and I'm not sure whether this 1/z interpolation would work correctly for triangles, so I am asking if someone can explain a bit.


I know it would only be an optimization for primary rays.

Not even that. If you don't compute the intersection point and the corresponding surface normal for each "primary" ray and the first surface it intersects, you are doing ugly approximations, not optimizations. And if you do, there is no way to spare effort compared to a more natural approach.

Moreover, you would be introducing a completely different special mechanism without removing normal raytracing, which means much more work, unavoidable problems (e.g. quality reduction) and no benefits.

And apart from the general consequences of "clever" code you would still need proper spatial indexing data structures to process reflected and refracted rays, which could be used for all rays.

Omae Wa Mou Shindeiru



I do think it would be an optimization for primary rays - testing which triangle collides with such a ray/pixel seems expensive - or do you know some cheap method? Here, for the cost of rasterizing the scene (which should be low - I don't know, probably a few milliseconds), I get that search done. I am only doing simple raytracing for now and, as I said, I don't want to try this because it is good, but because I already *have* my own rasterization code, so I could just use it for free as a test.

(*almost - I would only need to mend my rasterizer :U)
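About the "cheap method": the usual answer is exactly the space-division structures mentioned earlier - a uniform grid, kd-tree, or bounding-volume hierarchy - and unlike the rasterization pass they speed up every ray, not only primary ones. As a taste of how small one can be, here is a hypothetical median-split BVH over spheres (reusing Vec3, Ray, Sphere and raySphere from the sketches above; triangles work the same way with their own bounding boxes):

#include <algorithm>
#include <vector>

struct AABB { Vec3 lo, hi; };

struct BVHNode {
    AABB box;
    int left, right;                 // child node indices, -1 for a leaf
    std::vector<int> items;          // sphere indices (leaves only)
};

double axisOf(const Vec3& v, int a) { return a == 0 ? v.x : (a == 1 ? v.y : v.z); }

AABB sphereBox(const Sphere& s) {
    return { { s.c.x - s.r, s.c.y - s.r, s.c.z - s.r },
             { s.c.x + s.r, s.c.y + s.r, s.c.z + s.r } };
}

AABB merge(const AABB& a, const AABB& b) {
    return { { std::min(a.lo.x, b.lo.x), std::min(a.lo.y, b.lo.y), std::min(a.lo.z, b.lo.z) },
             { std::max(a.hi.x, b.hi.x), std::max(a.hi.y, b.hi.y), std::max(a.hi.z, b.hi.z) } };
}

// Slab test: does the ray touch the box anywhere in [0, tMax]?
bool hitBox(const Ray& r, const AABB& b, double tMax) {
    double t0 = 0, t1 = tMax;
    for (int a = 0; a < 3; ++a) {
        double ro = axisOf(r.o, a), rd = axisOf(r.d, a);
        double lo = axisOf(b.lo, a), hi = axisOf(b.hi, a);
        if (std::fabs(rd) < 1e-12) {                  // parallel to this slab
            if (ro < lo || ro > hi) return false;
            continue;
        }
        double tNear = (lo - ro) / rd, tFar = (hi - ro) / rd;
        if (tNear > tFar) std::swap(tNear, tFar);
        t0 = std::max(t0, tNear);
        t1 = std::min(t1, tFar);
        if (t0 > t1) return false;
    }
    return true;
}

// Recursive median split on the longest axis; returns the new node's index.
int buildBVH(std::vector<BVHNode>& nodes, const std::vector<Sphere>& spheres,
             std::vector<int> idx) {
    BVHNode node;
    node.left = node.right = -1;
    node.box = sphereBox(spheres[idx[0]]);
    for (int i : idx) node.box = merge(node.box, sphereBox(spheres[i]));
    if (idx.size() <= 2) {                            // small leaf
        node.items = idx;
        nodes.push_back(node);
        return (int)nodes.size() - 1;
    }
    Vec3 ext = { node.box.hi.x - node.box.lo.x,
                 node.box.hi.y - node.box.lo.y,
                 node.box.hi.z - node.box.lo.z };
    int axis = (ext.x > ext.y && ext.x > ext.z) ? 0 : (ext.y > ext.z ? 1 : 2);
    std::sort(idx.begin(), idx.end(), [&](int a, int b) {
        return axisOf(spheres[a].c, axis) < axisOf(spheres[b].c, axis);
    });
    std::vector<int> lhs(idx.begin(), idx.begin() + idx.size() / 2);
    std::vector<int> rhs(idx.begin() + idx.size() / 2, idx.end());
    node.left = buildBVH(nodes, spheres, lhs);
    node.right = buildBVH(nodes, spheres, rhs);
    nodes.push_back(node);
    return (int)nodes.size() - 1;
}

// Closest hit: only descends into boxes the ray can touch closer than the best
// hit found so far.
void intersectBVH(const std::vector<BVHNode>& nodes, int nodeIdx,
                  const std::vector<Sphere>& spheres, const Ray& ray,
                  double& bestT, int& bestId) {
    const BVHNode& n = nodes[nodeIdx];
    if (!hitBox(ray, n.box, bestT)) return;
    if (n.left < 0) {                                 // leaf: test its few spheres
        for (int i : n.items) {
            double t = raySphere(ray, spheres[i]);
            if (t >= 0 && t < bestT) { bestT = t; bestId = i; }
        }
        return;
    }
    intersectBVH(nodes, n.left, spheres, ray, bestT, bestId);
    intersectBVH(nodes, n.right, spheres, ray, bestT, bestId);
}

Build it once by calling buildBVH with the list of all sphere indices (the returned index is the root), then answer every ray - primary or reflected - with one intersectBVH call, starting from bestT set to a large value.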

Let's compare ray tracing to travel from Berlin to New York and spatial indexing data structures to airplanes.

The normal approach to ray tracing, i.e. using appropriate data structures in an elegantly uniform way, is like a flight from Berlin to New York.

Your approach is like starting the trip by car because you are afraid of flying, driving from Berlin to Lisbon (as far as possible), and then leaving the car and taking a plane anyway - except that you admit that your car is missing its wheels.

I don't like to be harsh, but falling in love with clever bad ideas (chiefly because they are your ideas) is a serious problem that puts a lot of your time, motivation and energy at risk. Don't let it happen to you.

Omae Wa Mou Shindeiru


You do not understand the point - I don't know why you don't understand it, but you don't.

I was saying that I do not consider this a good choice, so I agree with you. (If I agree with you, why don't you agree with me?)

I just want to spend an hour or two checking this option, which I believe may be bad. I want to check it because it is easy to check, not in order to use it later.

Besides that, I just want to get a proper rasterizer - not for the sake of raytracing, but to get the rasterizer properly done.

Rasterizing the first ray bounce (the only coherent one, really) can make sense if you're ray tracing on the GPU, because you can get it more or less for free, and the accuracy issues can be worked around easily enough that it is worth it. But if you're working on the CPU it's pretty stupid: the overhead of setting up a rasterizer and translating your data structures and algorithms from ray tracing to rasterization and back has the potential for plenty of bugs, for ultimately not much gain at all, because CPU rasterization is actually pretty slow at high resolution with the number of triangles a ray tracer can usually handle without issues. So your rasterization algorithm would really be holding you back.


Besides that, I just want to get a proper rasterizer - not for the sake of raytracing, but to get the rasterizer properly done.

Then write a rasterizer. But the title of the thread is "coordinating raytracing with rasterization", you can't expect people to read your mind.

“If I understand the standard right it is legal and safe to do this but the resulting value could be anything.”

This topic is closed to new replies.
