mapping point in scene to screen pixel

Started by svnstrk, 9 comments, last by svnstrk 14 years, 1 month ago
Hi, I have a question about a basic perspective view. I'm doing raytracing and shooting rays from the monitor pixels. I create the perspective effect for my 1900*1200 scene by widening the far plane per pixel: ray.setDirection(Vec3(-4.5 + (0.005*col), -3 + (0.005*row), -1)); so the far plane spans roughly 9.5 units in width and 6 units in height. Now I need to reverse the effect, i.e. given a point in the scene, how do I find the corresponding pixel on the screen? Does anyone have an idea how to do this? I'm having trouble working it out since it isn't a simple line equation. Thanks in advance.
Compute the difference vector from the camera to the point of interest and scale it to unit length. This vector is given in world co-ordinates. The view plane can be described in world space too, as a plane at distance D from the camera's position, where D is measured along the viewing vector. With this plane and a ray (starting at the camera position along the difference vector) you can compute the hit point of the ray with the plane. After doing so, check whether the hit lies inside the width and height of the view in world space. Then re-interpret the hit point in co-ordinates of the view plane, and finally quantize them with the pixel resolution of the display.

Alternatively, you can switch into view space early, which makes some of these computations easier.

If you need advice on how to do all of this mathematically, don't hesitate to ask.
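
Just to make that concrete, here is a minimal sketch of the reverse mapping for exactly the pixel-to-ray formula from the original post (camera at the origin, looking down -z, view plane at z = -1). The function name and the Vec3 struct are my own placeholders, not anyone's actual code:

#include <cmath>

struct Vec3 { float x, y, z; };

// Inverts ray.setDirection(Vec3(-4.5 + 0.005*col, -3 + 0.005*row, -1)).
// Returns false if the point is behind the camera or maps outside the screen.
bool worldPointToPixel(const Vec3& point, int& col, int& row)
{
    if (point.z >= 0.0f)                     // behind (or exactly on) the camera
        return false;

    // Project the point onto the view plane z = -1.
    float t  = -1.0f / point.z;
    float vx = point.x * t;
    float vy = point.y * t;

    // Invert the linear mapping x = -4.5 + 0.005*col, y = -3 + 0.005*row.
    col = static_cast<int>(std::floor((vx + 4.5f) / 0.005f));
    row = static_cast<int>(std::floor((vy + 3.0f) / 0.005f));

    return col >= 0 && col < 1900 && row >= 0 && row < 1200;
}

If the camera is not at the origin or not axis-aligned, first transform the point into camera space (subtract the camera position and project onto the camera's right/up/forward vectors), then apply the same division.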
Hi,

Actually I have some confusion about my project. What I'm trying to do is pre-calculated rendering: raytrace, render the image, save the result as object material, update the light position, raytrace, render the image, save the result as object material, and so on. I came up with two concepts to start with:

1. For every vertex inside a triangle, find the corresponding pixel on screen. If it exists on screen, calculate; if not, don't calculate. My first question relates to this one.

2. For every pixel on screen, find the related vertex in the scene, calculate, and map it to the object's material.

I'm thinking that the second one is more efficient, but the first one is quite straightforward. Do you have any advice on this? My question about the second solution is how to map the corresponding pixel on screen to the object map.


Thanks in advance.
I'm not sure whether I understand you. Which information do you want to "pre-calculate"? From the steps enumerated as "raytrace, render image, save result as object material, update light position, raytrace, render image, save result as object material and so on" I can't see which information is frame invariant, as long as both occurrences of "raytrace" denote the same procedure. Please clarify this issue for me.

However, it seems to me that you want to deal with a so-called object map, i.e. the information about which object/material is visible at each individual pixel on the screen.

Quote:Original post by svnstrk
1. For every vertex inside a triangle, find the corresponding pixel on screen. If it exists on screen, calculate; if not, don't calculate. My first question relates to this one.

2. For every pixel on screen, find the related vertex in the scene, calculate, and map it to the object's material.

The common meaning of "vertex" is, AFAIK, the position, surface normal, material properties, ... of a corner of a polygon. You probably use it here to mean each surface location of a triangle that is mapped to a pixel on the screen (similar to a "fragment" in OpenGL), right?
Hi,

The concept is like spherical harmonics lighting, where you calculate the scene, find the basis functions, and the second time you render it you don't have to calculate everything all over again.

I'm trying to apply the same concept here and go nuts with it. Basically I want to render the scene for the first time. The result is then saved as materials of the objects. When I render my scene for the second time, all I have to do is put those materials on my objects. Yes, it might require lots of memory, but that's a later problem for now.

My question is, when I do my first calculation, I'm supposed to save it as object material. I'm familiar with the concept, but have no clue how to implement it. Currently I have a raytracing method and the whole scene built from triangles.

Any idea or link that might be helpful?


Thanks in advance.
Quote:Original post by svnstrk
Any idea or link that might be helpful?

Yes: Still an object map.

Allocate a 2D array of integers whose extent matches the number of (primary) rays you shoot at the scene (usually pixel columns by rows). Further allocate a 1D array of pointers to objects and/or materials. Assign each object and/or material in the scene a unique integer, as small as possible, and store a pointer to that object and/or material in the 1D array at the index given by its integer; this gives you a look-up table. Keep index 0 free for special use.

Then fire the primary rays at the scene and do the usual hit tests with the objects, keeping the closest hit along each ray. Copy the integer index stored with the hit object and/or material into the 2D array at the location given by the pixel column and row that were used to compute the current ray. Store the special value 0 if no hit was detected.

Later on, when processing the scene in the second pass, you have a pixel-accurate map of whether, and which, object and/or material is visible at each pixel.
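
A minimal sketch of that object map, assuming a 1900*1200 resolution; the Object type and the closestHitIndex() stub are placeholders for whatever your ray tracer already provides, not a real API:

#include <vector>

struct Object { /* your object / material data */ };

const int kWidth  = 1900;
const int kHeight = 1200;

// Look-up table: index 0 is reserved for "no hit".
std::vector<Object*> g_lookup = { nullptr };

// Register each object once while building the scene and keep the returned index.
int registerObject(Object* obj)
{
    g_lookup.push_back(obj);
    return static_cast<int>(g_lookup.size()) - 1;
}

// Placeholder for the existing closest-hit test: shoot the primary ray for
// pixel (col, row) and return the registered index of the closest object, or 0.
int closestHitIndex(int col, int row)
{
    (void)col; (void)row;
    return 0;
}

// One integer per primary ray / pixel.
std::vector<int> buildObjectMap()
{
    std::vector<int> objectMap(kWidth * kHeight, 0);
    for (int row = 0; row < kHeight; ++row)
        for (int col = 0; col < kWidth; ++col)
            objectMap[row * kWidth + col] = closestHitIndex(col, row);
    return objectMap;
}

// Second pass: Object* obj = g_lookup[objectMap[row * kWidth + col]];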


BTW: You can also use a scanline method instead of ray-tracing.
Hi,

Sorry if I'm missing your point a little, but what I take from your post is saving object visibility in a 2D array mapped to column and row. That's not what my intention is.

So I did my first raytracing calculation and got my result, which is a color, based on the object's diffuse, specular, reflection, refraction etc. Now if I want to render my scene again, I don't want to do that calculation all over again. So my idea is, based on the first calculation, I generate a material for the object which represents its color (diffuse, specular, reflection effects).

The purpose of this is that when I rotate my camera, I can simply create n materials and put them on the scene based on the position of my camera. I simply have no idea how much it's going to speed up the scene, but I just want to go nuts with it for now.

My question is, when I shoot a ray at an object, how can I transfer those x, y, z values to material u, v coordinates? Any idea?


Thanks in advance
Quote:Original post by svnstrk
Sorry if I'm missing your point a little, but what I take from your post is saving object visibility in a 2D array mapped to column and row. That's not what my intention is.

If you store a pointer to the object then you have the visibility. You also know implicitly what material is there, because the material is a property of the object. Although that was the example I demonstrated in the previous post, the same principle works for every other temporary result too, perhaps without the need for a look-up table.

If, for example, you want to store the global position where the ray hits the object surface (as another temporary result computed during the hit tests, useful for lighting computations later on), then allocate a 2D array of Vector3f (or of 3 floats, or whatever), again with the extent of pixel columns by rows. (It would probably be better to store the position in view space, though.)

If you have to interpolate the diffuse color because different color values may be assigned to the vertices, then you can compute that color and store it in a color map. It's always the same principle.
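
The same layout as the object map works for these buffers too; a short sketch, where Vec3 and the per-pixel assignments are just illustrative placeholders:

#include <vector>

struct Vec3 { float x, y, z; };

const int kWidth  = 1900;
const int kHeight = 1200;

std::vector<Vec3> hitPositionMap(kWidth * kHeight);   // hit point per pixel (e.g. in view space)
std::vector<Vec3> diffuseColorMap(kWidth * kHeight);  // interpolated diffuse color per pixel

// During the primary-ray pass, at pixel (col, row):
//   hitPositionMap[row * kWidth + col]  = hitPoint;
//   diffuseColorMap[row * kWidth + col] = interpolatedDiffuse;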

Quote:Original post by svnstrk
So I did my first raytracing calculation and got my result, which is a color, based on the object's diffuse, specular, reflection, refraction etc. Now if I want to render my scene again, I don't want to do that calculation all over again. So my idea is, based on the first calculation, I generate a material for the object which represents its color (diffuse, specular, reflection effects).

The purpose of this is that when I rotate my camera, I can simply create n materials and put them on the scene based on the position of my camera. I simply have no idea how much it's going to speed up the scene, but I just want to go nuts with it for now.

My question is, when I shoot a ray at an object, how can I transfer those x, y, z values to material u, v coordinates?

You are already aware that storing too much information will fill up your RAM really fast. So you have to settle on a compromise: which kinds of temporary results should be stored, and which should be computed on the fly?

For some temporary results compressed data can be used. This doesn't mean compressing the array, but compressing each single entry in the array. Normals are a good candidate for this.
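
One very simple per-entry compression, just as an illustration of the idea (this particular scheme is my own example, not something prescribed here): quantize each component of the unit normal to a signed byte.

#include <algorithm>
#include <cstdint>

struct Vec3 { float x, y, z; };

struct PackedNormal { int8_t x, y, z; };           // 3 bytes instead of 12

inline int8_t quantize(float v)                    // v is expected in [-1, 1]
{
    return static_cast<int8_t>(std::max(-127.0f, std::min(127.0f, v * 127.0f)));
}

inline PackedNormal pack(const Vec3& n)            // n must already be unit length
{
    return { quantize(n.x), quantize(n.y), quantize(n.z) };
}

inline Vec3 unpack(const PackedNormal& p)
{
    return { p.x / 127.0f, p.y / 127.0f, p.z / 127.0f };
}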

Other temporary results can be dropped because their information is stored implicitly in follow-up results. E.g. instead of storing which diffuse color is interpolated from the vertices, which (u,v) is found for the texture, and which texture it is at all, compute the final diffuse color and store that. (Of course, this diffuse color is the object color, i.e. before lighting is considered.)
Well, that's what I'm planning to do: saving the diffuse/specular colors etc. for a given light and camera position.

My question is, given a hit point x, y, z in global coordinates, how can I find the corresponding position on the triangle with given vertices v1, v2, v3? So it's like transforming the global coordinate to a local coordinate relative to the triangle. I don't know much about material/texture coordinates, probably u, v?

Thanks in advance
Quote:Original post by svnstrk
..., given a hit point x, y, z in global coordinates, how can I find the corresponding position on the triangle with given vertices v1, v2, v3? So it's like transforming the global coordinate to a local coordinate relative to the triangle. I don't know much about material/texture coordinates, probably u, v?

You have to compute weights w0, w1, w2 for the vertex positions, so that
w0 * V0 + w1 * V1 + w2 * V2 == H
where H denotes the hit point. Since the vertices are usually given in local co-ordinates, the hit point in the above formula has to be as well. Since all 3 weights have to sum to 1, we can substitute
w0 => 1 - w1 - w2

Well, earlier when you computed the hit point, you may have used the parametric ray / triangle method. It uses a ray
R( k ) := R0 + k * r
given in the local space of the object mesh, as well as a triangle
T( w1, w2 ) := V0 + w1 * ( V1 - V0 ) + w2 * ( V2 - V0 )
with the conditions
w1, w2 >= 0 and w1 + w2 <= 1

Setting
R( k ) == T( w1, w2 )
gives you a linear equation system with 3 equations and 3 unknowns. Solving this for
{ k, w1, w2 }
and checking the above conditions gives you the hit point and tells you whether the interior of the triangle is hit. With this k the hit point is then
R( k ) =: H
in local space.

Now, comparing the parametric formula of the triangle with the barycentric formula above, you can see they are identical after some minor rearrangement. That said, the w1 and w2 computed during the hit test are already the weights you're looking for. Use these weights to interpolate any linear per-vertex property over the surface of the triangle, texture co-ordinates included.
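
A minimal sketch of that solve in the usual Möller-Trumbore form, plus the interpolation of texture co-ordinates from the resulting weights; the Vec3/Vec2 structs and helper names are my own placeholders:

#include <cmath>

struct Vec3 { float x, y, z; };
struct Vec2 { float u, v; };

inline Vec3  sub(const Vec3& a, const Vec3& b)  { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
inline float dot(const Vec3& a, const Vec3& b)  { return a.x * b.x + a.y * b.y + a.z * b.z; }
inline Vec3  cross(const Vec3& a, const Vec3& b)
{
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}

// Solves R0 + k*r == V0 + w1*(V1 - V0) + w2*(V2 - V0) for { k, w1, w2 }.
// Returns true only if the interior of the triangle is hit (w1, w2 >= 0, w1 + w2 <= 1).
bool intersectTriangle(const Vec3& R0, const Vec3& r,
                       const Vec3& V0, const Vec3& V1, const Vec3& V2,
                       float& k, float& w1, float& w2)
{
    Vec3  e1  = sub(V1, V0);
    Vec3  e2  = sub(V2, V0);
    Vec3  p   = cross(r, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < 1e-8f) return false;    // ray is parallel to the triangle

    float invDet = 1.0f / det;
    Vec3  s = sub(R0, V0);
    w1 = dot(s, p) * invDet;
    if (w1 < 0.0f || w1 > 1.0f) return false;

    Vec3  q = cross(s, e1);
    w2 = dot(r, q) * invDet;
    if (w2 < 0.0f || w1 + w2 > 1.0f) return false;

    k = dot(e2, q) * invDet;
    return k >= 0.0f;                            // hit in front of the ray origin
}

// The same weights interpolate any linear per-vertex property, e.g. texture co-ordinates:
Vec2 interpolateUV(float w1, float w2, const Vec2& uv0, const Vec2& uv1, const Vec2& uv2)
{
    float w0 = 1.0f - w1 - w2;
    return { w0 * uv0.u + w1 * uv1.u + w2 * uv2.u,
             w0 * uv0.v + w1 * uv1.v + w2 * uv2.v };
}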
