
Yet another Light question


Imagine that in a raytracer, the ray wouldn't go from the eye to the wall and then from the wall to the light source to calculate the shadow, as is done in most raytracers. Imagine instead that we have an extremely fast computer and we go from the light source to the eye: from each light source a lot of light rays start out in all different directions. One such ray leaves the light source with a certain direction and travels until it hits an object. To make it extremely elite we could even do the calculations for each frequency independently: R, G and B, or even more frequencies. Part of the ray is absorbed, another part is reflected, and another part is transmitted and refracted. So here the ray splits in two. Let's say the computer first continues with the transmitted ray; this one keeps losing some intensity because it's inside a material that absorbs it. The ray reaches the end of the material, so again it splits in two, and so on. These calculations are done for a branch of the ray until it's absorbed to less than 1% of the original intensity, or until it intersects the camera plane. If it intersects the camera plane, the exact position of the intersection is determined, and from that we calculate which screen pixel was hit.

Here starts my problem: imagine there was a white ray that started from a light source, got reflected off a red wall (so only the red frequency of it remains) and then intersects the camera plane at pixel 5, 66. Then that pixel is colored red. Then, accidentally, another white ray from a light source somewhere goes through a blue transparent window that was there somewhere and, oh the coincidence, also intersects the camera plane at pixel 5, 66. In that case, pixel 5, 66 has received a red and a blue light ray. That's great, you'd think, then we color the pixel purple.

BUT my problem is: in real life you could say an infinite number of light rays start from each light source, so to get something similar on a computer we let, say, 100000000000000000 rays start from each light source (remember I told you about an extremely fast computer). These are so many rays that probably more than one of them will bounce against the red wall and make pixel 5, 66 a bit red. But if 1000 light rays do this, this pixel will become EXTREMELY BRIGHT red, while in real life you don't see it extremely bright red... If you could follow all this, what could I do to make this somewhat realistic?
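For what it's worth, here is roughly what that light-to-eye loop could look like in C++. This is only a sketch of the control flow: Scene, Camera, Image, Ray, Color, Hit and the intersect/project/reflect/refract calls are invented placeholders, not any real library.

// Rough sketch (not a real renderer) of the forward "light" tracing described above.
// All the types and helper calls here are hypothetical placeholders.
void traceLight(const Scene& scene, const Camera& camera, Image& film,
                Ray ray, Color power, const Color& startPower)
{
    // Stop following this branch once it carries less than 1% of what it started with.
    if (maxComponent(power) < 0.01f * maxComponent(startPower))
        return;

    Hit hit;
    if (!scene.intersect(ray, hit))
        return;                                   // the ray left the scene

    if (hit.isCameraPlane) {
        int px, py;
        if (camera.project(hit.point, px, py))    // which pixel did it strike?
            film.add(px, py, power);              // accumulate energy, don't overwrite
        return;
    }

    // The ray splits: one part is reflected, one part is transmitted, the rest is absorbed.
    traceLight(scene, camera, film, reflect(ray, hit),
               power * hit.reflectance, startPower);
    traceLight(scene, camera, film, refract(ray, hit),
               power * hit.transmittance, startPower);
}

// Each of the N rays fired from a light would start with power = lightPower / N,
// which is what keeps the image from blowing up as N grows (see the replies below).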

If I understand you correctly, and I think I do, you are asking how it is possible that global illumination lands only one set of similar rays on each pixel.

You must remember that with a pinhole camera, the focal point only lets rays from ONE direction come in. Let's take a pixel m = [5 66 1]. Let's say we have a projection matrix P and a focal point F. Then the ray that hits the pixel m is computed as m*P^(-1), which gives you the ray (m, F). This is why a raytracer would go from the pixel coordinates backwards to the light source.

Now imagine any point M (in your example, a point on the red wall). There is only one ray through the focal point that hits the retinal plane at (5, 66): P*M yields a single unique solution. This ray may intersect some transparent surfaces on its way, and then the raytracer would blend in the changed light ray color.
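To make the "one pixel, one ray" point concrete, here is a tiny C++ sketch of back-projecting a pixel through a simple pinhole model. The origin-centred camera, +z viewing direction and focal length measured in pixel units are my own assumptions for the example, not part of the post above.

#include <cmath>

struct Vec3 { float x, y, z; };

// Back-project pixel (px, py) through a pinhole camera sitting at the origin
// and looking down +z. The result is the direction of the single ray (F, m)
// that can hit that pixel, already normalized.
Vec3 pixelToRayDirection(int px, int py, int width, int height, float focalLengthPixels)
{
    float x = px - 0.5f * width;      // shift the principal point to the image centre
    float y = py - 0.5f * height;
    float len = std::sqrt(x * x + y * y + focalLengthPixels * focalLengthPixels);
    return Vec3{ x / len, y / len, focalLengthPixels / len };
}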

Also note that the ray's radiance changes as it goes through the atmosphere. The radiance of any ray is measured in W*m^(-2)*sr^(-1), watts per square meter per steradian. The relationship between the image irradiance E and the scene radiance L is linear:

E = L * (Pi/4) * (d/f)^2 * cos^4(a)
where f is the focal length, d is the lens (aperture) diameter and a is the angle of the light ray to the optical axis of the camera.
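In code form it is just a direct transcription of that formula (the function name is only for illustration):

#include <cmath>

// Image irradiance from scene radiance:  E = L * (pi/4) * (d/f)^2 * cos^4(a)
// L: scene radiance (W m^-2 sr^-1), d: lens diameter, f: focal length,
// a: angle between the ray and the optical axis, in radians.
double imageIrradiance(double L, double d, double f, double a)
{
    const double pi = 3.14159265358979323846;
    double c = std::cos(a);
    return L * (pi / 4.0) * (d / f) * (d / f) * (c * c) * (c * c);
}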

Hope this makes sense.

EPHERE

> If you could follow all this, what could I do to make this
> somewhat realistic?

You can't. There are perhaps not infinitely many, but an implausibly large number of light 'rays' to consider. You don't get 'EXTREMELY BRIGHT red' because the different paths each carry very little light. But doing this realistically is impossible on any computer available today, even the multi-million dollar systems used to produce films like Monsters Inc., Ice Age, etc.

In games and simulations today, approximations are used which generally reduce the lights to a collection of points plus an overall ambient term, applied to all lit surfaces. Shadows are done separately using geometric techniques. Indirect lighting like you describe is ignored or faked. Other lighting effects such as pools of light are faked using special textures. It's pretty crude, but we're so used to it in games that we don't notice it.
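To show what that point-light + ambient approximation boils down to, here is a minimal Lambert-style shading function in C++ (the names and the single-light setup are just for illustration):

#include <algorithm>

struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Classic game-style approximation: a constant ambient term plus an N.L term
// per point light, with no indirect bounces at all. Both vectors are assumed
// to be unit length.
float lambertShade(const Vec3& normal, const Vec3& toLight,
                   float ambient, float lightIntensity)
{
    float ndotl = std::max(0.0f, dot(normal, toLight));
    return std::min(1.0f, ambient + lightIntensity * ndotl);   // clamp for display
}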

Guest Anonymous Poster
quote:
Original post by Lode
BUT my problem is: in real life you could say an infinite number of light rays start from each light source, so to get something similar on a computer we let, say, 100000000000000000 rays start from each light source (remember I told you about an extremely fast computer). These are so many rays that probably more than one of them will bounce against the red wall and make pixel 5, 66 a bit red. But if 1000 light rays do this, this pixel will become EXTREMELY BRIGHT red, while in real life you don't see it extremely bright red...

Sometimes you do... But really, the more rays you divide the light source into, the less energy each ray will carry, so increasing the number of rays won't make the pixel brighter.
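A toy calculation (the numbers are made up) shows why the pixel doesn't blow up: the light's total power is split over the rays, so more rays just means less power per ray, and the energy landing on pixel (5, 66) stays the same.

#include <cstdio>

int main()
{
    const double lightPower = 100.0;                          // total power of the light source
    const long long rayCounts[] = { 1000, 1000000, 1000000000 };

    for (long long rays : rayCounts) {
        double powerPerRay = lightPower / rays;               // each ray carries less energy
        long long hits = rays / 1000;                         // pretend 0.1% of them land on pixel (5, 66)
        std::printf("rays = %10lld   energy at pixel = %g\n", rays, hits * powerPerRay);
    }
    return 0;
}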

