The missing Raytracing for GI thread continued

17 comments, last by ApochPiQ 20 years, 4 months ago
....

4 weeks to go..



If that's not the help you're after then you're going to have to explain the problem better than what you have. - joanusdmentia

My Page davepermen.net | My Music on Bandcamp and on Soundcloud

In fact, John (can I call you John? it's shorter), I knew where the noise came from; what I didn't understand is why you want to remove it, and if so, why you generate it in the first place. I hope your explanation allowed other rookies to understand the noise's origin. So thx =)
For my part, it's what comes next that interests me ...

quote:Original post by Anonymous Poster
Quasi Monte Carlo (QMC) is trying to solve the problem by kind of removing the randomness, but still somehow keeping it. Yes, I know that sounds vague


You will laugh, but it's this sentence that allowed me to understand. But maybe I'm still wrong. For each pixel (or point) intersected by a ray, you want to know the contribution of the surrounding environment. So you trace random rays (mmm, using just a half sphere, determined by the face the point belongs to?). That is what you explained by keeping the randomness. But random rays generate noise, which is not exactly the same thing as the pseudo-random global contribution you want.

More exactly, you need random rays because you don't know where the most interesting object for the GI contribution is, and moreover you want all of them.

So because you want to remove the noise, you subdivide the environment into samples and trace random rays into each one. Then the probability that the variance is low increases as you increase the number of samples, because within a small sample the objects are very likely to have the same color or texture.

I would never have thought of that. It's brilliant! The most important thing, I think, is that the noise I found good-looking is not exactly the best effect. The pseudo-random global contribution of the scene is!

I'm surely wrong on lots of points. I feel you will explain where, lol

[edited by - Chuck3d on November 28, 2003 5:22:10 PM]
!o)
Lol, of course you can call me John

You are right, Chuck, that's basically the idea you typed there! A little noise makes it look more real though, if I say so myself. But it is still better to get the same quality image with 10 MC samples instead of 100, for example, so you still kind of want to get rid of the noise. When a method does not treat all directions the same, so when it basically won't give a physically correct result, the algorithm is called biased; when your samples are all distributed uniformly it is unbiased.

Also, you mention the half sphere (hemisphere). For perfectly diffuse materials you would indeed use a hemisphere. If you want glossy reflections, however, you can narrow this 'cone' angle from 180 degrees down to, say, 45. That would give you glossy reflections.
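To make that concrete, here is a minimal sketch of picking a uniform random direction inside such a cone. The tiny Vec3 struct and rnd() helper are stand-ins I've assumed for illustration, not anything from an actual engine:

```cpp
#include <cmath>
#include <cstdlib>

struct Vec3 { double x, y, z; };   // hypothetical minimal vector type
const double PI = 3.14159265358979323846;
double rnd() { return std::rand() / (RAND_MAX + 1.0); }   // uniform in [0, 1)

// Uniform random direction inside a cone of the given half-angle around +z.
// halfAngle = 90 degrees (PI/2) gives the whole hemisphere (perfectly diffuse);
// shrinking it towards 0 tightens the lobe from glossy towards mirror-like.
Vec3 sampleCone(double halfAngle)
{
    double cosMax   = std::cos(halfAngle);
    double cosTheta = 1.0 - rnd() * (1.0 - cosMax);   // uniform in solid angle
    double sinTheta = std::sqrt(1.0 - cosTheta * cosTheta);
    double phi      = 2.0 * PI * rnd();
    Vec3 d = { sinTheta * std::cos(phi), sinTheta * std::sin(phi), cosTheta };
    return d;   // rotate +z onto the mirror direction before using it
}
```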

Btw, you can also do depth of field this way, as well as motion blur.

But with stratified sampling you still use random directions, only you choose each random sample within a smaller sampling region (grid cell). Just imagine a half sphere: you stick a grid on it and fold it over the sphere. Of course you have to keep the area of all the grid cells on the hemisphere equal, otherwise you are working biased already.

So you don't subdivide the environment into areas, but the hemisphere in which you generate the directions. That prevents all 10 samples you take from going to one side of the hemisphere. Since you use pure random numbers, that is possible, which means you don't get an even distribution on the hemisphere. Whereas if you have a more even distribution (with stratified sampling, or Halton or Sobol sequences), you are kind of guaranteed a good distribution of samples (so less noise).
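A minimal sketch of that kind of stratified (jittered) hemisphere sampling, again assuming a hypothetical Vec3 and rnd(). Taking z uniform gives equal-area cells on the hemisphere (Archimedes' hat-box theorem), so the grid itself doesn't introduce bias:

```cpp
#include <cmath>
#include <cstdlib>
#include <vector>

struct Vec3 { double x, y, z; };   // hypothetical minimal vector type
const double PI = 3.14159265358979323846;
double rnd() { return std::rand() / (RAND_MAX + 1.0); }   // uniform in [0, 1)

// nu * nv stratified directions over the hemisphere around +z. Each grid
// cell covers an equal area, and the jitter inside the cell keeps the
// randomness while preventing all the samples from bunching up on one side.
std::vector<Vec3> stratifiedHemisphere(int nu, int nv)
{
    std::vector<Vec3> dirs;
    dirs.reserve(nu * nv);
    for (int i = 0; i < nu; ++i)
    {
        for (int j = 0; j < nv; ++j)
        {
            double u = (i + rnd()) / nu;   // jittered sample in cell (i, j)
            double v = (j + rnd()) / nv;
            double z = u;                  // uniform in z -> equal-area cells
            double r = std::sqrt(1.0 - z * z);
            double phi = 2.0 * PI * v;
            Vec3 d = { r * std::cos(phi), r * std::sin(phi), z };
            dirs.push_back(d);
        }
    }
    return dirs;   // local frame; rotate +z onto the surface normal
}
```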

It's easy to implement and gives a nice noise reduction already. But it's not enough if you want very high quality.

QMC + Importance Sampling + Irradiance cache = sweet
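For the QMC side of that recipe, the Halton sequence mentioned above is built from the radical inverse. A minimal sketch; pairing bases 2 and 3 for a 2D sample is the usual convention, not something specific to this thread:

```cpp
// Radical inverse of index i in the given (prime) base: mirror the digits
// of i around the decimal point. Successive indices fill [0, 1) very evenly,
// which is exactly the "removing the randomness but keeping it" idea of QMC.
double radicalInverse(int i, int base)
{
    double result = 0.0;
    double f = 1.0 / base;
    while (i > 0)
    {
        result += f * (i % base);
        i /= base;
        f /= base;
    }
    return result;
}

// 2D Halton point number i: usable in place of the two random numbers
// (u, v) when generating a direction on the hemisphere.
void halton2D(int i, double &u, double &v)
{
    u = radicalInverse(i, 2);
    v = radicalInverse(i, 3);
}
```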

Russian roulette is more a way to prevent an exponential explosion in the number of rays you trace when you want to sample the incoming light. And since it introduces some randomness it can even make your results noisier, but it can give a nice speedup.
Irradiance caching is very interesting too if you like a speedup
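A toy sketch of the Russian roulette idea, with the scene abstracted away to a single albedo value per bounce; the survival test and the division by the survival probability are the essential parts, and they are what keeps the estimate unbiased on average:

```cpp
#include <cstdlib>

double rnd() { return std::rand() / (RAND_MAX + 1.0); }   // uniform in [0, 1)

// Toy recursive path with Russian roulette termination. In a real tracer this
// would shoot a ray, hit a surface and recurse with that surface's reflectance;
// here the albedo and emitted light are just constants for illustration.
double radiance(double albedo, int depth)
{
    if (depth > 3)                       // leave the first few bounces alone
    {
        double survive = albedo;         // survival probability for this path
        if (rnd() >= survive)
            return 0.0;                  // path killed: no more rays spawned
        albedo /= survive;               // boost the survivors to compensate
    }
    double emitted = 0.1;                // toy light picked up at this bounce
    return emitted + albedo * radiance(0.5, depth + 1);
}
```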

- John
And I forgot to mention adaptive sampling, which you can use to get a more evenly spread noise over your image, making it look smoother.

There is lots of stuff to say about this subject. Maybe I'll write a tutorial sometime.

- John
Well well, it's a very interesting topic, I could discuss it all night! Unfortunately it is 5:58 PM for our friends at gamedev, but it is midnight by my watch ... damn, life is hard

Nevertheless I have an idea about the problem. Let us lean on the hemisphere. I come from software engineering, and that makes me think of the scalar product in Gouraud shading. When you want the light intensity at a vertex, you calculate the scalar product between the vertex normal and the incoming light direction, as you know. When that scalar product tends towards zero, the contribution of the light at this point becomes smaller. Where I am going with this is that a big part of the incoming light over the hemisphere has only a small contribution to the final result. So after sampling your hemisphere, when you obtain a value from a ray, you should take into account the spherical coordinates of the point where the ray intersects the hemisphere.
I don't know what the most important angle is, but around it you have the most important random rays.

I think this is physically correct, and then you preserve an unbiased result.

!o)
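What Chuck describes above is essentially cosine-weighted importance sampling for a diffuse surface: instead of shooting rays uniformly and then multiplying each one by the N·L factor, you generate more rays where that factor is large in the first place. A minimal sketch using Malley's method (sample a disc uniformly and project it up), again with a hypothetical Vec3 and rnd():

```cpp
#include <cmath>
#include <cstdlib>

struct Vec3 { double x, y, z; };   // hypothetical minimal vector type
const double PI = 3.14159265358979323846;
double rnd() { return std::rand() / (RAND_MAX + 1.0); }   // uniform in [0, 1)

// Cosine-weighted direction around +z: the density of generated rays is
// proportional to cos(theta), i.e. to the N.L factor, so directions that
// would contribute little are rarely generated at all. Dividing the sample's
// contribution by the matching pdf, cos(theta) / PI, keeps the result unbiased.
Vec3 cosineSampleHemisphere()
{
    double u = rnd();
    double v = rnd();
    double r = std::sqrt(u);             // uniform point on the unit disc...
    double phi = 2.0 * PI * v;
    double z = std::sqrt(1.0 - u);       // ...projected up onto the hemisphere
    Vec3 d = { r * std::cos(phi), r * std::sin(phi), z };
    return d;   // local frame; rotate +z onto the surface normal
}
```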
!o)
Hmmm, adaptive sampling you said? It's close to the idea I described, I think.

Thx for all.
!o)
Importance sampling and adaptive sampling are closely related; however, since they both rely on random samples, they still produce noise, just not nearly as much as totally random sampling.

Adaptive sampling is where you send as many samples as you need to reduce noise to an acceptable level. In practice a good adaptive sampler will take a lot of samples in the same directions as a good importance sampler; the difference is just in how they are coded. An importance sampler decides where to send its samples using the BRDF (which gives a scalar coefficient similar to the one used in Gouraud shading) while an adaptive sampler uses the image quality to decide where more samples should be sent in order to reduce error. In practice adaptive samplers are a little bit messier to implement since they depend on the image rather than a simple mathematical function (the BRDF).
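A minimal sketch of that adaptive loop for a single pixel, with samplePixel() as a hypothetical stand-in for tracing one full light path; the noise estimate here is simply the standard error of the running mean:

```cpp
#include <cmath>
#include <cstdlib>

double rnd() { return std::rand() / (RAND_MAX + 1.0); }   // uniform in [0, 1)

// Stand-in for tracing one random ray/path through the pixel.
double samplePixel() { return rnd(); }

// Keep taking samples until the estimated error of the mean drops below a
// tolerance, or a hard cap is reached. Noisy pixels automatically get many
// samples, smooth pixels get away with few.
double adaptivePixel(double tolerance, int minSamples, int maxSamples)
{
    double sum = 0.0, sumSq = 0.0;
    int n = 0;
    while (n < maxSamples)
    {
        double s = samplePixel();
        sum += s;  sumSq += s * s;  ++n;

        if (n >= minSamples)
        {
            double mean = sum / n;
            double variance = sumSq / n - mean * mean;
            if (variance < 0.0) variance = 0.0;        // guard against round-off
            if (std::sqrt(variance / n) < tolerance)
                break;                                  // good enough, stop here
        }
    }
    return sum / n;   // the pixel's estimated radiance
}
```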


Noise is a fairly controversial subject; some people like a bit of noise for realism, while others (myself included) prefer a cleaner, smoother image. However, in general, the noise in Monte Carlo methods is considered a bad thing because it is not a "correct answer" so to speak. The trick to Monte Carlo methods is deciding how much noise is acceptable and how much makes the image look bad.

However, methods like photon mapping aren't really specifically designed to remove noise. Photon mapping produces smooth, clean results as a perk or side effect. The main purpose for choosing photon mapping (or radiosity, or other similar methods) is speed. Monte Carlo rendering is very slow, and reducing noise to an acceptable level takes a lot of samples, and therefore a lot of time. It's perfectly reasonable to render an image with photon mapping for a fast and accurate result, and then apply a post-processing filter to add the noise or graininess back into the image. For example, Silent Hill 3 renders its scenes using standard polygon rasterization methods, but adds a bit of grain to the final image to help create a haunting atmosphere.

Wielder of the Sacred Wands
[Work - ArenaNet] [Epoch Language] [Scribblings]

ApochPiQ: thank you, I did not expect to learn so much on the subject.

I have one last question: do you use any hardware acceleration for that kind of engine?

!o)

[edited by - Chuck3d on November 29, 2003 9:43:46 AM]
!o)
quote:Original post by Chuck3d
ApochPiQ: thank you, I did not expect to learn so much on the subject.

I have one last question: do you use any hardware acceleration for that kind of engine?

!o)

[edited by - Chuck3d on November 29, 2003 9:43:46 AM]





Not yet


There's currently no hardware that is dedicated to that kind of thing, although some people have done similar things using shader programming languages on the newest video cards. However, as I've discussed elsewhere on this forum, raytracing acceleration hardware is "coming soon."

Here's a very good discussion about raytracing hardware, with links to similar threads. Be warned, though, you're in for a lot of reading.

Wielder of the Sacred Wands
[Work - ArenaNet] [Epoch Language] [Scribblings]

