The missing Raytracing for GI thread continued


I'm pretty sure this isn't considered kosher, but I'd hate for these thoughts to go to waste... mods, feel free to take your displeasure out on me.

To Max_payne: Look into quasi-Monte Carlo importance sampling. As I've said before, the BRDF represents the probability that a given ray will reflect in a given direction, so you can send a large number of samples towards a direction with high probability, and a small number towards a direction with low probability. Using QMC you can radically reduce noise and other error. Good distribution/Monte Carlo raytracers will use importance sampling, and the best use QMC. There are complex theoretical reasons why random vectors are a bad idea, but they are best explained in a book or paper (yes, I know, you hate hearing that). Your best bet for good results is QMC importance sampling (a small sketch of the idea follows at the end of this post).

Also, radiosity is not a rasterization-only technique. POV-Ray, for instance, implements a very interesting form of radiosity using only raytracing. The POV-Ray manual from version 3.x should include details on how they accomplish this, and there are other papers as well that describe hybrid radiosity/raytracing methods. The CiteSeer site mentioned above is an excellent place to start; it's one of the few sites in my Favorites.

Still, most people (except for Junc, apparently) are starting to embrace photon mapping as the superior method of attaining global illumination, because of its similarity to real-world light behavior without the expense and waste of path-tracing-based methods. I'd recommend exploring photon mapping after you implement a distribution/MC raytracer. Make no mistake - implementing MCRT is an excellent learning process and good preparation for photon mapping. But you will learn much more effectively - and save yourself a lot of pain in the future - if you learn the correct theory instead of trying to reinvent the wheel. As Chuck3d said, once you have learned the proper theory, you are much, much more likely to develop a new and interesting technique. As it stands, you're really only reinventing current techniques, without the refinement and precision that a well-educated CGI researcher can contribute. I speak from experience here; learn the basics first, and then get into exploration. It really sucks to "discover" a technique and then find out that it has already been discovered, refined, and perfected for half a decade.

Junc - you may be very interested in some photon-mapping animations from the master himself, Jensen. Especially check out the first one, The Light of Mies van der Rohe. Physically-Based Modeling and Animation of Fire is also very good, and Caustics from a rotating glass cube including dispersion will show you something that literally no other currently known GI method can even touch in terms of accuracy and efficiency. In fact those videos - and comparing them to state-of-the-art radiosity/MCRT images - are what convinced me to start researching photon mapping in the first place. Your opinion, of course, is your own - but everyone is wrong once or twice
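To make that concrete, here is a minimal sketch of what importance sampling looks like for a perfectly diffuse (Lambertian) surface. This is only an illustration, not code from any renderer discussed here; Vec3 and the incomingRadiance callback are invented placeholders for whatever your tracer uses, and the plain rand() calls should eventually be replaced by QMC points (see the Halton sketch further down the thread).

#include <cmath>
#include <cstdlib>

struct Vec3 { double x, y, z; };

const double PI = 3.14159265358979323846;

// Map a 2D sample in [0,1)^2 to a cosine-weighted direction around +z.
// The pdf of this distribution is cos(theta) / pi, which matches the
// cosine factor in the rendering equation for a diffuse surface.
Vec3 cosineSampleHemisphere(double u1, double u2)
{
    double r   = std::sqrt(u1);
    double phi = 2.0 * PI * u2;
    return { r * std::cos(phi), r * std::sin(phi), std::sqrt(1.0 - u1) };
}

// Importance-sampled estimate of the diffuse bounce at a point.
// incomingRadiance() stands in for "trace a ray that way and shade it".
double estimateDiffuse(double albedo, int numSamples,
                       double (*incomingRadiance)(const Vec3&))
{
    double sum = 0.0;
    for (int i = 0; i < numSamples; ++i) {
        double u1 = std::rand() / (RAND_MAX + 1.0);
        double u2 = std::rand() / (RAND_MAX + 1.0);
        Vec3 dir = cosineSampleHemisphere(u1, u2);
        // BRDF = albedo/pi and pdf = cos(theta)/pi, so the cosine and
        // the pi cancel: each sample contributes albedo * L_i.
        sum += albedo * incomingRadiance(dir);
    }
    return sum / numSamples;
}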

Guest Anonymous Poster
Indeed, you still want some randomness when you use QMC, otherwise you get weird patterns and banding. A thing I found useful was adding random offsets to the quasi-random sequences (a sketch follows).
When you use Halton sequences you basically get stratified sampling, but you don't have to know in advance how many samples you are going to take, which becomes useful when you do adaptive sampling etc.
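For the curious, a tiny sketch of that offset trick, the way I understand it (the names are invented, nothing official): generate Halton points, then shift each dimension by a fixed random offset and wrap back into [0,1):

#include <cmath>

// Radical inverse of i in the given base: the building block of the
// Halton sequence (base 2 for x, base 3 for y, and so on).
double radicalInverse(int i, int base)
{
    double invBase = 1.0 / base, result = 0.0, f = invBase;
    while (i > 0) {
        result += (i % base) * f;
        i /= base;
        f *= invBase;
    }
    return result;
}

// i-th 2D Halton point, shifted by a random per-pixel offset and wrapped
// back into [0,1). The offsets break up the repeating patterns while the
// even Halton spacing is preserved.
void haltonPoint(int i, double offsetX, double offsetY,
                 double& x, double& y)
{
    x = std::fmod(radicalInverse(i, 2) + offsetX, 1.0);
    y = std::fmod(radicalInverse(i, 3) + offsetY, 1.0);
}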

I don't think photon mapping is THE thing. It is an estimation, and it can take up huge amounts of memory, especially on huge scenes. What is nice, though, is using photon maps for caustics and participating media. But that's just my personal preference.
Photon mapping has some nice properties, but I still prefer not to use it. It can be pretty fast, because you get low-frequency noise instead of high-frequency noise. But you still need shitloads of photons to get very high quality, depending on the environment/lighting of course.

Cheers,
- John

quote:

Junc - you may be very interested in some photon-mapping animations from the master himself, Jensen. Especially check out the first one, The Light of Mies van der Rohe. Physically-Based Modeling and Animation of Fire is also very good, and Caustics from a rotating glass cube including dispersion will show you something that literally no other currently known GI method can even touch in terms of accuracy and efficiency. In fact those videos - and comparing them to state-of-the-art radiosity/MCRT images - are what convinced me to start researching photon mapping in the first place.



hehe, don't get me wrong, photon mapping is an excellent technique, but you've got to admit there are quite a lot of zealots proclaiming it to be 'the big thing'. I'm researching MLT for the time being; maybe after that I'll be convinced too. Perhaps a combination of MLT & photon mapping will be a valid avenue to explore, who knows

quote:

but everyone is wrong once or twice



Yup.

Actually, some clarification is in order - I personally think the only good points of photon mapping are the geometry-independent storage and the idea of density estimation to obtain radiance estimates. I think the current visualization methods (basically either direct visualization or a slow distribution raytracer) are very lacking, and we're just a good density-estimation kernel away from a far superior method of visualizing the photon maps. We shall have to see.
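For reference, the density estimation in question boils down to something like the sketch below: a simplified, scalar version of Jensen's radiance estimate with the simplest possible constant kernel. The Photon struct and the k-nearest-neighbor query are assumed to exist elsewhere.

#include <cstddef>
#include <vector>

struct Photon { float power; /* plus position, incoming direction, ... */ };

// Radiance estimate from the k photons nearest to a surface point: sum
// their power and divide by the area of the disc that contains them
// (pi * r^2, where r is the distance to the farthest of the k photons).
// Swapping this constant kernel for a smarter one is exactly the open
// problem mentioned above.
float radianceEstimate(const std::vector<Photon>& nearest,
                       float maxDist2, float brdf)
{
    float flux = 0.0f;
    for (std::size_t i = 0; i < nearest.size(); ++i)
        flux += nearest[i].power;
    const float PI = 3.14159265f;
    return brdf * flux / (PI * maxDist2);
}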

quote:

It really sucks to "discover" a technique and then find out that it has already been discovered, refined, and perfected for half a decade.


hehe, I can second that one

I can still remember how much it sucked when I had 'discovered' 3D raycasting, and when I posted it on a forum people went: isn't that just raytracing?

What you said sounds very interesting, especially the photons! I was blown away by John's site too, and I want to know more about the subject...
What I don't understand is: why do you generate noise and then remove it afterwards? Isn't the noise wanted? You trace random rays for it (if I have understood correctly :D). It gives a good effect, I think. Am I wrong somewhere, or am I missing part of the picture? Can you enlighten me? :\


Chuck.

Oh! I think I understand now. With rays (or photons) you manage to get information about the GI of the scene, but it adds noise, which you then remove. As you can see, I come at this with noob eyes, but I want to take advantage of this thread to clear up a few points. =)


Guest Anonymous Poster
Chuck, the noise comes from the fact that you trace rays in random directions. What you do is estimate an unknown value by taking random samples and averaging them. As you can imagine, if you take, say, 4 samples and call the result A, and then take another 4 samples and call that result B, the difference between A and B can be quite big, because there is a good chance that each estimate is way off the real value. This is called variance. So if you do this for every pixel, neighboring pixels might get very different values, which shows up as noise. Taking more samples brings your estimate closer to the real value, but it also slows things down a lot.

So the trick is to take as few samples as possible and still get a good estimate of the unknown value. Btw, the unknown value we are talking about here is, for example, the incoming light at a given intersection point.

Importance sampling is one technique. It concentrates the samples on the important parts of the sampling domain, so that your averaged estimate will be closer to the real value than it would be without importance sampling.

Quasi-Monte Carlo (QMC) tries to solve the problem by kind of removing the randomness, but still somehow keeping it. Yes, I know that sounds vague. Basically they are just sequences that give the same values every time you generate them, but still look random. Halton sequences space the samples evenly, so you get a nice distribution. It works the same way as subdividing your sampling area into pieces (like dividing a rect into a grid) and taking a sample inside each of those pieces; that makes the distribution more even, resulting in a better estimate in most cases. This is called stratified sampling. With QMC you basically get the same effect, but you don't need to subdivide, and there is no real randomness going on. That brings some artifacts of its own, so you can still add some randomness while keeping the nice distribution. A sketch of the grid version follows.
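In code, the grid version (stratified or "jittered" sampling) is only a few lines. This is just an illustration with made-up names, using the standard C rand(); swap in Halton points and you get the QMC variant:

#include <cstdlib>
#include <vector>

struct Sample2D { double x, y; };

// One random sample per grid cell: the points can never all clump into
// one corner the way purely random points can, which is where the
// variance (and thus noise) reduction comes from.
std::vector<Sample2D> stratifiedSamples(int gridX, int gridY)
{
    std::vector<Sample2D> samples;
    for (int j = 0; j < gridY; ++j) {
        for (int i = 0; i < gridX; ++i) {
            double jx = std::rand() / (RAND_MAX + 1.0); // jitter in [0,1)
            double jy = std::rand() / (RAND_MAX + 1.0);
            Sample2D s = { (i + jx) / gridX, (j + jy) / gridY };
            samples.push_back(s);
        }
    }
    return samples;
}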

Anyway, I hope that made things a bit clearer. If not, I can put some images on my site that show the results of QMC versus non-QMC etc.

Cheers,
- John (http://www.mysticgd.com in the Mystique section)

quote:
Original post by ApochPiQ: we're just a good density estimation kernel away from a far superior method of visualizing the photon maps


is that a general statement of the kind "what we would need is a better visualization method"

or a specific "I can't talk before Christmas, but I know more than you expect about this far superior method"??

:D




If that's not the help you're after then you're going to have to explain the problem better than what you have. - joanusdmentia

davepermen.net

quote:
Original post by davepermen
quote:
Original post by ApochPiQ: we're just a good density estimation kernel away from a far superior method of visualizing the photon maps


is that a general statement of the kind "what we would need is a better visualization method"

or a specific "I can't talk before Christmas, but I know more than you expect about this far superior method"??

:D









A combination of both

In fact, John (can I call you John? It's shorter), I knew where the noise came from; what I didn't understand is why you want to remove it, and if so, why you generate it in the first place. I hope your explanation helps other rookies understand the noise's origin too. So thanks. =)
For my part, it's the next part that interests me...

quote:
Original post by Anonymous Poster
Quasi-Monte Carlo (QMC) tries to solve the problem by kind of removing the randomness, but still somehow keeping it. Yes, I know that sounds vague


You will laugh, but it's this sentence that made me understand. But maybe I'm still wrong. For each point intersected by a ray, you want to know the contribution of the surrounding environment. So you trace random rays (over just a half sphere, determined by the face the point belongs to?). That is what you meant by keeping the randomness. But random rays generate noise, which is not exactly the same thing as the pseudo-random global contribution you want.

More exactly, you need random rays because you don't know where the most interesting objects for the GI contribution are, and moreover you want all of them.

So because you want to remove the noise, you subdivide the environment into samples and trace the random rays into each one. Then the probability that the variance is small increases with the number of samples, because within a small sample region the objects have a high probability of having the same color or texture.

I had never thought of that. It's brilliant! The most important thing, I think, is the fact that the noise I found good-looking is not actually the best effect; the pseudo-random global contribution of the scene is!

I'm surely wrong on lots of points. I feel you will explain where lol

[edited by - Chuck3d on November 28, 2003 5:22:10 PM]

Guest Anonymous Poster
Lol, of course you can call me John

You are right, Chuck - that's basically the idea you typed there! A little noise makes it look more real, if I say so myself. But it is still better to get the same quality image with 10 MC samples instead of 100, for example, so you still kind of want to get rid of the noise. When a method does not treat all directions the same - so when it basically won't produce a physically correct result - the algorithm is called biased; when your samples are all distributed uniformly, it is unbiased.

Also, you mention the half sphere (hemisphere). For perfectly diffuse materials you would indeed use a hemisphere. If you want glossy reflections, however, you can narrow this 'cone' angle from 180 degrees down to, say, 45. That would give you glossy reflections.

Btw, you can also do depth of field this way, as well as motion blur.

But with stratified sampling you still use random directions; you just choose each random sample within a smaller sampling region (grid cell). Just imagine a hemisphere, and you stick a grid on it and fold it over the sphere. Of course you have to keep the areas of all the grid cells on the hemisphere equal, otherwise you are working biased already.

So you don't subdivide the environment into areas, but the hemisphere in which you generate the directions. That prevents all 10 of the samples you take from going to one side of the hemisphere - with plain random numbers that is possible, which means you don't get an even distribution over the hemisphere. With a more even distribution (stratified sampling, or Halton or Sobol sequences) you are kind of guaranteed a good spread of samples, and so less noise. A sketch of such a fold follows.
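One standard way to do that fold (a sketch only, not from any particular renderer): map the unit square onto the hemisphere so that equal cells in the square become equal solid angles, then feed it stratified or Halton points:

#include <cmath>

struct Vec3 { double x, y, z; };

// Area-preserving map from the unit square to the hemisphere around +z:
// picking cos(theta) uniformly makes equal (u1,u2) cells cover equal
// solid angles, so the stratification carries over without bias.
Vec3 uniformHemisphere(double u1, double u2)
{
    const double PI = 3.14159265358979323846;
    double cosTheta = u1;                         // uniform in [0,1)
    double sinTheta = std::sqrt(1.0 - cosTheta * cosTheta);
    double phi      = 2.0 * PI * u2;
    Vec3 v = { sinTheta * std::cos(phi), sinTheta * std::sin(phi), cosTheta };
    return v;
}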

It's easy to implement and already gives a nice noise reduction. But it's not enough if you want very high quality.

QMC + Importance Sampling + Irradiance cache = sweet

Russian roulette is more a way to prevent an exponential explosion in the number of rays you trace when you sample the incoming light. And since it introduces some randomness, it can even make your results noisier, but it can give a nice speedup (a sketch follows).
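A sketch of the idea (made-up names, and the 0.8 is an arbitrary choice): after a few bounces, kill each path with some probability, and divide the survivors by the survival probability so the average stays correct while the ray count stops exploding:

#include <cstdlib>

// Russian roulette termination for one path. Returns the weight to carry
// into the next bounce, or 0.0 if the path was killed. Dividing the
// survivors by the survival probability keeps the estimator unbiased.
double russianRoulette(double throughput, int depth)
{
    const double survive = 0.8;        // continuation probability
    if (depth > 3) {                   // only gamble after a few bounces
        double u = std::rand() / (RAND_MAX + 1.0);
        if (u >= survive)
            return 0.0;                // path terminated here
        throughput /= survive;         // boost the survivors
    }
    return throughput;                 // trace the next bounce with this
}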
Irradiance caching is very interesting too if you like a speedup

- John

Guest Anonymous Poster
And I forgot to mention adaptive sampling, which you can use to spread the noise more evenly over your image, making it look smoother.

There is lots of stuff to say about this subject. Maybe I'll write a tutorial sometime.

- John

Well well, it's a very interesting topic - I could discuss it all night! Unfortunately it is 5:58 PM for our friends at GameDev, but it is midnight by my watch... damn, life is hard.

Nevertheless I have an idea about the problem. Let us lean on the hemisphere again. I come from software engineering, and this makes me think of the dot product in Gouraud shading. When you want the light intensity at a vertex, you compute the dot product between the vertex normal and the incoming light direction, as you know. When that dot product tends towards zero, the light contributes less at that point. Where I am going with this is that a big part of the incoming light over the hemisphere contributes very little to the final result. So after sampling your hemisphere, when you obtain a value from a ray, you should take into account the spherical coordinates of the point where the ray intersects the hemisphere.
I don't know which angle is the most important, but around it you have the most important random rays.

I think this is physically correct, so you preserve an unbiased result (a sketch of the idea follows).
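In code, the weighting might look like this (just an attempt to put the idea in code; the names are invented, and the incoming radiance is assumed to come from tracing a ray in the sampled direction):

#include <cmath>

struct Vec3 { double x, y, z; };

double dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Weight a uniformly sampled hemisphere direction by the Gouraud-style
// dot product (the cosine), then divide by the uniform pdf 1/(2*pi).
// Sampling directions in proportion to this cosine instead (importance
// sampling) gives the same unbiased answer with less noise.
double sampleContribution(const Vec3& normal, const Vec3& dir,
                          double incomingRadiance)
{
    const double PI = 3.14159265358979323846;
    double cosTheta = dot(normal, dir);          // small near the horizon
    double pdf      = 1.0 / (2.0 * PI);          // uniform hemisphere pdf
    return incomingRadiance * cosTheta / pdf;
}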

!o)

Importance sampling and adaptive sampling are closely related; however, since they both rely on random samples, they still produce noise, just not nearly as much as totally random sampling.

Adaptive sampling is where you send as many samples as you need to reduce noise to an acceptable level. In practice a good adaptive sampler will take a lot of samples in the same directions as a good importance sampler; the difference is just in how they are coded. An importance sampler decides where to send its samples using the BRDF (which gives a scalar coefficient similar to the one used in Gouraud shading), while an adaptive sampler uses the image quality to decide where more samples should be sent in order to reduce error. In practice adaptive samplers are a little messier to implement, since they depend on the image rather than on a simple mathematical function (the BRDF). A bare-bones adaptive loop is sketched below.
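Something like this, with invented names (takeSample() stands in for shooting one more ray for the pixel, and the tolerance is whatever "acceptable noise" means to you):

#include <cmath>

// Keep sampling until the estimated error of the running mean drops
// below a tolerance, or the sample budget runs out.
double adaptiveEstimate(double (*takeSample)(), double tolerance,
                        int maxSamples)
{
    double sum = 0.0, sumSq = 0.0;
    int n = 0;
    while (n < maxSamples) {
        double s = takeSample();
        sum += s;
        sumSq += s * s;
        ++n;
        if (n >= 16) {                          // need a few samples first
            double mean = sum / n;
            double varOfMean = (sumSq / n - mean * mean) / n;
            if (varOfMean < tolerance * tolerance)
                break;                          // estimate looks stable
        }
    }
    return sum / n;
}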


Noise is a fairly controversial subject; some people like a bit of noise for realism, while others (myself included) prefer a cleaner, smoother image. However, in general, the noise in Monte Carlo methods is considered a bad thing because it is not a "correct answer" so to speak. The trick to Monte Carlo methods is deciding how much noise is acceptable and how much makes the image look bad.

However, methods like photon mapping aren't really designed specifically to remove noise; photon mapping produces smooth, clean results as a perk or side effect. The main reason for choosing photon mapping (or radiosity, or other similar methods) is speed. Monte Carlo rendering is very slow, and reducing noise to an acceptable level takes a lot of samples, and therefore a lot of time. It's perfectly reasonable to render an image with photon mapping for a fast and accurate result, and then apply a post-processing filter to add noise or graininess back into the image. For example, Silent Hill 3 renders its scenes using standard polygon rasterization, but adds a bit of grain to the final image to help create a haunting atmosphere.

ApochPiQ: thank you, I did not expect to learn so much on the subject.

I have one last question: do you use any hardware acceleration for that kind of engine?

!o)

[edited by - Chuck3d on November 29, 2003 9:43:46 AM]

quote:
Original post by Chuck3d
ApochPiQ: thank you, I did not expect to learn so much on the subject.

I have one last question: do you use any hardware acceleration for that kind of engine?

!o)

[edited by - Chuck3d on November 29, 2003 9:43:46 AM]





Not yet


There's currently no hardware dedicated to that kind of thing, although some people have done similar things using shader programming languages on the newest video cards. However, as I've discussed elsewhere on this forum, raytracing acceleration hardware is "coming soon."

Here's a very good discussion about raytracing hardware, with links to similar threads. Be warned, though: you're in for a lot of reading.
