
Path Tracing BSDF



#1 Bacterius  Crossbones+  -  Reputation: 9266

Posted 25 March 2012 - 11:32 PM

*** Follow this project on this journal: http://www.gamedev.net/blog/1455-path-tracing-is-fun/ ***

Hello,
I've been working on my path tracer for a while, and I need some feedback on how to go forward and check if what I've done is correct. Basically what I have is the following (pretty standard):

- generate a camera ray
- for each light ray bounce
--- intersect the ray with the scene
--- calculate the BSDF at the intersection, updating the radiance and generating a new ray according to the BSDF's density function
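
The loop above can be sketched as follows. This is a minimal Python sketch (the original is in Pascal, and the `generate_camera_ray`/`intersect` callables stand in for the camera and scene graph; they are hypothetical interfaces, not from the post):

```python
def trace_path(generate_camera_ray, intersect, max_bounces=5):
    """One pixel's path, following the steps above.

    `intersect` returns None on a miss, or a (color, emittance, next_ray)
    triple produced by the BSDF at the hit point (sketch interface)."""
    ray = generate_camera_ray()              # generate a camera ray
    radiance = [0.0, 0.0, 0.0]
    throughput = [1.0, 1.0, 1.0]
    for _ in range(max_bounces):             # for each light ray bounce
        hit = intersect(ray)                 # intersect the ray with the scene
        if hit is None:                      # ray escaped the scene
            break
        color, emittance, ray = hit          # BSDF result at the intersection
        for i in range(3):
            # accumulate emitted light seen through the path so far,
            # then attenuate the path throughput by the surface color
            radiance[i] += throughput[i] * emittance[i]
            throughput[i] *= color[i]
    return tuple(radiance)
```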

I've tried to make the path tracer as modular as possible, heavily using classes for primitive types (I've got sphere, plane, triangle, and a vertical bounded-open infinitely thin cylinder - the cylinder was mostly for tests and needs to be generalised) and materials (diffuse, specular, plastic, "metal-like", light source, dielectric glass with Fresnel reflection, and a few other test materials). I've got a standard perspective camera with a tweakable FOV setting (hooray). I am a little bummed about my scene graph implementation, though; I seem to be hopeless at implementing octrees, kd-trees, and the like. I don't know why, but I keep failing miserably. That's an issue for a later time (sort of).

Now I am mostly wondering: have I actually got the theory right? I've looked online, and BSDFs apparently deal with angles and ratios, but I fail to understand how one implements that. So this is my interpretation of it (with some help from trusty Google):
BSDF (Bidirectional Scattering Distribution Function)
-> Input: an incident ray, a normal to the intersected surface.
-> Output: a new ray, and an RGB color/emittance pair for the surface (it should be a spectral result but I'm approximating here).

So, for instance, a diffuse BSDF would return a random ray within the hemisphere centred on the normal, a color equal to the surface's color, and an emittance of zero (since it emits no light). A "light source BSDF" (which makes no physical sense but standardises the material types) would return a color of zero and an emittance equal to the intensity of the light times its color. Then you get semi-emissive materials, which return nonzero emittance and color (such as a glowy sphere). For glass, you get a BRDF+BTDF (reflection+refraction) combo, where the two are separated by a random trial controlled by the Fresnel coefficient. And so on... the color/emittance pairs are then combined into a final radiance value, which becomes the pixel's color.
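
The diffuse case described above might look like this (a sketch; note that a production tracer would usually cosine-weight the sample and divide by its probability density, which this naive uniform version omits):

```python
import math
import random

def sample_diffuse_bsdf(normal, surface_color):
    """Diffuse BSDF as described above: a uniformly random direction in the
    hemisphere centred on the normal, the surface's color, and zero emittance."""
    while True:  # rejection-sample a point in the unit ball, then normalise
        d = [random.uniform(-1.0, 1.0) for _ in range(3)]
        if 0.0 < sum(x * x for x in d) <= 1.0:
            break
    length = math.sqrt(sum(x * x for x in d))
    d = [x / length for x in d]
    # flip into the hemisphere centred on the normal
    if sum(a * b for a, b in zip(d, normal)) < 0.0:
        d = [-x for x in d]
    return d, surface_color, (0.0, 0.0, 0.0)
```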

These are the results I get on a few scenes:
Standard Cornell Box
Sphere Pyramid
Cylinder caustics
Random stuff (the dragon on the yellow background is made of glass)

What do you guys think? Is my BSDF implementation physically correct? Or is it blatantly wrong? I have nothing to compare it to (except the standard Cornell box scene, but that's the simplest case and only uses a couple of materials). And I have no glass dragon or pocket Cornell box to test it against in real life.

Thanks for the feedback and help, because I would like to know if what I've done is correct before moving on to the more complex stuff (more complex models and materials, and perhaps bidirectional path tracing). I can share some code if needed, but be warned that it's written in Pascal (I made sure it was readable, so it should be OK).

Also - how should I go about simulating subsurface scattering (which is needed for realistic metal materials)? I was thinking of passing the primitive object to the BSDF so it can calculate limits on where the ray should exit, but that's not very efficient. Is it good enough to just approximate it and have the ray exit from a random origin in the surface normal's plane within a small radius?
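
The approximation proposed above (a random exit point in the tangent plane, near the entry point) can be sketched like this. This is only an illustration of the idea, not validated physics:

```python
import math
import random

def approximate_sss_exit(entry_point, normal, radius):
    """Pick an exit point uniformly within `radius` of the entry point,
    in the plane perpendicular to the surface normal (the approximation
    suggested in the post, as a sketch)."""
    nx, ny, nz = normal
    # build an orthonormal tangent basis (t, b) around the normal
    t = (0.0, 1.0, 0.0) if abs(nx) > 0.9 else (1.0, 0.0, 0.0)
    # b = normal x t, normalised; then t = b x normal
    b = (ny * t[2] - nz * t[1], nz * t[0] - nx * t[2], nx * t[1] - ny * t[0])
    bl = math.sqrt(sum(x * x for x in b))
    b = tuple(x / bl for x in b)
    t = (b[1] * nz - b[2] * ny, b[2] * nx - b[0] * nz, b[0] * ny - b[1] * nx)
    # uniform point in a disc of the given radius
    r = radius * math.sqrt(random.random())
    phi = 2.0 * math.pi * random.random()
    du, dv = r * math.cos(phi), r * math.sin(phi)
    return tuple(p + du * ti + dv * bi
                 for p, ti, bi in zip(entry_point, t, b))
```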

The slowsort algorithm is a perfect illustration of the multiply and surrender paradigm, which is perhaps the single most important paradigm in the development of reluctant algorithms. The basic multiply and surrender strategy consists in replacing the problem at hand by two or more subproblems, each slightly simpler than the original, and continue multiplying subproblems and subsubproblems recursively in this fashion as long as possible. At some point the subproblems will all become so simple that their solution can no longer be postponed, and we will have to surrender. Experience shows that, in most cases, by the time this point is reached the total work will be substantially higher than what could have been wasted by a more direct approach.

- Pessimal Algorithms and Simplexity Analysis

#2 pcmaster  Members  -  Reputation: 685

Posted 26 March 2012 - 05:59 AM

The renders look very nice. What you wrote makes sense to me. You don't have any acceleration structure in place yet? How long does it take to trace such scenes, then?
Regarding SSS, you'll have to read some papers on that, I'm afraid. I could only help with realtime rasterised SSS (mostly for skin, which is quite fake but nice and fast). However, it does sound reasonable that the reflected (scattered) ray should exit randomly somewhere near the entrance with a changed direction and radiance (colour), depending on surface properties (where would you get them from?). That is how it obviously works.

#3 Bacterius  Crossbones+  -  Reputation: 9266

Posted 26 March 2012 - 06:10 AM

The renders look very nice. What you wrote makes sense to me. You don't have any acceleration structure in place, yet? How long do you trace such scenes then?

I do have some kind of octree in place, but it's not efficient (only a 10x speedup or so for the 100k-triangle dragons) and extremely fragile (bad parameters will cause holes in the meshes). The dragon renders took several hours. I'm desperately trying to implement something efficient, but all the kd-tree material I find online is either a) beginner-level tutorials, b) impossibly academic papers, or c) nearest-neighbour searches, which are irrelevant to my problem.

Let's just say it's not quite realtime. The simple scenes take a few seconds to render with average noise, and minutes to hours to make them sweet and smooth looking. The more complex scenes take one to two orders of magnitude more time, which is why I need to find a way to accelerate that. I'd be happy with 12-hour very-high-quality renders (since I can parallelize them over multiple computers), but I'm not OK with 2-week renders.

To give an idea, the "Random stuff" scene as it stands took exactly 27 hours, 11 minutes and 40 seconds (I logged it for performance comparison later on).

However, it does sound reasonable that the reflected (scattered) ray should exit randomly somewhere near the entrance with a changed direction and radiance (colour), depending on surface properties (where to get them from?)

I grab the surface properties from the material "index" for each primitive, which points to a particular material that defines the surface properties. What I was wondering is whether it is acceptable to have rays exit at some point which isn't actually part of the primitive's surface, as an approximation (since ensuring rays always exit from a point located on the primitive's surface is a much harder problem: it's not generic, and not even local to a given primitive).

#4 cignox1  Members  -  Reputation: 723

Posted 27 March 2012 - 02:05 AM

I must say that you have indeed beautiful renders, which deserve more complex scenes (e.g. the Sponza Atrium).

If you have trouble with kd-trees and the like, try something easier: a BIH (bounding interval hierarchy) is not as fast as a kd-tree with SAH, but estimates suggest it reaches 60-70% of its speed, so it may help you a lot, and it is not so hard to implement. (I did it as well, but I would rather not give you my implementation, as for some reason it is really slow.)

You can find the paper by googling, and the ompf.org forum used to have several threads on that topic, with many implementations discussed (including my own). The ompf.org domain no longer points to the forum, but a (temporary?) replacement has been opened. I don't know if those threads have been moved, but if you ask there they will help you.

how should I go about simulating subsurface scattering (which is needed for realistic metal materials)

Honestly, this is the first time I've heard of someone wanting to implement SSS for metals (rather than skin, milk or marble).

And you could consider buying "Physically Based Rendering: From Theory to Implementation": I have the first edition and it is truly amazing. In the second (IIRC) they show how to implement both BVHs and SSS.
If you are interested in ray tracing/GI you should buy that book.

Hope this helps

#5 Bacterius  Crossbones+  -  Reputation: 9266

Posted 27 March 2012 - 02:26 AM

I will definitely need to get my hands on Physically Based Rendering; I've only heard good things about it. I hope to purchase it soon, as it seems to be an excellent resource.

I have seen several mentions of the ompf.org forum scattered around the internet, but as you said, the site is down (and has been for a while, apparently). I wasn't aware a replacement forum was up; I will need to check that out. The folks at ompf seem to be the experts on all things ray tracing.

As for SSS - well, not all metals obviously (my use of the word "metals" was a bit general), but apparently some metals exhibit some form of SSS. I think that has more to do with the actual surface geometry than with the phenomenon itself, though. SSS definitely benefits marble, milk, and the like more, but it is still a required ingredient in realistic path tracing; plastic dragons just don't cut it. That said, it will probably require me to rethink my BSDF implementation design.

#6 InvalidPointer  Members  -  Reputation: 1443

Posted 27 March 2012 - 12:57 PM

Offhand, I think metals are the only substances that explicitly *do not* exhibit scattering behavior. On the other hand, all dielectric materials (read: nonmetallic substances) do exhibit at least minor scattering tendencies.
clb: At the end of 2012, the positions of jupiter, saturn, mercury, and deimos are aligned so as to cause a denormalized flush-to-zero bug when computing earth's gravitational force, slinging it to the sun.

#7 Bacterius  Crossbones+  -  Reputation: 9266

Posted 27 March 2012 - 07:38 PM

Offhand, I think metals are the only substances that explicitly *do not* exhibit scattering behavior. On the other hand, all dielectric materials (read: nonmetallic substances) do exhibit at least minor scattering tendencies.

Quite true. I'm currently experimenting with some SSS models for dielectrics, and it is fairly hard to implement. The concept is very simple, but the actual math of how it works is quite tricky. It also slows down rendering quite a bit, since a physically correct SSS model must take several small ray steps until the ray exits the material.
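
The "several small ray steps" idea can be sketched as a random walk through the medium, with exponentially distributed step lengths and a random new direction at each scattering event (a sketch only; `inside` is an assumed containment predicate, and real models also track absorption and phase functions):

```python
import math
import random

def sss_random_walk(entry, direction, inside, mean_free_path, max_steps=64):
    """Walk through the medium until the path leaves the material.
    Returns the exit position, or None if the walk gives up."""
    pos, d = entry, direction
    for _ in range(max_steps):
        # exponentially distributed free-flight distance
        step = -mean_free_path * math.log(1.0 - random.random())
        pos = tuple(p + step * di for p, di in zip(pos, d))
        if not inside(pos):
            return pos            # walk has exited the material
        # scatter: pick a new direction uniformly on the unit sphere
        z = random.uniform(-1.0, 1.0)
        phi = 2.0 * math.pi * random.random()
        s = math.sqrt(1.0 - z * z)
        d = (s * math.cos(phi), s * math.sin(phi), z)
    return None                   # treated as absorbed
```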

I also noticed that SSS naturally reduces to a standard BSDF (diffuse, specular, even dielectric refraction) if the material is completely opaque or completely transparent. But I guess the standard BSDFs can stay as faster, specialized versions of the actual underlying SSS model.

Is the restriction of having closed models not too... restrictive? As far as I know, SSS requires a bounded volumetric shape to work; otherwise it doesn't make sense.

#8 InvalidPointer  Members  -  Reputation: 1443

Posted 28 March 2012 - 07:17 AM

Watertight meshes are pretty par for the course; I don't think you'd earn any ire by placing that restriction.

Also, instead of mucking about with actually bouncing rays around, I would *highly* suggest using some of Jensen's top-notch work on dipole transmittance. It's really good stuff.

#9 Bacterius  Crossbones+  -  Reputation: 9266

Posted 29 March 2012 - 03:03 PM

Watertight meshes are pretty par for the course; I don't think you'd earn any ire by placing that restriction.

Also, instead of mucking about with actually bouncing rays around, I would *highly* suggest using some of Jensen's top-notch work on dipole transmittance. It's really good stuff.

Thanks for the link, I will have a look. Simulating SSS with tiny ray bounces is far too computationally intensive.

#10 Bacterius  Crossbones+  -  Reputation: 9266

Posted 10 April 2012 - 10:57 PM

Sorry for the long absence from this thread. I am currently looking at bidirectional path tracing to improve my render speeds, then I will implement a BVH (and then I'll look at implementing the materials I'm missing, e.g. SSS, among other things).

I like bidirectional path tracing because it completely solves the inconsistency issue I'm facing with lights: right now I am forced to give them a null BSDF, which is kind of stupid, whereas with BDPT they will have their own status, which is more logical.

I have come to the conclusion that BDPT with naive importance sampling (i.e. averaging all the paths) is fairly easy to implement, just a bit tricky with all the special cases. Multiple importance sampling, however, is much more difficult.

Hopefully I can make some time for trying to implement BDPT in between uni work.

#11 Bacterius  Crossbones+  -  Reputation: 9266

Posted 22 April 2012 - 10:42 PM

I just noticed I was missing normal smoothing (the glass buddha I'm currently rendering looks particularly bad because of this). This should be an easy fix, provided I can get my hands on vertex normal data for the models I use. I am not sure yet how to handle the case where this information is missing: it would require some mesh analysis to see which vertices are shared, and averaging the normals of adjacent faces. That would mean rethinking how the primitive classes are designed (I will look at it when I implement geometry instancing).
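
The mesh analysis described (find shared vertices, average the normals of adjacent faces) can be sketched as follows. This is a generic sketch, not the post's Pascal code; faces are index triples into the vertex list, and the cross products are area-weighted, so large faces contribute more:

```python
import math

def compute_vertex_normals(vertices, faces):
    """One averaged normal per vertex, from the adjacent faces' normals."""
    normals = [[0.0, 0.0, 0.0] for _ in vertices]
    for i, j, k in faces:
        a, b, c = vertices[i], vertices[j], vertices[k]
        u = [b[n] - a[n] for n in range(3)]
        v = [c[n] - a[n] for n in range(3)]
        # face normal = u x v (unnormalised, hence area-weighted)
        fn = [u[1] * v[2] - u[2] * v[1],
              u[2] * v[0] - u[0] * v[2],
              u[0] * v[1] - u[1] * v[0]]
        for idx in (i, j, k):     # accumulate on each shared vertex
            for n in range(3):
                normals[idx][n] += fn[n]
    result = []
    for nrm in normals:
        length = math.sqrt(sum(x * x for x in nrm))
        result.append(tuple(x / length for x in nrm)
                      if length > 0 else (0.0, 0.0, 0.0))
    return result
```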

#12 jameszhao00  Members  -  Reputation: 271

Posted 23 April 2012 - 06:04 AM

Are you randomly choosing diffuse/specular/etc. samples per intersection? Or are you generating N rays (if a surface has diffuse+specular, N = 2) per intersection?

Also, have you experienced any gotchas with importance sampling the specular?

#13 Bacterius  Crossbones+  -  Reputation: 9266

Posted 23 April 2012 - 12:27 PM

I'm randomly choosing a ray according to the material's BSDF (and only one ray per intersection; so if there is, for instance, the possibility for a ray to be either refracted or reflected, I choose the outcome with a random trial by calculating the Fresnel coefficient). I'm going to implement more features soon, when I get the time.

Also, have you experienced any gotchas with importance sampling the specular?

I don't follow you; there is only one possible outcome for specular surfaces (the PDF is a Dirac delta function), so AFAIK you can't importance sample it. Or do you mean when the ray can either be diffusely reflected or specularly reflected? I don't use importance sampling in that case either; I do a random trial (the probability describes how specular the surface is).
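
The Fresnel random trial mentioned in this exchange can be sketched as below. Schlick's approximation is used here for the coefficient purely as an illustration; the post does not say which Fresnel form the tracer actually computes:

```python
import random

def schlick_fresnel(cos_theta, n1, n2):
    """Schlick's approximation of the Fresnel reflectance for a ray going
    from a medium with refractive index n1 into one with index n2."""
    r0 = ((n1 - n2) / (n1 + n2)) ** 2
    return r0 + (1.0 - r0) * (1.0 - cos_theta) ** 5

def choose_reflect_or_refract(cos_theta, n1, n2):
    """The random trial: reflect with probability equal to the Fresnel
    coefficient, otherwise refract."""
    if random.random() < schlick_fresnel(cos_theta, n1, n2):
        return "reflect"
    return "refract"
```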

#14 jameszhao00  Members  -  Reputation: 271

Posted 23 April 2012 - 02:14 PM

Ah, sorry, I meant microfacet-based specular (e.g. a Phong distribution).

diffusely reflected or specularly reflected

By 'specularly reflected' do you mean a mirror, and by 'diffusely', glossy? In the glossy case, there are direction generators/PDFs for importance sampling the distributions (Phong, etc.). I don't know how that compares to 'random trials'.

I choose the outcome using a random trial by calculating the Fresnel coefficient

Ah ok.

#15 Bacterius  Crossbones+  -  Reputation: 9266

Posted 23 April 2012 - 02:50 PM

Ah, no - by specular I mean perfectly specular (a mirror). I'm not sure what the correct term is, but yes, glossy would be my "wet plastic" material, which basically chooses a random diffuse ray, calculates the perfectly specular ray, and then does a random trial based on a parameter between 0 and 1 to decide which ray should be used. It works pretty well (of course there are other generators to do it; this is just the most basic and obvious one).
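
The "wet plastic" material described above reduces to a very small sketch. The two direction generators are assumed callables (hypothetical interface, not the post's actual Pascal design):

```python
import random

def wet_plastic_sample(sample_diffuse, mirror_reflect, glossiness):
    """Random trial between the mirror direction and a diffuse direction:
    pick the mirror ray with probability `glossiness` (0..1), otherwise a
    random diffuse ray."""
    if random.random() < glossiness:
        return mirror_reflect()
    return sample_diffuse()
```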

Random trials have the issue that they are quite noisy, so if one can importance sample the distribution instead, it's much better. But sometimes it isn't possible (for instance, dielectric refraction cannot be importance sampled; then again, the Fresnel coefficient is in general either very high or very low, so the noise is manageable).

#16 Bacterius  Crossbones+  -  Reputation: 9266

Posted 25 April 2012 - 05:56 PM

Here is the latest render. As you can see, non-smoothed normals are pretty obvious, and they look especially bad on the glass buddha. I will be using this as my test scene from now on, as it has a fairly large number of triangles (roughly 270k) and allows me to test various materials and settings. Later I might wrap an environment map around it, when I get around to implementing that. The lighting is a bit too extreme; I'm trying to put a table lamp model in there, but I can't seem to find a nice high-poly one for free.

#17 jameszhao00  Members  -  Reputation: 271

Posted 26 April 2012 - 03:07 AM

Wow, how long did this render take? Love the stained glass buddha. The indirect lighting in the backside/shadows looks a tad dark.

By the way, what parts of your code are vectorized? Did you experience any tradeoffs between vectorizing an entire computation vs. splitting the computation at branches? For example,

k = a() * condition() * b()

vs

k = a()
if(condition()) k *= b()

#18 Bacterius  Crossbones+  -  Reputation: 9266

Posted 26 April 2012 - 01:20 PM

I don't really know how long it took, because I had to stop and resume it several times (I needed the processor for something else), but it did take a while. It was originally rendered at 1920x1080, but I downsampled it to make it look nicer.

By the way, what parts of your code are vectorized? Did you experience any tradeoffs between vectorizing an entire computation vs. splitting the computation at branches? For example,

I haven't actually vectorized anything yet, because I am still figuring out a lot of the theory, but in general I've tried to code it in an efficient way (lots of loop hoisting, precomputations, and organizing computations so that I can vectorize nicely when I get around to it). It's mostly all simple, standard code at the moment.

Note that I will be releasing the code freely, but for that I need more time (I'm quite busy right now, and I can't really do anything in 10-minute sessions; I need to really sit down for a few hours to be productive).

EDIT: I have created a journal for this project, because I think people are getting annoyed seeing this thread bubble back to the surface repeatedly. Link here: http://www.gamedev.n...tracing-is-fun/

#19 jameszhao00  Members  -  Reputation: 271

Posted 27 April 2012 - 06:38 PM

I do have some kind of octree in place but it's not efficient (only a 10x speedup or so for the 100k-triangle-each dragons) and extremely fragile (bad parameters will cause holes in the meshes).

Had the same problem today when I was implementing an acceleration structure (AABBs with uniform grids). It turns out the holes in my renders were caused by:
1. Floating-point precision issues (e.g. axis-aligned planes extremely close to a cell boundary) - I have to be relatively conservative now.
2. Not invalidating intersections that occur outside the current cell/voxel - it turns out something in another cell can actually be closer than the intersection found there.
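
Point 2 can be sketched as a validity check on the hit's ray parameter against the cell's parametric interval (a hypothetical helper, not from either poster's code):

```python
def accept_hit_in_cell(t_hit, t_cell_enter, t_cell_exit, eps=1e-6):
    """A hit found while testing a cell's triangles is only valid if it lies
    inside that cell's [enter, exit] interval along the ray; otherwise a
    triangle in a later cell may actually be closer, so the hit must be
    discarded and traversal continued."""
    return t_cell_enter - eps <= t_hit <= t_cell_exit + eps
```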
