# macnihilist

That is usually called 'analytic prefiltering'. The idea is to use ddx/ddy to get an estimate of the area the target pixel covers in texture space, and then to convolve the procedural texture with a pixel reconstruction filter analytically. Slightly simplified, you need to find the average value of the procedural texture over the projected pixel. As you can imagine, this quickly gets complicated, but for box filters and step functions it's often doable. I can't tell you how exactly it would work in your case, but I think with the smoothstep function you are on the right track, because it can be seen as the convolution of a step function with a quadratic filter. (Well, that depends on the smoothstep, but at least for the cubic smoothstep it's true.) If you have access to 'Texturing and Modeling: A Procedural Approach', you could take a look at the chapters about AA; this book also contains some numerical techniques you can use when the analytical stuff fails (or rather, gets too complicated).

EDIT: Depending on your speed and quality requirements, you may want to save yourself the hassle of analytic prefiltering and jump directly to numerical integration. If your function is simple (which it is), you can just brute-force supersample it by evaluating it at a few points inside the projected pixel and averaging. If you want to do something a little fancier you could use Simpson's rule, but I doubt that would be an advantage here, because your function is not smooth enough.
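To illustrate the brute-force supersampling fallback, here is a minimal sketch (the stripe texture, function names, and sample count are made up for illustration; in a shader, `pixel_width` would come from ddx/ddy):

```python
import math

def stripes(x):
    """Hypothetical 1-D procedural texture: a hard 0/1 stripe pattern."""
    return float(math.floor(x) % 2)

def stripes_supersampled(x, pixel_width, n=16):
    """Numerical integration fallback: evaluate the texture at a few
    points inside the projected pixel and average."""
    total = 0.0
    for i in range(n):
        offset = (i + 0.5) / n - 0.5   # stratified offsets in [-0.5, 0.5)
        total += stripes(x + offset * pixel_width)
    return total / n

print(stripes_supersampled(0.5, 0.1))   # deep inside a dark stripe: 0.0
print(stripes_supersampled(1.0, 0.5))   # pixel straddles an edge: 0.5
```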
2. ## help with light problem

I'm not sure this is right, but I would attack the problem with simple geometry. [attachment=18236:light.png]

- For the (originally parallel) rays to meet, they have to have the same optical path length after crossing the h-axis (Fermat's principle). So you have a*n1 + b*n2 = f*n2 (upper ray = lower ray).
- Pythagoras for the triangle gives b^2 = (f-a)^2 + h^2.
- Putting both together gives a = ( sqrt(f^2*n2^2*(2*n2-2*n1)^2 + 4*h^2*n2^2*(n1^2-n2^2)) - f*n2*(2*n2-2*n1) ) / (2*(n1^2-n2^2)) [if I have entered it correctly into Wolfram Alpha ;)]

I have to say the solution seems suspiciously complicated, but I'm pretty sure the basic idea is correct.
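As a sanity check on that suspiciously complicated expression, one can plug the result back into the two original equations numerically. A small sketch (the test values for n1, n2, f, h are arbitrary):

```python
import math

def front_distance(n1, n2, f, h):
    """Distance a from the h-axis to where the upper ray crosses the
    interface, from equating optical path lengths (Fermat) and Pythagoras."""
    disc = f*f*n2*n2*(2*n2 - 2*n1)**2 + 4*h*h*n2*n2*(n1*n1 - n2*n2)
    return (math.sqrt(disc) - f*n2*(2*n2 - 2*n1)) / (2*(n1*n1 - n2*n2))

# verify: upper-ray path length a*n1 + b*n2 must equal the axial f*n2
n1, n2, f, h = 1.0, 1.5, 10.0, 2.0   # arbitrary test values
a = front_distance(n1, n2, f, h)
b = math.sqrt((f - a)**2 + h*h)      # Pythagoras
print(a*n1 + b*n2, f*n2)             # the two should agree
```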

I think I understood some of it, but some things elude me. And I'm probably not alone. One last remark, again, mostly for the grinning and/or confused bystanders: Indeed, you can only measure radiance with respect to a surface orientation. It's even better: you can choose any surface orientation you like. The point is you'll always get the same result. If you tilt the surface normal of the _imaginary_ surface away from the direction in which you're probing for radiance, the _irradiance_ will get lower: less energy per unit area, because of the tilt. Now the cute little cosine factor in the denominator comes in, counterbalances this (exactly like Bacterius said), and brings the result back up again. So the surface orientation you choose for 'measuring' radiance doesn't matter. In other words: radiance is independent of surface orientation. When you go from irradiance to radiance you do two things:

1. You look only in a single direction instead of the whole hemisphere (that's dw).
2. You go from oriented (or real) area to projected area, in order to detach yourself from any concrete surfaces (that's cos(t)).
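The cancellation is easy to see numerically. A tiny sketch (E0 and d_omega are made-up values; the point is only that the result is tilt-independent):

```python
import math

E0 = 3.0        # irradiance at zero tilt (made-up value)
d_omega = 0.01  # small solid angle we're probing (made-up value)

def radiance(theta):
    """Tilt the imaginary surface by theta: the irradiance drops with
    cos(theta), but the cosine in the radiance definition cancels it."""
    E_tilted = E0 * math.cos(theta)            # less energy per unit area
    return E_tilted / (d_omega * math.cos(theta))

for theta in (0.0, 0.3, 1.2):
    print(radiance(theta))   # same value for every surface orientation
```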

7. ## What causes light scattering and absorption?

We've had some mixtures of photon and wave pictures, so I thought I'd try a purely wave-based explanation. I think that's enough to get a rough idea, and you don't have to resort to diffusely defined 'photons'. I'm not a physicist, though, so take the following with a grain of salt.

Let's take a hydrogen atom with a single electron. An electromagnetic wave passes it. The atom sees an oscillating electric field, and the charge distribution changes a little as the electron and the proton try to follow the field. The oscillation is rather weak, because the proton is heavy and can hold the electron in place, but nonetheless the atom behaves like a little oscillating dipole. This means it radiates an electromagnetic wave itself, with the dumbbell-shaped distribution typical for Rayleigh scattering. So the net effect is that a little energy is taken out of the primary wave and transferred into a secondary 'scattered' wave. (A nice derivation is given here: (starts around 45:00, but you probably have to watch the whole thing for it to make sense).)

Now let's say we clump more and more of these little dipoles together into a (say) spherical particle. Each dipole scatters as described above, but interference happens if the dipoles are close together. It is not immediately obvious, but the denser and the more regular the dipoles become, the more destructive interference happens laterally and backwards, and the more constructive interference happens forwards. In the limit all scattering is forward. Think of the transition from (very dilute) water vapor to water droplets in clouds to liquid water to ice (that's maybe a bit oversimplified, but it's a good picture).

The above assumes that no appreciable absorption happens. However, all materials have resonance frequencies where they absorb very strongly (i.e. they can follow the electric field very well). But the charges cannot move freely because of interaction with their neighbors (e.g. collisions), and thus lose a lot of the energy to motion (heat) instead of re-radiating it (dissipative absorption). For visible light, these resonances are mostly easy-to-excite modes of electrons in molecules where electrons have more 'room to move' (e.g. in carotene). Most color pigments work this way. The corresponding resonances in O2, N2, etc. are in the ultraviolet, so these gases appear transparent in visible light (except for the weak non-resonant, or elastic, scattering described above).

It is actually quite interesting to look at resonances outside the visible range. For example, water (obviously a polar molecule) has a rotational resonance (picture the molecule flipping around) in the microwave range. So if you want to heat things that contain water (like food or hamsters), microwaves are a good choice. In addition to rotational modes, many molecules have vibrational modes in the infrared (picture the atoms vibrating relative to each other). That's (more or less) why IR heats things up quite well (e.g. via IR lamps). In the visible range, the modes are mostly low-energy electron transitions, but for most molecules these are in the UV or above. Phew, that got a bit out of hand... ;)

So, finally, to the questions:

> I wonder, is scattering mostly the result of light getting refracted multiple times in non-(perfectly)homogeneous materials?

For very large particles (relative to the wavelength, e.g. raindrops) I'd say you can treat them with geometric optics and think of a ray entering and exiting. For smaller sizes I prefer the picture I outlined above, because I don't know if 'refraction' makes sense at that level.

> It seems to be a bit more complicated for Rayleigh and Mie scattering, but do these effects have any (significant) relevance for scattering effects in, for instance, wax, textiles or opaque materials in general? (for atmospheric effects they clearly have, without any doubt)

Mie scattering is applicable to spherical particles of any size, for example fat 'droplets' in milk. But to simulate the appearance of dense media, you'd probably use a statistical tool like a BSSRDF.

> And what about absorption? What kind of interaction between light and matter leads to wavelength-dependent absorption?

The resonance frequencies I described above, in combination with energy 'loss' (dissipative absorption).

> Also, does the light change its wavelength or is it more the way that photons with a certain wavelength are "sorted out"? (what now that I think of doesn't really make sense, where should they go?)

They are mostly sorted out (absorbed, and their energy is converted into something else). However, it is possible that a photon is absorbed and then a photon with lower energy is emitted (with the remaining energy put into something else, e.g. momentum of the particle). In the wave picture, some frequencies are simply absorbed and converted into heat.
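The dumbbell-shaped dipole radiation pattern mentioned above is what the Rayleigh phase function describes. A small sketch of it, with a numerical check that it is normalized over the sphere (function names are mine):

```python
import math

def rayleigh_phase(cos_theta):
    """Dumbbell-shaped Rayleigh phase function, normalized over the sphere:
    strongest forward and backward, half as strong sideways."""
    return (3.0 / (16.0 * math.pi)) * (1.0 + cos_theta * cos_theta)

# check the normalization: integrating p over all directions should give 1
n = 100_000
total = 0.0
for i in range(n):
    theta = (i + 0.5) / n * math.pi
    total += rayleigh_phase(math.cos(theta)) * math.sin(theta) * (math.pi / n)
total *= 2.0 * math.pi   # integrate out the azimuth
print(total)             # ≈ 1.0
```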
8. ## Oren-Nayar with Blinn-Phong Specular

I've been pondering these questions myself and haven't got a satisfactory answer. But so far I can make out three general approaches for combining glossy and diffuse parts:

1. Simply add two independent parts and make sure the combined weights are <= 1. Obviously this will not reflect more than comes in if both parts are energy 'conserving' (in the sense that they don't reflect more than comes in...). Whether or not you call that 'physically plausible' is up to you.
2. Scale down the diffuse part with a factor resembling C*(1-F(N.L))*(1-F(N.V)), where C is some strange normalization constant obtained by doing the corresponding integral (usually with some approximations and worst-case assumptions). This gives a simple layered material, which is symmetric and energy 'conserving'. For example, Ashikhmin-Shirley uses this (although I have no idea how they came up with the strange 1/2s -- probably just empirically). In real-time rendering the 1-F(N.V) factor is often omitted, which sacrifices symmetry (and some plausibility) for speed.
3. Use a factor C*(1-H(L))*(1-H(V)), where H is the directional-hemispherical reflectance of the glossy part. This is slightly nicer than version 2, because it respects all directions in which the glossy part scatters, but you have to know H (approximately). I don't know how plausible the H(V) part is, but at least it keeps things symmetric. A paper that uses this approach is "A Microfacet-based BRDF Generator" by Ashikhmin et al.

Unfortunately no answers to your concrete questions, but maybe it helps a little.
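For reference, approach 2 as it appears in the Ashikhmin-Shirley model can be sketched like this (rd is the diffuse albedo, rs the specular reflectance at normal incidence; this is my transcription of their diffuse term, so double-check against the paper):

```python
import math

def coupled_diffuse(rd, rs, cos_in, cos_out):
    """Ashikhmin-Shirley-style coupled diffuse term: the diffuse part is
    scaled down toward grazing angles (where the Fresnel part takes over),
    symmetrically in both directions."""
    c = 28.0 / (23.0 * math.pi)   # their normalization constant
    return (c * rd * (1.0 - rs)
            * (1.0 - (1.0 - cos_in / 2.0) ** 5)
            * (1.0 - (1.0 - cos_out / 2.0) ** 5))

print(coupled_diffuse(0.5, 0.04, 1.0, 1.0))  # full strength at normal incidence
print(coupled_diffuse(0.5, 0.04, 0.1, 1.0))  # reduced toward grazing
```

Note that the term is symmetric in cos_in and cos_out, which is exactly the reciprocity property mentioned above.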
9. ## Physically Based Blinn Phong questions

The way I see it, the derivation in the link is a special case for a small circular area light. Small means small enough (as seen from the surface) that you can assume the fr*cos term is constant over the projected area and take it out of the integral. For this case it's correct, but I'm pretty sure you cannot generalize it to arbitrary lights. But, as Bacterius said, for _direct_ illumination from delta lights, you can always pull the pi into the light's intensity.

Example: Consider a point light 1 unit above a surface, normal to the light. Let's say the light causes an irradiance of E=1 at the surface point closest to the light. For a diffuse BRDF we get, for every direction, L_o(wo) = fr*E.

Case 1: With fr = 1 we get L_o = 1 for every direction. The radiant exitance (L_o*cos integrated over the hemisphere) is M = pi. So we have M > E, which means we reflect more than came in.

Case 2: With fr = 1/pi we get exactly M = E, which is correct.

Of course, in case 1 you can always say "my light source was really causing E=1*pi and my BRDF was really 1/pi". You'll get the exact same image (if only direct illumination from this point light is considered), but you save a division by pi.

Bottom line: In my opinion, the pi should always be there. But if you're only doing direct illumination from delta lights and every multiplication counts, you can pull the pi into the light sources. That's only my personal opinion, though, so take it with a pinch of salt.

EDIT: Of course, if you decide to "pull pi into the light source", you have to multiply _every_ BRDF by pi. (And every BRDF component: if you only remove the pi in the diffuse term, you'll obviously shift the balance between diffuse and glossy.)
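The two cases are easy to verify by integrating L_o*cos over the hemisphere numerically. A quick sketch (function name is mine):

```python
import math

def radiant_exitance(fr, E, n=2000):
    """Integrate Lo*cos(theta) over the hemisphere for a diffuse BRDF fr
    under irradiance E (Lo = fr * E is direction-independent)."""
    total = 0.0
    d_theta = (math.pi / 2.0) / n
    for i in range(n):
        theta = (i + 0.5) * d_theta
        # ring of solid angle 2*pi*sin(theta)*d_theta, weighted by cos(theta)
        total += fr * E * math.cos(theta) * 2.0 * math.pi * math.sin(theta) * d_theta
    return total

print(radiant_exitance(1.0, 1.0))            # ≈ pi: reflects more than came in
print(radiant_exitance(1.0 / math.pi, 1.0))  # ≈ 1:  energy balanced
```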
10. ## Physically Based Blinn Phong questions

You have to distinguish between normalizing a BRDF and normalizing a microfacet distribution (and between Phong and Blinn-Phong). This is a relatively nice online resource on the topic: http://www.thetenthplanet.de/archives/255 (n+2)/(2*pi) is for the modified Phong BRDF (i.e. the BRDF without the cosine in the denominator). It is also correct for the Blinn-Phong microfacet distribution. (n+8)/(8*pi) is an approximation commonly used for the modified Blinn-Phong BRDF. As far as I remember it is slightly too large, but in most cases where this BRDF is used it doesn't really matter.
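The (n+2)/(2*pi) factor for the modified Phong BRDF can be checked numerically: at normal incidence (the worst case, where the lobe axis coincides with the normal), integrating fr*cos over the hemisphere should give exactly 1. A sketch (function name is mine):

```python
import math

def reflected_fraction_modified_phong(n, m=20000):
    """Integrate (n+2)/(2*pi) * cos^n(alpha) * cos(theta) over the
    hemisphere at normal incidence, where alpha == theta. Should be 1."""
    total = 0.0
    d_theta = (math.pi / 2.0) / m
    for i in range(m):
        theta = (i + 0.5) * d_theta
        fr = (n + 2.0) / (2.0 * math.pi) * math.cos(theta) ** n
        total += fr * math.cos(theta) * 2.0 * math.pi * math.sin(theta) * d_theta
    return total

print(reflected_fraction_modified_phong(32))  # ≈ 1.0
```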
11. ## Image Based Specular Problem

First, there seems to be some confusion about coordinate spaces. You probably want to convert R from world space into tangent space for the texture lookup. But, as pointed out by CryZe, R is not good for this kind of thing. Consider using a function of the half-way vector instead. Here is a slightly more complicated, but much better, way to do it: http://www.cs.utah.edu/~premoze/dbrdf/ Basically, you paint a microfacet distribution. Given its simplicity, the d-BRDF works pretty well. EDIT: Here is the pdf I wanted to link in the first place, but couldn't find: http://www.cs.utah.edu/~michael/brdfs/facets.pdf
12. ## Behavior of energy conserving BRDF

Well, someone should tell you that this strange Kd-renormalization business is something to reconsider. Maybe I'm missing something, but for me it doesn't work out. Let's say you have (1,1,1), and let's say it's white. Then the renormalization factor is 1; end result (1,1,1). Ok. Now let's say you have (.1,.1,.1), a dark gray. Then the factor is 10; end result (1,1,1) again. So the dark gray turned into white. Probably not what you wanted.

It is also implausible from a physical point of view. "Three photons come in, three have to go out"? Why? It's perfectly valid for a surface to absorb photons at certain energies; that's why most colored things are colored. (Let's stick to the photon picture, even though it is maybe not ideal in this case.) With your logic you are converting two "photons" of a certain energy into photons of another (quite different) energy just so that three come out in the end. If this effect is strong enough to significantly change the color (energy) of photons it is called fluorescence (or, with a time delay, phosphorescence). This is not something that happens in normal materials to an extent that would be relevant for image generation.

To answer the OP's original question: The problem is most likely that you are not tone-mapping your image correctly and everything above 1 is simply clamped. This makes the highlight appear sharper, because part of the soft fall-off is not visible. Highly glossy normalized BRDFs without a proper HDR pipeline are problematic in this regard.
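A two-line sketch makes the dark-gray-to-white problem concrete. (I'm assuming here that the renormalization factor is 1/max(Kd), which matches the numbers above; the exact factor doesn't change the conclusion.)

```python
def renormalize(kd):
    """Assumed Kd renormalization: scale so the largest channel becomes 1.
    This is the scheme being criticized above, not a recommendation."""
    factor = 1.0 / max(kd)
    return tuple(c * factor for c in kd)

print(renormalize((1.0, 1.0, 1.0)))  # white stays white
print(renormalize((0.1, 0.1, 0.1)))  # dark gray also becomes white
```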
13. ## Rendering multi BRDF materials

You just multiply the pdfs of all decisions that led to the new path. (This is only valid if they are independent, but they usually are.) The code at a path vertex typically looks like this:

0. [Shade at current vertex]
1. Sample a new direction:
   1. Select a BSDF component according to pdf_component
   2. Sample a direction wi according to pdf_fr
   3. pdf = pdf_fr * pdf_component
2. Attenuate the throughput:
   1. Evaluate the BRDF component for the sampled wi: fr = brdf_comp(wo, wi)
   2. throughput *= fr/pdf * dot(n, wi)
3. Shoot a ray to get the next vertex

Lines 82-91 of PBRT's path.cpp do exactly this, although the actual sampling is hidden in BSDF::Sample_f.

EDIT: Ah ok, I just looked at Sample_f and they do things a bit differently. They sample the direction from ONE component, but evaluate ALL components with that single direction for the attenuation, and consequently also use the sum of ALL pdfs as the resulting pdf. I guess this is more efficient, but a bit harder to implement. In the end both methods should converge to the same result. Sorry if that confuses you even more -- one more reason to heed the advice below.

Again: I'd recommend you read a good book on the topic or at least look at PBRT's source code. The devil is in the details here, and it's not easy to explain the interrelationships correctly in a forum.
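The steps above can be sketched as follows for a made-up two-component BSDF (all names, components, and pdfs here are illustrative; the glossy component is a stand-in, not a real glossy lobe):

```python
import math, random

def sample_direction_cosine():
    """Cosine-weighted hemisphere sampling; returns (cos_theta, pdf)."""
    u = random.random()
    cos_theta = math.sqrt(u)
    return cos_theta, cos_theta / math.pi   # pdf = cos(theta)/pi

def sample_bsdf(kd, ks):
    """Steps 1.1-1.3: pick a component, sample a direction from it, and
    return (fr of that component, combined pdf, cos of sampled direction)."""
    pdf_component = kd / (kd + ks)          # importance: pick by weight
    if random.random() < pdf_component:
        wi_z, pdf_fr = sample_direction_cosine()
        fr = kd / math.pi                   # diffuse component
        return fr, pdf_fr * pdf_component, wi_z
    else:
        wi_z, pdf_fr = sample_direction_cosine()  # stand-in glossy sampling
        fr = ks / math.pi                         # stand-in glossy component
        return fr, pdf_fr * (1.0 - pdf_component), wi_z

# step 2.2: attenuate the path throughput
fr, pdf, wi_z = sample_bsdf(0.6, 0.3)
throughput = fr / pdf * wi_z
```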
14. ## Rendering multi BRDF materials

> Wouldn't that make that path's result be 3 times brighter? Or is it a rough approximation for the contribution of the other 2 paths, which, if this process is done many times, will then converge?

No, it should come out exactly right. Consider a surface that just reflects 50% and transmits 50%. You have two components, 'reflect .5' and 'transmit .5'. If you hit the surface you first sample a component, let's say with probability 1/2 for each. Then you sample a new direction, attenuate the throughput of the path according to the BRDF (the component), and continue. Let's say we had done this with 4 paths that carry '1 radiance' on average:

Without 1/p: 2 paths pick reflection: .5*1 + .5*1 gets reflected => '1 radiance' accumulated for reflection; divide by 4 paths => 1/4 radiance reflected. 2 paths pick transmission: [...] => 1/4 radiance transmitted.

With 1/p: It's easy to see that if you multiply each result by 2 (1/p) you get the correct answer: 1/2 reflected, 1/2 transmitted.

If you have time you can work this out on paper with different reflection/transmission ratios and probabilities. It should always come out right in the long run, as long as you don't completely forget one component (i.e. assign it a probability of 0).
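Instead of working it out on paper, one can also just simulate the 50/50 example. The sketch below shows that the 1/p weighting gives the right split in expectation even for "unfair" selection probabilities (they just add noise):

```python
import random

def split_estimate(p_reflect, n=200_000, seed=1):
    """Monte Carlo estimate of reflected/transmitted radiance for a
    surface that reflects 50% and transmits 50%, picking the reflection
    component with probability p_reflect and weighting by 1/p."""
    rng = random.Random(seed)
    reflected = transmitted = 0.0
    for _ in range(n):
        if rng.random() < p_reflect:
            reflected += 0.5 * 1.0 / p_reflect           # brdf * radiance / p
        else:
            transmitted += 0.5 * 1.0 / (1.0 - p_reflect)
    return reflected / n, transmitted / n

print(split_estimate(0.5))   # ≈ (0.5, 0.5)
print(split_estimate(0.9))   # also ≈ (0.5, 0.5), just noisier on one side
```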
15. ## Rendering multi BRDF materials

You can sample your paths (almost) however you want, as long as you weight them correctly (with 1/p) and you don't completely ignore any possible path. The worst thing that can happen is that your path tracing gets very inefficient. (But even then you can expect it to converge to the exact solution eventually.) For example, if your BSDF has 3 components, you can choose one of the components with probability 1/3 and then weight the path by 3. In general you should try to pick a component with a probability proportional to the contribution you expect (importance sampling), in order not to waste samples. Your third example does exactly this. Importance sampling is a science in itself, especially for glossy surfaces. But a strategy that is relatively easy to implement (besides uniform sampling) is to pick a component proportional to the directional-hemispherical reflectance of the component.

If you are after layered materials like plastic, you should be aware that they are usually modeled by a Fresnel layer 'above' the diffuse component. So only light that penetrates into the substrate (i.e. light that is not reflected specularly) has a chance to be scattered diffusely. You cannot simply treat these as independent components, but have to blend them according to the angle of incidence.

I hope this helps at least a bit -- unfortunately the details are too complex to explain here. If you really want to dive into path tracing and global illumination, I'd recommend the book 'Physically Based Rendering'.
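The angle-dependent layer blending can be sketched with Schlick's Fresnel approximation (f0 = 0.04 is a typical dielectric value; function names are mine, and a real implementation would of course also sample a direction from the chosen layer):

```python
import math, random

def schlick_fresnel(f0, cos_theta):
    """Schlick's approximation to the Fresnel reflectance."""
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

def pick_layer(f0, cos_theta, rng):
    """Layered-material selection: only light NOT reflected by the Fresnel
    layer reaches the diffuse substrate, so the selection probability
    depends on the angle of incidence."""
    f = schlick_fresnel(f0, cos_theta)
    return "specular" if rng.random() < f else "diffuse"

rng = random.Random(0)
for cos_theta in (1.0, 0.1):
    n = 100_000
    spec = sum(pick_layer(0.04, cos_theta, rng) == "specular" for _ in range(n))
    # grazing angles pick the specular layer far more often
    print(cos_theta, spec / n)
```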