macnihilist

  1. That is usually called 'analytic prefiltering'. The idea is to use ddx/ddy to get an estimate of the area the target pixel covers in texture space, and then to convolve the procedural texture with a pixel reconstruction filter analytically. Slightly simplified, you need to find the average value of the procedural texture over the projected pixel. As you can imagine, this quickly gets complicated, but for box filters and step functions it's often doable. I can't tell you how exactly it would work in your case, but I think with the smoothstep function you are on the right track, because it can be seen as the convolution of a step function with a quadratic filter. (Well, that depends on the smoothstep, but at least for the cubic smoothstep it's true.) If you have access to 'Texturing and Modeling - A Procedural Approach', you could take a look at the chapters about AA; this book also contains some numerical techniques you can use when the analytical stuff fails (or gets too complicated, rather). EDIT: Depending on your speed and quality requirements, you may want to save yourself the hassle of analytic prefiltering and jump directly to numerical integration. If your function is simple (which it is), you can just brute-force supersample it by evaluating it at a few points inside the projected pixel and averaging. If you want to do something a little fancier you could use Simpson's rule, but I doubt this will be an advantage here, because your function is not smooth enough.
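For the concrete case of a hard step, the box-filtered (analytically prefiltered) version is easy to write down. Here is a minimal sketch; the function name and parameters are my own, and in a shader the filter width w would typically come from fwidth(x), i.e. |ddx(x)| + |ddy(x)|:

[code]
#include <algorithm>

// Minimal sketch: step(edge, x) averaged over a footprint of width w centred at x,
// i.e. the convolution of the step with a box filter of width w.
float filteredStep(float edge, float x, float w)
{
    w = std::max(w, 1e-6f);                               // guard against a zero footprint
    return std::clamp((x - edge) / w + 0.5f, 0.0f, 1.0f); // average of the step over [x - w/2, x + w/2]
}
[/code]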
  2. I'm not sure this is right, but I would attack the problem with simple geometry.

     [attachment=18236:light.png]

     - For the (originally parallel) rays to meet, they have to have the same optical path length after crossing the h-axis (Fermat's principle). So you have a*n1 + b*n2 = f*n2 (upper ray = lower ray).
     - Pythagoras for the triangle gives b^2 = (f-a)^2 + h^2.
     - Putting both together gives a = ( sqrt(f^2*n2^2*(2*n2-2*n1)^2 + 4*h^2*n2^2*(n1^2-n2^2)) - f*n2*(2*n2-2*n1) ) / (2*(n1^2-n2^2)) [if I have entered it correctly into Wolfram Alpha ;)]

     I have to say the solution seems suspiciously complicated, but I'm pretty sure the basic idea is correct.
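For anyone checking the algebra: eliminating b from the two equations above gives a quadratic in a, and the quadratic formula then yields the stated expression,

\[
b = f - \frac{n_1}{n_2}\,a, \qquad
\left(f - \frac{n_1}{n_2}\,a\right)^2 = (f-a)^2 + h^2
\;\Rightarrow\;
(n_1^2 - n_2^2)\,a^2 + 2 f n_2 (n_2 - n_1)\,a - h^2 n_2^2 = 0 .
\]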
  3. I think I understood some of it, but some things elude me. And I'm probably not alone. One last remark, again, mostly for the grinning and/or confused bystanders: Indeed, you can measure radiance only with respect to a surface orientation. It's even better: you can choose any surface orientation you like. The point is you'll always get the same result. If you tilt the surface normal of the _imaginary_ surface away from the direction in which you're probing for radiance, the _irradiance_ will get lower. Less energy per unit area. Because of the tilt. Now the cute little cosine factor in the denominator comes in, counterbalances this (exactly like Bacterius said), and brings the result back up again. So, the surface orientation you choose for 'measuring' radiance doesn't matter. In other words: radiance is independent of surface orientation. When you go from irradiance to radiance you do two things: 1. You look only in a single direction instead of the whole hemisphere (that's dw). 2. You go from oriented (or real) area to projected area, in order to detach yourself from any concrete surfaces (that's cos(t)).
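Written out, both points are visible in the standard definition of incident radiance (same symbols as above),

\[
L(x,\omega) \;=\; \frac{\mathrm{d}E_\omega(x)}{\cos\theta \,\mathrm{d}\omega},
\]

where dE_w is the irradiance due to light arriving from within dw, and theta is measured against whatever surface normal you chose: tilting the imaginary surface scales the irradiance by cos(theta), and the division by cos(theta) cancels it again, so the result does not depend on that choice.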
  4. I can't quite follow your argument with this strange 'non-Lambertian geometry', but if you define your emitter in such a way that it emits infinite radiance at grazing angles, then you'll see infinite radiance when you look at it at grazing angles. But this has nothing to do with the general definition of radiance. It is also not a very useful definition. Anyway, I just wanted to say two things for other people who may be reading this: 1. Bacterius is absolutely right about the cosine factor. 2. Incident radiance at a point x from a direction w has nothing to do with whether the surface on which x lies is Lambertian or not. Incident radiance is called incident radiance because it's measured before it interacts with the surface. It is also independent of the surface orientation, as described above.
  5. [quote]I do not understand exactly what you mean by this. Could you elaborate a bit? The way I thought about it: imagine a hemisphere around the receiver center point's normal. In the first configuration, the emitting surface would be close to the zenith of this hemisphere. In the second configuration, the surface would be closer to the azimuth and thus the cosine factor would be different. Is this what you mean?[/quote] That's what I meant. Although it wasn't quite correct of me to speak of 'the' cosine factor when a finite solid angle is involved, because there are many cosine factors; one for each direction in the direction bundle that makes up the solid angle. But each factor is smaller in (b) than the corresponding factor in (a), so it's safe to say that the irradiance will be lower due to 'the' cosine factor. [quote]Well in that case, yes, radiance is lower. Think about what happens when you aim the emitter perpendicular to the receiver: the projected area goes to zero and no light moves towards the receiver, so radiance is zero, as expected. So a lower projected area for the emitter -> lower radiance, by the factor you stated in your first post. The blue illustration in the diagram was confusing, though. And it's important not to confuse the cosine term in the emitter's projected area with the cosine term for the irradiance's incidence area, they are not the same![/quote] I have to disagree; in my opinion you'll get approximately the same results. - Radiance is the same. You look up from the receiver in a single direction. Either you see the emitter, then you get the radiance it emits, or you don't see the emitter, then you get zero. The orientation of the emitter doesn't matter since it's defined to emit equally in all directions. - Irradiance is lower in (b) than in (a). This time not because of the cosine factor(s), but because of the smaller solid angle. - Power is also lower.
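In symbols, with Omega the solid angle subtended by the emitter as seen from the receiver point,

\[
E \;=\; \int_{\Omega} L_i(\omega)\,\cos\theta_\omega \,\mathrm{d}\omega ,
\]

so both discussions above fall out of the same formula: tilting the receiver shrinks every cosine in the bundle, while tilting the emitter shrinks the solid angle Omega itself (with L_i unchanged for an emitter that radiates equally in all directions).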
  6. I think part of your confusion is that you're not asking the full question radiance answers. Radiance is energy flowing at a certain point in a certain direction. (Technically differential area and solid angle, but for all practical purposes (TM) it's a point and a direction.) Radiance is always L(x,w) and you have to pick a point (which you did) _and_ a direction (which you didn't -- at least not explicitly). For incident radiance, imagine standing at x and looking into direction w with an extremely small fov, so small that it is just a ray. What you see then is the radiance. It doesn't make sense to ask about the radiance a point receives without specifying a single direction -- you can at most ask for the average radiance (averaged over a finite solid angle). If the receiver is on the left in your images, you are actually illustrating irradiance, because you are using a finite solid angle. Similarly, it doesn't make sense to ask about the radiance of a finite surface. I think it's best to always talk about radiance with respect to an imaginary surface perpendicular to the chosen direction. In all settings I have ever encountered you were free to choose the surface normal, so why not choose the easiest configuration, in which the confusing cosine term simply disappears. Assuming the receiver is on the left and the emitter on the right in your images: Incident radiance (say) at the center of the receiver (x) coming from the direction pointing to the center of the emitter (w) is the same in both configurations. It's also the same as the exitant radiance from the center of the emitter toward the center of the receiver. Irradiance at every point of the receiver is smaller for the angled configuration. The solid angle covered by the emitter is the same, but in the case of irradiance you are not free to choose the normal of the surface (since you are asking specifically about the irradiance of a concrete surface). So in this case the cosine factor is not 1 and it'll give you a smaller value. Power collected in total by the receiver is smaller, too (obviously, if irradiance at every point is).
  7. We've had some mixtures of photon and wave pictures, so I thought I'd try it with a purely wave-based explanation. I think that's enough to get a rough idea and you don't have to resort to vaguely defined 'photons'. I'm not a physicist, though, so you should take the following with a grain of salt. Let's take a hydrogen atom with a single electron. An electromagnetic wave passes it. The atom sees an oscillating electric field, and the charge distribution changes a little as the electron and the proton try to follow the electric field. The oscillation is rather weak, because the proton is heavy and can hold the electron in place, but nonetheless the atom behaves like a little oscillating dipole. This means it radiates an electromagnetic wave itself -- with the dumbbell-shaped distribution typical for Rayleigh scattering. So, the net effect is that a little energy gets taken out of the primary wave and transferred into a secondary 'scattered' wave. (A nice derivation is given here: [url]http://www.youtube.com/watch?v=4thJXIGovrU[/url] (starts around 45:00, but you probably have to watch the whole thing for it to make sense).) Let's say we have more and more of these little dipoles clumped together into a (say) spherical particle. Each dipole will scatter as described above, but interference will happen if the dipoles are close together. It is not immediately clear, but the denser and the more regular the dipoles become, the more destructive interference will happen laterally and backwards and the more constructive interference will happen forwards. In the limit all scattering is forward. Think of the transition from (very dilute) water vapor to water droplets in clouds to liquid water to ice (that's maybe a bit oversimplified, but it's a good picture). The above assumes that no appreciable absorption happens. However, all materials have resonance frequencies, where they absorb very strongly (i.e. they can follow the electric field very well). But they cannot move freely because of interaction with their neighbors (e.g. collisions) and thus lose a lot of the energy to motion (heat) instead of re-radiating it (dissipative absorption). For visible light, these resonances are mostly easily excited modes of electrons in molecules where electrons have more 'room to move' (e.g. in carotene). Most color pigments work this way. The corresponding resonances in O2, N2, etc. are in the ultraviolet, so they appear transparent in visible light (except for the weak non-resonant or elastic scattering described above). It is actually quite interesting to look at resonances outside of the visible range. For example, water (obviously a polar molecule) has a rotational resonance (picture the molecule flipping around) in the microwave range. So, if you want to heat things up that contain water (like food or hamsters), microwaves are a good choice. In addition to rotational modes, many molecules have vibrational modes in the infrared (picture the atoms vibrating relative to each other). That's (more or less) why IR heats up things quite well (e.g. via IR lamps). In visible light, the modes are mostly low-energy electron transitions, but for most molecules these are in the UV or above. Phew, that got a bit out of hand... ;) So, finally, to the questions:

     [quote]I wonder, is scattering mostly the result of light getting refracted multiple times in non-(perfectly)homogeneous materials?[/quote]
     For very large particles (relative to the wavelength, e.g. raindrops) I'd say you can treat them by geometric optics and think of a ray entering and exiting. For smaller sizes I prefer the picture I outlined above, because I don't know if 'refraction' makes sense at that level.

     [quote]It seems to be a bit more complicated for Rayleigh and Mie scattering, but do these effects have any (significant) relevance for scattering effects in e.g. wax, textiles or opaque materials in general? (for atmospheric effects they clearly have, without any doubt)[/quote]
     Mie scattering is applicable to spherical particles of any size, for example fat 'droplets' in milk. But to simulate appearance for dense media, you'd probably use a statistical tool like a BSSRDF.

     [quote]And what about absorption? What kind of interaction between light and matter leads to wavelength-dependent absorption?[/quote]
     The resonance frequencies I described above, in combination with energy 'loss' (dissipative absorption).

     [quote]Also, does the light change its wavelength, or is it more the way that photons with a certain wavelength are "sorted out"? (which, now that I think of it, doesn't really make sense -- where should they go?)[/quote]
     They are mostly sorted out (absorbed, and their energy converted into something else). However, it is possible that a photon is absorbed and then a photon with lower energy is emitted (with the remaining energy put into something else, e.g. momentum of the particle). In the wave picture, some frequencies are simply absorbed and converted into heat.
  8. I've been pondering these questions myself and haven't got a satisfactory answer. But so far I can make out 3 general approaches for combining glossy and diffuse parts: 1. Simply add two independent parts and make sure the combined weights are <= 1. Obviously this will not reflect more than comes in if both parts are energy 'conserving' (in the sense that they don't reflect more than comes in...). Whether or not you call that 'physically plausible' is up to you. 2. Scale down diffuse with a factor resembling C*(1-F(N.L))*(1-F(N.V)), where C is some strange normalization constant obtained by doing the corresponding integral (usually with some approximations and worst-case assumptions). This gives a simple layered material, which is symmetric and energy 'conserving'. For example, Ashikhmin-Shirley uses this (although I have no idea how they came up with the strange 1/2s -- probably just empirically). In real-time rendering the 1-F(N.V) is often omitted, which sacrifices symmetry (and some plausibility) for speed. 3. Use a factor C*(1-H(L))*(1-H(V)), where H is the directional hemispherical reflectance of the glossy part. This is slightly nicer than version 2, because it respects all directions in which the glossy part scatters, but you have to know H (approximately). I don't know how plausible the H(V) part is, but at least it keeps things symmetric. A paper that uses this approach is "A Microfacet-based BRDF Generator" by Ashikhmin et al. Unfortunately no answers to your concrete questions, but maybe it helps a little.  
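For reference, here is a minimal sketch of approach 2. The names and the use of Schlick's approximation are my own choices, and C stands for the normalization constant mentioned above (not derived here):

[code]
// Minimal sketch of approach 2: the diffuse lobe is scaled down by the Fresnel energy
// that was NOT reflected specularly, for both the light and the view direction.
float fresnelSchlick(float f0, float cosTheta)
{
    float m = 1.0f - cosTheta;
    return f0 + (1.0f - f0) * m * m * m * m * m;   // Schlick's approximation
}

// kd: diffuse albedo, f0: specular reflectance at normal incidence,
// C: normalization constant from the corresponding integral.
float coupledDiffuse(float kd, float f0, float NdotL, float NdotV, float C)
{
    return C * kd * (1.0f - fresnelSchlick(f0, NdotL))
                  * (1.0f - fresnelSchlick(f0, NdotV));
}
[/code]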
  9. The way I see it, the derivation in the link is a special case for a small circular area light. Small means small enough (as seen from the surface) that you can assume the fr*cos term is constant over the projected area and take it out of the integral. For this case it's correct, but I'm pretty sure you cannot generalize this to arbitrary lights. But, as Bacterius said, for _direct_ illumination from delta lights, you can always pull the pi into the light's intensity. Example: Consider a point light 1 unit above a surface, with the surface normal pointing at the light. Let's say the light causes an irradiance of E=1 at the surface point closest to the light. For a diffuse BRDF we get for every direction L_o(wo) = fr*E. Case 1: With fr = 1 we get L_o = 1 for every direction. The radiant exitance (L_o*cos integrated over the hemisphere) is M = pi. So we have M > E, which means we reflect more than came in. Case 2: With fr = 1/pi we get exactly M = E, which is correct. Of course, in case 1 you can always say "My light source was really causing E=1*pi and my BRDF was really 1/pi". You'll get the exact same image (if only direct illumination from this point light is considered), but you save a division by pi. Bottom line: In my opinion, the pi should always be there. But if you're only doing direct illumination from delta lights and every multiplication counts, you can pull the pi into the light sources. But that's only my personal opinion, so you should take it with a pinch of salt. EDIT: Of course, if you decide to "pull pi into the light source", you have to multiply _every_ BRDF by pi. (And every BRDF component, e.g. if you only remove pi in the diffuse term, you'll obviously shift the balance between diffuse and glossy.)
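The one step left implicit above is the hemisphere integral for the radiant exitance. With constant outgoing radiance L_o,

\[
M \;=\; \int_{\Omega} L_o \cos\theta \,\mathrm{d}\omega
\;=\; L_o \int_{0}^{2\pi}\!\!\int_{0}^{\pi/2} \cos\theta \,\sin\theta \,\mathrm{d}\theta \,\mathrm{d}\phi
\;=\; \pi L_o ,
\]

which gives M = pi*E for fr = 1 (case 1) and M = E for fr = 1/pi (case 2).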
  10. You have to distinguish between normalizing a BRDF and normalizing a microfacet distribution. (And between Phong and Blinn-Phong.) This is a relatively nice online resource on the topic: http://www.thetenthplanet.de/archives/255

      The factor (n+2)/(2*pi) is for the modified Phong BRDF (i.e. the BRDF without the cosine in the denominator); it is also correct for the Blinn-Phong microfacet distribution. The factor (n+8)/(8*pi) is an approximation commonly used for the modified Blinn-Phong BRDF. As far as I remember, it is slightly too large, but in most cases where this BRDF is used it doesn't really matter.
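To make the two cases concrete, a minimal sketch (names are mine; the dot products are assumed to be clamped to [0,1]):

[code]
#include <cmath>

const float PI = 3.14159265358979f;

// Modified Phong BRDF (no cosine in the denominator), exact normalization:
float modifiedPhong(float RdotL, float n)
{
    return (n + 2.0f) / (2.0f * PI) * std::pow(RdotL, n);
}

// Modified Blinn-Phong BRDF, with the common approximate normalization:
float modifiedBlinnPhong(float NdotH, float n)
{
    return (n + 8.0f) / (8.0f * PI) * std::pow(NdotH, n);
}
[/code]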
  11. First, there seems to be some confusion about coordinate spaces. You probably want to convert R from world space into tangent space for the texture lookup. But, as pointed out by CryZe, R is not good for this kind of thing. Consider using a function of the half-way vector. Here is a slightly more complicated, but much better way to do it: [url="http://www.cs.utah.edu/~premoze/dbrdf/"]http://www.cs.utah.edu/~premoze/dbrdf/[/url] Basically, you paint a microfacet distribution. Given its simplicity, the d-BRDF works pretty well. EDIT: Here is the pdf I wanted to link in the first place, but couldn't find: http://www.cs.utah.edu/~michael/brdfs/facets.pdf
  12. Well, this strange Kd-renormalization business is something to reconsider. Someone should tell you. Maybe I'm missing something, but for me it doesn't work out. Let's say you have (1,1,1), and let's say it's white. Then the renormalization factor is 1, end result (1,1,1). Ok. Let's say you have (.1,.1,.1), a dark gray. Then the factor is 10, end result (1,1,1) again. So the dark gray turned into white. Probably not what you wanted. It is also implausible from a physical point of view. "Three photons come in, three have to go out"? Why? It's perfectly valid for a surface to absorb photons at certain energies; that's why most colored things are colored. (Let's stick to the photon picture, although it is maybe not ideal in this case.) With your logic you are converting 2 "photons" of a certain energy into photons of another (quite different) energy just so three come out in the end. If this effect is strong enough to significantly change the color (energy) of photons it is called fluorescence (or, with a time delay, phosphorescence). This is not something that happens for normal materials to an extent that would be relevant for image generation. To answer the OP's original question: The problem is most likely that you are not tone-mapping your image correctly and everything above 1 is simply clamped. This lets the highlight appear sharper, because part of the soft fall-off is not visible. Highly glossy normalized BRDFs without a proper HDR pipeline are problematic in this regard.
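To illustrate the clamping point, a minimal sketch (Reinhard's operator is just one assumed choice of tone map, not necessarily what the OP should use):

[code]
// Minimal sketch: what happens to a highlight value c >= 0.
// Plain clamping cuts off the soft fall-off above 1; a tone-mapping operator
// (Reinhard here, as one assumed choice) compresses it smoothly instead.
float clamped(float c)  { return c < 1.0f ? c : 1.0f; }
float reinhard(float c) { return c / (1.0f + c); }
[/code]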
  13. [quote name='hick18' timestamp='1305827589' post='4813099'] But if I was going to extend that to incorporate global illumination, and instead sample some direction in the hemisphere for the diffuse BRDF, with a cos distribution, then now I have another pdf. Where does this second pdf plug in? [/quote] You just multiply the pdfs of all decisions that lead to the new path. (This is only valid if they are independent, but they usually are.) The code at a path vertex typically looks like this:

      [0. Shade at current vertex]
      1. Sample new direction
         1.1. Select BSDF component according to pdf_component
         1.2. Sample direction wi according to pdf_fr
         1.3. pdf = pdf_fr * pdf_component;
      2. Attenuate throughput
         2.1. Evaluate the BRDF component for the sampled wi: fr = brdf_comp(wo, wi)
         2.2. throughput *= fr/pdf * dot(n, wi)
      3. Shoot ray to get next vertex

      Lines 82-91 of [url="https://github.com/mmp/pbrt-v2/blob/master/src/integrators/path.cpp#L82"]PBRT's path.cpp[/url] do exactly this, although the actual sampling is hidden in BSDF::Sample_f. EDIT: Ah ok, I just looked at [url="https://github.com/mmp/pbrt-v2/blob/master/src/core/reflection.cpp#L513"]Sample_f[/url] and they do things a bit differently. They sample the direction from ONE component, but evaluate ALL components with that single direction for the attenuation, and consequently also use the sum of ALL pdfs as the resulting pdf. I guess this is more efficient, but a bit harder to implement. In the end both methods should converge to the same result. Sorry if that confuses you only more -- one more reason to heed the advice below. Again: I'd recommend you read a good book on the topic or at least look at PBRT's source code. The devil is in the details here and it's not easy to explain the interrelationships correctly in a forum.
  14. [quote name='hick18' timestamp='1305749381' post='4812709'] [quote]For example if your BSDF has 3 components you can choose one of the components with probability 1/3 and then weight the path *3.[/quote] Wouldn't that make that path's result 3 times brighter? Or is it a rough approximation for the contribution of the other 2 paths, which, if this process is done many times, will then converge? [/quote] No, it should come out exactly right. Consider a surface that just reflects 50% and transmits 50%. You have two components, 'reflect .5' and 'transmit .5'. If you hit the surface you first sample a component, let's say with probability 1/2 for each. Then you sample a new direction, attenuate the throughput of the path according to the BRDF (the component) and continue. Let's say we had done this with 4 paths that carry '1 radiance' on average:

      Without 1/p:
      - 2 pick reflection: .5*1 + .5*1 gets reflected => '1 radiance' accumulated for reflection; divide by 4 paths => 1/4 radiance reflected.
      - 2 pick transmission: [...] => 1/4 radiance transmitted.

      With 1/p: it's easy to see that if you multiply each result by 2 (= 1/p) you get the correct answer: 1/2 reflected, 1/2 transmitted.

      If you have time you can work this out on paper with different reflection/transmission ratios and probabilities. It should always come out right in the long run as long as you don't completely forget one component (i.e. assign it a probability of 0).
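Here is a small self-contained toy program along the same lines (all names are mine); the selection probability is deliberately not 1/2, to show that the 1/p weighting still gives the right answer:

[code]
#include <cstdio>
#include <random>

int main()
{
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> u(0.0, 1.0);

    const double R = 0.5, T = 0.5;     // surface reflects 50% and transmits 50%
    const double pReflect = 0.25;      // deliberately biased selection probability
    const int N = 1000000;

    double reflected = 0.0, transmitted = 0.0;
    for (int i = 0; i < N; ++i)
    {
        if (u(rng) < pReflect) reflected   += R / pReflect;          // weight by 1/p
        else                   transmitted += T / (1.0 - pReflect);  // weight by 1/(1-p)
    }

    // Both averages converge to 0.5 regardless of pReflect.
    std::printf("reflected ~ %f, transmitted ~ %f\n", reflected / N, transmitted / N);
    return 0;
}
[/code]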
  15. You can sample your paths (almost) however you want, as long as you weight them correctly (with 1/p) and you don't completely ignore any possible path. The worst thing that can happen is that your path tracing gets very inefficient. (But even then you can expect it to converge to the exact solution eventually.) For example, if your BSDF has 3 components you can choose one of the components with probability 1/3 and then weight the path by 3. In general you should try to pick a component with a probability proportional to the contribution you expect (importance sampling), in order not to waste samples. Your third example does exactly this. Importance sampling is a science in itself, especially for glossy surfaces. But a strategy that is relatively easy to implement (besides uniform sampling) is to pick a component with a probability proportional to the hemispherical (directional) reflectance of that component. If you are after layered materials like plastic, you should be aware that they are usually modeled by a Fresnel layer 'above' the diffuse component. So only light that penetrates into the substrate (i.e. light that is not reflected specularly) has a chance to be scattered diffusely. You cannot simply treat these as independent components, but have to blend them according to the angle of incidence (see the rough sketch below). I hope this helps at least a bit -- unfortunately the details are too complex to explain here. If you really want to dive into path tracing and global illumination I'd recommend the book 'Physically Based Rendering'.
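As a rough sketch of that last point (the names, the Schlick approximation, and the selection heuristic are my own assumptions for illustration, not a recipe from a particular renderer):

[code]
#include <cmath>
#include <utility>

// Minimal sketch: choose between the specular and the diffuse component of a simple
// Fresnel-layered material, with probability roughly proportional to the expected
// contribution of each. Returns (componentIndex, selectionProbability),
// where 0 = specular and 1 = diffuse.
// u: uniform random number in [0,1), kd: diffuse albedo,
// f0: specular reflectance at normal incidence, NdotV: cosine of the viewing angle.
std::pair<int, float> pickComponent(float u, float kd, float f0, float NdotV)
{
    float F  = f0 + (1.0f - f0) * std::pow(1.0f - NdotV, 5.0f); // Schlick's approximation
    float pd = (1.0f - F) * kd;  // only light not reflected specularly reaches the substrate
    float ps = F / (F + pd);     // selection probability of the specular lobe
    if (u < ps)
        return { 0, ps };        // sample the specular lobe; weight the path by 1/ps
    else
        return { 1, 1.0f - ps }; // sample the diffuse lobe; weight by 1/(1 - ps)
}
[/code]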