Bummel

  1.   I have trouble understanding where your problem lies, then. You basically use the iFFT to efficiently sum a large number of sine waves - that's it. The paper mainly covers how to calculate the complex coefficients of those waves.

     The Phillips spectrum really only provides a weight for each wave depending on its frequency/wavelength. At this point you therefore get a radially symmetric frequency-space texture (with the weights/coefficients of the low-frequency waves towards the middle of the texture). A 1D slice of the Phillips spectrum looks something like this:  __...----==._.==----...__ . Next you modulate the weights with Gaussian noise, since nature is never perfectly regular. You might also want to multiply by a directional weight to give the resulting waves a primary propagation direction.

     Up to this point you have calculated the magnitudes of the complex coefficients - the height of each sine wave. But you also want the waves to actually move (I assume). For that you have to animate the phase of each wave by rotating its complex coefficient depending on a time variable. Here you want to consider the dispersion relation, which describes how propagation speed relates to wavelength (low-frequency waves travel faster). Adding a constant random offset to the phase angle might also be a good idea (to break up regularities). Now perform an iFFT and you have your height field.

     If instead of plain sine waves you want to sum Gerstner waves (which you totally want), you have some extra work to do. Basically you need two more textures + iFFTs to compute a horizontal offset vector in addition to the scalar height offset you already calculated (if I remember correctly the paper mentions a multiplication with i in this context - that is really just a 90° rotation of the phase angle, nothing fancy).

     To ensure your FFT works correctly you could try to reproduce the low-pass filter results from the Fourier Transform section of this talk: http://www.slideshare.net/TexelTiny/phase-preserving-denoising-of-images It might also help if you could show us your frequency-space results. A minimal sketch of the spectrum setup follows below.
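     A C++ sketch of that spectrum setup, assuming the deep-water dispersion relation w(k) = sqrt(g*|k|); parameter and function names are illustrative, not taken from the paper verbatim:

[code]
#include <cmath>
#include <complex>
#include <random>

using complexf = std::complex<float>;
const float GRAVITY = 9.81f;

// Phillips spectrum: a weight per wave vector, peaking at wind-driven wavelengths.
float phillips(float kx, float ky, float amplitude,
               float windSpeed, float windX, float windY)
{
    float k2 = kx * kx + ky * ky;
    if (k2 < 1e-8f) return 0.0f;                   // skip the DC term
    float L = windSpeed * windSpeed / GRAVITY;     // largest wave for this wind
    float align = (kx * windX + ky * windY) / std::sqrt(k2); // directional weight
    return amplitude * std::exp(-1.0f / (k2 * L * L)) / (k2 * k2) * align * align;
}

// Base coefficient h0(k): the Phillips weight modulated by complex Gaussian noise.
complexf h0(float kx, float ky, float amplitude,
            float windSpeed, float windX, float windY, std::mt19937& rng)
{
    std::normal_distribution<float> gauss(0.0f, 1.0f);
    float w = std::sqrt(0.5f * phillips(kx, ky, amplitude, windSpeed, windX, windY));
    return complexf(gauss(rng) * w, gauss(rng) * w);
}

// Time-dependent coefficient: rotate the phase according to the dispersion relation.
// The conjugate mirror term keeps the spatial-domain result real-valued.
complexf hAtTime(complexf h0k, complexf h0NegK, float kLen, float t)
{
    float w = std::sqrt(GRAVITY * kLen);
    complexf rot(std::cos(w * t), std::sin(w * t));
    return h0k * rot + std::conj(h0NegK) * std::conj(rot);
}
[/code]

     Evaluate hAtTime for every texel of the frequency-space texture each frame and run the iFFT over the result to get the height field.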
  2. Link for Graphics Research Papers

    [url=http://www.opserver.de/ubb7/ubbthreads.php?ubb=showflat&Number=406621#Post406621]Papers about Rendering Techniques[/url]
  3. If you decide to try Voxel Cone Tracing, do it on (nested) dense grids. A Sparse Voxel Octree, besides not really being affordable performance-wise in a setting where you do other stuff besides dynamic GI, is also a pain in the butt to implement and maintain. Trust me, I have been there. :\
  4. Radiance & Irradiance in GI

    Because radiance is flux differentiated with respect to two quantities (projected solid angle and surface area), which gives you two differential operators in the denominator and accordingly a second-order differential in the numerator. Radiometry isn't really that hard once you get it, but it can take some time and practice(!) depending on your prior knowledge and basic intelligence. :) The definitions are written out below.
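     Written out (standard radiometric definitions, in LaTeX form):

     L = \frac{\mathrm{d}^2 \Phi}{\mathrm{d}A \, \cos\theta \, \mathrm{d}\omega} \qquad E = \frac{\mathrm{d}\Phi}{\mathrm{d}A}

     Radiance L divides the flux \Phi by projected area and solid angle, while irradiance E divides it by area alone; integrating radiance against the projected solid angle recovers irradiance: E = \int_\Omega L \, \cos\theta \, \mathrm{d}\omega.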
  5. To be physically plausible, bloom/glare/glow should be added before tonemapping, since it simulates scattering due to diffraction effects in the eye/camera lens, which are wavelength-dependent but not intensity-dependent. Usually the wavelength dependency is neglected too, though. Tonemapping happens as soon as the photons hit the sensor (or later, as a post-processing step, if the sensor supports a high dynamic range, as is the case with our virtual sensors) and therefore after the scattering events. The ordering in code is sketched below.
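     A minimal sketch of that ordering; gatherBloom and the 5% scale are illustrative stand-ins for a real blurred bright-pass gather:

[code]
struct Vec3 { float r, g, b; };

// Placeholder for the bloom gather; a real renderer would sample a blurred
// HDR mip chain here. The scale is arbitrary for illustration.
Vec3 gatherBloom(const Vec3& hdr)
{
    return { 0.05f * hdr.r, 0.05f * hdr.g, 0.05f * hdr.b };
}

// Simple Reinhard curve standing in for the "sensor response".
Vec3 tonemap(const Vec3& c)
{
    return { c.r / (1.0f + c.r), c.g / (1.0f + c.g), c.b / (1.0f + c.b) };
}

Vec3 resolvePixel(const Vec3& hdr)
{
    Vec3 bloom = gatherBloom(hdr);   // scattering happens in linear HDR...
    Vec3 scattered { hdr.r + bloom.r, hdr.g + bloom.g, hdr.b + bloom.b };
    return tonemap(scattered);       // ...and only then does the sensor map to LDR
}
[/code]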
  6.   I might be wrong here, but isn't Hi-Z about culling whole triangles before they are even passed to the rasterizer? In other words, based on the depth values of the three vertices of a triangle? But for intersecting triangles that doesn't work... or does it somehow?
  7. I'm by no means an expert with regard to SHs, but I think what you are doing is correct. A directional light is basically described by a delta function, which you cannot reproduce with a finite number of SH bands. Intuitively speaking, the best you can do with just two bands is to point the 'vector' part (2nd band, index 1) along the direction of the delta function and use the constant SH term (1st band, index 0) to account for the clamping in the negative direction. This is basically the same thing you do to represent a clamped cosine lobe with SHs (I'm not sure whether the weights are exactly the same, though). All you could therefore do to increase the quality is to increase the number of SH bands used. I think. A sketch of the two-band projection is below.
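     A sketch of that projection, assuming the common real-SH convention (band-0 basis constant 0.282095, band-1 basis 0.488603 * (y, z, x)); whether you additionally fold in the clamped-cosine zonal weights (pi and 2*pi/3) depends on whether you store radiance or irradiance:

[code]
// Project a directional light of the given intensity into 2 SH bands
// (4 coefficients). 'dir' must be normalized. Convention as noted above.
void projectDirectionalLight(const float dir[3], float intensity, float sh[4])
{
    const float Y00 = 0.282095f;   // band 0 (constant term)
    const float Y1  = 0.488603f;   // band 1 (linear terms)
    sh[0] = intensity * Y00;           // constant part soaks up the clamp
    sh[1] = intensity * Y1 * dir[1];   // y
    sh[2] = intensity * Y1 * dir[2];   // z
    sh[3] = intensity * Y1 * dir[0];   // x
}
[/code]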
  8.   I'm searching MSDN but can't find any source I could cite in my thesis that explicitly states this. Can anyone help me? Thx.
  9. Does the GPU guarantee order of execution?

    Stream Out from the geometry shader is ordered, as far as I remember.
  10. About the physics behind alpha: https://www.cs.duke.edu/courses/cps296.8/spring03/papers/max95opticalModelsForDirectVolumeRendering.pdf
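     For context, the absorption + emission model that paper derives is what front-to-back alpha compositing implements; a minimal sketch (struct and function names are illustrative):

[code]
struct Rgba { float r, g, b, a; };

// Composite depth-sorted slices front to back: each slice absorbs a fraction
// 'a' of everything behind it and emits a * color of its own. Early-out once
// the accumulated opacity saturates.
Rgba compositeFrontToBack(const Rgba* slices, int n)
{
    Rgba acc { 0.0f, 0.0f, 0.0f, 0.0f };
    for (int i = 0; i < n && acc.a < 0.999f; ++i) {
        float w = (1.0f - acc.a) * slices[i].a; // remaining transmittance * opacity
        acc.r += w * slices[i].r;
        acc.g += w * slices[i].g;
        acc.b += w * slices[i].b;
        acc.a += w;
    }
    return acc;
}
[/code]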
  11. Is there an API call to do that? Or at least an elegant hack? Besides unanswered questions, I wasn't able to find anything on that topic. Thanks.

     EDIT: One idea I have would be to issue an indirect draw call which produces the appropriate number of vertices, rendered as points, so that I can decrement the counter in the pixel shader by 1 for every point (I'm working with 11.0, so I have no access to UAVs in the vertex shader). On the API side it would look roughly like the sketch below.
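     A D3D11 sketch of that idea, assuming the count to replay lives in another UAV's hidden counter and argsBuffer was created with D3D11_RESOURCE_MISC_DRAWINDIRECT_ARGS and initialized to { 0, 1, 0, 0 } (VertexCountPerInstance, InstanceCount, StartVertexLocation, StartInstanceLocation):

[code]
#include <d3d11.h>

// Copy the source UAV's hidden counter into the args buffer (it lands in the
// VertexCountPerInstance slot at offset 0), then draw that many points. The
// bound point pixel shader calls DecrementCounter() on the target
// RWStructuredBuffer exactly once per point.
void issueCounterDecrements(ID3D11DeviceContext* ctx,
                            ID3D11Buffer* argsBuffer,
                            ID3D11UnorderedAccessView* srcCounterUav)
{
    ctx->CopyStructureCount(argsBuffer, 0, srcCounterUav);
    ctx->DrawInstancedIndirect(argsBuffer, 0);
}
[/code]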
  12. @MJP: Do you use a single curvature map regardless of facial animations?
  13. If you calculate your curvature map using screen-space derivatives of lerped normals, then it is only to be expected that the curvature values are constant over individual polygons. Normalizing your normals beforehand could help a bit; I'm not sure how much, though.

     EDIT: OK, the problem is that you have to consider the change in position too, which is constant over triangles anyway. See the estimate written out below.
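     The usual screen-space estimate, written out (with \Delta denoting a screen-space derivative a la fwidth, N the interpolated normal, p the interpolated position):

     \kappa \approx \frac{\lVert \Delta N \rVert}{\lVert \Delta p \rVert}

     Since both N and p are interpolated linearly across a triangle, both derivatives are constant per triangle and so is their ratio, which is why the faceting shows up regardless of normalization.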