
# Tasty Texel

Member Since 16 Jun 2011

### #5306964 I'm Trying To Understand Tessendorf's Paper

Posted by on 20 August 2016 - 09:12 PM

I have implemented FFT in my project (I know what the FT is and what it is used for)

...

This is what I have understood:

1) First we need to calculate the Phillips spectrum in the frequency domain.
2) Then we use the inverse FFT to convert it into the spatial domain.

I have trouble understanding where your problem lies then. You basically use the iFFT to efficiently sum a large number of sine waves. That's it. The paper mainly covers how to calculate the complex coefficients of those waves.

The Phillips spectrum really only provides a weight for each wave depending on its frequency/wavelength. Therefore, you get a radially symmetric frequency-space texture at this point (the weights/coefficients of the low-frequency waves are towards the middle of the texture; a 1D slice of the Phillips spectrum looks something like this: __...----==._.==----...__ ). Next you add some Gaussian noise, since nature is never perfectly regular. You might also want to multiply with a directional weight to give the resulting waves a primary propagation direction.

Up to this point you have calculated the magnitude of the complex coefficients - the height of each sine wave. But you also want the waves to actually move (I assume). For that you have to alter the phase of each wave by rotating its complex coefficient depending on a time variable. Here you want to consider the dispersion relation, which describes the relation of propagation speed to wavelength (low-frequency waves travel faster). Also, adding a constant random offset to the phase angle might be a good idea (to break up regularities). Now perform an iFFT and you have got your height field.
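To make the steps above concrete, here is a stripped-down 1D sketch (the real setup is 2D with a wind-direction factor and the paper's h0(k)/h0*(-k) bookkeeping; all constants and the function name here are illustrative, not from the paper):

```python
import cmath, math, random

def ocean_heightfield_1d(n=16, length=32.0, t=0.0, g=9.81, wind=8.0, seed=1):
    """Toy 1D version of the Tessendorf recipe: Phillips-weighted complex
    coefficients with Gaussian noise, phase-rotated by the deep-water
    dispersion relation, then inverse-transformed to a spatial height field."""
    rng = random.Random(seed)
    L = wind * wind / g                        # largest wave from the wind speed
    h = [0j] * n                               # frequency-space coefficients
    for i in range(1, n // 2):                 # positive frequencies only
        k = 2.0 * math.pi * i / length         # wavenumber
        phillips = math.exp(-1.0 / (k * L) ** 2) / k ** 4   # Phillips weight
        amp = math.sqrt(phillips) * rng.gauss(0.0, 1.0)     # Gaussian noise
        omega = math.sqrt(g * k)               # dispersion: low freq = faster
        phase = omega * t + rng.uniform(0.0, 2.0 * math.pi) # + random offset
        c = amp * cmath.exp(1j * phase)
        h[i] = c
        h[n - i] = c.conjugate()               # Hermitian symmetry -> real field
    # inverse DFT, written out O(n^2) for clarity; use a real iFFT in practice
    return [sum(h[i] * cmath.exp(2j * math.pi * i * x / n)
                for i in range(n)).real / n for x in range(n)]
```

Evaluating it at two different times gives two different (moving) height fields, which is the whole point of rotating the phases by omega*t.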

If instead of plain sine waves you want to sum Gerstner waves (which you totally want), you have some extra work to do. Basically you need two more textures + iFFTs to compute a horizontal offset vector in addition to the scalar height offset you already calculated (if I remember correctly, the paper mentions a multiplication with i in this context - this is really just a 90° rotation of the phase angle, nothing fancy).
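A two-line check of that "multiplication with i" remark (the coefficient value is arbitrary, just for illustration):

```python
import cmath, math

c = 0.5 * cmath.exp(1j * 0.3)   # some wave coefficient: magnitude 0.5, phase 0.3
d = 1j * c                      # the "multiplication with i" from the paper

# magnitude is unchanged; only the phase angle is rotated by +90 degrees
assert math.isclose(abs(d), abs(c))
assert math.isclose(cmath.phase(d) - cmath.phase(c), math.pi / 2)
```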

To ensure your FFT works correctly you could try to reproduce the low-pass filter results from the Fourier Transform section of this talk: http://www.slideshare.net/TexelTiny/phase-preserving-denoising-of-images

It might also help if you could show us your frequency space results.

### #5298596 Link for Graphics Research Papers

Posted by on 30 June 2016 - 01:03 AM

### #5254737 What is the best indirect lighting technique for a game?

Posted by on 30 September 2015 - 02:38 AM

If you should decide to try voxel cone tracing, do it on (nested) dense grids. A sparse voxel octree is not really affordable performance-wise in a setting where you do other stuff besides dynamic GI, and it is also a pain in the butt to implement and maintain. Trust me, I have been there. :\

### #5241691 Radiance & Irradiance in GI

Posted by on 21 July 2015 - 04:26 AM

Because radiance is flux differentiated with respect to two quantities (projected solid angle and surface area), which gives you two differential operators in the denominator and therefore two in the numerator as well. Radiometry isn't really that hard once you get it, but it can take some time and practice(!) depending on your foreknowledge and basic intelligence.
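In symbols, using the standard radiometric definition:

```latex
L \;=\; \frac{\mathrm{d}^2 \Phi}{\mathrm{d}\omega \,\mathrm{d}A\,\cos\theta}
\qquad \left[\mathrm{W\,sr^{-1}\,m^{-2}}\right]
```

The two differentials in the denominator are the solid angle and the area; the cosine turns the area into projected area, which is why the units carry both a per-steradian and a per-square-meter.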

### #5240745 Calculate HDR exposure from luminance Histogram

Posted by on 16 July 2015 - 03:53 AM

To be physically plausible, bloom/glare/glow should be added before tonemapping, since it simulates scattering due to diffraction effects in the eye/camera lens, which are wavelength- but not intensity-dependent. (Usually the wavelength dependency is neglected too, though.) Tonemapping happens as soon as the photons hit the sensor (or later, as a post-processing step, if the sensor supports a high dynamic range, as is the case with our virtual sensors) and therefore after the scattering events.
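The ordering matters numerically, not just conceptually. A toy sketch (Reinhard tonemapper and a 3-tap blur standing in for a real bloom filter; both stand-ins are my choice, not from the post):

```python
def tonemap(c):
    """Simple Reinhard operator; stands in for any tonemapper."""
    return c / (1.0 + c)

def bloom(image, spread=0.25):
    """Toy 1D 3-tap blur standing in for a real bloom/glare filter."""
    n = len(image)
    return [(1.0 - 2.0 * spread) * image[i]
            + spread * image[(i - 1) % n]
            + spread * image[(i + 1) % n] for i in range(n)]

hdr = [0.1, 8.0, 0.1, 0.1]                      # one very bright pixel
correct = [tonemap(c) for c in bloom(hdr)]      # scatter in linear HDR, then tonemap
wrong = bloom([tonemap(c) for c in hdr])        # scattering after the "sensor"
```

With the correct order the bright pixel bleeds a large amount of linear energy into its neighbors before the curve compresses it; tonemapping first crushes the highlight and the bloom almost disappears.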

### #5232916 Questions about the final stages of the graphics pipeline

Posted by on 05 June 2015 - 03:48 AM

Also, most GPUs implement "Hi-Z" or some similar marketing buzzword, which conceptually is like a tiled min-max depth buffer. e.g. for every 8x8 pixel block, there is an auxiliary buffer that stores the smallest and largest depth value inside that region. The fixed-function rasterizer can make use of this information to sometimes very quickly accept (all pixels pass depth test without actually doing depth testing) or reject (all pixels fail depth test and are discarded immediately) whole blocks of pixels.
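The tiled min-max idea in the quote can be sketched in a few lines (function names, the tile size, and the "less" depth comparison are assumptions for illustration; real hardware details vary by vendor):

```python
def tile_min_max(depth, ts=8):
    """Build a (min, max) depth pair per ts x ts tile of a square depth buffer."""
    n = len(depth)
    tiles = {}
    for ty in range(0, n, ts):
        for tx in range(0, n, ts):
            block = [depth[y][x]
                     for y in range(ty, ty + ts)
                     for x in range(tx, tx + ts)]
            tiles[(tx // ts, ty // ts)] = (min(block), max(block))
    return tiles

def classify(tile_min, tile_max, frag_depth):
    """Coarse depth test for a block of fragments at roughly frag_depth,
    assuming a 'less' depth func: accept all, reject all, or fall back."""
    if frag_depth < tile_min:
        return "accept"      # every pixel passes; no per-pixel testing needed
    if frag_depth >= tile_max:
        return "reject"      # every pixel fails; discard the whole block
    return "per-pixel"       # ambiguous; do the ordinary fine-grained test
```

Only the "per-pixel" case touches the full-resolution depth buffer, which is where the speedup comes from.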

I might be wrong here, but isn't Hi-Z about culling whole triangles before they are even passed to the rasterizer? In other words, based on the depth values of the three vertices of a triangle?

[...] it instead implements the depth test as a sorting algorithm!

But for intersecting triangles that doesn't work...or does it somehow?

### #5218019 SH directional lights, what am I missing?

Posted by on 21 March 2015 - 04:21 AM

I'm by no means an expert with regard to SHs, but I think what you are doing is correct. A directional light is basically described by a delta function, which you cannot reproduce with a finite number of SH bands. Intuitively speaking, the best you can do with just two bands is to set the 'vector' part (2nd band (index 1)) to the direction of the delta function and use the constant SH term (1st band (index 0)) to account for the clamping in the negative direction. This is basically the same thing you do to represent a clamped cosine lobe with SHs (I'm not sure whether the weights are exactly the same, though). All you could therefore do to increase the quality is to increase the number of used SH bands. I think.
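A small sketch of what two bands actually give you (the real-SH sign conventions vary between references; this follows one common convention, and the function names are mine):

```python
import math

def sh2_project_delta(d):
    """Project a unit direction (a delta function) onto the first two real SH
    bands: the coefficients are just the basis functions evaluated at d."""
    x, y, z = d
    c0 = 0.5 * math.sqrt(1.0 / math.pi)   # Y_0^0 (constant band)
    c1 = 0.5 * math.sqrt(3.0 / math.pi)   # band-1 scale ('vector' part)
    return [c0, -c1 * y, c1 * z, -c1 * x]

def sh2_eval(coef, d):
    """Reconstruct the band-limited function in direction d."""
    x, y, z = d
    c0 = 0.5 * math.sqrt(1.0 / math.pi)
    c1 = 0.5 * math.sqrt(3.0 / math.pi)
    return (coef[0] * c0 + coef[1] * -c1 * y
            + coef[2] * c1 * z + coef[3] * -c1 * x)
```

Evaluating the reconstruction shows the problem the post describes: the delta becomes a wide lobe peaking in the light direction, and it even goes negative opposite the light, which is exactly what the constant term has to compensate for.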

### #5209777 Does the GPU guarantee order of execution?

Posted by on 10 February 2015 - 04:45 AM

Stream out from the geometry shader is ordered, as far as I remember.

### #5193581 Real-time Physically Based Rendering and Transparent Materials

Posted by on 19 November 2014 - 04:25 AM

About the physics behind alpha: https://www.cs.duke.edu/courses/cps296.8/spring03/papers/max95opticalModelsForDirectVolumeRendering.pdf

### #5159667 cosine term in rendering equation

Posted by on 10 June 2014 - 11:41 PM

I'm trying to reconcile that with the ideal diffuser (Lambert) case now... The Lambert BRDF is just "k" (diffuse colour), so for a white surface we typically just use dot(N,L) in our per-pixel calculations.
If we incorporate the view angle though, we get dot(N,L)/dot(N,V)... which results in a very flat and unrealistic looking surface.

There are actually three cosine terms: dot(incident light dir, surface normal), dot(emitted light dir, normal), dot(viewer dir, normal) -> the last two cancel out (the differential area over which the emitted light is distributed grows proportionally to the observed differential area).
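Written out for a Lambertian surface: the emitted intensity per unit area falls off with the emission cosine, while radiance divides by the projected (observed) area, and since the emitted-light direction is the viewer direction these two cosines are the same angle:

```latex
L_o \;=\; \frac{\mathrm{d}I(\theta_v)}{\mathrm{d}A\,\cos\theta_v}
\;=\; \frac{\mathrm{d}I_0\,\cos\theta_v}{\mathrm{d}A\,\cos\theta_v}
\;=\; \frac{\mathrm{d}I_0}{\mathrm{d}A}
```

The view-dependent terms cancel, leaving only the incident cosine dot(N,L) in the shading - which is why dividing by dot(N,V) gives the flat, wrong-looking result.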

Posted by on 17 May 2014 - 01:26 PM

On my Intel HD Graphics 3000 thingy extracting quads from points is about twice as fast as instancing quads.

### #5123450 CG book suggestions

Posted by on 13 January 2014 - 08:37 PM

Just to throw in some unmentioned stuff: the ShaderX, GPU Pro, GPU Gems, and Jim Blinn's Corner series; Real-Time Shadows (really good); Visual Perception from a Computer Graphics Perspective.

### #5108441 Voxel Cone Tracing Experiment - Part 2 Progress

Posted by on 11 November 2013 - 04:43 AM

Just a general idea regarding the light-info accumulation concept which has been floating around in my head for some time now and which I finally want to get rid of:

Instead of cone-tracing per screen pixel (which is how the technique works by default, IIRC), couldn't you separate your view frustum into cells (similar to what you do for clustered shading, but perhaps with cube-shaped cells), accumulate the light information in these cells, represented by spherical harmonics, using cone-tracing, and finally use this SH 'volume' to light your scene?

You would of course end up with low-frequency information only suitable for diffuse lighting (like when using light propagation volumes, but still with less quantization, since you would not (necessarily) propagate the information iteratively, or at least with fewer steps if you choose to do so to keep the trace range shorter). On the other hand, you could probably reduce the number of required cone traces considerably (you would also only need to fill cells containing intersecting geometry, if you choose not to propagate iteratively) and, to some extent, resolve the correlation between the number of traces and the output pixel count.

Just an idea.

### #5104071 Spherical Area Lights

Posted by on 24 October 2013 - 05:54 AM

I was plotting some values and noticed that the sin² term looked just like the smoothstep function, and it turns out that it's coincidentally very close:

That's basically how the vegetation's wind animation in Crysis has been optimized: https://developer.nvidia.com/content/gpu-gems-3-chapter-16-vegetation-procedural-animation-and-shading-crysis

I'm always using x * x * (3.0 - 2.0 * x) instead of smoothstep, which I think is often more optimal, since smoothstep first clamps the input to the range [0..1] even if it is already in that required range.
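Side by side, the two variants (the second mirrors what the GLSL/HLSL smoothstep intrinsic does; function names here are mine):

```python
def smoothstep01(x):
    """The bare Hermite cubic - assumes x is already in [0, 1]."""
    return x * x * (3.0 - 2.0 * x)

def smoothstep(edge0, edge1, x):
    """What the GLSL/HLSL intrinsic does: remap and clamp first, then the cubic."""
    t = (x - edge0) / (edge1 - edge0)
    t = min(max(t, 0.0), 1.0)            # the clamp the post is talking about
    return t * t * (3.0 - 2.0 * t)
```

For inputs already in [0, 1] they agree exactly; the bare cubic just skips the remap and clamp work.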

### #5103748 Spherical Area Lights

Posted by on 23 October 2013 - 09:49 AM

The formula given in the paper needs to be converted from angles to vectors before being used in shader code, and it can be rearranged quite a bit too, which is why it often looks different.
Ok, but the last time I checked GGX for being normalized I actually got 1. Have you forgotten to include the cosine term, perhaps?
