
Nik02

Member Since 18 Dec 2002

Posts I've Made

In Topic: float to unorm conversion

19 August 2016 - 03:19 AM

An 8-bit unorm in the range 0...1 is not enough to precisely store 65.55/255.0.

In general, you should not assume anything about precision when handling floating-point values and their conversions to fixed point. All you are guaranteed to get is an approximation of the correct result.
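For illustration, here is a minimal C++ sketch of one common float-to-unorm8 convention (clamp, then round to nearest; actual hardware or API behaviour may differ), showing the error on exactly this value. The name floatToUnorm8 is made up for this example.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

// Convert a float in [0, 1] to an 8-bit unsigned normalized value.
unsigned char floatToUnorm8(float f)
{
    f = std::clamp(f, 0.0f, 1.0f);
    return static_cast<unsigned char>(std::lround(f * 255.0f));
}

int main()
{
    const float original = 65.55f / 255.0f;               // not representable as n/255
    const unsigned char stored = floatToUnorm8(original); // rounds to 66
    const float roundTripped = stored / 255.0f;           // 66/255, not 65.55/255
    std::printf("original %.6f -> unorm %u -> %.6f\n",
                original, unsigned(stored), roundTripped);
    return 0;
}
```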

In Topic: The use of spherical harmonics

18 August 2016 - 06:33 AM

The array of spherical harmonic coefficients (that you now presumably have) can be thought of as the result of a discrete-cosine-transform-like operation on the lighting signal, where the spatial parameter domain is a sphere.

 

To reconstruct an approximation of the signal (the light strength), you basically sum the basis functions scaled by their coefficients, using the same frequency multipliers and weights that you used to calculate the coefficients. This reconstruction closely resembles the integration with which you obtained said coefficients in the first place. The forward and inverse operations on SHs are extremely similar to those of the DCT and FFT, for example.

 

Note that you should have three sets of SH coefficients, one for each RGB channel. The summation is relatively simple if you perform the SH calculation on the RGB signal so that you get an array of float3s (RGB triplets) as a result; in HLSL or GLSL, operations such as cosines and sums can be applied directly to 1- to 4-element vector primitives.
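For a rough illustration (not tied to any particular engine or to the code you already have), here is a C++ sketch of that reconstruction for order-2 SH, i.e. 9 coefficients per channel, using the standard real SH basis constants. Float3 and evaluateSH9 are names made up for this example, and any cosine-lobe convolution for irradiance is left out.

```cpp
#include <array>

// Illustrative RGB triplet; a shader float3 would play the same role.
struct Float3 { float r, g, b; };

// Reconstruct the approximate RGB signal in unit direction (x, y, z)
// from 9 per-channel SH coefficients (bands 0..2).
Float3 evaluateSH9(const std::array<Float3, 9>& sh, float x, float y, float z)
{
    // Real SH basis functions up to band 2, with the usual constants folded in.
    const float basis[9] = {
        0.282095f,                          // Y(0, 0)
        0.488603f * y,                      // Y(1,-1)
        0.488603f * z,                      // Y(1, 0)
        0.488603f * x,                      // Y(1, 1)
        1.092548f * x * y,                  // Y(2,-2)
        1.092548f * y * z,                  // Y(2,-1)
        0.315392f * (3.0f * z * z - 1.0f),  // Y(2, 0)
        1.092548f * x * z,                  // Y(2, 1)
        0.546274f * (x * x - y * y)         // Y(2, 2)
    };

    // Weighted sum of basis functions: the "inverse transform".
    Float3 result{0.0f, 0.0f, 0.0f};
    for (int i = 0; i < 9; ++i) {
        result.r += sh[i].r * basis[i];
        result.g += sh[i].g * basis[i];
        result.b += sh[i].b * basis[i];
    }
    return result;
}
```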


In Topic: Questions About Storing Opcodes With Operands

14 July 2016 - 03:34 AM

Opcodes are stored in the virtual address space of the process. They are simply data, with a unique number for each opcode and variation. Parameters - or the actual data to be processed - can reside in other parts of the process address space.

 

If you open a debugger on a running process and go to memory view, you can see the actual opcodes in the binary section of the process address space. Or, you can dump an executable without running it and view the program as a chunk of memory.

 

The CPU has instructions to copy data between RAM and registers, and these instructions themselves have their own opcodes.
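As a small illustration, the following C++ sketch dumps the first bytes of one of its own functions; those bytes are the opcodes and their encoded operands. Casting a function pointer to a data pointer is only conditionally supported by the standard, but mainstream compilers accept it for this kind of experiment; the names here are just for this example.

```cpp
#include <cstdio>

int add(int a, int b) { return a + b; }

int main()
{
    // Treat the compiled function as a plain byte buffer.
    const unsigned char* code = reinterpret_cast<const unsigned char*>(&add);

    // Dump the first 16 bytes: these are the opcodes (and encoded operands)
    // that the CPU fetches and executes when 'add' is called.
    for (int i = 0; i < 16; ++i)
        std::printf("%02x ", unsigned(code[i]));
    std::printf("\n");
    return 0;
}
```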


In Topic: Does Windows 10 have compatability issues?

13 July 2016 - 06:06 AM

Except for 3DSMAX, I've run all of the mentioned software on Windows 10 without problems. I would imagine that 3DS runs just fine as well, but I use Blender nowadays instead, so I'm out of the loop on that.

 

Be sure to use the latest versions of the software if possible, to further reduce potential incompatibilities. That said, most software that runs under 7 or 8 will run just fine on 10 without any special configuration.

 

I do know that there are some issues with older Visual Studio versions, but the 2013 and 2015 versions run perfectly on 10.

 

If you happen to be in Finland, there is a banking program called "kultalinkki" which refuses to work correctly on Windows 10. Said program likely consists of a mountain of emergency patches on top of a very old codebase, though, so I'm not surprised. This is the only "recent" program that I haven't managed to run on 10.


In Topic: IFFT?

12 July 2016 - 11:37 AM

The videos I linked to explain (in my opinion, intuitively) what the Fourier transform ("analysis" in the videos) and its inverse ("synthesis" in the videos) are, and how they are intimately related to each other.

 

For the derivation of the fast algorithm, consider the discrete Fourier transform as a vector-matrix product, where the complex coefficients of the frequencies (and their phases) form a 2D matrix of complex numbers.
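For reference, here is a minimal C++ sketch of that view: the naive O(N²) DFT, where each output frequency is one row of the coefficient matrix applied to the input vector. The function name is made up for this example.

```cpp
#include <cmath>
#include <complex>
#include <vector>

std::vector<std::complex<double>> naiveDFT(const std::vector<std::complex<double>>& x)
{
    const std::size_t n = x.size();
    const double pi = std::acos(-1.0);
    std::vector<std::complex<double>> X(n);

    for (std::size_t k = 0; k < n; ++k) {        // one output frequency per matrix row
        std::complex<double> sum(0.0, 0.0);
        for (std::size_t t = 0; t < n; ++t) {    // one complex coefficient per column
            const double angle = -2.0 * pi * double(k) * double(t) / double(n);
            sum += x[t] * std::complex<double>(std::cos(angle), std::sin(angle));
        }
        X[k] = sum;
    }
    return X;
}
```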

 

In said matrix there exist symmetries that can be exploited as a pre-calculation step, with the resulting substitution coefficients stored in a lookup table. One example of such a symmetry is that the positive and negative real and imaginary sectors of the matrix are effectively just mirror images of each other, flipped about the real and imaginary axes.

 

By exploiting such symmetries and thereby eliminating redundant calculations, the FFT is considerably faster than naïvely evaluating the whole DFT matrix. The relationship between the FFT and the IFFT remains exactly the same as that between the DFT and the IDFT, so if you know what the inverse DFT represents in relation to the forward DFT, that applies directly to the inverse FFT in relation to the forward FFT.

 

The Cricket FFT library uses the Cooley-Tukey algorithm to divide the FFT calculation into smaller steps via a butterfly diagram. By dividing the calculation in this way, the total complexity is reduced. The full source code is included in the zip, and if you look at the calculation core you will notice that the only difference between the forward and inverse transforms is the power and sign of the coefficients used for the integration over the input.
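This is not Cricket FFT's actual code, but a minimal recursive radix-2 Cooley-Tukey sketch in C++ (power-of-two sizes only, names made up for this example) that shows the same property: the inverse differs from the forward transform only by the sign of the twiddle exponent, plus a final 1/N scale.

```cpp
#include <cmath>
#include <complex>
#include <vector>

// In-place radix-2 Cooley-Tukey FFT; a.size() must be a power of two.
void fftRadix2(std::vector<std::complex<double>>& a, bool inverse)
{
    const std::size_t n = a.size();
    if (n <= 1) return;

    // Butterfly decomposition: split into even- and odd-indexed halves.
    std::vector<std::complex<double>> even(n / 2), odd(n / 2);
    for (std::size_t i = 0; i < n / 2; ++i) {
        even[i] = a[2 * i];
        odd[i]  = a[2 * i + 1];
    }
    fftRadix2(even, inverse);
    fftRadix2(odd, inverse);

    const double pi = std::acos(-1.0);
    const double sign = inverse ? 1.0 : -1.0;  // the only difference between the two directions
    for (std::size_t k = 0; k < n / 2; ++k) {
        const double angle = sign * 2.0 * pi * double(k) / double(n);
        const std::complex<double> twiddle(std::cos(angle), std::sin(angle));
        a[k]         = even[k] + twiddle * odd[k];
        a[k + n / 2] = even[k] - twiddle * odd[k];
    }
}

// Inverse transform: the same routine with the sign flipped, then a 1/N scale.
void ifftRadix2(std::vector<std::complex<double>>& a)
{
    fftRadix2(a, true);
    for (auto& v : a) v /= double(a.size());
}
```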

 

I don't own one of those machines either, but the videos are very well made, so it is easy to figure out the mechanism :)

 

As to why complex numbers fit so perfectly here, consider that there is exactly a 90-degree phase difference between cos(n) and sin(n). The real and imaginary axes of complex numbers are also exactly 90 degrees apart.
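Concretely, this is Euler's formula, which ties the two facts together:

$e^{in} = \cos(n) + i\sin(n)$

The cosine component lives on the real axis and the sine component on the imaginary axis, so a single complex exponential carries both the magnitude and the phase of a frequency component.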

