

maxest

Member Since 05 Nov 2005
Offline Last Active May 29 2015 11:00 AM

Topics I've Started

Spherical Harmonic Lighting - Analytical Lights

19 May 2015 - 04:21 AM

A new note on my blog :)

 

http://polish-game-developer.blogspot.com/2015/05/spherical-harmonic-lighting-analytical.html


Spherical Harmonics - analytical solution

04 May 2015 - 03:19 PM

So, I've gone through this paper (http://www.inf.ufrgs.br/~oliveira/pubs_files/Slomp_Oliveira_Patricio-Tutorial-PRT.pdf). How everything works is perfectly clear to me (or at least I think so); I implemented the stuff and it works. The only thing that bugged me was that I had to do quite a lot of computation just to project some analytical lights (as the code in the paper shows, you integrate the light function over a number of samples, which can be a lot), and I was pretty sure I had seen SH done in a "simpler" way. Then I found chapter 2.15 in ShaderX3. This is the code the author uses for projecting a single directional light into SH:

static void CalculateCoefficentsPerLight(LightStruct *light)
{
    // band 0 (constant)
    SHCoeff[0].red   += light->colour[0] * fConst1;
    SHCoeff[0].green += light->colour[1] * fConst1;
    SHCoeff[0].blue  += light->colour[2] * fConst1;

    // band 1 (linear in direction)
    SHCoeff[1].red   += light->colour[0] * fConst2 * light->direction[0];
    SHCoeff[1].green += light->colour[1] * fConst2 * light->direction[0];
    SHCoeff[1].blue  += light->colour[2] * fConst2 * light->direction[0];

    SHCoeff[2].red   += light->colour[0] * fConst2 * light->direction[1];
    SHCoeff[2].green += light->colour[1] * fConst2 * light->direction[1];
    SHCoeff[2].blue  += light->colour[2] * fConst2 * light->direction[1];

    SHCoeff[3].red   += light->colour[0] * fConst2 * light->direction[2];
    SHCoeff[3].green += light->colour[1] * fConst2 * light->direction[2];
    SHCoeff[3].blue  += light->colour[2] * fConst2 * light->direction[2];

    // band 2 (quadratic in direction)
    SHCoeff[4].red   += light->colour[0] * fConst3 * (light->direction[0] * light->direction[2]);
    SHCoeff[4].green += light->colour[1] * fConst3 * (light->direction[0] * light->direction[2]);
    SHCoeff[4].blue  += light->colour[2] * fConst3 * (light->direction[0] * light->direction[2]);

    SHCoeff[5].red   += light->colour[0] * fConst3 * (light->direction[2] * light->direction[1]);
    SHCoeff[5].green += light->colour[1] * fConst3 * (light->direction[2] * light->direction[1]);
    SHCoeff[5].blue  += light->colour[2] * fConst3 * (light->direction[2] * light->direction[1]);

    SHCoeff[6].red   += light->colour[0] * fConst3 * (light->direction[1] * light->direction[0]);
    SHCoeff[6].green += light->colour[1] * fConst3 * (light->direction[1] * light->direction[0]);
    SHCoeff[6].blue  += light->colour[2] * fConst3 * (light->direction[1] * light->direction[0]);

    SHCoeff[7].red   += light->colour[0] * fConst4 * (3.0f * light->direction[2] * light->direction[2] - 1.0f);
    SHCoeff[7].green += light->colour[1] * fConst4 * (3.0f * light->direction[2] * light->direction[2] - 1.0f);
    SHCoeff[7].blue  += light->colour[2] * fConst4 * (3.0f * light->direction[2] * light->direction[2] - 1.0f);

    SHCoeff[8].red   += light->colour[0] * fConst5 * (light->direction[0] * light->direction[0] - light->direction[1] * light->direction[1]);
    SHCoeff[8].green += light->colour[1] * fConst5 * (light->direction[0] * light->direction[0] - light->direction[1] * light->direction[1]);
    SHCoeff[8].blue  += light->colour[2] * fConst5 * (light->direction[0] * light->direction[0] - light->direction[1] * light->direction[1]);
}

Later on, these nine SHCoeffs are passed to the vertex shader to compute the final illumination:

vec3 norm = gl_Normal;

color  = Coefficients[0];
color += Coefficients[1] * norm.x;
color += Coefficients[2] * norm.y;
color += Coefficients[3] * norm.z;
color += Coefficients[4] * norm.x * norm.z;
color += Coefficients[5] * norm.y * norm.z;
color += Coefficients[6] * norm.x * norm.y;
color += Coefficients[7] * (3.0 * norm.z * norm.z - 1.0);
color += Coefficients[8] * (norm.x * norm.x - norm.y * norm.y);

This looks much simpler, and I don't see any need for sophisticated integration, nor even a dot(lightVector, normal) term (which is present in the PRT tutorial paper in the projection of the transfer function). So my question is simply: how are the ShaderX3 article's constants derived, and how does this relate to the "general" solution presented in the PRT tutorial paper?
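I can't claim this is exactly how the ShaderX3 constants were derived, but the usual explanation is that for diffuse lighting the dot(l, n) factor is a convolution with the clamped-cosine kernel, which in SH just scales each band l by a constant (pi, 2*pi/3, pi/4 for bands 0..2), and a directional light is a delta function, so its projection is simply the SH basis evaluated at the light direction; no numerical integration is needed. A minimal sketch (my own basis ordering and constants, not quoted from the book):

```python
import math

def sh_basis(x, y, z):
    # Real SH basis up to band 2, ordered like the ShaderX3 code:
    # constant; x, y, z; xz, yz, xy; 3z^2-1; x^2-y^2
    return [
        0.5 * math.sqrt(1.0 / math.pi),                    # Y0,0
        math.sqrt(3.0 / (4.0 * math.pi)) * x,              # Y1,1
        math.sqrt(3.0 / (4.0 * math.pi)) * y,              # Y1,-1
        math.sqrt(3.0 / (4.0 * math.pi)) * z,              # Y1,0
        0.5 * math.sqrt(15.0 / math.pi) * x * z,           # Y2,1
        0.5 * math.sqrt(15.0 / math.pi) * y * z,           # Y2,-1
        0.5 * math.sqrt(15.0 / math.pi) * x * y,           # Y2,-2
        0.25 * math.sqrt(5.0 / math.pi) * (3*z*z - 1),     # Y2,0
        0.25 * math.sqrt(15.0 / math.pi) * (x*x - y*y),    # Y2,2
    ]

# Per-band convolution coefficients of the clamped-cosine (Lambert) kernel
A = [math.pi, 2.0 * math.pi / 3.0, math.pi / 4.0]
BAND = [0, 1, 1, 1, 2, 2, 2, 2, 2]

def project_dir_light(lx, ly, lz):
    # A directional light is a (scaled) delta, so its SH projection is the
    # basis evaluated at the light direction; convolving with the clamped
    # cosine then scales each band by A[l]. No sampling required.
    return [A[BAND[i]] * b for i, b in enumerate(sh_basis(lx, ly, lz))]

def irradiance(coeffs, nx, ny, nz):
    # Same dot product the vertex shader performs against the normal.
    return sum(c * b for c, b in zip(coeffs, sh_basis(nx, ny, nz)))

coeffs = project_dir_light(0.0, 0.0, 1.0)
print(irradiance(coeffs, 0.0, 0.0, 1.0))  # ~1.0625
```

Evaluating at n = l gives about 1.0625 rather than 1.0, which is the expected ringing of a band-2 approximation of max(0, n.l). If the book's fConst1..fConst5 are these per-band scales folded into the squared basis normalization constants (possibly with a 1/pi albedo factor, depending on convention), the two formulations agree, and the dot(l, n) term you noticed is missing is exactly what got absorbed into the A[l] constants.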


Rigid Body Physics - Inertia Tensor Space

16 April 2015 - 09:57 AM

I implemented a simple rigid body system (simulating a single box in 3D) and am confused about the space the inertia tensor should be in, or, more precisely, how to transform it into the proper space.

I'll just dive into pretty simple code (Python).

Force addition to rigid body:

def AddForce(self, force, point):
    self.totalForce = self.totalForce + force

    r = point - self.pos
    torque = vector.Cross(r, force)
    self.totalTorque = self.totalTorque + torque
Note that ori is the orientation quaternion; point, self.pos and force are all given in world space.

Now integration:

def Integrate(self, deltaTime):
    linAcc = 1.0/self.mass * self.totalForce

    oriMat = quaternion.ToMatrix(self.ori)
    inertia_world = self.inertia_local * oriMat    # HERE
    matrix.Invert(inertia_world)
    angAcc = vector.Transform(self.totalTorque, inertia_world)

    self.pos = self.pos + self.linVel*deltaTime
    self.ori = quaternion.AddVector(self.ori, self.angVel*deltaTime)
    self.ori = quaternion.Normalized(self.ori)

    self.linVel = self.linVel + linAcc*deltaTime
    self.angVel = self.angVel + angAcc*deltaTime

    self.totalForce = vector.Vector3()
    self.totalTorque = vector.Vector3()
As you can see, at the line marked HERE I compute the world-space inertia tensor by simply multiplying the local inertia matrix by the rigid body's orientation matrix (which is effectively the local-to-world matrix without the translation part).

Note that I use row-major matrix order.

Transforming the local inertia tensor this way seemed natural to me: I have local space, I have the local-to-world transform, so I just multiply by that matrix to get the inertia in world space. And it works just fine. I even set up a simple simulation in Unity using their rigid body to make sure that mine behaves more or less the same, and it does.

However, I googled a bit about the inertia tensor transform and found that people do it differently. For instance here:

http://gamedev.stackexchange.com/questions/70355/inertia-tensor-and-world-coordinate-conversion

or here:

http://www.pixar.com/companyinfo/research/pbm2001/pdf/notesg.pdf (code on page G22)

Both suggest "sandwiching" the local tensor matrix between local-to-world and transpose(local-to-world). But when I do that in my simulation, it behaves incorrectly.
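For reference, the sandwich form follows from a change of basis: to apply the body-space tensor to a world-space angular velocity you rotate the vector into body space (R^T), apply I_local, and rotate the result back (R), so I_world = R * I_local * R^T. A quick pure-Python check (helper functions are mine; column-vector convention, i.e. w = R * v):

```python
import math

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def mat_vec(m, v):
    return [sum(m[i][k] * v[k] for k in range(3)) for i in range(3)]

def transpose(m):
    return [[m[j][i] for j in range(3)] for i in range(3)]

# Body-space inertia tensor of a 2x4x6 box with mass 12 (diagonal locally)
I_local = [[12.0/12*(4**2 + 6**2), 0.0, 0.0],
           [0.0, 12.0/12*(2**2 + 6**2), 0.0],
           [0.0, 0.0, 12.0/12*(2**2 + 4**2)]]

# Orientation: 30 degrees about the z axis
c, s = math.cos(math.radians(30)), math.sin(math.radians(30))
R = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

omega_world = [1.0, 2.0, 3.0]

# Ground truth: rotate omega into body space, apply I_local, rotate back
L_ref = mat_vec(R, mat_vec(I_local, mat_vec(transpose(R), omega_world)))

# Sandwich form: I_world = R * I_local * R^T
I_world = mat_mul(R, mat_mul(I_local, transpose(R)))
L_sandwich = mat_vec(I_world, omega_world)

# Single-sided form (analogous to multiplying by the orientation once)
L_single = mat_vec(mat_mul(I_local, R), omega_world)

print(L_ref, L_sandwich, L_single)  # L_sandwich matches L_ref; L_single does not
```

Note the convention dependence: with row vectors and row-major matrices (v' = v * M), the same sandwich is written R^T * I_local * R, so a single multiply that happens to work in one codebase often means a transpose is hiding in the storage or multiplication order, which may be exactly the difference you're seeing.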

Any idea where this difference comes from?


Data-oriented Design and Fourier Transform

01 April 2015 - 09:55 AM

New post on my blog:

http://polish-game-developer.blogspot.com/2015/04/data-oriented-design-and-fourier.html

 

Feel free to discuss it either there or here :).


Normal Map Generator (from diffuse maps)

15 March 2015 - 02:22 PM

I bet someone can make use of this :)

 

http://polish-game-developer.blogspot.com/2015/03/normal-map-generator-from-diffuse-maps.html

