
Member Since 05 Nov 2005
Offline Last Active Aug 23 2015 06:35 AM

#5245830 Rigid Body Physics - Inertia Tensor Space

Posted by maxest on 11 August 2015 - 03:13 PM

Sorry for such a massive delay but I completely forgot about that thread...


Okay, I've gone through your equations and the link you provided but it's still not right.

You (and the linked page) say that to transform a tensor you compute transform * tensor * transform^T. But my code doesn't do that. It simply does transform * tensor, and it works (in world space). Look again at the code that computes the world-space angular acceleration:

        localToWorld = quaternion.ToMatrix(self.ori)
        inertia_world = self.inertia_local * localToWorld
        angAcc_world = vector.Transform(self.totalTorque, matrix.Inverted(inertia_world))

Expanded and written in math notation it would be like:

angAcc_world =
self.totalTorque * inertia_world^-1 =
self.totalTorque * (self.inertia_local * localToWorld)^-1 =
self.totalTorque * localToWorld^-1 * self.inertia_local^-1

Again, localToWorld is only used once, not twice. And this code really works; the simulation behaves in a predictable manner, as opposed to any other approach I've tried.

I don't really know where this:

Finally we define the world space inertia as:

worldSpaceInertia^-1 = localToWorld * localInertia^-1 * localToWorld^T

is taking place in my code.


One interesting thing here is that the last form of this equation can be written like this:

angAcc_world =
self.totalTorque * localToWorld^-1 * self.inertia_local^-1 =
(self.totalTorque * localToWorld^-1) * self.inertia_local^-1 =
totalTorque_local * self.inertia_local^-1

Now that is darn confusing. Does it really mean that angAcc in world-space is the same as angAcc in local-space?
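As a sanity check on the inverse-of-a-product step used in the expansion above, here is a small numpy sketch (my own, not part of the original code; I_local and R are made-up stand-ins for the local inertia tensor and the localToWorld rotation):

```python
import numpy as np

# Stand-in local inertia tensor (diagonal, as for a box) and a rotation about Z.
I_local = np.diag([1.0, 2.0, 3.0])
angle = 0.7
R = np.array([[np.cos(angle), -np.sin(angle), 0.0],
              [np.sin(angle),  np.cos(angle), 0.0],
              [0.0,            0.0,           1.0]])

# The identity the derivation relies on: (A * B)^-1 == B^-1 * A^-1.
lhs = np.linalg.inv(I_local @ R)
rhs = np.linalg.inv(R) @ np.linalg.inv(I_local)
print(np.allclose(lhs, rhs))  # True
```

So the algebra of the expansion is fine on its own; the question is what the single multiplication by localToWorld means physically.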

#5242595 Rotations Blending

Posted by maxest on 25 July 2015 - 07:50 AM

I've been doing graphics programming for quite some time now and, surprisingly, never had to deal much with animation. Now I need to blend more than two transforms (each transform is represented by a position vector3 and a rotation quaternion). With two transforms it's easy: you lerp the translations and slerp the rotations, providing a "blend progress" parameter. Now how do I do the same for more than two transforms? For translation it's easy and intuitive - just lerp more than two translation vectors whose weights sum up to 1. Check. Now the rotations... How do I slerp more than two transforms? I searched the web and it turns out it's not that simple a problem. So I asked a fellow animation programmer and, to my surprise, he told me I had it all wrong. Slerping two rotations is not the same as blending them, he told me. Instead of slerping the rotations I should do this:

Quaternion q1 = Quaternion.Slerp(Quaternion.identity, rotation1, weight1);
Quaternion q2 = Quaternion.Slerp(Quaternion.identity, rotation2, weight2);
Quaternion blendedQ = q1 * q2;

And when I need more rotations I just add more identity-to-rotation slerps and compose them with the total blended quaternion. Now that is easy and seems pretty intuitive. We just rotate by (weightN of rotationN) for each N.
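In pseudocode, that generalization might look like this (a rough Python sketch of my own with a toy quaternion type, not code from any engine):

```python
import math

def quat_mul(a, b):
    # Hamilton product, (w, x, y, z) convention.
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def quat_slerp(a, b, t):
    dot = sum(x*y for x, y in zip(a, b))
    if dot < 0.0:                       # take the shorter arc
        b, dot = tuple(-c for c in b), -dot
    if dot > 0.9995:                    # nearly parallel: lerp and normalize
        q = tuple(x + t*(y - x) for x, y in zip(a, b))
        n = math.sqrt(sum(c*c for c in q))
        return tuple(c/n for c in q)
    theta = math.acos(dot)
    sa = math.sin((1.0 - t)*theta) / math.sin(theta)
    sb = math.sin(t*theta) / math.sin(theta)
    return tuple(sa*x + sb*y for x, y in zip(a, b))

IDENTITY = (1.0, 0.0, 0.0, 0.0)

def blend_rotations(rotations, weights):
    # Compose slerp(identity, q_i, w_i) for each rotation, as described above.
    result = IDENTITY
    for q, w in zip(rotations, weights):
        result = quat_mul(result, quat_slerp(IDENTITY, q, w))
    return result
```

For example, blending two 90-degree rotations about Z with weights 0.5 each composes back to the 90-degree rotation, as you'd hope.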

So eventually I tried googling for rotation blending or animation blending, expecting to find many people asking this question and getting an answer similar to the one my friend gave me. But it turns out this solution is rather hard to find. Is that because it's so obvious that nobody asks about it, or do you guys have different ways of blending rotations?

#5240504 QueryPerformanceCounter and jerking

Posted by maxest on 15 July 2015 - 09:35 AM

Yup. That did the trick.

Thank you very much!

#5227209 Spherical Harmonics - analytical solution

Posted by maxest on 04 May 2015 - 03:19 PM

So, I've gone through this paper (http://www.inf.ufrgs.br/~oliveira/pubs_files/Slomp_Oliveira_Patricio-Tutorial-PRT.pdf). It's all perfectly clear how everything works (or at least I think so); I implemented the stuff and it works. The only thing that bugged me was that I had to do quite a lot of computation just to project some analytical lights (as the code in the paper shows, you integrate the light function over a number of samples, which could be a lot), and I was pretty sure I had seen SH done "simpler". And I found chapter 2.15 in ShaderX3. This is the code the author uses to project a single directional light into SH:

static void CalculateCoefficentsPerLight(LightStruct *light)
{
    SHCoeff[0].red   += light->colour[0] * fConst1;
    SHCoeff[0].green += light->colour[1] * fConst1;
    SHCoeff[0].blue  += light->colour[2] * fConst1;

    SHCoeff[1].red   += light->colour[0] * fConst2 * light->direction[0];
    SHCoeff[1].green += light->colour[1] * fConst2 * light->direction[0];
    SHCoeff[1].blue  += light->colour[2] * fConst2 * light->direction[0];

    SHCoeff[2].red   += light->colour[0] * fConst2 * light->direction[1];
    SHCoeff[2].green += light->colour[1] * fConst2 * light->direction[1];
    SHCoeff[2].blue  += light->colour[2] * fConst2 * light->direction[1];

    SHCoeff[3].red   += light->colour[0] * fConst2 * light->direction[2];
    SHCoeff[3].green += light->colour[1] * fConst2 * light->direction[2];
    SHCoeff[3].blue  += light->colour[2] * fConst2 * light->direction[2];

    SHCoeff[4].red   += light->colour[0] * fConst3 * (light->direction[0] * light->direction[2]);
    SHCoeff[4].green += light->colour[1] * fConst3 * (light->direction[0] * light->direction[2]);
    SHCoeff[4].blue  += light->colour[2] * fConst3 * (light->direction[0] * light->direction[2]);

    SHCoeff[5].red   += light->colour[0] * fConst3 * (light->direction[2] * light->direction[1]);
    SHCoeff[5].green += light->colour[1] * fConst3 * (light->direction[2] * light->direction[1]);
    SHCoeff[5].blue  += light->colour[2] * fConst3 * (light->direction[2] * light->direction[1]);

    SHCoeff[6].red   += light->colour[0] * fConst3 * (light->direction[1] * light->direction[0]);
    SHCoeff[6].green += light->colour[1] * fConst3 * (light->direction[1] * light->direction[0]);
    SHCoeff[6].blue  += light->colour[2] * fConst3 * (light->direction[1] * light->direction[0]);

    SHCoeff[7].red   += light->colour[0] * fConst4 * (3.0f * light->direction[2] * light->direction[2] - 1.0f);
    SHCoeff[7].green += light->colour[1] * fConst4 * (3.0f * light->direction[2] * light->direction[2] - 1.0f);
    SHCoeff[7].blue  += light->colour[2] * fConst4 * (3.0f * light->direction[2] * light->direction[2] - 1.0f);

    SHCoeff[8].red   += light->colour[0] * fConst5 * (light->direction[0] * light->direction[0] - light->direction[1] * light->direction[1]);
    SHCoeff[8].green += light->colour[1] * fConst5 * (light->direction[0] * light->direction[0] - light->direction[1] * light->direction[1]);
    SHCoeff[8].blue  += light->colour[2] * fConst5 * (light->direction[0] * light->direction[0] - light->direction[1] * light->direction[1]);
}


Later on, these nine SHCoeffs are passed to the vertex shader to compute the final illumination:

    vec3 norm = gl_Normal;
    color  = Coefficients[0];
    color += Coefficients[1] * norm.x;
    color += Coefficients[2] * norm.y;
    color += Coefficients[3] * norm.z;
    color += Coefficients[4] * norm.x * norm.z;
    color += Coefficients[5] * norm.y * norm.z;
    color += Coefficients[6] * norm.x * norm.y;
    color += Coefficients[7] * (3.0 * norm.z * norm.z  - 1.0);
    color += Coefficients[8] * (norm.x * norm.x  - norm.y * norm.y);

This looks much simpler, and I don't see any need for sophisticated integration, nor even any dot(lightVector, normal) term (which is present in the PRT tutorial paper in the projection of the transfer function). So my question is simply: how are the ShaderX3 article's constants derived, and how does this relate to the "general" solution presented in the PRT tutorial paper?

#5224432 Rigid Body Physics - Inertia Tensor Space

Posted by maxest on 20 April 2015 - 03:46 AM

I am glad it works, but personally I would be suspicious and assume I might potentially miss a transform somewhere.

I'm almost sure this code works fine. The rigid body behaves quite predictably and very similarly to a simulation I made in Unity.

I could share my code if anyone wanted to play with it.


I'd also be happy if anyone could look again at the original code, which works, and its other variant that I think should work as well but for some reason doesn't.

Original code:

        oriMat = quaternion.ToMatrix(self.ori)
        inertia_world = self.inertia_local * oriMat
        angAcc = vector.Transform(self.totalTorque, matrix.Inverted(inertia_world))

To recap: oriMat is effectively local-to-world transform. self.totalTorque is torque in world space. So this code transforms local inertia to world inertia and uses its inverse along with world torque to compute world angular acceleration.


Modified version:

        oriMat = quaternion.ToMatrix(self.ori)
        self.totalTorque = vector.Transform(self.totalTorque, matrix.Inverted(oriMat))
        angAcc = vector.Transform(self.totalTorque, matrix.Inverted(self.inertia_local))
        angAcc = vector.Transform(angAcc, oriMat)

Here I wanted to do the torque-inertia transform in local space. To do so, I transform self.totalTorque to rigid body's local space using inverse of oriMat. Then I mul that by inverted local inertia to get angular acceleration in local space. Finally, I transform angular acceleration to world space by transforming it with oriMat.

For some reason this variant doesn't yield correct behaviour of my rigid body and I really can't tell why. Anyone?

#5224146 Rigid Body Physics - Inertia Tensor Space

Posted by maxest on 18 April 2015 - 04:42 AM

I think we're misunderstanding each other a bit here :).

First of all, the code I posted works perfectly fine. And I transform the local inertia tensor to world inertia tensor (in column-major order) like this:

I_world = R * I_local

which is these parts of my code (note row-major order):

    oriMat = quaternion.ToMatrix(self.ori)
    inertia_world = self.inertia_local * oriMat

And I think this is the "real" world inertia tensor. In the general case, when you have a vector V in some local space, have a local-to-world matrix transform, and want to get vector V in world space, you do this:

V_world = local-to-world * V_local

not this:

V_world = local-to-world * V_local * world-to-local

I think your solution:

I_world = R * I_local * transpose(R)

is only "valid" in this specific formula:

L = R * I_local * transpose(R) * omega_world

but it's not because of this:

L = ( R * I_local * transpose(R) ) * omega_world

but because of this:

L = R * I_local  * ( transpose(R) * omega_world )

The "interpretation" for me here is that we take omega_world, transform it to local space (hence mul by transpose of R), then we can mul by local inertia (because omega is now in local space) and finally we take all this to world space by mul with R.
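The two groupings are of course equal by associativity of matrix multiplication; a quick numerical check (my own numpy sketch, column-vector convention as in the quoted formula; I_local and R are made-up stand-ins):

```python
import numpy as np

I_local = np.diag([1.0, 2.0, 3.0])        # stand-in local inertia tensor
angle = 0.5
R = np.array([[np.cos(angle), -np.sin(angle), 0.0],  # rotation about Z
              [np.sin(angle),  np.cos(angle), 0.0],
              [0.0,            0.0,           1.0]])
omega_world = np.array([0.3, -1.2, 0.8])

# Same product, grouped both ways:
L1 = (R @ I_local @ R.T) @ omega_world    # (R * I_local * R^T) * omega
L2 = R @ (I_local @ (R.T @ omega_world))  # R * (I_local * (R^T * omega))
print(np.allclose(L1, L2))  # True
```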


Sorry guys but I'm a proponent of row-major order and I'm going to use it ;).

#5216713 Normal Map Generator (from diffuse maps)

Posted by maxest on 15 March 2015 - 03:21 PM

Sometimes you simply don't have "curvature data" and want to have *any* normal maps :P.


Do these plugins work from the command line?

#5216703 Normal Map Generator (from diffuse maps)

Posted by maxest on 15 March 2015 - 02:22 PM

I bet someone can make use of this :)



#5161449 Share the most challenging problem you solved recently! What made you fee...

Posted by maxest on 19 June 2014 - 04:49 AM

I liked the result of my work on implementing a package system for the engine's filesystem that supports simultaneous reading, actual file reading, and zip decompression :).

#4935895 Add skybox to deferred scene

Posted by maxest on 29 April 2012 - 03:11 PM

You don't need to use stencil if you don't want to; you just need to force your skybox to have a depth of 1.0 and then enable depth testing. The easiest way to do that is to set your viewport MinZ and MaxZ to 1.0.

I did that in my game and... it didn't work well. The OpenGL renderer worked fine, but D3D9 was pushing the skybox in front of other objects that were very close to the far plane (maybe some driver bug?). To avoid the problem I forced all skybox vertices to have z = 1 in the vertex shader:
VS_OUTPUT main(VS_INPUT input)
{
	VS_OUTPUT output;
	output.position = mul(input.position, worldViewProjTransform);
	output.position.z = output.position.w;
	output.texCoord0 = input.texCoord0;

	return output;
}

#4930009 How to dump debug data when running a program on a machine without VC++

Posted by maxest on 10 April 2012 - 03:46 PM

Yes, that's it! :)
I also found it here: http://kb.acronis.com/content/6265
Info on how to use VC++ to investigate a dump is here: http://msdn.microsoft.com/en-us/library/fk551230.aspx - but it's a horrible shame it doesn't work with the Express version :/
Anyway, thanks mrbastard

#4846982 SSAO Problem

Posted by maxest on 09 August 2011 - 07:30 PM

My simple advice is to visualize each input data to your SSAO separately. Some of it should also exhibit this weird triangular pattern. Visualize kernel vectors, depths, whatever you use for SSAO computation.

#4826743 [SlimDX] Shadow Map

Posted by maxest on 23 June 2011 - 06:25 AM

Quick guess: the format looks wrong to me. Shouldn't that be R32F or D24? I would assume D24, taking into account that you have the DepthStencil bind flag.
btw: what does the DX debugger say?

#4769768 Tangents and Binormals?

Posted by maxest on 04 February 2011 - 04:48 PM

Tangent/bitangent/normal form an orthonormal basis (usually given in the object space of a model, for every vertex) that is used to transform all vectors (that you use in your computations) into the tangent space of a texture (normal map). This way all vectors, including the vectors you sample from the texture, reside in the same space, so the computations can be correct. This is needed for normal maps that store vectors in tangent space (such normal maps have blueish colors).
Another possibility is to store the normals in a normal map in object space (or any other space you wish). In this case you do not need tangents and bitangents since, for example, transforming the normals from the sampled normal map would only require the world transform to get them into world space.
If you want a solid reference, Eric Lengyel's "Mathematics for 3D Game Programming and Computer Graphics" gives a very nice discussion of this topic.
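To make the first approach concrete, here's a tiny numpy sketch (my own illustration with made-up basis vectors, not code from the book): since the TBN basis is orthonormal, its inverse is its transpose, so multiplying by TBN^T takes an object-space vector into tangent space.

```python
import numpy as np

# Orthonormal tangent-space basis for one vertex, in object space (stand-in values).
tangent   = np.array([0.0, 0.0, 1.0])
bitangent = np.array([1.0, 0.0, 0.0])
normal    = np.array([0.0, 1.0, 0.0])

# Columns are the basis vectors: TBN maps tangent space -> object space.
TBN = np.column_stack([tangent, bitangent, normal])

light_dir_object = np.array([0.0, 0.6, 0.8])

# Orthonormal basis: inverse == transpose, so object -> tangent space is TBN.T,
# i.e. each component is just a dot product with the corresponding basis vector.
light_dir_tangent = TBN.T @ light_dir_object
print(light_dir_tangent)  # [0.8 0.  0.6]
```

The same transform is applied to the light/view vectors in the pixel or vertex shader so they end up in the same space as the sampled normal.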

#4768060 My Bachelor Thesis "Software Renderer Accelerated by CUDA Technology"

Posted by maxest on 01 February 2011 - 12:50 PM

Hey guys,

A few days back I finished my bachelor thesis and the project that accompanies it. In short, my project was about implementing a selected subset of OpenGL/Direct3D functionality and seeing how much it can be sped up with CUDA. If you're interested, want to share opinions, etc., here's the website of the project: http://maxest.gct-game.net/vainmoinen/index.html
Note that I put a lot of effort into explaining the vertex and pixel processing phases in detail, including a software implementation of texture mapping with bilinear filtering and mip-mapping. I hope someone will learn something from this :)