Hm, I think I've just solved the puzzle, to some extent. The values that are displayed differently, depending on the version, are NaNs, so apparently if-statements and max() treat them differently, hence the difference in result.
maxest
Member Since 05 Nov 2005 - Last Active Sep 17 2016 02:00 AM
#5282768 Faulty D3DCOMPILE_SKIP_OPTIMIZATION
Posted by maxest on 22 March 2016 - 05:59 PM
#5273926 Omnidirectional shadow mapping
Posted by maxest on 02 February 2016 - 01:43 PM
If you can do spot light shadow mapping, doing this for a point light should not be a problem. Have you tried spot lights first?
#5273915 DoF - near field bleeding
Posted by maxest on 02 February 2016 - 12:23 PM
I decided to implement the bokeh depth of field algorithm from here: http://www.crytek.com/download/Sousa_Graphics_Gems_CryENGINE3.pdf
Actually, I did the far field part and it works really nicely and fast. What I have big problems with, though, is the near field bleeding part (slide 43). To be honest, just reading these slides I don't see how the author could achieve this effect without some sort of blurring of the near field CoC. Or maybe he did that but didn't mention it.
Have a look here (yeah, I know; I'm an art master):
Say to the left we have an out-of-focus region that is close to the camera and should bleed onto the background. Now, in slide 44 the author says that the near field CoC should be used for blending. However, when processing pixels belonging to the background, using this value will fill the entire background with the unblurred version of the color buffer, and any bleeding created in the DOF pass won't show up.
Any ideas on how to properly bleed the near field colors are very welcome.
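For what it's worth, one common workaround (an assumption on my part, not something the slides confirm) is to blur or dilate the near-field CoC itself before using it as the blend factor, so that background pixels just outside the near object's silhouette still pick up some of the blurred foreground color. A minimal 1D sketch in Python, where sharp, blurred, near_coc and the box_blur helper are all hypothetical stand-ins for the real buffers and filters:

```python
def box_blur(values, radius):
    # Simple 1D box blur; stands in for the separable blur a real DoF pass would use.
    n = len(values)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(values[lo:hi]) / (hi - lo))
    return out

def composite_near_field(sharp, blurred, near_coc, radius=2):
    # Spreading the near-field CoC lets the foreground blur "bleed" past the
    # object's silhouette onto background pixels whose own CoC is zero.
    coc_spread = box_blur(near_coc, radius)
    return [s * (1.0 - c) + b * c for s, b, c in zip(sharp, blurred, coc_spread)]

# Hypothetical scanline: a near, out-of-focus object on the left (CoC = 1),
# in-focus background on the right (CoC = 0).
sharp    = [1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0]
blurred  = box_blur(sharp, 2)
near_coc = [1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0]

result = composite_near_field(sharp, blurred, near_coc)
```

With the raw near_coc, background pixels would blend in the fully sharp color and the bleed would vanish, which matches the problem described above; with the spread CoC the pixels just past the silhouette pick up some of the blurred foreground.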
#5245830 Rigid Body Physics - Inertia Tensor Space
Posted by maxest on 11 August 2015 - 03:13 PM
Sorry for such a massive delay but I completely forgot about that thread...
Okay, I've gone through your equations and the link you provided but it's still not right.
You're saying (as does the link) that to transform the tensor you go transform * tensor * transform^T. But my code doesn't do that. It simply does transform * tensor, and it works (in world space). Look again at the code that computes the world-space angular acceleration:
localToWorld = quaternion.ToMatrix(self.ori)
inertia_world = self.inertia_local * localToWorld
angAcc_world = vector.Transform(self.totalTorque, matrix.Inverted(inertia_world))
Expanded and written in math notation it would be like:
angAcc_world = self.totalTorque * inertia_world^-1 = self.totalTorque * (self.inertia_local * localToWorld)^-1 = self.totalTorque * localToWorld^-1 * self.inertia_local^-1
Again, localToWorld is only used once, not twice. And this code really works; the simulation behaves in a predictable manner, as opposed to any other approach I try.
I don't really know where this:
Finally we define the world space inertia as:
worldSpaceInertia^-1 = localToWorld * localInertia^-1 * localToWorld^T
is taking place in my code.
One interesting thing here is that the last form of this equation could be written like this:
angAcc_world = self.totalTorque * localToWorld^-1 * self.inertia_local^-1 = (self.totalTorque * localToWorld^-1) * self.inertia_local^-1 = totalTorque_local * self.inertia_local^-1 = angAcc_local
Now that is darn confusing. Does it really mean that angAcc in world space is the same as angAcc in local space?
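A quick numeric check (my own sketch using numpy, with a made-up 90-degree rotation and a deliberately non-isotropic diagonal inertia) shows why the one-sided transform cannot be the real world-space tensor: R * I_local is generally not even symmetric, while R * I_local * R^T always is, as an inertia tensor must be.

```python
import numpy as np

# Hypothetical body: non-isotropic diagonal inertia in local space.
I_local = np.diag([1.0, 2.0, 3.0])

# 90-degree rotation about the z axis (local-to-world).
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])

one_sided  = R @ I_local          # the single-transform version
conjugated = R @ I_local @ R.T    # the textbook world-space tensor

# The conjugated tensor stays symmetric (here it is just the diagonal
# with the x and y moments swapped, as a 90-degree turn should produce)...
symmetric_ok = np.allclose(conjugated, conjugated.T)
# ...while the one-sided product is not symmetric, so it cannot be a
# valid inertia tensor, even if a simulation using it happens to look plausible.
one_sided_symmetric = np.allclose(one_sided, one_sided.T)
```

For an I_local proportional to the identity (a sphere-like body) the two forms coincide, which may be why some test scenes look right either way.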
#5242595 Rotations Blending
Posted by maxest on 25 July 2015 - 07:50 AM
I've been doing graphics programming for quite some time now and surprisingly never had to deal much with animation. Now I need to blend more than two transforms (each transform is represented by a position vector3 and a rotation quaternion). With two transforms it's easy: you do lerp for the translation and slerp for the rotation, providing a "blend progress" parameter. Now how do I do the same for more than two transforms? For translation it's easy and intuitive: just lerp more than two translation vectors whose weights sum up to 1. Check. Now the rotations... So how do I slerp more than two transforms? I searched the web and it turns out it's not that simple a problem. So I asked a fellow animation programmer and to my surprise he told me I had it all wrong. Slerping two rotations is not the same as blending them, he told me. Instead of slerping rotations I should do this:
Quaternion q1 = Quaternion.Slerp(Quaternion.identity, rotation1, weight1);
Quaternion q2 = Quaternion.Slerp(Quaternion.identity, rotation2, weight2);
Quaternion blendedQ = q1 * q2;
And when I need more rotations I just add more identity-to-rotation slerps and compose them with the total blended quaternion. Now that is easy and seems pretty intuitive. We just rotate by (weightN of rotationN) for each N.
So eventually I tried googling for rotation blending or animation blending, expecting many people asking this question and getting an answer similar to the one my friend gave me. But it turns out this solution is rather hard to find. Is that because it is so obvious that nobody asks about it, or do you guys have some different ways of blending rotations?
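For reference, the scheme described above can be sketched in plain Python (my own code, quaternions as (w, x, y, z) tuples; slerping from identity by a weight just scales the rotation angle). One caveat worth noting: composing slerps this way is order-dependent, unlike a true weighted average.

```python
import math

def q_mul(a, b):
    # Hamilton product of two (w, x, y, z) quaternions.
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def q_slerp_from_identity(q, t):
    # Slerp(identity, q, t): rotate by t times q's angle about q's axis.
    w = max(-1.0, min(1.0, q[0]))
    angle = math.acos(w)
    if angle < 1e-8:
        return (1.0, 0.0, 0.0, 0.0)
    s = math.sin(t * angle) / math.sin(angle)
    return (math.cos(t * angle), q[1] * s, q[2] * s, q[3] * s)

def blend_rotations(rotations, weights):
    # Compose Slerp(identity, q_i, w_i) over all inputs, as suggested above.
    result = (1.0, 0.0, 0.0, 0.0)
    for q, w in zip(rotations, weights):
        result = q_mul(result, q_slerp_from_identity(q, w))
    return result

# Sanity check: blending a 90-degree z rotation with itself, weights
# 0.5 + 0.5, should give the same 90-degree rotation back.
half = math.sqrt(0.5)
q90z = (half, 0.0, 0.0, half)
blended = blend_rotations([q90z, q90z], [0.5, 0.5])
```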
#5240504 QueryPerformanceCounter and jerking
Posted by maxest on 15 July 2015 - 09:35 AM
Yup. That did the trick.
Thank you very much!
#5227318 Spherical Harmonics - analytical solution
Posted by maxest on 05 May 2015 - 09:17 AM
The light direction is stored in slots 1, 2, and 3 of the precalculated Coefficients array, which are each multiplied by the matching component of the surface normal in the shader, and all are finally added together...
Correct.
Oliveira uses numerical integration, because in his tutorial he projects a light probe and shadowed lights into SH basis. Both usually don't have an analytical form, so he can't leverage orthogonality of SH basis and solve that analytically.
Right. So the question now is: what should my light and transfer functions look like so that after analytical integration I end up with the coefficients from the ShaderX3 article?
#5227209 Spherical Harmonics - analytical solution
Posted by maxest on 04 May 2015 - 03:19 PM
So, I've gone through this paper (http://www.inf.ufrgs.br/~oliveira/pubs_files/Slomp_Oliveira_PatricioTutorialPRT.pdf). It's all perfectly clear how everything works (or at least I think so); I implemented the stuff and it works. The only thing that bugged me was that I had to do quite a lot of computation just to project some analytical lights (as the code in the paper shows, you integrate the light function over a number of samples, which could be a lot), and I was pretty sure I had seen SH used in a "simpler" way. And I found chapter 2.15 in ShaderX3. This is the code the author uses for projecting a single directional light into SH:
static void CalculateCoefficentsPerLight(LightStruct *light)
{
    SHCoeff[0].red   += light->colour[0] * fConst1;
    SHCoeff[0].green += light->colour[1] * fConst1;
    SHCoeff[0].blue  += light->colour[2] * fConst1;
    SHCoeff[1].red   += light->colour[0] * fConst2 * light->direction[0];
    SHCoeff[1].green += light->colour[1] * fConst2 * light->direction[0];
    SHCoeff[1].blue  += light->colour[2] * fConst2 * light->direction[0];
    SHCoeff[2].red   += light->colour[0] * fConst2 * light->direction[1];
    SHCoeff[2].green += light->colour[1] * fConst2 * light->direction[1];
    SHCoeff[2].blue  += light->colour[2] * fConst2 * light->direction[1];
    SHCoeff[3].red   += light->colour[0] * fConst2 * light->direction[2];
    SHCoeff[3].green += light->colour[1] * fConst2 * light->direction[2];
    SHCoeff[3].blue  += light->colour[2] * fConst2 * light->direction[2];
    SHCoeff[4].red   += light->colour[0] * fConst3 * (light->direction[0] * light->direction[2]);
    SHCoeff[4].green += light->colour[1] * fConst3 * (light->direction[0] * light->direction[2]);
    SHCoeff[4].blue  += light->colour[2] * fConst3 * (light->direction[0] * light->direction[2]);
    SHCoeff[5].red   += light->colour[0] * fConst3 * (light->direction[2] * light->direction[1]);
    SHCoeff[5].green += light->colour[1] * fConst3 * (light->direction[2] * light->direction[1]);
    SHCoeff[5].blue  += light->colour[2] * fConst3 * (light->direction[2] * light->direction[1]);
    SHCoeff[6].red   += light->colour[0] * fConst3 * (light->direction[1] * light->direction[0]);
    SHCoeff[6].green += light->colour[1] * fConst3 * (light->direction[1] * light->direction[0]);
    SHCoeff[6].blue  += light->colour[2] * fConst3 * (light->direction[1] * light->direction[0]);
    SHCoeff[7].red   += light->colour[0] * fConst4 * (3.0f * light->direction[2] * light->direction[2] - 1.0f);
    SHCoeff[7].green += light->colour[1] * fConst4 * (3.0f * light->direction[2] * light->direction[2] - 1.0f);
    SHCoeff[7].blue  += light->colour[2] * fConst4 * (3.0f * light->direction[2] * light->direction[2] - 1.0f);
    SHCoeff[8].red   += light->colour[0] * fConst5 * (light->direction[0] * light->direction[0] - light->direction[1] * light->direction[1]);
    SHCoeff[8].green += light->colour[1] * fConst5 * (light->direction[0] * light->direction[0] - light->direction[1] * light->direction[1]);
    SHCoeff[8].blue  += light->colour[2] * fConst5 * (light->direction[0] * light->direction[0] - light->direction[1] * light->direction[1]);
}
Later on, these nine SHCoeffs are passed to the vertex shader to compute the final illumination:
vec3 norm = gl_Normal;
color = Coefficients[0];
color += Coefficients[1] * norm.x;
color += Coefficients[2] * norm.y;
color += Coefficients[3] * norm.z;
color += Coefficients[4] * norm.x * norm.z;
color += Coefficients[5] * norm.y * norm.z;
color += Coefficients[6] * norm.x * norm.y;
color += Coefficients[7] * (3.0 * norm.z * norm.z - 1.0);
color += Coefficients[8] * (norm.x * norm.x - norm.y * norm.y);
This looks much simpler and I don't see any need for sophisticated integration, nor even any dot(lightVector, normal) term (which is present in the PRT Tutorial paper in the projection of the transfer function). So my question is simply: how are the ShaderX3 article's constants derived, and how does this relate to the "general" solution presented in the PRT Tutorial paper?
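Not an answer from the article itself, but the shape of that code matches projecting a delta-function light: the SH projection of a directional light is just the real band-0..2 basis functions evaluated at the light direction, scaled by the light colour, so no numeric integration is ever needed. The fConst* values would then be the basis normalization constants, presumably folded together with per-band factors for the cosine-lobe convolution. A small Python sketch of the delta-projection part only (my own code, standard real-SH constants; the article's exact sign/ordering conventions are an assumption):

```python
def sh9_basis(d):
    # Real spherical-harmonics basis, bands 0-2, at unit direction d = (x, y, z),
    # ordered to match the shader above: 1, x, y, z, xz, yz, xy, 3z^2-1, x^2-y^2.
    x, y, z = d
    return [
        0.282095,                          # Y_0,0
        0.488603 * x,                      # band-1 terms (ordering/sign assumed)
        0.488603 * y,
        0.488603 * z,
        1.092548 * x * z,                  # band-2 terms
        1.092548 * y * z,
        1.092548 * x * y,
        0.315392 * (3.0 * z * z - 1.0),
        0.546274 * (x * x - y * y),
    ]

def project_directional_light(colour, direction):
    # Projecting a delta light onto SH: evaluate the basis at the light
    # direction and scale by the colour - exactly the structure of the
    # article's accumulation loop, one coefficient per basis function.
    return [[c * b for c in colour] for b in sh9_basis(direction)]

coeffs = project_directional_light((1.0, 1.0, 1.0), (0.0, 0.0, 1.0))
```

The shader then reconstructs sum(coeffs[i] * basis_i(normal)). The remaining gap between these normalization constants and the article's fConst* values should be the clamped-cosine convolution, which only scales each band by a constant, so it still needs no integration.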
#5224432 Rigid Body Physics - Inertia Tensor Space
Posted by maxest on 20 April 2015 - 03:46 AM
I am glad it works, but personally I would be suspicious and assume I might potentially miss a transform somewhere.
I'm almost sure this code works fine. The rigid body behaves quite predictably and very similarly to a simulation I made in Unity.
I could share my code if anyone wanted to play with it.
I'd also be happy if anyone could look again at the original code, which works, and its other variant that I think should work as well but for some reason doesn't.
Original code:
oriMat = quaternion.ToMatrix(self.ori)
inertia_world = self.inertia_local * oriMat
matrix.Invert(inertia_world)
angAcc = vector.Transform(self.totalTorque, inertia_world)
To recap: oriMat is effectively the local-to-world transform, and self.totalTorque is torque in world space. So this code transforms the local inertia to world inertia and uses its inverse, along with the world torque, to compute the world angular acceleration.
Modified version:
oriMat = quaternion.ToMatrix(self.ori)
self.totalTorque = vector.Transform(self.totalTorque, matrix.Inverted(oriMat))
angAcc = vector.Transform(self.totalTorque, matrix.Inverted(self.inertia_local))
angAcc = vector.Transform(angAcc, oriMat)
Here I wanted to do the torque-inertia transform in local space. To do so, I transform self.totalTorque to the rigid body's local space using the inverse of oriMat. Then I multiply that by the inverted local inertia to get the angular acceleration in local space. Finally, I transform the angular acceleration to world space with oriMat.
For some reason this variant doesn't yield correct behaviour of my rigid body and I really can't tell why. Anyone?
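Trying to reproduce the discrepancy numerically (my own sketch with numpy, row-vector convention and made-up values, so it may not match the engine's conventions exactly): the first variant algebraically cancels down to the local-space angular acceleration, and the second variant is that same vector rotated into world space, which is consistent with the algebra in the earlier post. So for a non-isotropic body the two really do compute different things.

```python
import numpy as np

torque_world = np.array([1.0, 2.0, 3.0])     # hypothetical world-space torque
I_local = np.diag([1.0, 2.0, 3.0])           # hypothetical non-isotropic inertia

# Hypothetical orientation, 90 degrees about z; row-vector convention,
# i.e. v_world = v_local @ R.
R = np.array([[ 0.0, 1.0, 0.0],
              [-1.0, 0.0, 0.0],
              [ 0.0, 0.0, 1.0]])

# Variant 1 (the "working" code): invert (I_local @ R), apply to world torque.
angAcc_v1 = torque_world @ np.linalg.inv(I_local @ R)

# Variant 2 (the "broken" code): world -> local, divide by I_local, local -> world.
torque_local = torque_world @ np.linalg.inv(R)
angAcc_v2 = torque_local @ np.linalg.inv(I_local) @ R

# Variant 1 is exactly the LOCAL-space angular acceleration
# (torque_local @ I_local^-1), while variant 2 is that vector taken
# to world space - the missing final rotation is the whole difference.
angAcc_local = torque_local @ np.linalg.inv(I_local)
```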
#5224146 Rigid Body Physics - Inertia Tensor Space
Posted by maxest on 18 April 2015 - 04:42 AM
I think we're misunderstanding each other a bit here.
First of all, the code I posted works perfectly fine. And I transform the local inertia tensor to the world inertia tensor (in column-major order) like this:
I_world = R * I_local
which corresponds to these parts of my code (note the row-major order):
oriMat = quaternion.ToMatrix(self.ori)
inertia_world = self.inertia_local * oriMat
And I think this is the "real" world inertia tensor. In the general case, when you have a vector V in some local space, have a local-to-world matrix transform, and want to get vector V in world space, you do this:
V_world = localToWorld * V_local
not this:
V_world = localToWorld * V_local * worldToLocal
I think your solution:
I_world = R * I_local * transpose(R)
is only "valid" in this specific formula:
L = R * I_local * transpose(R) * omega_world
but it's not because of this:
L = ( R * I_local * transpose(R) ) * omega_world
but because of this:
L = R * I_local * ( transpose(R) * omega_world )
The "interpretation" for me here is that we take omega_world, transform it to local space (hence the multiplication by the transpose of R), then we can multiply by the local inertia (because omega is now in local space), and finally we take all this back to world space by multiplying with R.
Sorry guys, but I'm a proponent of row-major order and I'm going to use it ;).
#5216713 Normal Map Generator (from diffuse maps)
Posted by maxest on 15 March 2015 - 03:21 PM
Sometimes you simply don't have "curvature data" and want to have *any* normal maps.
Do these plugins work from the command line?
#5216703 Normal Map Generator (from diffuse maps)
Posted by maxest on 15 March 2015 - 02:22 PM
I bet someone can make use of this
http://polishgamedeveloper.blogspot.com/2015/03/normalmapgeneratorfromdiffusemaps.html
#5161449 Share the most challenging problem you solved recently! What made you fee...
Posted by maxest on 19 June 2014 - 04:49 AM
I liked the result of my work on implementing a packages system for the engine's file system that supports simultaneous reading, actual file reading, and zip decompression.
#4935895 Add skybox to deferred scene
Posted by maxest on 29 April 2012 - 03:11 PM
You don't need to use stencil if you don't want; you just need to force your skybox to have a depth of 1.0 and then enable depth testing. The easiest way to do that is to set your viewport MinZ and MaxZ to 1.0.
I did that in my game and... it didn't work well. The OpenGL renderer worked fine, but D3D9 was pushing the skybox in front of other objects that were very close to the far plane (maybe some driver bug?). To avoid the problem I forced all skybox vertices to have z = 1 in the vertex shader:
VS_OUTPUT main(VS_INPUT input)
{
    VS_OUTPUT output;
    output.position = mul(input.position, worldViewProjTransform);
    output.position.z = output.position.w;
    output.texCoord0 = input.texCoord0;
    return output;
}
#4930009 How to dump debug data when running a program on a machine without VC++
Posted by maxest on 10 April 2012 - 03:46 PM
I also found it here: http://kb.acronis.com/content/6265
Info on how to use VC++ to investigate a dump is here: http://msdn.microsoft.com/en-us/library/fk551230.aspx, but it's a horrible shame it doesn't work with the Express version :/
Anyway, thanks mrbastard