clb
Member Since 22 May 2004
Offline, Last Active May 24 2016 10:01 AM
Community Stats
 Group Members
 Active Posts 735
 Profile Views 8,600
 Submitted Links 0
 Member Title Member
 Age Unknown
 Birthday Unknown

Gender
Not Telling
#4922932 Rotation Matrix from Vector in DirectX
Posted by clb on 17 March 2012 - 06:32 PM
#4922602 OpenGL 4 ray test problem
Posted by clb on 16 March 2012 - 09:33 AM
#4922361 OpenGL 4 ray test problem
Posted by clb on 15 March 2012 - 12:53 PM
The spatial container implementations in MathGeoLib are a very recent addition. Unfortunately, there are no examples in the documentation yet on how to use them. If you want to use them from MathGeoLib, I recommend you dig into the implementation to get familiar with it, because you will probably need to customize it for your own use.
Here is how I use the KdTree class:
// 1. Instantiate a new empty KdTree. The KdTree object behaves much like other containers such as std::vector.
KdTree<Triangle> kdTree;
// 2. Populate the tree with triangles. This can be done one-by-one, or, if the data is contiguous in memory, in one go.
for(each triangle)
    kdTree.AddObjects(&triangle, 1);
// 3. 'Build' the KdTree to generate the spatial index. Before this step, the KdTree is not usable.
kdTree.Build();
// Now we're ready to do queries:
Ray ray;
ray.pos = float3(1,2,3);
ray.dir = float3(1,0,0);
FindNearestIntersection fni;
kdTree.RayQuery(ray, fni);
std::cout << "Hit triangle at index " << fni.nearestIndex << " at distance " << fni.nearestD << " along the ray." << std::endl;
To do a query at the last step, you must implement a query 'callback' function, which the KdTree class calls for each node it traverses. I use the FindNearestIntersection query object, which can be found here.
#4922227 OpenGL 4 ray test problem
Posted by clb on 15 March 2012 - 05:11 AM
I use a QuadTree as my scene container (from MathGeoLib), which stores the AABBs of objects, and for each object, I prebuild a kD-tree for raycasting. The kD-tree contains the triangles of the mesh. Testing on the Stanford Bunny (about 70k tris) typically gives raycast times of 0.001ms - 0.01ms per ray. To perform a raycast against the whole scene, I first traverse the QuadTree, and then, for the object AABBs that the ray hits, I perform a kD-tree search.
The kD-tree approach works for static unskinned geometry. For skeletal animated meshes, an animated OBB-tree/Cylinder-tree could be used instead.
If you are using a GPU rasterization-based approach, you would probably render the whole scene to a 1x1 render target, with a 1x1 depth buffer, and draw the object IDs to the color target. I think even for single meshes, getting the results back will take on the order of 0.1ms minimum, and for whole scenes, it'll probably take 0.5ms or more of CPU time alone to query and submit the whole scene for rendering. Additionally, you will have to stall to get the results back, and then your CPU and GPU will run in lockstep for a part of the frame. The GPU will sit idle while it waits to receive the rest of the frame's rendering commands.
So in summary, I'd definitely keep a CPU-side copy of the geometry data (position only; no need to store UVs, normals, etc. if you don't need them), and build the appropriate spatial acceleration structures to optimize the query speeds.
(I initially considered creating low-res meshes solely for raycasts, to reduce the triangle counts for the CPU processing, but since using a kD-tree turned out to be so fast, I never bothered with it. It's also an option to consider, though.)
#4919220 Why can't static const float members be initialized in a class?
Posted by clb on 04 March 2012 - 11:57 AM
As Antheus said, it's because the standard says so. It simply allows only integer types to be initialized like that, so floating-point types have to be defined the normal way, with a separate definition in a single translation unit. At least pre-C++11...
No, it's because float isn't an integral or enumeration type. Under C++03, those were the only kinds of static data members you could initialize in the body of a class or struct.
Just answering 'because they say so' is not particularly insightful. Re-asking the question as 'Why does the C++03 standard state that only integral and enum static const members may be initialized inside the class body?' is more interesting to think about. What is the technical reason, if any, for the feature being available only for ints and not floats or e.g. const char * strings, so that the wording was made that way? Given the amount of work (meetings, reviews, etc.) that is (or seems to be) put into producing each version of the standard, it sounds implausible that it would have been due to mere oversight. Why did they choose to restrict the feature back then?
#4910987 Ray collision help?
Posted by clb on 08 February 2012 - 12:37 PM
To generate the optimal minimum AABB for a model, see AABB::SetFrom(pointArray);
To generate the optimal minimum OBB for a model, google for the "Rotating Calipers" method.
To intersect a ray against an AABB, see AABB::IntersectRayAABB.
To intersect a ray against an OBB, see OBB::Intersects(Ray).
To perform the ray-box intersection test, bring the box and the ray into the same coordinate space (i.e. world space, or the object's local space) for the test. This means that you update your box representation along with the transform of your object, which avoids the "the box does not move with the model" issue you described.
Trying to go the route of rendering the object into a frame buffer and doing per-pixel tests to check for intersection is a poor idea due to the added latency and performance costs.
#4909782 bump map or normal map
Posted by clb on 05 February 2012 - 04:09 AM
Tangent space is much more convenient, since there each texel of the texture specifies the surface normal with respect to the interpolated surface normal from the vertex data.
#4909664 Drawing near objects problem
Posted by clb on 04 February 2012 - 04:13 PM
Adjust the far plane closer to the near plane
With a standard perspective projection matrix, the effective way to gain more precision over the depth range is to push the near plane as far from zero as possible, rather than bringing the far plane closer to the near plane. Cutting down the draw distance does little to improve precision for the remaining view distance.
If you are using very small near plane distances, like 0.01f or 0.1f, try pushing the near plane to 1.0f, 5.0f or even 10.0f, depending on your scale and on how close your camera ever gets to objects.
Another solution is to linearize your depth buffer. This is a very good read.
#4907957 Skinning + Bone Look At
Posted by clb on 31 January 2012 - 05:37 AM
To transform from world space to the bone's local space, compute the matrix M^-1, where M = W*B1*B2*B3*...*BN. That matrix will take world-space points/direction vectors into the local space of bone BN.
If that's still not working, try pasting some code so that someone can spot what's off.
#4907771 Skeletal animation joint transformation question
Posted by clb on 30 January 2012 - 03:17 PM
I've programmed robots for many years, and there is inevitably a case where gimbal lock will be an issue, no matter how much you prepare for it.
How are quaternions better than matrices? Both store a single absolute orientation relative to their parent.
People recommend switching from matrices to quats, or from euler angles to quats, to avoid gimbal lock. This is unfortunate, because it gives the misleading impression that the way the rotation is represented is somehow what causes the gimbal lock, but it's not.
It is the more subtle effect of having to change your logic when moving from euler angles to quats which helps you avoid gimbal lock (to some extent, depending on what you're doing).
Most commonly, rotations are prone to gimbal lock whenever you are operating on a sequence of three (or more) rotations R1 * R2 * R3, each constrained to happen around a fixed axis. With this logic, it can occur that the rotation performed by R2 aligns the axes of R1 and R3 so that they perform their rotations about the same axis. This causes one degree of freedom to be lost, which is the issue we have all observed.
Incidentally, Euler angles use scalar triplets a,b,c (e.g. for yaw, pitch and roll) and generate rotation matrices R_a * R_b * R_c that combine into the final rotation. This is exactly the form shown above, and is prone to gimbal lock. Switching to a quaternion (or a matrix) instead of the euler triplets allows you to avoid gimbal lock, not due to a change in your data storage, but by forcing you to change your program logic: instead of several subsequent rotations about cardinal axes, you store only a single full orientation, and operate on that single orientation instead.
Note that if you insisted on doing the Euler->Quat conversion poorly, and did something like
Quat yawRotation = Quat::RotateAxisAngle(WorldAxisX, yawAngle);
Quat pitchRotation = Quat::RotateAxisAngle(WorldAxisY, pitchAngle);
Quat rollRotation = Quat::RotateAxisAngle(WorldAxisZ, rollAngle);
Quat finalRotation = yawRotation * pitchRotation * rollRotation;
one might think "it's all quaternions, no matrices or euler angles, so no gimbal lock!", but that's wrong. The code is running rotation logic exactly like that shown above: a sequence of rotations constrained about fixed axes, and there are at least three of those -> prone to gimbal lock.
So it's not the representation itself that avoids gimbal lock, but the different logic that the representation lets you run.
PS. In the field of robotics, where you have a sequence of several 1DoF joints on a robot arm, you will always have to live with gimbal lock, since you can't change the physical world by changing the code from eulers/matrices to quats and switching your program logic - the joints will always be 1DoF.
#4907763 Skeletal animation joint transformation question
Posted by clb on 30 January 2012 - 02:57 PM
EDIT: How does one represent the inverse of an affine transformation in the SQT form?
R' = conjugate( R )
T' = -T
S' = 1/S
This is not correct. If you have S = UniformScaleMatrixBy(s), R = RotationMatrix (orthonormal), T = TranslationMatrixBy(x,y,z), and you have M = T*R*S, then M^(-1) = T'*R'*S', where S' = UniformScaleMatrixBy(1/s), R' = R^t (the transpose), and T' = -R^t*(x,y,z). R needs to be transposed, not conjugated (conjugation refers to taking the complex conjugate of each element), and in the inverse, the translation part is not simply the negation of the original translation.
For code, see float3x4::InverseOrthogonalUniformScale or float3x4::InverseOrthonormal in MathGeoLib. (Coincidentally, I very recently reported the exact same bug, i.e. mistaking the negated (x,y,z) for -R^t*(x,y,z) as the translation part of the inverse, in a library called Cinder; see the comments there for more info and the derivation of the math.)
#4907415 Skinning + Bone Look At
Posted by clb on 29 January 2012 - 02:26 PM
First, compute a standard LookAt matrix that orients the front vector in the local bone space towards the desired world space direction. For reference, see e.g. MathGeoLib float3x4::LookAt, and its derivation.
Let's denote the lookat matrix by L. We want this matrix L to be the local->world matrix for the bone, but since the bone has parent nodes, we have a chain local->parent->world. The problem is to compute a local->parent matrix M for the bone such that the local->world matrix for that bone ends up being L. You say you know the parent->world transform (your CombinedParents matrix), so let's denote that by P. Then we have the equation
local->world == local->parent * parent->world, or in terms of the matrices above,
L == M * P, where we solve for the desired M,
M = L * P^(-1).
Therefore, compute the local->world LookAt matrix L, then compute the world transform of the parent, P, invert it, compute L * P^(-1), and set that as the local->parent matrix M for the bone.
In your terms, you will then have
Final = InvBindPose * M.
#4906806 wxWidgets vs. Qt
Posted by clb on 27 January 2012 - 12:48 PM
#4906781 DirectX 11 + MinGWw64
Posted by clb on 27 January 2012 - 10:41 AM
But you really should try using VC++. It's a nice IDE, especially when you need to debug code.
If the original poster wants to discuss DX11 + MinGW64, then by all means, let him have his topic. There's plenty of room for DX11 + VS2010 threads as well.
So, I should drop the compiler I'm used to, the one that is free, with outstanding support for C++11 and virtually no bugs, and switch to VC++ Express, a non-optimizing compiler with few C++11 features and several bugs that I've personally encountered? I'm not exactly excited at the thought of that.
Edit: OK, I stand corrected, it's supposedly a "fully optimizing" compiler, but I believe GCC still wins by far when it comes to the performance of the code generated by its auto-vectorization.
Let's not throw in comments based on FUD. GCC is not virtually bug-free: there are 842 outstanding unfixed bugs in the GCC C++ compiler frontend alone. Visual Studio is not bug-free either.
Believing "GCC wins by far in performance" does not make it a correct statement either. It might. Or it might not. Instead of believing, it is better to profile, but unfortunately, there are not many good sources available that accurately profile these compilers. Two sources I found:
- http://www.g-truc.net/post-0372.html : VC2010 beats GCC by a large margin.
 http://www.willus.com/ccomp_benchmark2.shtml?p1 : GCC beats VC2010 in 6 out of 10 tests.