
clb

Member Since 22 May 2004

#4922930 Advanced Mathematics for Computer Science

Posted by clb on 17 March 2012 - 06:26 PM

I enjoyed watching the videos at ADUni.org. They give a good example of how an undergrad CS curriculum is structured.


#4922602 OpenGL 4 ray test problem

Posted by clb on 16 March 2012 - 09:33 AM

Yes, that is the norm. Since the kd-tree stores static pregenerated geometry in object local space, the rays are transformed from world space into each object's local space before raycasting against the kd-tree. The result is usually wanted in world space, so the local-space raycast hit position is then transformed back to world space.
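For illustration, a minimal sketch of that round trip (float3x4 and Ray as in MathGeoLib; RaycastLocal is a hypothetical per-object kd-tree query that returns the hit distance along the ray, or a negative value on a miss):

float3x4 localToWorld = object.WorldTransform();
Ray localRay = worldRay;
localRay.Transform(localToWorld.Inverted()); // world space -> object local space

float d = object.RaycastLocal(localRay); // raycast against the pregenerated kd-tree
if (d >= 0.f)
{
   float3 localHit = localRay.GetPoint(d); // hit position in object local space
   float3 worldHit = localToWorld.TransformPos(localHit); // and back to world space
}

Note that if the transform contains scale, a distance measured along the local-space ray differs from the world-space distance, so compare hit distances in one common space.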


#4922361 OpenGL 4 ray test problem

Posted by clb on 15 March 2012 - 12:53 PM

When I linked to the MathGeoLib implementation, I did not mean to imply that it would be a ready-made drop-in solution to use for your projects, mostly because MathGeoLib is not a library that is actively maintained for external developers to use. It is more intended as a source repository for copy-paste snippets of math algorithms for teaching purposes. That said, there is no reason why you could not use the whole library, but it may require some configuration to adapt it to your build system.

The spatial containers implementation in MathGeoLib is a very recent addition. Unfortunately, there are no examples in the documentation on how to use them. If you want to use them from MathGeoLib, I recommend digging into them thoroughly to get familiar with the implementation, because you will probably need to customize them for your own use.

Here is how I use the KdTree class:
KdTree<Triangle> kdTree; // 1. Instantiate a new empty KdTree. The KdTree object behaves much like other containers like std::vector

// 2. Populate the tree with the triangles. You can add them one by one, or,
//    if the data is contiguous in memory, all in one call.
for(size_t i = 0; i < triangles.size(); ++i) // e.g. from a std::vector<Triangle> triangles
   kdTree.AddObjects(&triangles[i], 1);

// 3. 'Build' the KdTree to generate the spatial index. Before this step, the KdTree is not usable.
kdTree.Build();

// Now we're ready to do queries:
Ray ray;
ray.pos = float3(1,2,3);
ray.dir = float3(1,0,0);
FindNearestIntersection fni;
kdTree.RayQuery(ray, fni);
std::cout << "Hit triangle at index " << fni.nearestIndex << " at distance " << fni.nearestD << " along the ray." << std::endl;

To do a query at the last step, you must implement a query 'callback' functor, which the KdTree class invokes for each node it traverses. I use the FindNearestIntersection query object, which can be found here.


#4922227 OpenGL 4 ray test problem

Posted by clb on 15 March 2012 - 05:11 AM

I strongly recommend against a GPU-based solution, even if your mesh data currently resides only on the GPU.

I use a QuadTree for my scene container (from MathGeoLib), which stores the AABBs of objects, and for each object, I prebuild a kD-tree for raycasting. The kD-tree contains the triangles of the mesh. Testing on the Stanford Bunny (about 70k tris) typically gives raycast times of 0.001ms - 0.01ms per ray. For performing a raycast in the whole scene, I first traverse the QuadTree, and then for the object AABBs that the ray hits, I perform a kD-tree search.
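In rough pseudocode, the two-level query looks like the following (the helper names are hypothetical, not the actual QuadTree/KdTree interfaces):

Object *RaycastScene(const Ray &worldRay, float &outWorldDist)
{
   Object *nearest = 0;
   outWorldDist = FLT_MAX;
   // Broad phase: collect the objects whose world-space AABB the ray hits.
   for(Object *obj : quadTree.ObjectsIntersecting(worldRay)) // hypothetical helper
   {
      // Narrow phase: raycast against the object's kd-tree in its local space.
      Ray localRay = worldRay;
      localRay.Transform(obj->localToWorld.Inverted());
      float d = obj->RaycastLocal(localRay); // hypothetical, negative on a miss
      if (d < 0.f)
         continue;
      // Measure the hit distance in world space, so that objects with
      // different scales compare correctly.
      float3 worldHit = obj->localToWorld.TransformPos(localRay.GetPoint(d));
      float worldDist = worldRay.pos.Distance(worldHit);
      if (worldDist < outWorldDist) { outWorldDist = worldDist; nearest = obj; }
   }
   return nearest;
}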

The kD-tree approach works for static, unskinned geometry. For skeletally animated meshes, an animated OBB-tree/cylinder-tree could be used instead.

If you went with a GPU rasterization based approach, you would probably render the whole scene to a 1x1 render target with a 1x1 depth buffer, drawing the object IDs into the color target. I think even for single meshes, getting the results back will take on the order of 0.1ms minimum, and for whole scenes it'll probably take 0.5ms or more of CPU time alone just to cull and submit the whole scene for rendering. Additionally, you will have to stall to get the results back, and your CPU and GPU will then run in lockstep for a part of the frame, with the GPU sitting idle while it waits to receive the rest of the frame's rendering commands.

So in summary, I'd definitely keep a CPU-side copy of the geometry data (positions only; no need to store UVs, normals, etc. if you don't need them), and build the appropriate spatial acceleration structures to optimize query speed.

(I initially considered creating low-res meshes just for raycasts, to reduce the triangle count the CPU has to process, but since the kD-tree turned out to be so fast, I never bothered. It is another option to consider, though.)


#4919220 Why can't static const float members be initialized in a class?

Posted by clb on 04 March 2012 - 11:57 AM

As Antheus said, it's because the standard says so. It allows only integral types to be initialized like that, so floating point types have to be defined the normal way, with a separate definition in a single translation unit. At least pre-C++11...

No, it's because float isn't an integral or enumeration type. Under C++03, those were the only kinds of static data members you could initialize in the body of a class or struct.


Just answering 'because they say so' is not particularly insightful. Re-asking the question as 'Why does the C++03 standard state that only integral and enum static const members may be initialized inside the class body?' is more interesting to think about. What is the technical reason, if any, that made the wording allow the feature only for ints and not for floats or, say, const char * strings? Given the amount of work (meetings, reviews, etc.) that goes into producing each version of the standard, it seems implausible that the restriction was a mere oversight. Why did they choose to restrict the feature back then?
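For concreteness, here is the rule in code (C++03 semantics; the constexpr variant is C++11):

struct Constants
{
   static const int maxItems = 16; // OK in C++03: static const of integral type
   // static const float pi = 3.14159f; // error in C++03: float is not an integral/enum type
   static const float pi; // OK: declare here...
};
const float Constants::pi = 3.14159f; // ...and define in exactly one translation unit

struct Constants11
{
   static constexpr float pi = 3.14159f; // OK since C++11
};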


#4910987 Ray collision help?

Posted by clb on 08 February 2012 - 12:37 PM

First choose whether you want to use axis-aligned bounding boxes (AABB), or oriented bounding boxes (OBB).

To generate the optimal minimum AABB for a model, see AABB::SetFrom(pointArray);
To generate the optimal minimum OBB for a model, google for the "Rotating Calipers" method.
To intersect a ray against an AABB, see AABB::IntersectRayAABB; a minimal slab-method sketch follows below.
To intersect a ray against an OBB, see OBB::Intersects(Ray).
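The ray-AABB test mentioned above is the standard 'slab' method, and is short enough to roll yourself. A minimal sketch, assuming simple float3/AABB structs with an indexing operator and minPoint/maxPoint members (needs <algorithm> and <cfloat>):

// Returns true if the ray at 'pos' with direction 'dir' hits 'box'.
// tEnter/tExit receive the parametric entry and exit distances along the ray.
bool IntersectRayAABB(const float3 &pos, const float3 &dir, const AABB &box, float &tEnter, float &tExit)
{
   tEnter = 0.f;
   tExit = FLT_MAX;
   for(int i = 0; i < 3; ++i)
   {
      float invD = 1.f / dir[i]; // IEEE infinities handle axis-parallel rays
      float t0 = (box.minPoint[i] - pos[i]) * invD;
      float t1 = (box.maxPoint[i] - pos[i]) * invD;
      if (t0 > t1) std::swap(t0, t1);
      tEnter = std::max(tEnter, t0);
      tExit = std::min(tExit, t1);
      if (tEnter > tExit)
         return false; // the slab intervals no longer overlap -> no hit
   }
   return true;
}

(One known caveat: if the ray origin lies exactly on a slab boundary of an axis-parallel ray, 0 * infinity produces a NaN; production code should handle that case explicitly.)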

To perform the ray-box intersection test, bring the box and the ray into the same coordinate space (either world space or the object's local space). This means updating your box representation along with your object's transform, which avoids the "the box does not move with the model" issue you described.

Trying to go the route of rendering the object into a frame buffer and doing per-pixel tests to check for intersection is a poor idea due to added latency and performance costs.


#4909782 bump map or normal map

Posted by clb on 05 February 2012 - 04:09 AM

Sure, you can author your texture so that the normal vectors lie in object local space, and then look the object-space normal up directly from the texture. Note that this requires the mesh to have unique triangle UV coordinates across the whole mesh, and it disallows wrapping/repeating normal maps like http://www.paulsprojects.net/tutorials/simplebump/normal.jpg as-is. You will also not be able to reuse the texture for other meshes, like you can do with that normal.jpg.

Tangent space is much more convenient, since there each texel of the texture specifies the surface normal relative to the interpolated surface normal from the vertex data.
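For illustration, decoding a tangent-space normal map texel into a world-space normal looks roughly like this (float3 as in MathGeoLib; T, B and N are the interpolated per-pixel tangent, bitangent and vertex normal):

float3 DecodeTangentSpaceNormal(const float3 &texel, const float3 &T, const float3 &B, const float3 &N)
{
   float3 n = texel * 2.f - float3(1.f, 1.f, 1.f); // remap [0,1] -> [-1,1]
   return (T * n.x + B * n.y + N * n.z).Normalized(); // tangent space -> world space
}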


#4909664 Drawing near objects problem

Posted by clb on 04 February 2012 - 04:13 PM

-Adjust the far plane closer to the near plane


With a standard perspective projection matrix, the effective way to gain more precision over the depth range is to push the near plane as far from zero as possible, rather than to bring the far plane closer to the near plane. Cutting down the draw distance does little to improve precision for the remaining view distance.

If you are using very small near plane distances like 0.01f or 0.1f, try pushing the near plane out to 1.0f, 5.0f or even 10.0f, depending on your scale and how close your camera ever gets to objects.
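To see why, look at the standard [0,1]-range projected depth for view depth z, near plane n and far plane f:

float ProjectedDepth(float z, float n, float f)
{
   return (f / (f - n)) * (1.f - n / z); // post-projective depth in [0,1]
}

With n = 0.01f and f = 1000.0f, ProjectedDepth(1.0f, n, f) is already about 0.99, i.e. roughly 99% of the representable depth values are spent on the first meter. With n = 1.0f, the value 0.99 is not reached until z is about 90.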

Another solution is to linearize your depth buffer. This is a very good read.


#4907957 Skinning + Bone Look At

Posted by clb on 31 January 2012 - 05:37 AM

Your bone matrix transforms from the bone's local space to its parent space. If you combine the transforms W*B1*B2*B3*...*BN, you will get a matrix which transforms from the local space of bone BN to the world space. W is a matrix that transforms from the local space of the object to the world space. B1 is the root bone's local->parent matrix. (Depending on your setup, you may or may not have a separate matrix W, and might just treat B1 as W).

To transform from world space to the bone's local space, compute the matrix M^-1, where M = W*B1*B2*B3*...*BN. That matrix will take world space points/direction vectors into the local space of bone BN.
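A minimal sketch of that, with MathGeoLib-style float3x4 names (the bone list is hypothetical; walk your hierarchy however you store it):

float3x4 M = W; // object local space -> world space
for(Bone *b : bonesFromRootToN) // B1, B2, ..., BN in order
   M = M * b->localToParent;

float3x4 worldToBone = M.Inverted();
float3 posInBone = worldToBone.TransformPos(worldSpacePos); // for points
float3 dirInBone = worldToBone.TransformDir(worldSpaceDir); // for direction vectors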

If that's still not working, try pasting some code so someone can spot what's off.


#4907771 Skeletal animation joint transformation question

Posted by clb on 30 January 2012 - 03:17 PM

I've programmed robots for many years, and there is inevitably a case where gimbal lock becomes an issue, no matter how much you prepare for it.

How are quaternions better than matrices? Both store a single absolute orientation relative to their parent.


People recommend switching from matrices to quats, or from euler angles to quats, to avoid gimbal lock. This advice is unfortunate, because it gives the misleading impression that the way the rotation is represented is somehow what causes the gimbal lock, but it's not.

It is the more subtle effect of having to change your logic when moving from euler angles to quats that helps you avoid gimbal lock (to some extent, depending on what you're doing).

Most commonly, rotations are prone to gimbal lock whenever you operate on a sequence of three (or more) rotations R1 * R2 * R3, each constrained to happen around a fixed axis. With this logic, the rotation performed by R2 can align the axes of R1 and R3 so that they perform their rotations about the same axis. This loses one degree of freedom, which is the issue we have all observed.

Incidentally, Euler angles use scalar triplets (a,b,c) (e.g. for yaw, pitch and roll) and generate rotation matrices R_a * R_b * R_c that combine into the final rotation. This is exactly the situation shown above, and is prone to gimbal lock. Switching to a quaternion (or a matrix) instead of the euler triplet lets you avoid gimbal lock, not due to a change in your data storage, but by forcing you to change your program logic: instead of several subsequent rotations about cardinal axes, you only store a single full orientation, and operate on that single orientation instead.

Note that if you insisted on doing the Euler->Quat conversion poorly, and did something like
Quat yawRotation = Quat::RotateAxisAngle(WorldAxisY, yawAngle);
Quat pitchRotation = Quat::RotateAxisAngle(WorldAxisX, pitchAngle);
Quat rollRotation = Quat::RotateAxisAngle(WorldAxisZ, rollAngle);
Quat finalRotation = yawRotation * pitchRotation * rollRotation;
one might think "it's all quaternions, no matrices or euler angles, so no gimbal lock!", but that's wrong. The code runs rotation logic exactly like the above: a sequence of rotations constrained to fixed axes, and there are at least three of them -> prone to gimbal lock.

So, it's not the representation itself which avoids gimbal lock, but the different logic that the representation lets you run.
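As a sketch of that different logic (Quat as in MathGeoLib; the axis choices are just an example):

Quat orientation = Quat::identity; // the single stored full orientation

void ApplyMouseLook(float deltaYaw, float deltaPitch)
{
   Quat yaw = Quat::RotateAxisAngle(float3(0,1,0), deltaYaw); // about world up
   Quat pitch = Quat::RotateAxisAngle(orientation.Transform(float3(1,0,0)), deltaPitch); // about the current local right axis
   orientation = (yaw * pitch * orientation).Normalized(); // renormalize to fight drift
}

Each frame applies only an incremental rotation to the single stored orientation; no fixed three-rotation chain ever exists, so there is nothing to gimbal lock.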

PS. In the field of robotics, where you have a sequence of several 1DoF joints on a robot arm, you will always have to live with gimbal lock: you can't change the physical world by switching the code from eulers/matrices to quats and changing your program logic - the joints will always be 1DoF.


#4907763 Skeletal animation joint transformation question

Posted by clb on 30 January 2012 - 02:57 PM


EDIT: How does one represent the inverse of an affine transformation in the SQT form?


R' = conjugate( R )


T' = -T
S' = 1/S


This is not correct. If you have S = UniformScaleMatrixBy(s), R = RotationMatrix (orthonormal), T = TranslationMatrixBy(x,y,z), and you have M = T*R*S, then M^(-1) = T'*R'*S', where S' = UniformScaleMatrixBy(1/s), R' = R^t (the transpose), and T' = TranslationMatrixBy(-(1/s)*R^t*(x,y,z)). R needs to be transposed, not conjugated (conjugation refers to taking the complex conjugate of each element), and in the inverse, the translation part is not -T.

For code, see float3x4::InverseOrthogonalUniformScale or float3x4::InverseOrthonormal in MathGeoLib. (Coincidentally, I very recently reported the exact same bug, i.e. mistaking (-x,-y,-z) for -R^t*(x,y,z) as the translation part, in a library called Cinder; see the comments there for more info and the derivation of the math.)
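A quick sketch of the derivation in the column-vector (M*v) convention: applying M = T*R*S to a point p gives

M*p = s*R*p + (x,y,z).

Solving q = s*R*p + (x,y,z) for p:

p = (1/s)*R^t*(q - (x,y,z)) = (1/s)*R^t*q - (1/s)*R^t*(x,y,z),

which is exactly T'*R'*S' with S' = UniformScaleMatrixBy(1/s), R' = R^t and T' = TranslationMatrixBy(-(1/s)*R^t*(x,y,z)). In the orthonormal case s = 1, the translation part reduces to -R^t*(x,y,z).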


#4907415 Skinning + Bone Look At

Posted by clb on 29 January 2012 - 02:26 PM

From your equation, I assume you are using the v*M notation, and will use the same below.

First, compute a standard LookAt matrix that orients the front vector in the local bone space towards the desired world space direction. For reference, see e.g. MathGeoLib float3x4::LookAt, and its derivation.

Let's denote the lookat matrix by L. We want this matrix L to be the local->world matrix for the bone, but since the bone has parent nodes, we have a chain local->parent->world. The problem is to compute a local->parent matrix M for the bone such that the bone's local->world matrix ends up being L. You say you know the parent->world transform (your CombinedParents matrix), so let's denote it by P. Then we have the equation

local->world == local->parent * parent->world, or in terms of matrices above
L == M * P, where we solve for the desired M,
M = L * P^(-1).

Therefore, compute the local->world LookAt matrix L, then compute the world transform of the parent, P, invert it, and compute M = L * P^(-1). Set that as the local->parent matrix for the bone.

In your terms, you will then have

Final = InvBindPose * M.
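As a sketch, keeping your v*M convention (the names here are illustrative, not a specific API; LookAt stands for whatever produces the desired local->world orientation):

Matrix L = LookAt(boneLocalFront, desiredWorldDir); // desired local->world for the bone
Matrix P = CombinedParents; // parent->world, which you already have
Matrix M = L * Inverse(P); // the bone's new local->parent matrix
Matrix Final = InvBindPose * M; // as in your original equation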


#4906806 wxWidgets vs. Qt

Posted by clb on 27 January 2012 - 12:48 PM

We used wxWidgets, then realized its feature set is quite limited compared to Qt's, and switched to Qt. I think Qt is better. Though, in the past two and a half years, Qt has made me cry myself to sleep many, many times.


#4906781 DirectX 11 + MinGW-w64

Posted by clb on 27 January 2012 - 10:41 AM

But you really should try using VC++. It's a nice IDE, especially when you need to debug code.


If the original poster wants to discuss DX11 + MinGW64, then by all means, let him have his topic. There's plenty of room for DX11 + VS2010 threads as well.

So, I should drop the compiler I'm used to, the one that is free with outstanding support for C++11 and virtually no bugs, and switch to VC++ Express, a non-optimizing compiler with few C++11 features and several bugs that I've personally encountered? I'm not exactly excited about the thought of that.

Edit: OK, I stand corrected, it's supposedly a "fully optimizing" compiler, but I believe GCC still wins by far when it comes to the performance of the code generated by its auto vectorization.


Let's not throw in comments based on FUD. GCC is not virtually bug-free: there are 842 outstanding unfixed bugs in the GCC C++ compiler frontend alone. Visual Studio is not bug-free either.

Believing "GCC wins by far in performance" does not make it a correct statement either. It might. Or it might not. Instead of believing, it is better to profile, but unfortunately, there are not that many great sources available that would accurately profile these compilers. Two sources I found
  • http://www.g-truc.net/post-0372.html : VC2010 beats GCC by a large margin.
  • http://www.willus.com/ccomp_benchmark2.shtml?p1 : GCC beats VC2010 in 6 out of 10 tests.
I do not mean to derail; now please, let's stick to the original topic.


#4906451 Animation & normals

Posted by clb on 26 January 2012 - 09:49 AM

In the field of mathematics, the formal definitions are as follows:
orthogonal matrix: A matrix which has orthonormal column (and hence also orthonormal row) vectors. It is a property of orthogonal matrices that their inverse equals their transpose.
orthonormal matrix: There is no such thing.

The reason for the above definition is perhaps historical, and perhaps due to the fact that matrices with orthogonal (but not necessarily normalized) column/row vectors do not have properties nice enough to be of much use to mathematicians in linear algebra, so the above definition stuck.

The 3D games programmers definition:
orthogonal matrix: A matrix which has orthogonal column (if you are a Matrix*vector guy) or row (if you are a vector*Matrix guy) vectors, but not necessarily normalized.
orthonormal matrix: A matrix which has orthonormal column (and hence row) vectors.

Note that if a matrix has orthogonal (but not normalized) column vectors, it does not follow that its row vectors are orthogonal (and vice versa). For example, the 2x2 matrix with columns (1,1) and (-2,2) has orthogonal columns, but its rows (1,-2) and (1,2) are not orthogonal (their dot product is -3).

One thing that both mathematicians and 3D programmers do agree on is the definition of a set of orthogonal vectors (a collection of vectors which are all perpendicular to each other) and a set of orthonormal vectors (a collection of vectors which are all normalized and perpendicular to each other).

These inconsistent definitions are discussed in both Geometric Tools for Computer Graphics and Mathematics for 3D Game Programming and Computer Graphics. As a programmer, I definitely prefer the programmers' definitions, and that is what I use in my MathGeoLib library (find the link in my signature).



