clb — Member since 22 May 2004; last active 24 Apr 2016, 11:45 AM
- Posts: 735
- Website: http://clb.demon.fi/
Posted by clb on 26 January 2012 - 09:49 AM
orthogonal matrix: A matrix which has orthonormal column (and hence row) vectors. It is a property of orthogonal matrices that their inverse equals their transpose.
orthonormal matrix: There is no such thing.
The reason for the above definition is perhaps historical: matrices with orthogonal (but not necessarily normalized) column/row vectors do not have properties nice enough to be worth much to mathematicians in linear algebra, and so the above definition stuck.
The 3D games programmers definition:
orthogonal matrix: A matrix which has orthogonal column (if you are a Matrix*vector guy) or row (if you are a vector*Matrix guy) vectors, but not necessarily normalized.
orthonormal matrix: A matrix which has orthonormal column (and hence row) vectors.
Note that if a matrix has orthogonal (but not normalized) column vectors, it does not follow that its row vectors would also be orthogonal (and vice versa).
One thing that both mathematicians and 3D programmers do agree are the definitions of a set of orthogonal vectors (a collection of vectors which are all perpendicular to each other), and a set of orthonormal vectors (a collection of vectors which are all normalized and perpendicular to each other).
The above ill-logical definitions are discussed in both books Geometric Tools for Computer Graphics and Mathematics for 3D Game Programming and Computer Graphics. As a programmer, I definitely prefer the programmers' definitions, and that is what I use in my MathGeoLib library (find the link in my signature).
Posted by clb on 10 January 2012 - 12:28 PM
When talking about protection against deliberately forged or tampered data, the relevant terms are hash function, message authentication code and digital signature. If you are looking to build a system that is cryptographically strong, these will involve private-public key pairs and/or server-client communication.
Protections that don't use any of the above, become reverse-engineering challenges, instead of cryptographic challenges for the attacker (not to say that reverse-engineering challenges would be any easier in practice, if performed well).
Posted by clb on 16 December 2011 - 02:01 PM
I recommend you go back to your OBJ loader, and take it as an exercise in using a debugger to hunt down what the problem with your loader is. What kind of attitude is it to first spend several days trying to write your own OBJ loader, then hit a problem, conclude "oh, I guess this isn't gonna work.", and drop it altogether to evade the real issue?
In any serious piece of code you write, especially in C++, you will have crash bugs again and again. Sounds like what you need is someone to tell you to get a hold of yourself, pick up that goddamn debugger, and start stepping through the code. Figure out where it crashes, what causes the crash, and why. Then fix it up. Because that is what you will always be doing as a programmer, and you have to learn to do it without dying inside.
If you are not willing to do that, do not want to learn C++, or file I/O, or pointers, but still want to do games, then I recommend stepping back and trying to learn a higher-level engine. For example, Unity3D is very popular nowadays.
Posted by clb on 14 December 2011 - 03:16 AM
As iMalc pointed out, the problem is separable into two applications of a 1D scan through the input points: first solve for X, then for Y. This works because you don't allow diagonal movements. Taxicab distance makes things simpler!
To solve the problem in 1D:
- You are given an array of input points N_1, N_2, ..., N_k, integers, and you need to find an integer K such that
a) Max | N_i - K | is minimized, or
b) Sum | N_i - K | is minimized.
I am unsure which one you were after, so here are both:
In case A, take the average of the minimum and maximum values, rounded to an integer.
In case B, first sort the points into ascending order (can be done in linear time with a radix or bucket sort), then perform a linear pass through the array. At each step, you can compute the "penalty" caused by leaving the past points behind, and the gain received by moving the center point forward. At some point these two balance, and you have the optimal center point. That balance point is exactly the median of the inputs, so after sorting you can in fact just pick the middle element directly.
Posted by clb on 14 December 2011 - 02:57 AM
Posted by clb on 14 December 2011 - 01:05 AM
In Real-time Rendering, page 307, there is a sentence---'The primary limitation of environment maps is that they only represent distant objects'.
I think what this means is that even if you dynamically re-rendered the environment map each frame (ignoring any performance hit), you are still left with an approximation: the reflection color returned by the cube map lookup does not take into account the position P on the surface of the model where the reflection of the light ray occurs. Instead, it is computed as if the reflection occurred at the pivot/origin O of the model (the center of the frustum that was used to generate the environment cube map).
If we estimate that the distance D of the reflected object to the point P is much larger than |P - O|, or "tending to infinity", then the error caused by not taking into account |P - O| tends to zero.
A similar idea comes up when explaining why it is OK to treat sunlight as a positionless directional light instead of a point light source: since the sun is so far away, the difference in light direction across the scene is minimal, so a directional light is a good approximation.
Posted by clb on 27 November 2011 - 06:43 PM
Posted by clb on 09 November 2011 - 06:45 AM
There is no such mathematical term as "a rotation vector from a set of rotation around 3 axes (x,y,z) with z".
To represent a rotation operation in 3D space, you want to be looking for:
- axis-angle representation of rotation. This is a normalized vector (x,y,z) representing the axis of rotation, and a scalar angle r that represents the amount to rotate around that axis.
- Euler angle representation of rotation. These are the triplets of form (a,b,c).
- 3x3 rotation matrices (if rotating around an axis passing through the origin).
- 3x4 rotation matrices with a translation (if rotating around an axis passing through an arbitrary point).
- Unit quaternions. (4D extensions of the complex numbers, of the form r + xi + yj + zk.) These rotate around an axis through the origin.
For example C++ code snippets on how to convert between these different representations, you can try browsing through the MathGeoLib codebase (in my sig).
Posted by clb on 27 October 2011 - 02:38 PM
- Always overestimate the bounding box size, so that even if the sprite increases its size, it will still be fully inside, or only so slightly outside the physical bounding box that it does not look distracting even if sprites slightly overlap.
- Disallow altogether the game mechanic or visual effect that causes sprites to resize.
- Link the game mechanic/visual effect that resizes the sprite to the physics subsystem and disallow the sprite from changing size if it would result in an overlap. I.e. only allow sprite resize if it has free room for it.
- Link the sprite resizing to the physics subsystem, and perform a physical reaction to when the sprite increases the size and collides, e.g. push its position away from the obstacle so that it does not penetrate any more, or give it a velocity away from the obstacle to resolve the penetration (or do both, i.e. adjust the position *and* velocity at the same time).
Posted by clb on 27 October 2011 - 09:07 AM
Now the output from the matrix created with an angle axis rotation is
  36.36 |   275.61 |  -239.80 | 0.00
 153.04 |   128.61 | -1722.20 | 0.00
-331.73 | -1706.90 |  2286.80 | 0.00
   0.00 |     0.00 |     0.00 | 1.00
So am I doing something wrong? Or is this just a scaling issue?
This is very wrong. Creating a rotation matrix from an axis-angle rotation should produce an orthonormal matrix, i.e. a matrix whose column (and row) vectors all have length one and are mutually orthogonal. The magnitudes of the columns and rows of your matrix are much greater than one.
Perhaps you passed an unnormalized direction vector to the code that creates a rotation matrix based off an angle and an axis direction? For reference, see the corresponding code in my library (though it is currently not used; at the moment I route axis-angles through quaternions to matrices).
Posted by clb on 24 October 2011 - 06:00 AM
The thing is, I'd like to be able to draw the 'backfacing' line segments in a different/muted color. I know the obvious solution would be to sort the list and build up two index buffers - one for one color, and another index buffer for the second color. However, that requires re-building the index lists every frame. Is there a better way?
If you converted a triangle list of an object to a line list, to render a wireframe display of the same object, you can simply use the vertex normals from the triangle list for the lines as well, and use a (vertex normal) dot (camera direction) >= 0 test to determine which color to pass out from the vertex shader to pixel shader for drawing.
If that was not the case, and your line lists come from somewhere else (and you can't build a notion of front-facing vs back-facing with normals like that), you can give each vertex in the line list a vertex color to specify what color to draw it in. This is probably better than maintaining two sets of index buffers, though if the colors dynamically change, in both cases (e.g. using an index buffer, or using a vertex color) you will have to discard-lock and reupdate the full vertex/index buffer.
For best performance, the desired method is to provide the vertex shader with the appropriate data to make the color choice, so that you do not need to lock vertex or index buffers at render time each frame (only small-sized constant buffers). Line lists are not special compared to triangle lists, so you can feed in as per-vertex attributes whatever parameters you find to be relevant for the computation.
Posted by clb on 23 October 2011 - 12:32 PM
One of the more common topics on the math forums here at GD is 3D mathematics, geometry manipulation or primitive testing. A lot of people have written their own libraries for vector/matrix/quaternion math and geometric primitive intersection testing, but there does not seem to exist a single library that is specifically tuned for these purposes.
The result is now called MathGeoLib, which is a C++ library for games-oriented 3D mathematics and geometry manipulation:
I am not overly interested in "preaching" at people to start using this library, since I am not that big a fan of open source development in general, and I have little time to do free maintenance work or tech support for the library ("you are on your own"). Instead, I hope that the documentation and online source code will serve as a useful repository for anyone looking for code snippets showing how certain intersection tests or mathematical constructs are often implemented.
Any thoughts? Spot obvious bugs? Criticism? Am I good at drawing diagrams or what?
Posted by clb on 07 October 2011 - 02:35 PM
See http://www.geometrictools.com/Documentation/EulerAngles.pdf . It contains a lot of formulations for different ways of breaking down matrices into Euler angles.
Posted by clb on 24 September 2011 - 06:53 AM
Though, some research has been done on implementing rendering using GPGPU methods. For a very recent example, see the paper by Laine and Karras: High-Performance Software Rasterization on GPUs (in High-Performance Graphics 2011). They measure that the traditional rendering APIs beat a CUDA-implemented renderer by a factor of 2-8x.
Also, other sources exist. Perhaps a more interesting one is this: Alternative Rendering Pipelines in CUDA. The prospects for implementing a renderer in CUDA might be to solve the order-independent-transparency problem (see e.g. this), or to enable real-time raytracing.
I have no experience with GPGPU, but the argument put forward was that GPGPU will always be faster because the API layer is thinner while accessing the exact same hardware, and also because of shared memory.
In light of what I linked to above, I don't think this statement is true at all.
If someone knows about published papers similar to Laine and Karras, please share a link. This is an interesting topic to me as well.