
Everything posted by Eric_Brown

Determine current angle of rotation around vector
Eric_Brown replied to CaptainRedmuff's topic in Math and Physics
Captain, Yes, o is the position of the camera relative to the center of the sphere. When I say sphere space, I mean that the center of the sphere is the origin of this space, and the offset vector o is given relative to this origin. I would like to try and verify that I know what it is you are trying to do: From what I can tell, you are storing p and c, and using them to calculate o. You want to represent the location of the camera on the sphere by using a set of angles. You then use these angles to rotate o. Then you want to recalculate p using the rotated o. Is this correct? 
Determine current angle of rotation around vector
Eric_Brown replied to CaptainRedmuff's topic in Math and Physics
Hey Cap'n I would avoid using spherical coordinates to represent the position of the camera on the surface of a sphere. Rather, store the center of the sphere, and the current sphere offset vector. If the magnitude of the offset vector is fixed, then the position of the camera is always on the sphere: p = c + o, where p is the vector representing the world space position of the camera, c is the vector representing the world space center of the sphere, and o is the vector representing the sphere space position of the camera. This representation is especially simple if you want to have a smooth blend from the current sphere space position of the camera o to a new desired location od. The blend is done in two steps. The first step is o' = o + b(od - o), where b is between 0 and 1 and determines the blend rate. If b is 0 then there is no blend. If b is 1 there is an instantaneous blend. The second step is o'' = |o| o' / |o'|, which constrains the blended position to lie on the surface of the sphere. This blend is nonlinear, but it is smooth. Also, changing the destination point before the blend is complete does not cause problems. Store this blended value as the new offset vector, and recalculate the world space position of the camera. Then find the lookat vector and so forth. Eric Brown 
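The two-step blend described above might be sketched like this in C++ (a minimal illustration; the `Vec3` helpers and function names are mine, not from the post):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b)    { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  add(Vec3 a, Vec3 b)    { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3  scale(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static float length(Vec3 a)         { return std::sqrt(a.x*a.x + a.y*a.y + a.z*a.z); }

// Step 1: move the current offset o toward the desired offset od by blend rate b.
// Step 2: rescale the result so the camera stays on the sphere of radius r.
Vec3 blendSphereOffset(Vec3 o, Vec3 od, float b, float r) {
    Vec3 blended = add(o, scale(sub(od, o), b));
    float len = length(blended);        // assumed nonzero (o and od not opposite)
    return scale(blended, r / len);
}
```

The world space camera position is then recovered as `c + blendSphereOffset(...)`.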
How to find sin, cos, and tan without a calculator. Does this answer exist???
Eric_Brown replied to Chrono1081's topic in Math and Physics
Quote:Original post by Zipster Quote:Original post by alvaro Hmmm... That doesn't seem to add anything to the formula, except complexity. If you plug in a negative x in the usual formula, you'll get the correct result. It could be that in Euler's time there was a stylistic reason to prefer a formula that incorporates exp(ix) and exp(-ix). Even if you get the same result in the end, perhaps it's to clarify that the sign of 'x' only influences the terms involved with calculating 'sin' and not 'cos'. Or to hint at the effect of complex conjugation. 
What I've found is that matrix decomposition is useful when you need to output a particular transform to a human, such as in a debugging tool, or a game design tool. For this reason alone, having a good decomposition procedure is handy. Other than interfacing with humans, keep the transform in a more compact form, such as a 4x4 matrix, or as {scale vec, rotation quat, translate vec} which simplifies a lot of blending operations. If you aren't going to be doing a lot of blending, you should really just use matrices.
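The {scale vec, rotation quat, translate vec} form mentioned above might look like this (a minimal sketch; the struct layout and the component-wise lerp/nlerp blend are illustrative choices, not a particular engine's API):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };
struct Quat { float w, x, y, z; };

// Compact transform: scale, rotation quaternion, translation.
struct Transform {
    Vec3 scale;
    Quat rotation;
    Vec3 translation;
};

// Blending is component-wise: lerp scale and translation, nlerp the rotation
// (adequate when the two rotations are not far apart).
Transform blend(const Transform& a, const Transform& b, float t) {
    Transform out;
    out.scale       = { a.scale.x + (b.scale.x - a.scale.x) * t,
                        a.scale.y + (b.scale.y - a.scale.y) * t,
                        a.scale.z + (b.scale.z - a.scale.z) * t };
    out.translation = { a.translation.x + (b.translation.x - a.translation.x) * t,
                        a.translation.y + (b.translation.y - a.translation.y) * t,
                        a.translation.z + (b.translation.z - a.translation.z) * t };
    Quat q = { a.rotation.w + (b.rotation.w - a.rotation.w) * t,
               a.rotation.x + (b.rotation.x - a.rotation.x) * t,
               a.rotation.y + (b.rotation.y - a.rotation.y) * t,
               a.rotation.z + (b.rotation.z - a.rotation.z) * t };
    float len = std::sqrt(q.w*q.w + q.x*q.x + q.y*q.y + q.z*q.z);
    out.rotation = { q.w/len, q.x/len, q.y/len, q.z/len };
    return out;
}
```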

marcus, I think I felt a similar confusion when I first started delving into the world of computational physics simulation. My confusion came from the fact that the process was called "integration", but the method of integration seemed more like taking a derivative. For instance, you give an example of a differentiation: Quote: Differentiation (I understand) It gives rate of change, eg: ball is moving then to get new position at time 't' ( ds/dt = dv) we do like .. present_position = old_position + dv * dt; but the expression "present_position = old_position + dv * dt" would actually be termed integration. When you approximate calculus with finite differences, the operations of differentiation and integration are encoded in the standard algebraic operations of addition, subtraction, multiplication and division. With this encoding, you can kind of think of a derivative as being associated with division, and an integral as being associated with multiplication. For example, the approximation of a derivative can be expressed like this: v = (s(t + Δt) - s(t)) / Δt, which incorporates a division. Integrating the previous equation amounts to multiplying both sides by Δt: s(t + Δt) = s(t) + v Δt. You can rearrange this expression to get the equation that you were referencing. So, in the end, even though they are basically the same relationship, if the equation is in the divided form it is a derivative. If the equation is in the multiplied form it is an integral. Once I came to grips with this, I overcame my confusion about terminology.
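The encoding described above, division for the derivative and multiplication for the Euler integration step, can be sketched as (function names are mine):

```cpp
#include <cassert>
#include <cmath>

// "Derivative" form: a finite difference divided by the time step.
float approximateVelocity(float s_old, float s_new, float dt) {
    return (s_new - s_old) / dt;   // v ~ ds/dt
}

// "Integral" form: multiply back through by the time step (explicit Euler).
float integratePosition(float s_old, float v, float dt) {
    return s_old + v * dt;         // present_position = old_position + v * dt
}
```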

Accurate ray-to-plane intersection with floats
Eric_Brown replied to taz0010's topic in Math and Physics
Taz, I'm not sure if I can answer your question, but I can provide a couple of little gems I keep around when dealing with precision problems regarding planes. The first is: Floating point subtraction is notoriously bad, and can magnify the error introduced by previous floating point operations. Addition isn't so bad, but adding a negative number is the same as subtraction, and since you may not know the sign of your float you might as well treat addition and subtraction the same. Many times you obtain the plane normal from a set of three points that lie on the plane. This is usually done by subtracting the vertices to get edge vectors, then taking the cross product of the edge vectors. This results in a lot of terms that are VERY BAD. Most of the precision problems I've had with planes have been due to the way I was calculating the plane normal. I've worked out a method for calculating a normal from three points, which seeks to minimize the badness.

Vector findPlaneNormal(const Vector& a, const Vector& b, const Vector& c)
{
    Vector n;
    n.x = a.y*(b.z - c.z) + b.y*(c.z - a.z) + c.y*(a.z - b.z);
    n.y = a.z*(b.x - c.x) + b.z*(c.x - a.x) + c.z*(a.x - b.x);
    n.z = a.x*(b.y - c.y) + b.x*(c.y - a.y) + c.x*(a.y - b.y);
    n.Normalize();
    return n;
}

This method seems to perform better than the standard "cross product of edge vectors" approach. I'm not sure how much better it is, but it has solved a lot of my precision problems when dealing with planes. 
Camera with Angular Velocity or Angular Momentum
Eric_Brown replied to Bemba's topic in Math and Physics
Bemba, did you get this working? 
One thing I've done before is construct a replay animation. It seems like it would take too much memory, but I've actually managed it using a little compression trick. For the animation you will need to be able to evaluate the position and orientation of several objects at every moment in time during the replay. Thus you will need to record these positions and orientations. You may also need to store the animation frame a particular character is on, or other stuff like that. You don't need to store a value for every frame, you just need to be able to determine a value at every frame. Only store data on particular frames, and linearly interpolate, or use cubic splines to get the frames in between. That's a simple idea, but how do you decide which frames are keyframes, and which frames are not during the initial run of the simulation? Here is a simple trick. Always store the current state as a keyframe. After storing the current state as a keyframe, take a look at the last 3 keyframes. If you remove the second to last keyframe, how does the interpolated value compare to the real value? If it is below a predefined threshold, then you get rid of the second to last keyframe. It is possible to store your keyframes sequentially in an array, since when you are writing the array, you are only ever manipulating the last and second to last entries. This makes reading the keyframes much quicker. I can envision ways in which this scheme might fail; however, when I did it, I was able to get an excellent replay, with no visible error. My compression ratio was around 10%, but in some cases it was much lower. I think I was replaying about 10-15 objects, some of them animated, some of them updated by physics. The keyframe structure stored the replay frame time, the position, the rotation quat, and the animation frame.
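The keyframe-pruning trick above might be sketched like this for a single scalar channel (a real replay would store position, rotation quat, and animation frame; the struct and function names here are illustrative):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Keyframe {
    float time;
    float value;
};

// Linearly interpolate the channel between keyframes a and b at time t.
static float lerpAt(const Keyframe& a, const Keyframe& b, float t) {
    float u = (t - a.time) / (b.time - a.time);
    return a.value + (b.value - a.value) * u;
}

// Always append the current state; then, if interpolating across the
// second-to-last keyframe reproduces it within the threshold, drop it.
void recordKeyframe(std::vector<Keyframe>& keys, float time, float value,
                    float threshold) {
    keys.push_back({time, value});
    size_t n = keys.size();
    if (n < 3) return;
    const Keyframe& a   = keys[n - 3];
    const Keyframe& mid = keys[n - 2];
    const Keyframe& b   = keys[n - 1];
    if (std::fabs(lerpAt(a, b, mid.time) - mid.value) < threshold)
        keys.erase(keys.begin() + (n - 2));
}
```

Because only the last two entries are ever touched while recording, the array stays sorted by time and reads back sequentially, as the post notes.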

[C#] Platform game physics using integration. Is it possible?
Eric_Brown replied to Spa8nky's topic in Math and Physics
One way to incorporate precanned motion into your dynamic simulation is to find the value of a that will move the character to the position where you know it should be. This is not what I recommend, however, since the character motion usually needs to be pretty deterministic. I've found that it is easier to just snap or blend the position of the character to the precanned location, then adjust the velocity to take into account the shift in position that you just introduced. When it is time to turn the precanned motion off, the dynamics will smoothly take over. The main disconnect between precanned animation and dynamic physics is that a precanned animation usually only updates the position. The dynamic physics needs the position AND the velocity to be updated. 
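The snap-then-adjust-velocity hand-off described above might look like this in 1D (a minimal sketch; names are mine, and a real character controller would do this per axis and possibly blend rather than snap):

```cpp
#include <cassert>
#include <cmath>

// Snap the position to the pre-canned location, then fold the positional
// shift we just introduced back into the velocity, so that when the canned
// motion ends the dynamics take over smoothly.
void applyCannedMotion(float& position, float& velocity,
                       float cannedPosition, float dt) {
    float shift = cannedPosition - position;
    position = cannedPosition;   // deterministic: character is where it should be
    velocity = shift / dt;       // velocity implied by the shift over this frame
}
```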
Camera with Angular Velocity or Angular Momentum
Eric_Brown replied to Bemba's topic in Math and Physics
Here is a solution that might work for you. I haven't tried this, but it seems to work on paper. Sorry, it's kind of long :( If you were dealing with positions, instead of rotations, you may try a damped spring approach, where each frame you calculate the spring force between your current location and your desired location. You would also calculate some damping that is opposed to your velocity so that you don't wildly oscillate. Here is the rotational version of a damped spring force using quaternions. You need to store a "position" (q) and "velocity" (v) quaternion to represent your rotational state. You are going to calculate a rotational acceleration (a) that is analogous to a damped spring. I'll discuss later how to calculate a, but once we have it, we will step the state forward like this: v' = Slerp(1, a, Δt) v, q' = Slerp(1, v', Δt) q. If your time step is small you should replace the expensive Slerps with NLerps. If your time step is equal to 1 then you can remove the slerps altogether and the equations simplify to v' = a v, q' = v' q. There are two components to a. One applies a spring force to push you toward your desired rotation. The other dampens the motion so that you don't start wildly oscillating. The linear spring force is proportional to the difference between your current and desired positions. For rotations, this difference is the rotation that rotates your current "look at" vector into your desired "look at" vector. We will call this rotation Δq. Here is a function based on the half angle method that will quickly calculate this quaternion, without employing any trig or inverse trig functions.

Quat getRotationQuat(const Vector& from, const Vector& to)
{
    Quat result;
    Vector H = VecAdd(from, to);
    H = VecNormalize(H);
    result.w = VecDot(from, H);
    result.x = from.y*H.z - from.z*H.y;
    result.y = from.z*H.x - from.x*H.z;
    result.z = from.x*H.y - from.y*H.x;
    return result;
}

Once you have calculated Δq, you need to "scale" it by a constant that represents the spring's stiffness. 
The analogue of scaling in quaternion land is slerping between 1 and the given quaternion by the scale parameter. The acceleration due to the spring is given by a_spring = Slerp(1, Δq, k). The acceleration due to damping is proportional and opposed to the current velocity. Again, you just need to "scale" the velocity quat using a slerp between 1 and the inverse of the velocity: a_damp = Slerp(1, v^-1, μ). Since quaternion multiplication does not commute, it is not clear if the final acceleration quat should be composed with the spring acceleration first, or with the damping acceleration first. My guess is that you should just pick one, and then tweak the k and μ parameters until it behaves correctly. At any rate, the total acceleration can be given by the product of these two acceleration components: a = a_spring a_damp. As before, if your k and μ parameters are small enough, you may want to consider replacing the expensive slerps with normalized lerps. You may just want to do this anyway, I would! If you decide to try this, let me know how well it works. I've often contemplated if it was possible to do rotational "dynamics" in this way. 
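A minimal C++ sketch of the scheme above, taking the Δt = 1 simplification and the nlerp substitution the post suggests (the `Quat` layout, `scaleQuat` helper, and composition order are my assumptions):

```cpp
#include <cassert>
#include <cmath>

struct Quat { float w, x, y, z; };

// Hamilton product.
static Quat mul(const Quat& a, const Quat& b) {
    return { a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z,
             a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
             a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
             a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w };
}

// Inverse of a unit quaternion.
static Quat conjugate(const Quat& q) { return { q.w, -q.x, -q.y, -q.z }; }

// "Scale" a quaternion: normalized lerp between identity and q by factor s.
static Quat scaleQuat(const Quat& q, float s) {
    Quat r = { 1.0f + (q.w - 1.0f) * s, q.x * s, q.y * s, q.z * s };
    float len = std::sqrt(r.w*r.w + r.x*r.x + r.y*r.y + r.z*r.z);
    return { r.w/len, r.x/len, r.y/len, r.z/len };
}

// One step with dt = 1: a = spring * damping, v' = a v, q' = v' q.
void stepRotationalSpring(Quat& q, Quat& v, const Quat& deltaQ,
                          float k, float mu) {
    Quat spring  = scaleQuat(deltaQ, k);         // push toward desired rotation
    Quat damping = scaleQuat(conjugate(v), mu);  // oppose current velocity
    Quat a = mul(spring, damping);               // composition order: a tuning choice
    v = mul(a, v);
    q = mul(v, q);
    float len = std::sqrt(q.w*q.w + q.x*q.x + q.y*q.y + q.z*q.z);
    q = { q.w/len, q.x/len, q.y/len, q.z/len };  // guard against drift
}
```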
Matrix interpolation method for Skeleton?
Eric_Brown replied to Zouflain's topic in Math and Physics
There are a few things that quaternions just do better than matrices, and interpolation is one of them. To the point: if inserting quaternions requires you to consider rewriting a lot of your animation code, you may want to consider the rewrite. As an alternative you may want to specify the keyframes using quaternions, and then once you have the interpolated result, immediately convert from a quaternion to a matrix. What is the initial form of the keyframe data? It surely isn't a matrix, is it? Most animation packages export animation curves as Euler angles, or quaternions. 
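The alternative suggested above, interpolate as quaternions, then convert the result to a matrix at the last moment, might be sketched like this (the `Quat` layout and the nlerp choice are my assumptions; the quaternion-to-matrix conversion is the standard one for unit quaternions):

```cpp
#include <cassert>
#include <cmath>

struct Quat { float w, x, y, z; };

// Normalized lerp between keyframe quaternions, flipping one endpoint if
// needed so we interpolate along the shorter arc.
Quat nlerp(const Quat& a, const Quat& b, float t) {
    float dot = a.w*b.w + a.x*b.x + a.y*b.y + a.z*b.z;
    float sign = (dot < 0.0f) ? -1.0f : 1.0f;
    Quat r = { a.w + (sign*b.w - a.w) * t,
               a.x + (sign*b.x - a.x) * t,
               a.y + (sign*b.y - a.y) * t,
               a.z + (sign*b.z - a.z) * t };
    float len = std::sqrt(r.w*r.w + r.x*r.x + r.y*r.y + r.z*r.z);
    return { r.w/len, r.x/len, r.y/len, r.z/len };
}

// Standard unit-quaternion to rotation-matrix conversion (row-major 3x3).
void quatToMatrix(const Quat& q, float m[9]) {
    m[0] = 1 - 2*(q.y*q.y + q.z*q.z); m[1] = 2*(q.x*q.y - q.w*q.z); m[2] = 2*(q.x*q.z + q.w*q.y);
    m[3] = 2*(q.x*q.y + q.w*q.z); m[4] = 1 - 2*(q.x*q.x + q.z*q.z); m[5] = 2*(q.y*q.z - q.w*q.x);
    m[6] = 2*(q.x*q.z - q.w*q.y); m[7] = 2*(q.y*q.z + q.w*q.x); m[8] = 1 - 2*(q.x*q.x + q.y*q.y);
}
```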
Camera with Angular Velocity or Angular Momentum
Eric_Brown replied to Bemba's topic in Math and Physics
If all you are trying to get is a non-uniform blend from the start location to the finish location, try using a non-uniform blending parameter for your slerp. In effect, apply the acceleration to the blending parameter that you send into the slerp function. You can think of f(t) as a kind of blend velocity. If you want to start slow, and end fast, you can use a quadratic function such as f(t) = t^2. If you want to start fast and end slow, you can use this function: f(t) = 2t - t^2. 
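A tiny sketch of feeding a shaped parameter into the slerp, assuming the two quadratics f(t) = t^2 (slow then fast) and f(t) = 2t - t^2 (fast then slow); the function names are mine:

```cpp
#include <cassert>
#include <cmath>

// Both map [0,1] onto [0,1]; only the pacing differs.
float easeIn(float t)  { return t * t; }          // starts slow, ends fast
float easeOut(float t) { return 2.0f * t - t * t; } // starts fast, ends slow

// Usage idea: pass the shaped parameter to the slerp instead of raw t, e.g.
//   q = Slerp(qStart, qEnd, easeIn(t));
```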
Quote:Original post by alvaro On top of what Eric said: (1) Since q and -q represent the same orientation, you should take the absolute value of the dot product before you take the arccos. (2) You don't need to compute the expensive arccos at all if all you need to know is which of them is closest. The smallest angle will be the one with the highest cosine, so simply find the one that maximizes that. Agreed!

How to get an oblique projection matrix?
Eric_Brown replied to chiyuwang's topic in Math and Physics
Quote:Original post by chiyuwang I want to project a 3D point x onto a plane (defined by a b c d) along a 3D direction v. It sounds to me like you might be trying to find the intersection of a ray with a plane. If this is the case, you don't want to search for stuff on projection matrices. Projection matrices are applicable to your problem, but it seems like the hard way to approach it. Instead, consider that you are sliding your point x in the direction of v. Eventually, you will hit the plane, and this is the point you are trying to find. Use the parameter t to represent the sliding point like this: x(t) = x + t v. Now, you want to find the value of t so that the sliding point lies on the plane. The condition that the sliding coordinates are lying on the plane is given by n · x(t) + d = 0, where n = (a, b, c). Inserting the definition of the sliding coordinates with respect to the parameter t gives n · x + t (n · v) + d = 0. Solving for t gives t = -(n · x + d) / (n · v). NOTE: if the denominator of t is zero, then the sliding point will never come into contact with the plane. Now, insert this value of t back into the equation of the sliding point to get the actual projected point. 
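The sliding-point derivation above translates almost directly into code (a minimal sketch; the `Vec3` type and function names are mine):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Slide x along v onto the plane n . p + d = 0. Returns false when n . v is
// zero, i.e. the point slides parallel to the plane and never reaches it.
bool projectOntoPlane(Vec3 x, Vec3 v, Vec3 n, float d, Vec3& out) {
    float denom = dot(n, v);
    if (std::fabs(denom) < 1e-8f) return false;
    float t = -(dot(n, x) + d) / denom;
    out = { x.x + t*v.x, x.y + t*v.y, x.z + t*v.z };
    return true;
}
```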
Oh, I see. If all of the points are at the same position, then you only need to worry about the angular piece. Find the rotation matrix, or quaternion that represents the set of angles. I prefer quaternions. You construct the quaternion as the product of the yaw, pitch, and roll quaternions: q = q_yaw(ψ) q_pitch(θ) q_roll(φ). Here ψ is the yaw angle, θ is the pitch angle, and φ is the roll angle. Of course, these angles are given within their canonical ranges. If you have 2 quaternions that represent 2 different rotations, then you can always construct a single quaternion that will take you from one rotation to the other: Δq = q2 q1^-1. If you want to find the angle that this delta quaternion represents, you use the scalar part of Δq in the following formula: α = 2 acos(Δq.w). Note, that you don't have to perform a full blown quaternion multiply if you want to find the scalar part of Δq. The scalar part of this product is just the 4D dot product of the two quaternions q1 and q2. It seems to me that what you are looking for is the set of coordinates that minimizes the angle α. I don't recommend using matrices to solve this problem, because it is more expensive all around. But if you were using matrices instead, you would find the delta matrix ΔM = M2 M1^-1. Then you would use the formula posted by Emergent [Edited by Eric_Brown on April 1, 2010 11:38:44 AM]
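The shortcut mentioned above, that the scalar part of Δq = q2 q1^-1 equals the 4D dot product of the unit quaternions q1 and q2, makes the angle computation a one-liner (a sketch; the clamp and the function name are mine):

```cpp
#include <cassert>
#include <cmath>

struct Quat { float w, x, y, z; };

// Angle of the rotation taking q1 to q2: alpha = 2 acos(scalar(q2 * q1^-1)),
// and for unit quaternions that scalar is just the 4D dot product.
float angleBetween(const Quat& q1, const Quat& q2) {
    float dot = q1.w*q2.w + q1.x*q2.x + q1.y*q2.y + q1.z*q2.z;
    if (dot >  1.0f) dot =  1.0f;   // guard acos against rounding
    if (dot < -1.0f) dot = -1.0f;
    return 2.0f * std::acos(dot);
}
```

Per alvaro's follow-up below, taking `std::fabs(dot)` before the acos treats q and -q as the same orientation.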

Transforming something using a "delta matrix"
Eric_Brown replied to JohnOwens's topic in Math and Physics
If the particle B is always acting like a child to A, then you should probably specify the coordinates of B with respect to A, rather than with respect to the world. 
Here is a decent article that doesn't require too much math. This approach is pretty simple, especially if your ragdoll is a bunch of points connected by sticks. I once saw a non-physics, non-math guy, who after reading this doc wrote a simple 2D point-stick ragdoll application in C#, where the points would collide only with the walls. It only took him about 2 hours to read the doc and write the app. (To be fair, he did ask me a few questions along the way.) When you start to bring nontrivial collision in, then you crank up the math requirements a few notches. If you want the ragdoll to drive a 3D character mesh, then you crank up the math a few more notches.

A rotation, by definition, does not change the distance between 2 points. This is true if both points are defined in the same space. Thus if your two points were in the same space to begin with, the rotation will not change the distance between them. If the points are in two different spaces, then a rotation of one of the spaces can affect the distance between the points. For instance, if one point is in world space, and the other point is specified in the local space of a model, then if the model rotates, the world space representation of the model point rotates as well, and the distance between the two points changes. If this is the type of setup that you are referring to, you need to get both of your points into the same space before you can apply the distance formula. Here is a possible function if you want to find the distance between a point in world space (W), and a point in model space (M). This assumes that the yaw (y), pitch (p) and roll (r) angles are in the correct domains, and are given in radians. Also, this function allows the origin of the model space to be translated (T) from the origin in world space. If the model space and the world space share the same origin, then T is just zero. 
float distance(const Vector& W, const Vector& M, float y, float p, float r, const Vector& T)
{
    // evaluate all of the sines and cosines
    float cy = cos(y); float cp = cos(p); float cr = cos(r);
    float sy = sin(y); float sp = sin(p); float sr = sin(r);

    // calculate the components of the rotation matrix
    float m11 = cr * cp;
    float m12 = cr * sp * sy - sr * cy;
    float m13 = cr * sp * cy + sr * sy;
    float m21 = sr * cp;
    float m22 = sr * sp * sy + cr * cy;
    float m23 = sr * sp * cy - cr * sy;
    float m31 = -sp;
    float m32 = cp * sy;
    float m33 = cp * cy;

    // rotate and translate M into world space, then calculate the deltas
    float dx = W.x - M.x * m11 - M.y * m12 - M.z * m13 - T.x;
    float dy = W.y - M.x * m21 - M.y * m22 - M.z * m23 - T.y;
    float dz = W.z - M.x * m31 - M.y * m32 - M.z * m33 - T.z;

    // return the distance
    return sqrt(dx*dx + dy*dy + dz*dz);
}

Of course, if I were going to do this, I would prefer the rotation to be represented by a matrix, rather than angles, so that I wouldn't have to do all of the sines and cosines every time I wanted to check a distance between a point in world space, and a point in the model space.

Quote:Original post by Dave Eberly Have you considered the case when the projection of the sphere is moving toward the polygon, but the sphere passes under (or over) the polygon (the projection passes through the polygon) and then just touches a polygon edge, after which the sphere travels away from the polygon? Do you mean, the sphere is moving tangent to the polygon plane? If so, the sphere projection would once again break down, or at least require a special case. I think my concern here is that I have previously deployed this algorithm on a game, and doing so completely eliminated all collision bugs for the entire duration of the development cycle. To be fair, the game didn't have too much dynamics going on, but it was unconstrained enough that the problems with the sphere projection method should have been showing up everywhere. It just doesn't make sense to me that I could get such solid performance out of a broken algorithm. To be sure, next time the need arises I think I will go with the voronoi approach, rather than try to patch all the holes in the sphere projection approach.

Quote: originally posted by Eric_Brown I have recently posted the method I use on my blog. I've modified the article on my blog to account for the fact that I was... wrong.

Converting from global to local space  problem
Eric_Brown replied to fingerprint211b's topic in Math and Physics
Quote:Original post by fingerprint211b .. However, when I want to propagate mouse clicks, I need the mouse coordinates to be expressed in relative coordinates to that component, not global (or relative to the root component). ... That makes sense, I think I just have rotations on the brain! I assume you are using C#. If your subcomponents are Control objects, you can use the PointToClient function to get the x,y point of the click in the local space of the Control object. Point p = PointToClient(Cursor.Position); 
You can use this simple function

Quat getRotationQuat(const Vector& from, const Vector& to)
{
    Quat result;
    Vector H = VecAdd(from, to);
    H = VecNormalize(H);
    result.w = VecDot(from, H);
    result.x = from.y*H.z - from.z*H.y;
    result.y = from.z*H.x - from.x*H.z;
    result.z = from.x*H.y - from.y*H.x;
    return result;
}

For your input, use the y-axis as the from vector, and the normal as the to vector. This function does not work if the normal is in the -y direction, since the axis of rotation becomes ambiguous. If you detect that the normal is in the -y direction, you can just use the quaternion (w=0, x=1, y=0, z=0) to rotate 180 degrees around the x axis.

Converting from global to local space  problem
Eric_Brown replied to fingerprint211b's topic in Math and Physics
If the coordinates of a subcomponent are specified relative to a parent, then rotating any subcomponent should automatically rotate all of the children, so there should be no need for recursion. Is this what you are trying to accomplish? What coordinate system are the variables Angle, X, and Y specified in? 
You can use this quaternion formula: q = qi Δq. Where qi is the initial quaternion, and can represent the y-axis aligned orientation of the character. The Δq quaternion represents the interaction, or animation, or whatever. There are a few ways to calculate Δq. You can use an axis and angle, or you can use my current favorite method, where you use the initial and final vectors to construct Δq. This method was discussed in a recent post on this forum, as well as on my blog (look at trick #3). This trick is useful for doing things like pointing the character's face towards something. One thing you need to be aware of, is that any vectors that you use to construct Δq need to be in the local space of qi. If you use the axis angle method, the axis needs to be specified relative to the character, not relative to the world. If you are going to use the initial/final vector trick, this applies as well. Both vectors need to be specified in the character's local space. However, the initial vector is likely not to change, especially if it is the direction that the character is looking. EDIT: the link to my blog is OK, but you should look at trick #2, not trick #3 [Edited by Eric_Brown on March 25, 2010 4:58:15 PM]

Quote:Original post by jyk Quote:I actually meant to say "point on the sphere closest to the plane" rather than "center of the sphere". It works for the 2D cases, as well as for the 3D example given by Dave Eberly. Does it work all the time? I don't know. It still doesn't seem to me like that would work in all cases. I found a case where it does not work: if the sphere is already intersecting the polygon plane. In this case you can't choose a single point on the boundary of the sphere that is closest to the plane; there are lots of them.
