Community Reputation

1890 Excellent

About cadjunkie

  1. Those questions usually fall under the mechanical engineering (ME) discipline. ME basically touches all the other engineering fields, and some of everything is in there. You're right that robotics crosses a lot of disciplines, but your questions about joints and materials would be ME all the way.

You've actually described, at a very high level, what engineers would do. First, you have to define your functional requirements. These are usually like a use case, or some specific set of "must-haves" in the design. Sometimes that may be the total degrees of freedom, but sometimes it's simply "the machine must be able to do X or Y". This usually frees up the constraints in the concept-generation phase. At that point, you come up with a basic design that may work functionally. You check that design over for any obvious problems, then you start analyzing it. There are lots of different analyses you can and should do: kinematic (to see the range of motion and whether it can crash into itself), structural (to see if it can hold itself up or if the material will yield during operation), joints (like what kind of torque you need to move the robot armatures around), tribological (to see whether friction/wear is a problem and where), tolerance (to see if the tolerances on different subcomponents are too tight or too loose for assembly), failure safety, motion repeatability, and so on.

This is the basic design process. If at any point you come across a problem that is "unsolvable" (i.e. the joint type causes stress peaks that yield your material, or the parts cost too much), you've got to go back and redesign. The further down the process you go, the harder it is to go back and redesign, so following rules of thumb early on usually saves you later. You don't always have to do all of those analyses; if you're building a one-off or a prototype, you may not care whether your tolerances are defined for general manufacturing.
It does help, but it's not completely necessary. It just depends on how you're comfortable proceeding.

Mechatronics is kind of a hybrid of mechanical and electrical engineering and deals with the application of electromechanical components. I would think a lot of the knowledge of servos/motors/etc. lives there as well as in electrical engineering. AFAIK (and so I was told), electrical engineers don't deal too much with this type of thing anymore because they do a lot more signal processing and systems design (the kind with complicated ODEs and filters and things I don't fully understand).

If you're interested specifically in how joints are studied and forces calculated (roughly), then I would suggest "Design of Machinery" by Norton (ISBN: 978-0077421717). It's not the best book on the subject, but I know what's in it since I used it in my kinematics class way back when. Once the kinematics is explained, it goes into how to calculate the forces. Check it out or borrow the book (or PDF?) from somewhere.

Good luck!
  2. "Real-Time" Smoothing

    Hey @L.Spiro, did you find what you were looking for? It would be very interesting and helpful if you were to post your solution (or workaround) :-)
  3.   That's a really elegant way to pose the problem. Good find!
  4. Bezier curves were designed to be natural for non-mathematics types to use. They can loop, as stated before, but that behavior can be controlled. I would try it out and see.
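If it helps to experiment, here's a minimal sketch of evaluating a cubic Bezier curve via de Casteljau's algorithm (all names are illustrative, not from any particular library). Placing the control points so the control polygon crosses itself is one way the curve can end up looping.

```python
# Sketch: evaluating a cubic Bezier curve with de Casteljau's algorithm
# (repeated linear interpolation). Names here are illustrative.

def lerp(a, b, t):
    """Linear interpolation between 2D points a and b."""
    return (a[0] + (b[0] - a[0]) * t, a[1] + (b[1] - a[1]) * t)

def cubic_bezier(p0, p1, p2, p3, t):
    """Point on the cubic Bezier defined by p0..p3 at parameter t in [0, 1]."""
    a, b, c = lerp(p0, p1, t), lerp(p1, p2, t), lerp(p2, p3, t)
    d, e = lerp(a, b, t), lerp(b, c, t)
    return lerp(d, e, t)

# Crossing the control polygon (p0->p1 crosses p2->p3 here) can make the
# curve loop; moving p1 and p2 apart untangles it.
p0, p1, p2, p3 = (0.0, 0.0), (2.0, 1.0), (-1.0, 1.0), (1.0, 0.0)
print(cubic_bezier(p0, p1, p2, p3, 0.0))   # interpolates the endpoint p0
print(cubic_bezier(p0, p1, p2, p3, 1.0))   # interpolates the endpoint p3
```

The curve always interpolates its endpoints and stays inside the convex hull of the control points, which is what makes the looping behavior controllable.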
  5. Random Math Questions

    I don't think you've really skipped anything. Seems to me one of the problems is that the math books and things you've been reading aren't helping you understand how to apply the info to do what you want. It's a learning process.

One potential problem is that you might be overwhelming yourself by trying to understand all the math at once. It took me several years in secondary school to wrap my head around these concepts. Like any practiced skill, it's only "easy" once you've done it a thousand times. Plus, if you're really trying and still don't understand it, it's not your fault. The job of any teacher or learning tool is to break down the concept. My personal experience of learning new things from the Internet is that it takes a lot more time, since it isn't always broken down nicely and there's no one to really ask (except on forums). Even then, it's hard to explain things over text.

One last thing: have you checked out BSVino's YouTube videos? At the very least, they'll help you see how the math is used.
  6. Particle Systems using Constrained Dynamics

    I actually read your paper on springs and thought it was a good approach. I think the methods are the same; maybe the only difference is that in Witkin's paper you have 2 constants to tune instead of having to calculate spring constants for every mass.

The only advantage to this is that it's a general method for any type of constraint. The bead-on-wire example is just there for analytical purposes, to show the method is consistent with classical mechanics. Sometimes you want a more complicated constraint path or lots of interconnected constraints. If you can construct the position functions, you can use this method with no problems. Of course, like @Dirk Gregorius said, the method might not be suitable for gaming.
  7. Particle Systems using Constrained Dynamics

    So when I implemented it, I did it in Simulink/MATLAB; it was just easier to do the matrix math that way. I hard-coded the bead-on-wire formula as written in the slides. Then I took the same constraint, calculated the Jacobians and partial derivatives by hand, and input them into the general method described here. I did a relative error comparison and found they agreed pretty well (the max was 0.3% error, but most values were much lower). I don't think this is really a different method from the one you tried before; it's just more general, so you can build more complicated constraint paths.

About drift: I had to do the implementation I talked about with the "magic" numbers on the position and velocity constraints. That's because I didn't have the time/patience to write a symplectic Euler integrator for use in Simulink. I just used MATLAB's default: ode45 (Dormand-Prince) with variable step sizes. The constraint was satisfied all right, but I did notice that energy started leaking out of the model as if a small amount of friction were present. It eventually stabilized, but trying out different integrators, my results varied widely. Maybe I should do a symplectic Euler just to compare how it does.

I can totally see what you mean. The feedback is exactly the same kind of trick as springs. Yet somehow I like the feedback terms better than the springs. There's no way to completely get rid of the numerical error, but I like the approach of modeling it like a disturbance in a control system. It makes the magic numbers mean something: you can tune them like the damping ratio and natural frequency of a mass-spring-damper system to get the kind of response you want.

Well, x dot x is technically |x|^2, since |x| = sqrt(x dot x). Remember that here the x vector is not the x-axis, but the position vector of the bead on the wire as measured from the center of the circle. This is orthogonal to the legal displacements along the wire, just like Witkin describes. You're right about the forces involved, because the wire has to hold the bead while it rotates and is pulled down by gravity.

I'm not sure what you mean by reformulating it in terms of "classical mechanics". All of this came from what I would call classical, or Newtonian, mechanics. You'll have to elaborate on what you mean.
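For anyone following along, symplectic (semi-implicit) Euler is only a small change from explicit Euler: update the velocity first, then the position using the new velocity. A minimal sketch on a 1D harmonic oscillator (illustrative names, not from any library) shows why it's attractive here: the energy stays bounded instead of leaking or blowing up.

```python
import math

def symplectic_euler(x, v, dt, accel):
    """Semi-implicit Euler: velocity first, then position with the NEW velocity."""
    v = v + accel(x) * dt
    x = x + v * dt
    return x, v

# Harmonic oscillator x'' = -x; energy E = (v^2 + x^2) / 2 should stay bounded
# under symplectic Euler (it oscillates slightly but has no secular drift).
x, v, dt = 1.0, 0.0, 0.01
e0 = 0.5 * (v * v + x * x)
for _ in range(100_000):
    x, v = symplectic_euler(x, v, dt, lambda x: -x)
e1 = 0.5 * (v * v + x * x)
print(abs(e1 - e0))  # small bounded deviation, not a steady leak
```

Explicit Euler on the same problem gains energy every step, and ode45-style adaptive Runge-Kutta tends to dissipate it slightly, which matches the "friction-like" leak described above.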
  8. Simulating physics can be fairly complex. Spatial motion (vehicles, projectiles, etc.), friction, collision, explosions, and other physical interactions are complicated enough to describe mathematically, but computing them accurately adds another layer on top of that, and making them run in real time adds even more. There is lots of active research into quicker and more accurate methods. This article showcases a really interesting way to simulate particles with constraints in a numerically stable way, and along the way I'll try to break down the underlying principles so they're understandable to those who have forgotten their physics.

Note: the method presented in this article is described in the paper "Interactive Dynamics" by Witkin, Gleicher, and Welch, published by ACM in 1990. A posted PDF of the paper can be found here:

A link to another article by Witkin on this subject can be found here:

Physical Theory

Newton's Laws

Everyone's familiar with Newton's second law: \( F = ma\). It forms the basis of Newtonian mechanics. It looks very simple by itself, but it's usually hard to deal with Newton's laws because of the number of equations involved. The number of independent ways a body can move in space is called its degrees of freedom. For full 3D motion, each body has 6 degrees of freedom (three translations and three rotations), and we thus need 6 equations per body to solve for the motion. For ease of explanation, we will consider translations only, but the method can be extended to rotations as well. We need to devise an easy way to build and compute this system of equations.
For a point mass moving in 3D, we can set up the general equations as a matrix equation: \[ \left [ \begin{matrix} m_1 & 0 & 0 \\ 0 & m_1 & 0 \\ 0 & 0 & m_1 \\ \end{matrix} \right ] \left [ \begin{matrix} a_{1x} \\ a_{1y} \\ a_{1z} \\ \end{matrix} \right ] = \left [ \begin{matrix} F_{1x} \\ F_{1y} \\ F_{1z} \\ \end{matrix} \right ]\] This can obviously be extended to include the accelerations and net forces of many particles as well. The abbreviated equation is: \[ M \ddot{q} = F \] where \(M\) is the mass matrix, \(\ddot{q}\) is the acceleration (the second time derivative of position), and \(F\) is the sum of all the forces on the body.

Motivating Example

One of the problems with Newton-Euler methods is that we have to compute all the forces in the system to understand how it will evolve, or in other words, how the bodies will move with respect to each other. Let's take the simple example of a pendulum. Technically, the string exerts a force on the wall and a force on the ball, but in this case we can reduce the problem to the forces shown and solve for the motion of the ball. Here, we have to figure out how hard the string is pulling on the ball ( \( T = mg \cos{\theta}\) ) and then break that tension into components to get the forces in each direction. This yields the following: \[ \left [ \begin{matrix} m & 0 \\ 0 & m \\ \end{matrix} \right ] \left [ \begin{matrix} \ddot{q}_x \\ \ddot{q}_y \\ \end{matrix} \right ] = \left [ \begin{matrix} -T\sin{\theta} \\ -mg+T\cos{\theta} \\ \end{matrix} \right ]\] We can then model this motion without needing to model the string itself. The ball can simply exist in space and move according to this equation of motion.

Constraints and Constraint Forces

One thing we've glossed over in the pendulum example is the string. We were able to ignore the fact that the mass is attached to the string, so does the string actually do anything in this example? Well, yes and no.
The string provides the tension to hold up the mass, but anything could do that; we could have had a rod or a beam hold it up. What the string really does is define the possible paths the mass can travel on. The motion of the mass is dictated, or constrained, by the string. Here, the mass travels on a circular path about the point where the pendulum hangs on the wall. Really, anything that constrains the mass to this path without doing additional work behaves the same way: if the mass were a bead on a frictionless circular wire of the same radius, we would get the same equations of motion! If we rearrange the pendulum's equations of motion, we can illustrate a point: \[ \left [ \begin{matrix} m & 0 \\ 0 & m \\ \end{matrix} \right ] \left [ \begin{matrix} \ddot{q}_x \\ \ddot{q}_y \\ \end{matrix} \right ] = \left [ \begin{matrix} 0 \\ -mg \\ \end{matrix} \right ] + \left [ \begin{matrix} -T\sin{\theta} \\ T\cos{\theta} \\ \end{matrix} \right ]\] In our example, the only applied force on the mass is gravity, represented by the first term on the right-hand side. So what's the second term? That is the constraint force: the extra force necessary to keep the mass on the constrained path. We can consider it part of the net force on the mass, so the modified equation is: \[ M_{ij} \ddot{q}_j = Q_i + C_i \] where \(M\) is the mass matrix, \(\ddot{q}\) is the acceleration (the second time derivative of position), \(Q\) is the sum of all the applied forces on the body, and \(C\) is the sum of all the constraint forces on the body. Note as well that the mass matrix here is diagonal, and definitely sparse; that can work to our advantage later when we're working with it.

Principle of Virtual Work

The notion of adding constraint forces can be a bit unsettling, because we are adding more forces to the body, which you would think would change the energy in the system.
However, if we take a closer look at the pendulum example, we can see that the tension in the string acts perpendicular (orthogonal) to the motion of the mass. If the constraint force is orthogonal to the displacement, then no additional work is done on the system, meaning no energy is added to it. This is called d'Alembert's principle, or the principle of virtual work. Believe it or not, this is a big deal! It is one of the key ideas in this method.

Normally, springs are used to create the constraint forces that help define the motion between objects. For this pendulum example, we could treat the string as a very stiff spring. As the mass moves, it may displace a small amount from the circular path (due to numerical error), and the spring force will then move the mass back toward the constraint. As it does this, it may overshoot a little, or a lot! In addition, the spring constants can be very high, creating what are aptly named stiff equations, which force the numerical integration either to take unnecessarily tiny time steps where larger ones would normally suffice, or to go unstable. These problems are well known in the simulation community, and many techniques have been created to avoid making the equations of motion stiff. As illustrated above, as long as the constraint forces don't do any work on the system, we can use them. There are lots of combinations of constraint forces that satisfy d'Alembert's principle, but we can illustrate a simple way to get those forces.

Witkin's Method

Constraints

Usually a simulation has a starting position \(q = \left [ \begin{matrix} x_1 & y_1 & z_1 & x_2 & y_2 & z_2 & \cdots \\ \end{matrix} \right ] \) and velocity \(\dot{q} = \left [ \begin{matrix} \dot{x}_1 & \dot{y}_1 & \dot{z}_1 & \dot{x}_2 & \dot{y}_2 & \dot{z}_2 & \cdots \\ \end{matrix} \right ] \). The general constraint function is based on the state \(q(t)\) and possibly on time as well: \(c(q(t), t)\).
The constraints need to be implicit, meaning each constraint should be an equation that equals zero. For example, the 2D implicit circle equation is \(x^2 + y^2 - R^2 = 0\). Remember there are multiple constraints to take into consideration; the vector that stores them will be denoted in index notation as \(c_i\). Taking the total derivative with respect to time gives: \[\dot{c}_i=\frac{d}{dt}c_i(q(t),t)=\frac{\partial c_i}{\partial q_j}\dot{q}_j+\frac{\partial c_i}{\partial t}\] The first term in this equation is actually a matrix, the Jacobian of the constraint vector \(J = \partial c_i/\partial q_j\), left-multiplied with the velocity vector. The Jacobian is made of all the partial derivatives of the constraints. The second term is just the partial time derivative of the constraints. Differentiating again to get \(\ddot{c}_i\) yields: \[\ddot{c}_i = \frac{\partial c_i}{\partial q_j} \ddot{q}_j + \frac{\partial \dot{c}_i}{\partial q_j} \dot{q}_j + \frac{\partial^2 c_i}{\partial t^2}\] Looking at the result, the first term is the Jacobian of the constraints multiplied by the acceleration vector, the second term is the Jacobian of the time derivative of the constraints multiplied by the velocity vector, and the third term is the second partial time derivative of the constraints. The formulas for the complicated terms, like the Jacobians, can be calculated analytically ahead of time. As well, since the constraints are position constraints, their second time derivatives are accelerations.

Newton's Law with Constraints

If we solve the matrix form of Newton's law for the accelerations, we get: \[\ddot{q}_j = W_{jk}\left ( C_k + Q_k \right )\] where \(W = M^{-1}\), the inverse of the mass matrix.
Substituting this for the acceleration vector in our constraint equation (and requiring \(\ddot{c}_i = 0\)), we get the following: \[\frac{\partial c_i}{\partial q_j} W_{jk}\left ( C_k + Q_k \right ) + \frac{\partial \dot{c}_i}{\partial q_j} \dot{q}_j + \frac{\partial^2 c_i}{\partial t^2} = 0\] \[JW(C+Q) + \dot{J} \dot{q} + c_{tt} = 0\] Here, the only unknowns are the constraint forces. From our discussion before, we know the constraint forces must satisfy the principle of virtual work: the forces need to be orthogonal to the displacements, or the legal paths. We will take the gradient of the constraint to get vectors orthogonal to the path; the reason this works is explained below. Since the constraints are placed in a vector, the gradient of that vector is the Jacobian matrix of the constraints, \(\partial c/\partial q\). Although the row vectors of this matrix have the proper directions to make the dot product with the displacements zero, they don't have the right magnitudes to force the masses to lie on the constraint. We can construct a vector of scalars that multiply the row vectors to make the magnitudes correct; these are known as Lagrange multipliers. This makes the equation for the constraint forces: \[C_j = \lambda_i \frac{\partial c_i}{\partial q_j}, \quad \text{or in matrix form} \quad C = J^T \lambda\] Plugging that equation back into the augmented equation for Newton's law: \[ \left ( -JWJ^T \right ) \lambda = JWQ + \dot{J}\dot{q} + c_{tt}\] Note that the only unknowns here are the Lagrange multipliers.

Attempt at an Explanation of the Constraint Force Equation

If you're confused about how Witkin got that equation for the constraint forces, that's normal. I'll attempt to relate it to something easier to visualize and understand: surfaces. Let's take a look at the equation of a quadric surface: \[Ax^2+By^2+Cz^2+Dxy+Eyz+Fxz+Gx+Hy+Iz+J=0\] The capital letters denote constants. Notice also that the equation is implicit.
We can see that the equation of an ellipsoid is a quadric surface: \[f(x,y,z) = (1/a^2)x^2+(1/b^2)y^2+(1/c^2)z^2-1=0\] For a point (x,y,z) to be on the surface, it must satisfy this equation. To put it in more formal math terms, the surface takes a point in \(\mathbb{R}^3\) and maps it to the zero vector in \(\mathbb{R}\), which is just 0. Any movement on this surface is "legal" because the new point will still satisfy the surface equation. If we take the gradient of this ellipsoid equation, we get: \[ \left [ \begin{matrix} \frac{\partial f}{\partial x} \\ \frac{\partial f}{\partial y} \\ \frac{\partial f}{\partial z} \\ \end{matrix} \right ] = \left [ \begin{matrix} \frac{2x}{a^2} \\ \frac{2y}{b^2} \\ \frac{2z}{c^2} \\ \end{matrix} \right ] \] This vector is the normal to the surface at the given (x,y,z) coordinates. If we were to visualize a plane tangent to the ellipsoid at that point, the dot product of the normal with any vector in the tangent plane would be zero by definition. With this same type of thinking, we can see the constraints as a type of algebraic surface, since they're also implicit equations (it's hard to visualize these surfaces geometrically since they can be n-dimensional). Just as in the geometry example, if we take the gradient of the constraints, the resulting vectors are orthogonal to a tangent plane on the surface. In more formal math terms, the constraint surface can be called the null space, since it contains all the points (vectors) that map to the zero vector, and the gradients/normals to the constraints span the null space complement. The constraint force equation produces vectors that lie in the null space complement and are therefore orthogonal to the constraint surface. The purpose of these math terms is to help generalize this concept to situations where the problems are not easy to visualize or intuitively understand.
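As a quick numerical sanity check of the idea above, here is a sketch using the 2D circle constraint from earlier: the gradient of \(c(q) = x^2 + y^2 - R^2\) is orthogonal to the legal displacements (the tangent of the wire), so a constraint force along it does no work. The variable names are illustrative.

```python
import numpy as np

# Sanity check: for the circle constraint c(q) = x^2 + y^2 - R^2, the gradient
# (the Jacobian row) J = [2x, 2y] is orthogonal to the tangent of the wire,
# i.e. to any legal displacement of the bead.
R = 1.0
theta = 0.7
q = R * np.array([np.cos(theta), np.sin(theta)])     # a point on the constraint
tangent = np.array([-np.sin(theta), np.cos(theta)])  # direction of legal motion
J = 2.0 * q                                          # gradient of c at q

print(J @ tangent)   # ~0: a force along J does no work on the bead
```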
Calculating the State

With these equations in place, the process of calculating the system state can be summed up as follows:

1. Construct the \(W\), \(J\), \(\dot{J}\), and \(c_{tt}\) matrices.
2. Multiply and solve the \(Ax=b\) problem for the Lagrange multipliers \(\lambda\).
3. Compute the constraint forces using \(C = J^T \lambda\).
4. Compute the accelerations using \(\ddot{q} = W(C+Q)\).
5. Integrate the accelerations to get the new velocities \( \dot{q}(t) \) and positions \( q(t) \).

This process can be optimized to take advantage of sparsity in the matrices. The Jacobian will generally be sparse, since each constraint won't generally depend on a large number of particles. This can help with the matrix multiplication on both sides. The main challenge will be in building these matrices.

Feedback

Due to numerical error, the solution will tend to drift away from the constraint. Recall that in order to derive the modified Newton's law equation above, we forced \( c_i = 0 \) and \( \dot{c}_i = 0 \). Numerical error can produce solutions that won't satisfy these equations within a certain tolerance, so we can use a method from control systems engineering: the PID loop. In most systems we want to control, there are inputs to the system (force, etc.) and measurements of the system's state (angle, position, etc.). A PID loop feeds the error between the actual and desired state back into the force inputs to drive the error to zero. For example, the human body has many different inputs to the forces on the muscles when we stand upright. The brain measures many different things to see if we're centered on our feet or teetering to one side, and if we're falling or off-center, it adjusts our muscles to keep us upright. A PID loop does something similar by measuring the error and feeding it back into the inputs.
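The state-calculation steps above can be sketched end-to-end for the bead-on-a-wire example, including a simple feedback force on the constraint error and its derivative to suppress drift. This is only a sketch, not Witkin's reference implementation; the integrator is symplectic Euler, and the feedback gains are illustrative "magic" numbers.

```python
import numpy as np

# End-to-end sketch of the state update for a bead on a circular wire
# (constraint c = x^2 + y^2 - R^2) under gravity, with feedback terms on
# c and c_dot to fight drift. All parameter values are illustrative.
m, g, R, dt = 1.0, 9.81, 1.0, 0.001
alpha, beta = -50.0, -10.0            # "magic" feedback gains, tuned by hand
q = np.array([R, 0.0])                # start on the wire
q_dot = np.array([0.0, 0.0])

for _ in range(10_000):
    # 1. Construct W, J, Jdot (c has no explicit time dependence, so c_tt = 0).
    W = np.eye(2) / m
    Q = np.array([0.0, -m * g])       # applied force: gravity only
    c = q @ q - R**2
    c_dot = 2.0 * (q @ q_dot)
    J = 2.0 * q.reshape(1, 2)
    J_dot = 2.0 * q_dot.reshape(1, 2)

    # 2. Solve (-J W J^T) lambda = J W Q + Jdot qdot for the multiplier.
    lam = np.linalg.solve(-J @ W @ J.T, J @ W @ Q + J_dot @ q_dot)

    # 3. Constraint force C = J^T lambda, plus drift feedback along grad c.
    C = (J.T @ lam).ravel() + (alpha * c + beta * c_dot) * (2.0 * q)

    # 4-5. Accelerate, then integrate (symplectic Euler: velocity first).
    q_ddot = W @ (C + Q)
    q_dot = q_dot + q_ddot * dt
    q = q + q_dot * dt

print(abs(np.linalg.norm(q) - R))     # drift off the wire stays tiny
```

The bead swings like a pendulum while \(|q|\) stays pinned to the wire radius; removing the feedback terms lets a slow drift accumulate, which is exactly the problem the next paragraph addresses.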
If done correctly, the PID system will drive the error in the measured state to zero by changing the inputs to the system as needed. Here, we use the error in the constraint and in the constraint derivative, fed back into the system, to better control the numerical drift. We augment the forces by adding terms that account for \(c_i = 0\) and \(\dot{c}_i = 0\): \[ F_j = Q_j + C_j + \alpha c_i \frac{\partial c_i}{\partial q_j} + \beta \dot{c}_i \frac{\partial c_i}{\partial q_j} \] This isn't a true force being added, since these extra terms vanish when the constraints are exactly satisfied. They exist only to inhibit numerical drift, so the constants \(\alpha\) and \(\beta\) are "magic", meaning they are determined empirically to see what "fits" better.

Conclusion

Witkin's method for interactive simulations is pretty widely applicable. Although it can obviously be used for movable truss structures and models of Tinkertoys, he also talks about using it for deformable bodies, non-linear optimization, and keyframe animation. There are lots of applications of this method. Hopefully this showcase of Witkin's method will help make this interesting solution more accessible to anyone building any type of simulation engine.

Article Update Log

27 Aug 2015: Initial release
  9. Particle Systems using Constrained Dynamics

    I hate when I leave fragments of a sentence undeleted. Got it taken care of. Thanks for your feedback!
  10. Two research papers being published as an undergraduate

    Ditto. Can you "discover" something everyone has already done?
  11.   It seems you might not be familiar with finite element models. It's more than 10 million floats. I'm sure it's double precision, and each node has 6 degrees of freedom. Plus, each node can be tied to multiple elements, so there's element data there along with different types of stress and strain data. It's not uncommon for the output files to be 150+ GB, depending on what information is output.
    Not questioning the soundness of this advice (because I do think it's sound), but would that really be feasible on large data sets? Postprocessing something like finite element result data with 1 million nodes is very standard, and larger models with 10+ million nodes are common too. I can't imagine trying to hold 2 copies of that data in memory. I would think the original data is stored to disk and only 1 copy is in memory and gets operated on; if need be, it gets reloaded. But maybe I'm wrong; it's happened once or twice.
  13. How would I solve this polynomial equation system?

    The only thing I can think of is to solve for z in the 3rd equation in terms of x using the quadratic formula: \( z = \frac{-mx \pm \sqrt{(m^2-4)x^2+8}}{2} \) and then substitute that into the 2nd equation. Then, subtract that result from the 1st equation*, and solve for x. From there, you can solve for z, and then y.

*EDIT: Even this way, I couldn't eliminate all the y's from the equation. Maybe there's some trick there, but I can't see it.

If you're asking for a general analytical (not numerical) technique, I'm not sure what it would be.
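For what it's worth, the quoted root is what the quadratic formula produces for an equation of the form \(z^2 + mxz + (x^2 - 2) = 0\); that form is inferred by matching the root, since the original system isn't reproduced in this thread. A quick numeric check of that correspondence:

```python
import math, random

# The quoted root z = (-m*x +/- sqrt((m^2 - 4)x^2 + 8)) / 2 matches the
# quadratic formula applied to z^2 + m*x*z + (x^2 - 2) = 0. That equation
# is inferred from the root itself, not quoted from the original post.
def z_roots(m, x):
    disc = (m * m - 4.0) * x * x + 8.0   # positive for |m| <= 2
    return ((-m * x + math.sqrt(disc)) / 2.0,
            (-m * x - math.sqrt(disc)) / 2.0)

random.seed(1)
for _ in range(100):
    m, x = random.uniform(-1, 1), random.uniform(-1, 1)
    for z in z_roots(m, x):
        # Each root should satisfy the inferred cubic-free quadratic in z.
        assert abs(z * z + m * x * z + x * x - 2.0) < 1e-9
print("roots check out")
```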
  14. How would I solve this polynomial equation system?

    So let me see if I have this straight: you've added scalars to account for a non-zero dot product, in an attempt to say that your D, G, and F vectors aren't orthogonal. Your system of equations is simplified from the distance equation, where you've considered each plane formed by every pair of vectors. You want the Euclidean distance between the vectors in their respective plane to be sqrt(2). The thing that bothers me is that x, y, and z are parametric with respect to D, G, and F, but there's no restriction on the directions of D, G, and F, or enough restrictions on the values of x, y, and z, so we have way too many unknowns and not enough equations. Although you've basically restricted the dot products of the vectors to lie in [0,1], I don't see how you can generate a solution set without either exhausting the possibilities or adding more constraints.