


About maxgpgpu

  1. My temporary short-term hack (which might be okay long-term too, since in actual practice we shouldn't have very many infinitely thin objects) is to make the gravity computation enforce a minimum center-of-mass distance before it performs the computation.  I think my temporary choice was somewhere between 0.000001 and 0.001 meter.  Pieces of exploded objects are what's especially convenient and efficient to model as infinitely thin surfaces.  Giving them an actual thickness not only requires twice as many triangles (to create both sides of the thin piece), but also more vertices to seal up the edge thickness.
  2. Okay, it seems you're saying the "center of mass" approximation isn't valid even when objects are far apart (farther apart than the sum of their maximum radii).  Hmmmm.

Two answers to your other question.  I consider the engine to be a 3D simulation/physics/game engine.  OTOH, I do admit that the focus is still real-time performance, so we do not anticipate pushing for exact formulas in every situation, especially if doing so kills performance too much (below a 30 FPS minimum on highish-end CPUs/GPUs, probably).

Also, our first couple of planned games take place in space (zero gravity, except for local objects).  Well, sometimes.  The engine will also have to support being in the vicinity of one or a few large objects (planets and their moons) in addition to nearby small objects (asteroids, space-habitats, spacecraft).  Though gravity doesn't make floating objects move towards each other very fast, it is relevant over minutes, let alone hours.  And because (in some situations) activity happens very slowly in space, the games will probably have a way to specify a sort of "fast forward" that increases the physics interval but keeps the frame interval.  In that situation, gravity in most space environments we envision will be quite relevant!
  3. Interesting stuff. However, none of these pages answers the primary question I wanted to ask (unless I missed something).  If we ignore any twisting or stretching of objects as they pass near each other, and only pay attention to the centers of mass, in what situations do the centers of mass, in reality, not move the way they would if we treated both objects as point masses at their centers of mass?  We know for the hollow and solid sphere cases that "things change when the objects overlap", but what about other shapes like the ones I mentioned, namely "two triangles" or "two round disks"?

Yes, I do realize I need to decide what second, third, fourth, twenty-third order effects to ignore, but this question seems to be the first I need to resolve.  After that, I will need to decide where to draw the line and accept approximations.  One factoid that I am hoping someone can confirm is... that the center-of-mass approximation is accurate (in terms of the positions of the two centers of mass) at least until the centers of mass come closer than the sum of the maximum "radius" of the two objects (where "radius" just means the distance from either center of mass to the furthest point on that object).
  4. Similarly for a solid sphere, the force of gravity gradually reduces to zero as you travel through the surface and approach the center... ...but most physics engines do not simulate the force of gravity between objects because gravity is too weak to have any effect (instead we just simulate the force of gravity from the earth), so this is irrelevant :)  Are you simulating gravity between objects?  It sounds like your actual problem is to do with resolving collisions?  In general, resolving object penetrations is a difficult problem, and is made easier by having thick objects that only ever penetrate a small amount.  You can limit penetration by decreasing the time-step, or by using swept-collision detection and resolving collisions at the moment that they occur instead of after penetration has occurred.

We do swept-collision detection and we do push objects back to t=toi before we compute collision response.  So I don't think that's the problem.  All collision response seems to work fine, but when these super-thin (well, infinitely thin) objects come together, you can see that their centers seem to pull together more quickly than you'd expect (they slide face-against-face to pull the centers together, which is correct behavior if you buy into the nominal center-of-mass simplification).  And when the centers do meet, there is that crazy explosion apart.  It makes sense, in a way, that something wacko would happen when you think about (G*m1*m2)/(d*d).  As d approaches zero, any non-zero value divided by (d*d) quickly surges towards infinity.

Yes, our engine supports gravity, because our first games will take place in outer space, and we want our game/physics engine to automatically support gravity (when the game designers want and enable it).  Even in conventional earth-surface games, don't you need some force to push objects towards the ground (or center of the earth)?
  5. In almost all 3D physics we treat 3D objects as point masses, and in most cases this is either correct or "close enough".  I know of one physical configuration where this simplification is totally wrong, but wonder whether others exist (including a specific one I'll mention).

The case I've known about is a hollow sphere (with one or more holes in the surface).  When any other object is outside the skin of the hollow sphere, gravity can be computed as if the object is a point mass per usual practice.  However, if another object passes through the hole in the skin, all of a sudden the gravitational attraction becomes zero... everywhere inside the hollow sphere.  A bit strange, but I can see why that's true.

While testing the physics engine in my 3D game engine, I noticed what appears to be strange behavior when flat triangles come close to each other.  In my tests, the triangles have mass, as if they are very thin.  In fact they are infinitely thin, because they are simply triangles, but they behave the same when I make them very thin [3-or-more-sided] disks.  Sometimes they even explode apart (I think when the centers come close together).

I realized this might be slightly similar to the hollow sphere situation.  When the centers of the triangles come closer than the radius of the triangles, the point-mass representation seems to have the same characteristic that makes the hollow sphere case change behavior, namely that some physical parts of each object lie in the opposite direction from the center of mass of the other object.  It doesn't feel like exactly the same situation as the hollow sphere, but the behavior is strange and I wonder if that's because the assumption (that point masses are a valid representation) is failing big time in a similar way as the hollow sphere case.  Can anyone who knows about this stuff provide some comments, links, references or something?
PS:  The rigid body physics and contact routines seem to work and behave properly with thick objects (where the centers of mass are not super close to the external surfaces).  I don't have code to handle stacking (objects with contacts with multiple other objects), but the behavior I am seeing happens with only two objects.

BTW, on a separate issue, what's the best place to read about going from "simple contacts" like I have (only 1~4 contact points on each of any two objects) to support for "arbitrary stacking"?  I'm guessing "stacking" refers to the situation where more than two objects are in contact with each other, with each contact pair having 1~4 points in contact.  I say "not more than 4 contacts" simply because my code reduces cases of more-than-4 contacts to 4 contacts to simplify processing.

Thanks in advance for any tips, ideas, references.
  6. Two main points:

#1:  OOP programming languages are a fad, and to many programmers, a religion.
#2:  A great many programs and applications can be written without objects of any kind.

Having said that, I'll make a couple of personal comments.

#1:  I absolutely loathe OOP programming languages.
#2:  Most but not all of the programs I write are chock full of object-oriented aspects.

Are you shaking your head in disbelief yet?

Perhaps the best way to explain this [perhaps] seeming contradiction is the following.  I absolutely loathe other people telling me what to do, and how to do so.  I absolutely loathe other people forcing me to do things "their way".  I also absolutely loathe putting application-specific mechanisms in programming languages.  I also absolutely loathe "hiding", which is not absolutely necessary in an OOP programming language, but they all have it (that I know of).  I also absolutely loathe staring at code that I can't understand without needing to read 65536 to 4-billion lines of header files and other code.  I also absolutely loathe completely bogus, unreadable, incomprehensible operators... instead of completely readable, comprehensible function names.  I also absolutely loathe having 50,000 code libraries that don't work together, and have 148,000 "approaches" (yes, sometimes several approaches per code library).

Shall I continue?  Because I could continue for hours!

Here is another very relevant way to express the above.  My programs are chock full of object-oriented aspects, and I have no trouble whatsoever implementing those aspects myself in conventional C code.  And when I do implement OOP aspects in my programs, my programs remain completely readable and easy to comprehend.  I also don't need to learn enormous quantities of obscure syntax and methodologies (that are often not applied uniformly by everyone).  I can implement OOP aspects in plain old C code quite happily, BTW.
And each is appropriate to what I need to accomplish, not some attempt to hyper-generalize until cows turn into leptons, making everything sooooo general that it is chock full of clutter (required to be ultra-general) and chock full of hidden or difficult-to-learn assumptions (sometimes needed just to make hyper-generality possible or workable).

KISS.  Keep it simple, stupid!

I can do everything... EVERYTHING... with C that anyone else can do with C++, and my code will be simpler, faster, more readable, more reliable, easier to modify, easier for others to collaborate on, easier to document, and just plain better.  Why?  KISS.

The short answer is, I can do absolutely everything with:

#1:  statements
#2:  variables
#3:  operators
#4:  functions

Did I forget something?  Probably one or two.  Maybe you don't think of "structures" as being variables.  Maybe you don't think of "structure declarations" as being "types".  You can argue the C language contains a few more kinds of gizmos, but frankly, very few.

?????  WHAT DOES THIS MEAN  ?????

This means I only need to know (or remember) how statements work, how variables work, how operators work, how functions work... and THAT IS ALL.

Well, guess what, noobies?  If you do everything with only half a dozen very simple and fundamental mechanisms... ALL DAY, EVERY DAY... you not only learn them thoroughly... you habituate them very quickly and thoroughly, so they become like "how to breathe".  You do not forget!  You do not get confused.  You don't need 37 ways to walk, you just freaking walk.

And how about interoperation?  How about implementing code libraries?  They are pretty much all exactly the same.  They are a huge pile of functions.  PERIOD.  Everything works the same, because they are all functions, just like the functions in your own programs, and just like the functions in other libraries.  And when you look at a function declaration, you understand it.
Some arguments are inputs, some arguments are outputs (the arguments passed by address without "const").  And you KNOW how every single one of them is passed... either the value of the variable is passed, or the address of the variable is passed.  Just look at the type of each argument.  The value of an "int" or "double" or any other type of argument is passed directly.  If you see one or more "*" as part of the typename in a function declaration, you know the address (or the address of the address, etc.) of the argument is passed.  So argument type "int*" means the argument is an address of (a pointer to) an int.  And so forth.  You know what's happening.  With C++, you don't even know what a reference type is, because they reserve the right to hide and obscure that (and millions of other aspects of "what's going on") under the covers where you cannot see, or even if you can, it may change next Tuesday.

I'm gonna stop here, because I know there are already at least 256 hit men trying to figure out who I am, so I can be "taken out" for my atheistic views of the world, and especially for wanting my life to be simple, yet utterly flexible.  Religious freaks hate thinking like mine.

Just for the record, I learned C++ before C.  I just hated C++ so much, I decided to give C a try.  It was like removing 65536 scorpion stingers from my brain!  What a freaking relief!!!  You cannot even imagine.

Nonetheless, the fact of the matter is, humans can learn, habituate and even love just about anything, including horrific abuse.  So if you do learn to program in C++, you'll eventually habituate some way of doing things in C++ (there are about 143-million ways, and roughly 1 way to do things in C, not counting trivia like "which style of comments do you prefer").

BTW, if there are any advantages to C++ (and I'm sure someone can concoct some convincing-on-the-surface examples), the totality is absolutely, definitely, certainly much more complex and problematic with C++.
The best thing about C is... it has worked the same for 30 years, and it will work the same for another 30 years unless some C++ lover decides to "destroy C from within".

Perhaps the biggest lie of all is... C++ is better for "large programs" with "lots of contributors".  Boy, is that the opposite of true!  Holy smokes!  You know how to tell this is a total line of BS?  Easy.  Do you think a group of programmers who do everything with 6 simple mechanisms ALL THE TIME (with zero to nothing hidden away out of sight) can work together and understand each other's code more easily than a group of programmers who adopt varying subsets of 87 mechanisms and 93 non-compatible class libraries?  Hahaha.  You guess!

The additional proof is this.  The way to write good, solid, reliable C programs is the same today as 20 years ago.  The way to write good, solid, reliable C++ programs has not been invented yet, but thousands of people (who claim C++ is the greatest thing since sliced bread) will swear on a stack of bibles that THEIR WAY is the best way in the history of the universe.  Decide for yourself which you suspect is true.

And good luck!  You're gonna need it!

PS:  If you love to be loved, and hate to be laughed at... you must adopt C++.  That's for certain.  If you just want to write great code that works for years, and interoperates easily, well, prepare to be hated and vilified by the most vocal self-proclaimed programmers on planet earth.
  7. maxgpgpu

    force and torque

    Buckeye:  You apparently understood the rocket thruster configuration.  Great.  So, what series of steps (algebra equations or pseudo-code) compute the linear and angular acceleration, velocity, position and orientation for successive frames for that configuration given I know the object mass, thrust in grams, position of thrust in local-coordinates, direction of thrust in world-coordinates, object inverse inertia tensor, and duration of each frame?   Is that short enough?
  8. maxgpgpu

    force and torque

    Well, I'm almost afraid to ask the questions, because maybe the answers are even more basic and fundamental than any question like that (a specific scenario) implies.  What do I mean?  Well, for one thing, how does the duration argument/variable fit into these equations?

What I mean is this.  If we apply a force through the center of mass of an object, I get the impression these force -> acceleration -> velocity -> [delta]-position equations are reasonably precise and representative of what might happen in reality.  To be sure, I don't expect to be able to model the orbit of a planet around the sun, or a spacecraft around the earth, with just four applications of force per orbit.  But at least in free space (far from stars and planets), I would imagine that linear force, acceleration, velocity and change of position may be described reasonably well by the equations even if the duration is moderately large (every few seconds or perhaps even every few minutes given a long journey, with no need to recompute every 1/30 second or 1/1,000 second or 1/1,000,000 second to get fairly realistic results).

However, the situation doesn't "feel right" when it comes to torque, angular acceleration, angular velocity and angular change of orientation.  Why?  Well, partly because the object could rotate several times during one "duration".  Now, in some very selective cases, perhaps including the one I mentioned in my last post, even angular quantities might be computed reasonably well.  That's because that case was contrived to put the rocket thruster exactly at the periphery of the sphere or disk shaped spacecraft, with the thrust precisely tangent to the outside surface of that sphere or disk.  In this specific, very contrived and unusual situation, the rocket thrust at every moment remains AT the exact same point on the object in local-coordinates, and remains thrusting IN the exact same direction relative to the local-coordinates of the object.
And so, I infer that maybe if we compute everything in local-coordinates, we might get a reasonable result even if the "duration" is relatively long in relation to how long the object takes to rotate around the axis perpendicular to its thrust.  But even this highly contrived case (contrived to be easy to deal with) may be problematic, because I can imagine the trajectory of that flying sphere/disk spacecraft would be rather like a looping spiral in world-coordinates, and this is presumably different than a fully local-coordinates solution.

By definition, in local-coordinates, an object NEVER MOVES LINEARLY.  Why?  Because the center of mass is the origin of that local-coordinate system, and the center of mass will always remain the origin of that local-coordinate system, and so the object will NEVER MOVE in local-coordinates.  Unless what is meant by "local-coordinates" isn't actually the "local-coordinates" permanently attached to the object, but some kind of pseudo-imaginary inertial coordinate system that is, was, and forever shall be at the unaccelerated (but possibly moving) position the object was at when we turned on the rocket thruster, and not the local-coordinate system we think of in 3D graphics (the origin of which is always at the center of mass, and whose axes point in directions fixed to certain solid features on the rigid body, even if just little red, green, blue dots of paint on the surface so we can see and track them).

Frankly, I don't feel I should be telling you what frame of reference I want the equations to compute in.  Why?  Because likely some frame of reference leads to vastly simpler equations, and that is likely the frame of reference I want.  And if I need to convert from one frame of reference to another, I'll just have to find a way to do so.
I already have matrices that convert from local-coordinates to world-coordinates and the reverse, so depending on what is the most natural frame of reference to compute in, I will likely be able to accept that.  Though it could, I suppose, lead to a further question... how to convert from some new and very strange (to me) frame-of-reference to one I am familiar with.  We shall see.  So I'm tempted to say, how about that rocket thruster case for a starter?

I'm not concerned with gravity forces; I understand how to deal with them (and maybe with all forces that always pass through the center-of-mass of all objects).  I was going to say "drag", but I'm guessing symmetric drag is easy, and asymmetric drag may be equivalent to attaching a few small rocket thrusters to various locations on an object and powering them up to whatever power-level generates a force equal to the drag.

So I'll stop here, except to add one more comment that also leads me to confusion.  In "game physics engine development" they have two functions to compute force: one that takes local-coordinates and another that takes world-coordinates.  In the one that takes local-coordinates, they simply transform to world-coordinates and continue.  Which implies that all this "frame-of-reference stuff" is merely a matter of multiplying a position or direction vector by a transformation matrix to get the force in the frame-of-reference you want, and then just proceeding as if the different frame has no other bearing.

However, think about what that means... or seems to mean to me... in the case of that simple example I started out with.  If the force applied to the periphery of the sphere or disk shaped flying saucer is in local-coordinates, then presumably the force rotates around in circles (in terms of world-coordinates) even as the position and direction of that force stay fixed in local-coordinates!
However, if the force remained fixed in world-coordinates, well, the object would move away from the point where the force was applied (a fixed world-coordinate position)... AND... the direction of the force would not rotate with the object, but would remain pointing in the same direction (in world-coordinates) as it started.  Which means... no way could the results be the same, and this "easy technique" of transforming the force direction and position of application from one coordinate system to another could not possibly be as simple as this book implies.

This book (and the other books and slides on the same topic that I've encountered) simply never discusses the questions I am asking.  They just assume "everyone knows and understands the underlying assumptions we are making".  Well, no we don't, certainly not everyone!  And so, I'll leave you with this one example, but also a statement that "several paragraphs or pages are missing" in all these sources to help us understand exactly what is assumed, and how to deal with situations that do not fit the assumptions.  And I don't think it should be up to the "learner" to know what all the cases are... that should be explained by the authors who claim to understand the topic, or so I believe.

PS:  If you know some sources that are more practical, that explain (as much as is practical) the whole range of possible physical situations and configurations and how to fiddle the equations to account for them, I'd appreciate the references.  I suppose I could ask questions one by one, but I don't even know yet whether that would require me to ask questions about 5 situations and configurations, or 25, or 255.
  9. maxgpgpu

    force and torque

    Buckeye:  You have to decide where the force is applied.  If the source of the force is stationary in world space (gravity), then the force must be applied with respect to that reference frame.  If the source of the force travels with the object, then the force is applied locally.  Your general statement above assumes something about "the force" that's not general.

maxgpgpu:  Well, I don't want to be newton or euler (at least not in this field, and not now).  I'm just trying to figure out which equations to put into my engine to produce the correct result (objects do what they would do in reality, as close as possible).  And that's the problem.  What I read in the game physics books ("game physics engine development" and elsewhere), and in PDF slides downloaded from the internet, are the two simple equations in my original post.  They do say the equations can be applied in local-coordinates or global-coordinates, then proceed to adopt global-coordinates.  However, they do not indicate what kinds of physical systems and applications of force correspond to those equations, and which do not.  And so I ask, what use are those equations if the authors have no idea what kind of physical contraption and configuration would satisfy them, and none of their readers (except me) bothers to ask the question?  Do the equations only apply to some kind of fictitious idealized situations and contraptions, or to something useful (real-life devices operating in real-life ways)?  I can't tell.

The fact is, gravity is a bit of a cheat example.  First, it always (for practical purposes) applies its force through the center-of-mass of all objects, no matter where they are in the universe, and no matter how far away.  Plus, unless we're discussing black-hole encounters of the third kind, the gravitational force doesn't change enough in any short time-frame (1/30 second or 1 second) to worry about changes in distance during the duration of the computation.
However, other real sources of force are not so obvious and not so uniform... to put it mildly.  But ALL the authors I've read say NOTHING about the kinds of real situations, real devices and real configurations we will encounter in real life (except maybe a simple spring example).  And so, I have to ask "what physical systems and configurations do those equations I wrote in my original post assume, and work properly for?".

To take just the simplest example, how about a flying saucer in the shape of that idealized disk in my previous example.  Let's say that flying saucer disk has a rocket output nozzle on the exact periphery of the disk, pointing exactly tangent to the disk.  Okay?  What equations apply to that case?

My first general, intuitive observation is the following.  This rocket thruster is indeed firmly and permanently attached to the exact same local-coordinates... forever (during force generation (thrust) and when shut down).  In my example, the local-coordinates of that thruster are <1, 0, 0>, and they STAY at <1, 0, 0> for all eternity.  Which makes this example vastly simpler than many other practical examples we (game developers) should discuss.  Also, in the local-coordinates of the object, the thrust vector (and therefore the applied force vector) is also constant at <0, 0, 1> (though the magnitude of that force might be anything (often a huge number)).

Now, since the force is applied at a single point at a single fixed coordinate <1, 0, 0>, I have to assume those equations I quoted should apply.  Also, since the force is eternally applied in a single direction, I have to further assume those equations I quoted should apply.
Of course, presumably they only apply in local-coordinates, so we still need to figure out how these local-coordinates relate to world-coordinates at some point (namely, every frame time, so we can update the object orientation quaternion or matrix, and the object position in world-coordinates, every frame (for many purposes, including collision-detection and collision-response)).  That's easy for orientation, since orientation is pretty much always held in local-coordinates (in a manner of speaking, that is, relative to the starting orientation, which is co-aligned with world-coordinates).

But what about other physical configurations?  Say, the force of an air nozzle fixed in world-coordinates?  Or many other configurations where the quantity of force changes during the frame, the position the force is applied at changes during the frame (as the object rotates and moves), and the direction of the force changes (at least relative to local-coordinates)?

All these sources make it sound like there is one situation and one set of equations.  Obviously not so!  But nobody (that I can see) talks about this, which sure makes life difficult for me, and for anyone else writing a game or physics engine.
  10. maxgpgpu

    force and torque

    We're working on "phase one" of a game/simulation physics engine - the phase that accumulates forces and torques, then turns them into linear and angular acceleration/velocity/position.  We already have "phase two" finished and working well (except a few unusual cases we want to handle more efficiently), and we'll get to "phase three" (collision response) when we get there.

We have functions written to compute native (local-coordinates) inverse inertia tensors when necessary (at object creation and after [non-linear] scaling), and the time has arrived to:

#1: accumulate forces and torques for each object.
#2: convert them into linear and angular acceleration for each object.
#3: compute new linear and angular velocity and position for each object.

At this point, to perform step #1 (accumulate force & torque), two basic formulas seem popular:

force = force + newforce;
torque = torque + (point x newforce);

The accumulated force and newforce and torque and point are all in local-coordinates with the origin at the center-of-mass of the object, mostly because that is most convenient in our engine.  In the moderately rare case a point or newforce is more naturally available in world-coordinates, we simply multiply by our world-to-local matrix to get the point or vector in local-coordinates.

Though our engine can load conventional objects, our engine is designed and optimized for "procedurally generated content".  So most game objects are tweaks and fiddles of our ~40 fundamental shapes, and/or assemblies of any number of these fundamental shapes permanently "bonded" together, or attached together (and able to articulate).  The inertia tensor of every fundamental shape is a "diagonal matrix".  As such, every element of the matrix is zero except the m00, m11, m22 diagonal elements, and the "inverse" is simply the matrix with each diagonal element replaced by its reciprocal (1/m00, 1/m11, 1/m22).  Simple!
Though yes, life will become a bit less simple whenever we bond or attach these fundamental shape objects together with some objects in crazy orientations relative to others (a problem to deal with "mañana").

So, now for some questions...

#1:  Not all, but seemingly a majority of physics and game engines transform whichever forces and points (where forces are applied) happen to be in local-coordinates into world-coordinates.  Then they execute those force and torque accumulation equations noted above, and thus accumulate their total force and torque in world-coordinates.  Why?  So far at least, most forces and points of application are naturally in local-coordinates.  For example, every thrust-producing rocket is attached to a spacecraft or space-station or mothership.  And thus the position of the thruster is automatically known in local-coordinates, as is the thrust-direction (almost always entirely in x or y or z in local-coordinates).  And while gravity in our application is not really in any major coordinate system (the force vectors are between whatever massive objects are nearby (stars, planets, moons, etc) or very nearby (motherships, space-stations, spacecraft, etc)), computing the gravity vector in local-coordinates is trivial and fast (plus, gravity doesn't generate torque, so the torque equation isn't even executed).

#2:  Not all, but seemingly a majority of physics and game engines transform the inertia tensor and/or inverse inertia tensor from its natural "local-coordinates form" to a decidedly not-natural "world-coordinates form".  Presumably they do this because they transformed every force-vector and position into world-coordinates during the force and torque accumulation phase (for some odd (to us) reason).  As a result, the output of their computation is linear and angular acceleration in world-coordinates.
While world-coordinates are convenient for translation (moving the center of mass of game objects), rotation is more natural in local-coordinates.  I'm not suggesting this is because we prefer to apply rotation as object-relative "yaw, pitch, roll", but because even rotation via axis-angle or quaternion (also an axis-angle approach) is more natural in local-coordinates.  Yes, I do recognize this part is a "wash" when it comes to compute cycles, because rotation around object-axes versus world-axes is simply the difference in operand order in one multiply.

Nonetheless, from the perspective of someone in the spacecraft (or other vehicle), control of the orientation of that spacecraft is vastly more natural in local-coordinates.  And to some degree (but less overwhelmingly), control of the translation of that spacecraft is more natural in local-coordinates ("go straight ahead" or "turn/bank right" versus "continue in direction <+0.2365439352, -0.113554354, +0.523433959, +0.035442342>" or <however one would figure out what "turn/bank right" means> in axis-angle form).

#3:  Thus the combined question is: why convert lots of natively local-coordinate forces and positions into world-coordinates, then transform a natively simple inverse inertia tensor with only three non-zero elements into a totally scrambled world-coordinate version of the inverse inertia tensor?  And then (perhaps) convert the angular velocity and orientation back to local-coordinates at the end (or else have to convert all subsequent intuitively local-coordinate course-corrections into totally non-intuitive world-coordinate alternatives)?

I do suppose a game that takes place in flatland (the surface of a huge planet) would have somewhat more world-coordinate forces and positions than a space, airplane, submarine or other freeform game, but still not an overwhelming majority.
After all, I would imagine any force applied to an object would need to be known in local-coordinates too, to complete other processes and computations.

PS:  As an aside, a routine that transforms with an inverse inertia tensor containing only the diagonal elements is much shorter and quicker than a full-blown transformation by an inverse inertia tensor that has been transformed to world-coordinates.  So not only do we not need to transform the inverse inertia tensor, but every time we apply the inverse inertia tensor to a torque, that operation is much quicker too.

-----

Now, onto a different question, albeit somewhat related.

I can't even figure out what in practice is meant by "apply a force in direction dx,dy,dz at point px,py,pz for duration t seconds".  Now, on the surface, I admit this sounds like "crazy talk"... doesn't this moron know what vectors and points are?

Well, I guess it feels a bit humbling, after creating a whole working 3D graphics engine, to say "not really"!  But I'm serious.  When you get finished laughing at me (which is okay, do enjoy yourself), let me create a very simple example and ask what is meant by that phrase --- which we need to understand to perform these simple steps!

Okay, let's take a very simple case.  For example, a 1 meter radius sphere or disk that starts out floating (not moving) in deep space (between galaxies).  For purposes of our first simple example, we will only apply force along the z-axis (towards higher values of z as time passes), directly towards a neighboring galaxy 3 million light years away.

If you imagine the disk instead of the sphere (which might help later on), assume the symmetry axis of the disk is the y-axis.  So if the disk ever rotates, it will have to rotate around the y-axis.  The positive end of the x-axis extends rightward from the sphere or disk, at right angles to both the y-axis and z-axis.
Okay, so now we want to apply a 1000 gram force to the rightmost edge of the sphere or disk for 1 second.  Now, 1 second is a rather long frame time, but hey, not much is happening out here between galaxies, and this moderately longer frame time helps make my stupidity easier for smarter folks to understand.

Now, here are some questions this example raises in my puny brain:

#1:  Every source I've read claims the full 1000 gram force in direction <0, 0, 1> applied at point <1, 0, 0> tangent to the sphere or disk gets directly summed into the force accumulator variable, without regard for what point on the object that force is applied at.  In other words, the full 1000 grams of force in the z-direction is accumulated, exactly as much and in the same direction as if that force had been applied through the center of mass (at the center of the sphere or disk).  Is this really right?  Yes, I know.  Every source says so... but this is quite non-intuitive if you ask me!  My intuition wants to believe some of that force is consumed to add some torque (around the y-axis) to that object.  Yeah, I know, I know, intuition isn't math, and intuition isn't physics either!  But conservation of energy has to mean something around here, doesn't it?  Trick question?

#2:  Indeed, every source I've read claims this 1000 gram force in direction <0, 0, 1> applied at point <1, 0, 0> tangent to the sphere or disk does in fact add torque around the y-axis of the object, which will cause the object to start rotating counterclockwise when looking up the positive y-axis from the origin.  But if someone at rest were to capture the sphere or disk, convert the linear motion to energy with 100% efficiency, and convert the rotational motion to energy with 100% efficiency... would they not end up extracting more energy than they would have if we had continually applied the same 1000 grams of force through the center of mass?
Why is this not a perpetual motion machine that we all should take advantage of?

But now I want to ask even simpler (and dumber) questions!

#3:  Like, what does it even mean to "apply 1000 grams of force in direction <0, 0, 1> to the point at <1, 0, 0> for 1 second"?  No, I'm not kidding, and I'm not asking in jest; I'm asking seriously.  I guess there may be two versions of this question (believe it or not), so I'll ask them separately to avoid causing any more confusion than I already suffer.

#3A:  The local-coordinates version of question #3.  Let's assume we paint a tiny black dot on our sphere or disk at that point at <1, 0, 0> in local-coordinates AKA object-coordinates.  Okay, now, what do we mean by "apply 1000 grams of force in direction <0, 0, 1> to that point at <1, 0, 0> for 1 second"?  Seriously?  I mean, during that 1 second, the black dot we painted at the point we specified at <1, 0, 0> starts to move around in a circle (relative to the object), and in some kind of string with loopy cycles (relative to the world).

And so, my stupid (yet completely sincere) question is this.  Do we keep that 1000 grams of force applied:

  1:  To that exact same point in 3D world coordinate space where we initially applied the force, even as the object wanders away due to our application of force?  If so, how the yonk can we apply force to an object at a point that is not physically part-of or attached-to the object?

  2:  To the point on the very rightmost periphery of the sphere or disk, meaning the point on the sphere or disk with the greatest x-coordinate, no matter how the sphere or disk moves or rotates?  If so, how can we apply force to an object at constantly varying points as the object moves and rotates?
Do we need to put some little paddle-wheels on the periphery of the sphere or disk, then fly some very fancy, very responsive servo-controlled air-nozzle along a strange spiral path in order to keep applying a force in direction <0, 0, 1> exactly on the little rotating paddle-wheels that happen to be at the greatest x-coordinate on the object at every moment?

  3:  To that little black dot we painted on the sphere or disk that was at <1, 0, 0> at the start of this 1 second duration, even as that dot wanders around in circles (relative to the disk) and spiral loops (relative to world-coordinates)?  What kind of physical contraption could even apply a force in a constant <0, 0, 1> direction as that dot spiraled around in circles?

  4:  How are we to even unambiguously understand what is meant by "apply force in a given direction to a specific point for some duration"?

  5, 6, 7, 8:  All these same questions (1, 2, 3, 4 just above), if the force-direction and point are given in world-coordinates instead of local-coordinates?

  9:  And how can we answer all these very practical questions for any specific physical devices, situations and configurations we ever encounter in the future?
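For what it's worth, the bookkeeping every source describes for questions #1 and #2 above looks like this minimal Python sketch (the disk's mass and inertia are made-up values).  As I understand it, there is no perpetual motion machine: work equals force times the distance traveled by the point of application, and when the body also spins, that point travels farther, so whatever applies the force must expend correspondingly more energy:

```python
def cross(a, b):
    """Cross product of two 3-vectors given as (x, y, z) tuples."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

# Made-up properties for the 1 m disk in the example:
mass = 2.0        # kg (assumed)
I_y  = 1.0        # kg*m^2 about the y symmetry axis (assumed)

F = (0.0, 0.0, 9.81)   # ~1000 gram-force, expressed in newtons
r = (1.0, 0.0, 0.0)    # applied at the rim, 1 m from the center of mass

linear_accel  = tuple(c / mass for c in F)  # the FULL force accelerates the center of mass
torque        = cross(r, F)                 # ...AND the same force also produces torque
angular_accel = torque[1] / I_y             # about y only, for this geometry
```

Note that nothing is "consumed": the full force appears in the linear equation and the full torque in the angular one; the energy books balance on the side of the force's source, not by splitting the force.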
11.  Sorry, I was away for a couple days.

I think both of the last two posts have merit, at different phases of the process.  The algorithm that is most efficient at a rough approximation stage will probably not be best at the precision stage.  Which is why we have SAP (sweep and prune) for broad-phase and GJK for narrow-phase in the first place.  And at this point, frankly, we have about 5 phases (or sub-phases): from "swept bounding-sphere SAP", to "compute earliest-possible TOI based on bounding spheres (or possibly realize they do not collide)", to "compute earliest-possible TOI based upon AABBs of rotating objects", to "enhanced GJK that reports distance and a separating axis and perpendicular plane" (which again lets us compute earliest-possible TOI based upon knowing the vertex at maximum distance from the center-of-mass and rotation axis, à la Erin Catto), and continued "sorta" conservative advancement based upon our enhanced GJK routine.  BTW, though they probably don't arise very often in most games, we've identified a whole slew of cases that completely hose conventional conservative advancement techniques, so we found ourselves needing to be a little more "clever" and practical than those conservative advancement techniques.  Or so we hope.

All that's great (if not a bit exhausting to create) for conventional convex objects.  But when it comes to this fairly common case of intentionally flying objects into voids in other objects, some new problems pop up, which we'd like to solve efficiently if not simply.

I think the approach mentioned by Jtippetts looks good at broad-phase, or perhaps one small notch down from there.
Actually, I think two variants of that probably work fine: the first based upon the bounding-sphere of the landing spacecraft (which only requires transformation of one point to world-coordinates, plus the fixed radius), and another [perhaps] based upon the AABB of the landing spacecraft versus the real (arbitrarily rotated) planes of the 5 walls of the landing bay.

I'll have to think more about the techniques proposed by clb.  First I'd better go read about Dobkin-Kirkpatrick to gain some background.  As for that hill-climbing approach, that sounds quite a bit like the approach I mentioned in my original post.  Fortunately, the fundamental 3D shapes that everything is created from in my engine contain "n sides" and "m levels", and the indices of those vertices are retained in arrays in objects for later applications like these.  Which means walking the vertices in objects is very efficient, and can be done intelligently for most purposes.

So, I'll fiddle.
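Since the engine already retains vertex indices in those "n sides by m levels" arrays, the hill-climbing support query might look something like the following sketch (the function name and the adjacency-list representation are my own assumptions).  It relies on convexity, where a local maximum of the dot product along the direction is also the global maximum:

```python
def support_hill_climb(vertices, adjacency, direction, start=0):
    """Hill-climb to the vertex most extreme in `direction`.

    vertices  : list of (x, y, z) tuples
    adjacency : adjacency[i] = indices of the vertices neighboring vertex i
    direction : search direction as (dx, dy, dz)
    Assumes a convex vertex set, so greedily moving to any better
    neighbor until none improves finds the true support vertex.
    """
    def dot(a, b):
        return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

    best, best_d = start, dot(vertices[start], direction)
    improved = True
    while improved:
        improved = False
        for n in adjacency[best]:
            d = dot(vertices[n], direction)
            if d > best_d:
                best, best_d, improved = n, d, True
                break   # restart the climb from the better neighbor
    return best
```

With temporal coherence (starting each frame from last frame's answer), this typically touches only a handful of vertices, which is what makes it attractive for GJK support queries on large hulls.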
12.  Well, my experience is, during the process of brainstorming it is very wise to try a wide variety of attitudes (including "I'm old, tired and lazy"), and see where they lead.  So... can't complain about that.  In fact, though I don't have the same attitude as you, for reasons I'll mention later, you reminded me of an old idea from another problem.

Typically, a "landing bay" will be a big box, the entire volume of which is empty.  Of course, a mothership isn't some stationary object sitting flat on the ground, so the landing bay will always be oriented at some random fractional x, y, z angles.  Otherwise the landing bay would actually be an AABB itself.  However, I already have functions to transform from world to [object] local coordinates, so it is trivial to transform the 8 corner points of the landing bay to local-coordinates to create an AABB that exactly matches the open volume of the landing bay.

Now, I'm not sure what I need to do with the x,y,z position part of this world-to-local transformation matrix, but... I assume I can at least apply the 3x3 part of the matrix to rotate the AABB of the smaller spacecraft into the orientation of the corresponding coordinate system (the local coordinates of the landing-bay object).  So if I can also figure out how to fiddle the position.xyz of the smaller spacecraft to place it in the same relative position as the landing bay (hopefully just one or two vector subtracts), then we can first perform a quick bounding-sphere test on the smaller spacecraft.  If that test cannot rule out contact (it says "overlapping"), we then compare the AABBs of the "landing bay" and spacecraft for overlap.  If that test also says "overlapping", it might then lead to a full GJK test against whichever of the 5 sides of the landing bay box the AABB tests indicate are "overlapping".
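A sketch of that world-to-local step, assuming the usual convention that the object's matrix rotates local-to-world by R and translates by t, so the inverse is "subtract t first, then apply R transposed" (which answers the "fiddle the position.xyz" part with a single vector subtract):

```python
def world_to_local(p, R, t):
    """Transform world-space point p into an object's local frame.

    R : the object's local-to-world rotation, as 3 rows of 3
    t : the object's world-space position
    """
    d = (p[0] - t[0], p[1] - t[1], p[2] - t[2])   # subtract position FIRST
    # then apply R^T: dot each COLUMN of R with d
    return tuple(R[0][k]*d[0] + R[1][k]*d[1] + R[2][k]*d[2] for k in range(3))

def local_aabb(points_world, R, t):
    """AABB, in that local frame, of a set of world points (e.g. the 8
    corners of the landing bay, or of the spacecraft's world AABB)."""
    pts = [world_to_local(p, R, t) for p in points_world]
    lo = tuple(min(p[k] for p in pts) for k in range(3))
    hi = tuple(max(p[k] for p in pts) for k in range(3))
    return lo, hi
```

If a different matrix convention is in use (column-major, or world-to-local stored directly), the transpose and the subtraction order change accordingly; the key point is only that the translation is removed before the rotation is inverted.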
Actually, it is probably much smarter to transform the AABB instead of the hundreds or thousands of vertices in the small spacecraft, and the resulting AABB on the small spacecraft will almost always tightly and efficiently enclose that spacecraft.

By simply skipping AABB or GJK tests against the 6th face of the landing bay, we avoid any problems of the spacecraft not being fully inside the landing bay (on that side).  And by testing against the sides of the landing bay box one at a time, we don't have to detect that elusive "fully contained" situation.  However, that comes at the price of needing to run GJK up to 5 times instead of only once!

Of course, this doesn't work so well when the shape of the landing bay isn't some convenient shape like a rectangular box.

As an aside, the technique you described is exactly what I did when I designed a PS2 game engine under contract for a video game company in the 1990s.

Now I'm creating a much more advanced game/simulation engine, as general-purpose as possible (for many kinds of simulations and games).  So the notion of a limited number of collision points just doesn't meet the requirements of this new engine.  Also, one focus of this engine is "procedurally generated content", most definitely including 3D objects.  In fact, that's how the engine creates objects now, though it can load existing artist-drawn objects too.  The point being, the engine cannot depend on "artist-identified anything", because brand new, never-before-seen objects will be created on-the-fly by the engine, depending on what is happening in the game (plus profiles of the players generated by watching them play, plus many other factors).

Which means, by the time all this works, we'll be "old and tired" too.  But hopefully not until "after"... hahaha.

But this also means we are spending a lot of time and effort to make highly-efficient, highly-general (and/or specialized, and/or self-adaptive) mechanisms for many aspects of the engine.
Thanks for your suggestion... it got me thinking in another direction.
13.  What's wrong with this?  Just use a BVH of some kind (probably an AABB tree) and then treat your mothership mesh as a surface.  Then you can do collision detection against the surface of the mesh.  Suddenly the mesh doesn't even need to be "water-tight", since you only care about surfaces.  If you can build your tree through insertion, the tree can even represent deformable meshes.  If needed, adjacency information can be computed upon the surface to allow rigid shapes to roll around without catching triangle or polygon edges.  If tunneling is a worry, a time of impact (TOI) can be computed and tunneling can be entirely avoided.  If the TOI computation is too hard to engineer, or not enough time can be spent to develop it, alternative forms of tunneling avoidance can be used.

Actually, I already have a collision detection function that works on concave objects in a similar manner (essentially two parallel sweep-and-prune routines on the triangles in the two objects).  It is totally general and works on absolutely everything, for some of the reasons you mention.  The problem is, that routine is 2x to 20x slower than GJK.  As you say, one of the advantages over GJK is the fact that these kinds of approaches work on overlap of object surfaces, not object volumes like GJK.  After the first frame the test is faster due to temporal coherence (the triangle sorts are much quicker).  But still... too slow, which is why I'm looking for alternatives.

Thanks for the suggestion though.  It is a good one.
14.  Could you elaborate on that a bit?  What is the "union"?  Something related to the Minkowski sum or difference?  And does any efficient way exist to perform this computation?
15.  Right, there are at least two difficult cases:

#1:  object starts to enter the landing bay, and therefore overlaps both landing bay and mothership, but is not fully contained in either.

#2:  object is fully in the landing bay, and therefore fully contained by both landing bay and mothership.

Case #2 is solved if some genius figures out an efficient algorithm for "fully contained".

Case #1 is still a problem.  Seems so easy visually... just not algorithmically --- if that's a word?

-----

It is easy to say "decompose huge objects into zillions of convex objects", but not so easy in practice for a great many objects.  Also, my engine is optimized for procedurally generated content, and making procedures that support making great-looking objects is difficult enough, without also trying to make them aware of how to automagically decompose objects into convex objects.

Incidentally, imagine a huge hollow egg-shaped or spherical mothership.  You must break that into EVERY SINGLE TRIANGLE to get "convex shapes".  That's just insane --- a big mothership could have millions of triangles!

A better way is called for.  We just need to figure out what it is.
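For the special case where the bay really is a box, the case #1 / case #2 classification can at least be sketched cheaply once both boxes are expressed in the bay's local frame (this is a hypothetical helper of my own naming; 'entering' covers case #1, 'contained' covers case #2):

```python
def classify_aabb(a_min, a_max, bay_min, bay_max):
    """Classify a spacecraft's AABB against a box-shaped landing bay.

    All four arguments are (x, y, z) triples, already in the bay's
    local frame.  Returns:
      'outside'   -- separated on some axis (no contact possible yet)
      'contained' -- fully inside the bay volume (case #2)
      'entering'  -- overlapping but not fully inside (case #1)
    """
    for k in range(3):
        if a_max[k] < bay_min[k] or a_min[k] > bay_max[k]:
            return 'outside'            # separated on axis k
    if all(a_min[k] >= bay_min[k] and a_max[k] <= bay_max[k] for k in range(3)):
        return 'contained'
    return 'entering'
```

This only classifies the conservative AABB, of course; the 'entering' result is exactly the case where the per-wall GJK tests described earlier would still be needed, and it says nothing about non-box bays.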