About maxgpgpu
  1. My temporary short-term hack (which might be okay long-term too, since in practice we shouldn't have many infinitely thin objects) is to clamp the center-of-mass distance to a minimum before the gravity computation runs.  I think my temporary choice was somewhere between 0.000001 and 0.001 meter.  Pieces of exploded objects are especially convenient and efficient to model as infinitely thin surfaces.  Giving them an actual thickness not only requires twice as many triangles (to create both sides of the thin piece), but also more vertices to seal up the edge thickness.
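A minimal C sketch of the clamp described above (the names and the 1e-3 clamp value are assumptions; the post only mentions a range of 0.000001 to 0.001 m):

```c
/* Hypothetical sketch: Newtonian gravity with a minimum-separation clamp,
   as described in the post above. MIN_DIST and the names are assumptions. */
#define GRAV_G   6.674e-11   /* gravitational constant, SI units */
#define MIN_DIST 1e-3        /* clamp distance in meters (assumed value) */

/* Magnitude of the gravitational force between two point masses, with the
   separation clamped so the 1/d^2 term cannot blow up as d -> 0. */
double gravity_force(double m1, double m2, double d)
{
    if (d < MIN_DIST)
        d = MIN_DIST;        /* clamp: removes the d -> 0 singularity */
    return GRAV_G * m1 * m2 / (d * d);
}
```

A common alternative from N-body simulation is Plummer softening, F = G*m1*m2 / (d*d + eps*eps), which removes the singularity smoothly instead of with a hard clamp.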
  2. Okay, it seems like you're saying the "center of mass" approximation isn't valid even when objects are far apart (further apart than the sum of their maximum radii).  Hmmmm. Two answers to your other question.  I consider the engine to be a 3D simulation/physics/game engine.  OTOH, I admit the focus is still real-time performance, so we don't anticipate pushing for exact formulas in every situation, especially if doing so kills performance too much (below a 30 FPS minimum on highish-end CPUs/GPUs, probably). Also, our first couple of planned games take place in space (zero gravity, except for local objects).  Well, sometimes.  The engine will also have to support being in the vicinity of one or a few large objects (planets and their moons) in addition to nearby small objects (asteroids, space habitats, spacecraft).  Though gravity doesn't make floating objects move towards each other very fast, it is relevant over minutes, let alone hours.  And because (in some situations) activity happens very slowly in space, the games will probably have a way to specify a sort of "fast forward" that increases the physics interval but keeps the frame interval.  In that situation, gravity in most space environments we envision will be quite relevant!
  3. Interesting stuff. However, none of these pages answers the primary question I wanted to ask (unless I missed something).  If we ignore any twisting or stretching of objects as they pass near each other, and only pay attention to the centers of mass, in what situations do the centers of mass actually not move the way they would if we treated both objects as point masses at their centers of mass?  We know for the hollow and solid sphere cases that "things change when the objects overlap", but what about other shapes like the ones I mentioned, namely "two triangles" or "two round disks"? Yes, I realize I need to decide what second, third, fourth, twenty-third order effects to ignore, but this question seems to be the first I need to resolve.  After that, I will need to decide where to draw the line and accept approximations.  One factoid that I am hoping someone can confirm is... that the center-of-mass approximation is accurate (in terms of the positions of the two centers of mass) at least until the centers of mass come closer than the sum of the maximum "radii" of the two objects (where "radius" just means the distance from an object's center of mass to its furthest point).
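On that factoid: the point-mass result is exact at every separation only for spherically symmetric bodies (the shell theorem); for any other shape the approximation is good at large separation but never exact, with the error falling off roughly as (r/d)^2. A small C check, under assumed test values, illustrates this with the simplest non-spherical body, a rigid "dumbbell" of two half-masses at +/- r along the line to a test mass:

```c
#include <math.h>

/* Illustrative check (assumed setup, not from the engine): compare the exact
   axial gravitational force from a rigid "dumbbell" -- two m/2 point masses
   at +/- r along the line to a unit test mass at distance d -- against the
   point-mass approximation that lumps all mass at the center. */
#define GRAV_G 6.674e-11

/* Exact force on a unit test mass at distance d > r from the center. */
double dumbbell_force(double m, double r, double d)
{
    double near = d - r, far = d + r;
    return GRAV_G * (m / 2.0) / (near * near)
         + GRAV_G * (m / 2.0) / (far * far);
}

/* Point-mass (center-of-mass) approximation. */
double point_force(double m, double d)
{
    return GRAV_G * m / (d * d);
}

/* Relative error of the point-mass approximation. */
double rel_error(double m, double r, double d)
{
    return fabs(dumbbell_force(m, r, d) - point_force(m, d))
           / dumbbell_force(m, r, d);
}
```

At d = 100r the point-mass force is within about 0.03%, but at d = 2r (centers just outside the sum of the radii) it is off by roughly half, so "accurate until the radii overlap" only holds loosely, not exactly.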
  4. Similarly for a solid sphere, the force of gravity gradually reduces to zero as you travel through the surface and approach the center... ...but most physics engines do not simulate gravity between objects because it is too weak to have any noticeable effect (instead we just simulate the force of gravity from the earth), so this is irrelevant :) Are you simulating gravity between objects?   It sounds like your actual problem is to do with resolving collisions? In general, resolving object penetrations is a difficult problem, and it is made easier by having thick objects that only ever penetrate a small amount. You can limit penetration by decreasing the time-step, or by using swept-collision detection and resolving collisions at the moment they occur instead of after penetration has occurred.

We do swept-collision detection and we do push objects back to t=toi before we compute collision response.  So I don't think that's the problem.  All collision response seems to work fine, but when these super-thin (well, infinitely thin) objects come together, you can see that their centers seem to pull together more quickly than you'd expect (they slide face-against-face to pull the centers together, which is correct behavior if you buy into the nominal center-of-mass simplification).  And when the centers do meet, there is that crazy explosion apart.  I mean, it makes sense in a way that something wacko would happen when you think about (G*m1*m2)/(d*d): as d approaches zero, any non-zero numerator divided by (d*d) quickly surges towards infinity. Yes, our engine supports gravity, because our first games will take place in outer space, and we want our game/physics engine to support gravity automatically (when the game designers want and enable it).  Even in conventional earth-surface games, don't you need some force to push objects towards the ground (or the center of the earth)?
  5. In almost all 3D physics we treat 3D objects as point masses, and in most cases this is either correct or "close enough".  I know of one physical configuration where this simplification is totally wrong, but I wonder whether others exist (including a specific one I'll mention). The case I've known about is a hollow sphere (with one or more holes in the skin so objects can get inside; the result is exact for a complete shell).  When any other object is outside the skin of the hollow sphere, gravity can be computed as if the sphere were a point mass, per usual practice.  However, once another object passes through the hole in the skin, the gravitational attraction suddenly becomes zero... everywhere inside the hollow sphere.  A bit strange, but I can see why that's true. While testing the physics engine in my 3D game engine, I noticed what appears to be strange behavior when flat triangles come close to each other.  In my tests, the triangles have mass, as if they are very thin.  In fact they are infinitely thin, because they are simply triangles, but they behave the same when I make them very thin [3-or-more-sided] disks.  Sometimes they even explode apart (I think when the centers come close together). I realized this might be slightly similar to the hollow sphere situation.  When the centers of the triangles come closer than the radius of the triangles, the point-mass representation has the same characteristic that makes the hollow sphere case change behavior, namely that some physical parts of each object lie in the opposite direction from the center of gravity (or, more precisely, from some physical parts) of the other object.  It doesn't feel like exactly the same situation as the hollow sphere, but the behavior is strange and I wonder if that's because the assumption that point masses are a valid representation is failing big time in a similar way. Can anyone who knows about this stuff provide some comments, links, references or something?
PS:  The rigid body physics and contact routines seem to work and behave properly with thick objects (where the centers of mass are not super close to the external surfaces).  I don't have code to handle stacking (objects in contact with multiple other objects), but the behavior I am seeing happens with only two objects. BTW, on a separate issue, what's the best place to read about going from "simple contacts" like I have (only 1~4 contact points between any two objects) to support for "arbitrary stacking"?  I'm guessing "stacking" refers to the situation where more than two objects are in contact with each other, with each contact pair having 1~4 points in contact.  I say "not more than 4 contacts" simply because my code reduces cases of more than 4 contacts to 4 contacts to simplify processing. Thanks in advance for any tips, ideas, references.
  6. maxgpgpu

    Threads are Cancer

    [quote name='ApochPiQ' timestamp='1314403294'] I'm kind of depressed that so many people totally missed the point that threading is a bad way to do parallelism to begin with. It's not about making threading work better, it's about [b]killing it entirely[/b]. [/quote] Now you seem to be hung up on terminology, not reality. Look, if an application has to execute four time-consuming subsystems that are utterly independent*, then ALL the problems with multi-threading (or multi-processing) vanish. Poof. Gone. The way I see it, the problem with multi-threading (and application architecture and design in general) is that everyone is trained to focus on fine-grain multi-threading, which is indeed a nasty cancer in most cases (a few very convenient cases do exist in which fine-grain is non-problematic too, but not enough to bother with). In applications like games (and we are on a website devoted to games), there simply isn't enough CPU throughput to do everything we want. In the end, we must give up features, details, number of objects, realism of physics, high-quality shadowing, or something (usually quite a bit). One solution is to optimize. I personally have no problem converting key code into [SIMD] assembly language, but that only goes so far. I totally agree that more attention should be paid to overall architecture and design, but that too has its limits. In the end, some applications are simply "up against the wall", including all game engines and most games. Given the death of Moore's Law and the rise of many-core CPUs, there is very little choice - we must make those cores work for us as much as we can (without accepting cancer). And the answer to that is simultaneous execution of subsystems on multiple cores (either with the thread mechanism or the process mechanism, but one way or the other). *Any number of threads can read data that isn't changing while the multiple threads/processes execute, but none can alter shared data.
The fact is, if designed carefully, the various subsystems can totally avoid even reading any variables that might possibly change. Which means, there is no cancer, not even a mild cough. But sure, the multi-threading most people do is indeed horrific. I won't even go there.
  7. maxgpgpu

    Threads are Cancer

    I agree that multi-threading is very bad news. However, as with other aspects of software development, my experience has been that the flaw in this area is letting someone else take responsibility for "making it work" and "making it robust". I say "trust no one", "never accept non-transparency". Which, to me, means not adopting opaque language mechanisms. I'll tell you what my solution is. I'm not sure what name should be attached, but perhaps "cooperative parallelism" or "parallel subsystems". For example, in a game engine, let's say "collision detection" takes too many CPU cycles to put that process in the serial flow of processes the engine executes. My solution is to hand that ENTIRE subsystem to another CPU core. Think about it. In the serial implementation (which is made to run reliably first), work gets done, then collision detection is performed, then a series of other processes are performed, some of which depend upon the results of collision detection. So my solution is to have the "master CPU core" get to the point where it starts collision detection, then look to see whether it has another "slave CPU core". If no other core exists (or is available), the master CPU core just executes the collision detection subsystem itself, then moves on to subsequent work. If another CPU core is available, the master writes a value into memory to tell the other core everything is ready for collision detection to be performed, then wakes up that slave CPU core. Then the master core executes other processes or subsystems that have nothing to do with the graphical objects and their vertices. When it finishes one (let's say "processing the message queue"), the master CPU core checks to see whether the slave CPU core has written to a specific location that says "collision detection complete for frame n" (the current frame).
If collision detection is not yet complete, the master CPU core executes another independent subsystem (let's say "updating the audio queue", or "sending network messages", or "computing object behavior", or "you name it" --- all of which are independent subsystems that do not need to access or change data relevant to collision detection). This is "bomb proof". Sure, a bit of thought is required to survey all the information that is accessed or written by each subsystem, but in my experience, it is quite obvious that "processing the message queue" does not change "object vertices" or anything else accessed or modified by the "collision detection" subsystem. I find this kind of "multi-processing" (if you will) extremely reliable. It isn't as though a programmer could never make a mistake (some data accessed by two subsystems executing simultaneously), but... [i]that is a mistake, a clear bug[/i]. Note that both subsystems might access certain "stable variables", and that's perfectly okay. For example, both subsystems might access the current "frame number", and that's perfectly fine, since the frame number is only incremented when the back and front buffers are swapped and the back buffer is cleared. What must never be allowed is for one subsystem to access data that another might be modifying. Sure, it is [i]possible[/i] to coordinate such things (multi-threaders do this crap all the time), but I don't. I write my subsystems to be clean, modular and very clearly confined to certain information... and as a result I [i]never[/i] run into this situation. When I do, I reorganize the program to make it cleaner, more modular, and avoid the problem.
At worst, and this rarely, rarely happens, I am tempted to make one or two "alternate/duplicate variables" that get copied from their potentially dynamic cousins at a safe point (between subsystem execution), which assures that the subsystem accessing them never sees dynamic, changing variables (the state of those variables is ALWAYS known and stable between known subsystems). Anyway, this is much too long-winded, but I totally disagree on this issue, as I do about other issues. Do NOT depend on some language or library feature to HOPEFULLY let you "get away with something". That is not only a cop-out and unreliable (since you don't understand what they're doing), but it changes from version to version of their tools/software/environment, from OS to OS, from language to language, and so forth. In contrast, if you take responsibility to make sure the very structure of execution is solid (which is easy when you do cooperative multi-processing this way), you will never get tripped up by what others are doing. I've been there, done that, and will never, ever base the reliability of my code on anything provided by someone else. Forget that!
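A minimal sketch of the master/slave handoff described above, using C11 atomics and POSIX threads (every name here is invented for illustration, and the subsystems are empty stand-ins, not real engine code):

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

/* Shared state: the "collision detection complete" flag the posts describe,
   plus a result slot written only by the worker. */
static atomic_bool collision_done = false;
static int collision_pairs = 0;

static void detect_collisions(void)      /* stand-in for the real subsystem */
{
    collision_pairs = 3;                 /* pretend we found 3 contact pairs */
}

static void *collision_worker(void *arg)
{
    (void)arg;
    detect_collisions();
    atomic_store(&collision_done, true); /* publish "complete for frame n" */
    return NULL;
}

/* Independent subsystems: by design they touch no collision-related data. */
static void process_messages(void) {}
static void update_audio(void)     {}

/* One frame of the master core's loop. Returns the collision-pair count. */
int run_frame(bool have_slave_core)
{
    atomic_store(&collision_done, false);
    if (have_slave_core) {
        pthread_t worker;
        pthread_create(&worker, NULL, collision_worker, NULL);
        process_messages();              /* independent work in the meantime */
        while (!atomic_load(&collision_done))
            update_audio();              /* keep doing independent subsystems */
        pthread_join(&worker, NULL);
    } else {
        detect_collisions();             /* no spare core: master does it itself */
        atomic_store(&collision_done, true);
    }
    return collision_pairs;              /* safe to read after join/flag */
}
```

The property the posts rely on is that process_messages and update_audio read or write nothing the collision worker touches; the only shared state is the atomic done flag and a result the master reads only after the worker has finished.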