About Defend

  • Rank

Personal Information

  • Interests

Recent Profile Visitors

The recent visitors block is disabled and is not being shown to other users.

  1. Frob, I too haven't heard an argument that the namespace approach is worse (or not) than the class approach, but that's because it is practically impossible to find discussion on that particular question at all. Any search (that I can find, anyway) related to 'how to singleton' in C++ produces the class approach. Any search with the word 'singleton' at all results in replies all too keen to launch into thoughts on the pattern and/or globals. I don't disagree with them at all, but they drown out the focus on any related specific questions such as this one. Thank you all though for confirming for me that I'm not just missing something obvious in C++. I think Hodgman's suspicion is a good one. Seraph, your comment was something I hadn't thought of, so it feels like I've finally found some closure! Many thanks.
  2. Not asking about singletons here (nor advocating them). With that clarified: if we assume someone wants a global, unique object, why isn't a namespace always the preferred approach in C++, over implementing a singleton class? I've only seen the namespace approach encouraged when there aren't statics being shared. E.g., from Google's style guidelines: But why not have non-member functions that share static data, declared in an unnamed namespace? And why isn't this generally suggested as a better alternative to writing a singleton class in C++?
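For concreteness, here's a minimal sketch of the namespace approach being asked about (all names are illustrative, not from any real codebase): free functions sharing static data that lives in an unnamed namespace, so the state has internal linkage and can't be reached from other translation units.

```cpp
#include <string>

// audio.cpp -- illustrative module. The public surface is free functions;
// the shared state sits in an unnamed namespace inside the .cpp, giving it
// internal linkage (file-local, effectively private).
namespace audio {

namespace { // unnamed namespace: this state is invisible to other TUs
    int volume = 100;
    std::string device = "default";
}

void setVolume(int v) { volume = v; }
int  getVolume()      { return volume; }

void setDevice(const std::string& d) { device = d; }
std::string getDevice()              { return device; }

} // namespace audio
```

Callers write `audio::setVolume(42);` rather than `Audio::instance().setVolume(42);`, and there is no class, no `instance()`, and nothing to accidentally construct twice.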
  3. I've had a Google around for this but haven't yet found some solid advice. There is a lot of "it depends", but I'm not sure on what. My question is what's a good rule of thumb to follow when it comes to creating/using VBOs & VAOs? As in, when should I use multiple or when should I not? My understanding so far is that if I need a new VBO, then I need a new VAO. So when it comes to rendering multiple objects I can either: * make lots of VAO/VBO pairs and flip through them to render different objects, or * make one big VBO and jump around its memory to render different objects. I also understand that if I need to render objects with different vertex attributes, then a new VAO is necessary in this case. If that "it depends" really is quite variable, what's best for a beginner with OpenGL, assuming that better approaches can be learnt later with better understanding?
  4. Looks really good, thanks.
  5. I want to be able to model one stick of a nunchaku being moved about freely via user control, and the other stick reacting/swinging in a way that's believable. I'm not even sure what to google; I keep running into lots of talk about martial arts in general... and the Nintendo Wii. Could it be modelled with a single pendulum? A single pendulum and a joint, maybe? These are things I'd have to learn, and ultimately the goal is something that looks clean and feels convincing "enough", so any tips are welcome. Including suggestions on how to smooth things out, should pure equations result in jittery motion. Many thanks
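One plausible starting point (a suggestion, not a definitive model): treat the user-held stick as a moving pivot and the free stick as a single rigid pendulum driven by gravity plus the pivot's horizontal acceleration. A hedged sketch, with made-up constants; semi-implicit Euler plus damping is the usual cheap way to keep the motion from jittering:

```cpp
#include <cmath>

// theta: free stick's angle from straight down; omega: angular velocity.
struct Pendulum {
    float theta = 0.0f;
    float omega = 0.0f;
};

// pivotAccelX is the horizontal acceleration the user imparts to the handle.
// For a pivot accelerating horizontally by a, the pendulum equation is
// theta'' = -(g/L)*sin(theta) - (a/L)*cos(theta); damping fakes friction.
void step(Pendulum& p, float pivotAccelX, float dt)
{
    const float g = 9.81f;       // gravity, m/s^2
    const float L = 0.3f;        // free stick length in metres (made up)
    const float damping = 0.5f;  // tweak to taste; higher = calmer swinging

    float alpha = (-g * std::sin(p.theta) - pivotAccelX * std::cos(p.theta)) / L
                  - damping * p.omega;
    p.omega += alpha * dt;       // semi-implicit Euler: velocity first...
    p.theta += p.omega * dt;     // ...then position, which is more stable
}
```

Jerking the pivot sideways (a burst of `pivotAccelX`) makes the free stick swing the opposite way, which is roughly the nunchaku feel; a second link could be chained the same way if one pendulum isn't convincing enough.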
  6. I actually disliked that 'remove degrees of freedom' approach in Linahan's article. I disliked the article on the whole, finding almost as many issues with it as I found with Fauerby's. I rambled with displeasure here: http://www.gamedev.net/topic/674540-annoying-things-about-fauerbys-and-nettles-sphere-mesh-collision-detection-guides/   Since you say it was used in Doom, perhaps I simply didn't recognise the proper application for the restricting-degrees-of-freedom approach. Perhaps what I had in mind was something it didn't cater to. I can't remember why I was whining about that particular thing. I'm sure I look stupid in my other thread hehe.   I believe Fauerby's quadratic coefficients for detecting edge intersections are wrong also. This is because they are a straight copy of Schroeder's coefficients (found in the August 2001 issue of GDmag, at http://www.gdcvault.com/gdmag - you're welcome, people on the internet trying to source that). And Schroeder's coefficients are likely wrong, because Akenine-Moller says they are (in Real-Time Rendering), and Akenine-Moller's coefficients are definitely correct.   Since Akenine-Moller used the exact same equations but got different results, and since none of these authors ever show how they get from equations to coefficients, Schroeder's are most likely wrong, which means Fauerby's are too.   The reason I say Akenine-Moller's coefficients are definitely correct is that Ericson (Real-Time Collision Detection) produces the same coefficients as Akenine-Moller, and while Ericson - like every other author - doesn't show how he got from equations to coefficients, Ericson's approach actually can be proven on paper. I believe the others required Mathematica; at least, Schroeder did. But Ericson's coefficients are definitely legit, and Akenine-Moller's are the same, even though Ericson uses a different approach (no simultaneous equations, no dot-product-equals-zero trick).
So Nettle's guide = big mistakes, Fauerby's guide = several mistakes, Schroeder's guide = wrong coefficients.   Interestingly, these 3 authors are the 3 who wrote guides. Ericson and Akenine-Moller didn't write guides, they wrote books. Says something, doesn't it? BUT having said that, Ericson also totally supports Nettle's completely incorrect approach, now listed as such on his website. Ericson also oddly describes his approach as "the one suggested in Schroeder with corrections in Akenine", when it simply isn't. The difference is the only reason I was able to work it out and confirm it as correct.
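Whichever author's coefficients you end up trusting, every one of these swept tests finishes the same way: solve a·t² + b·t + c = 0 and take the earliest root inside [0, 1]. Here's a generic, hedged sketch of that final step (not any of the above authors' code), using the formulation that avoids the cancellation the naive quadratic formula suffers when b² ≫ 4ac:

```cpp
#include <cmath>
#include <optional>
#include <utility>

// Returns the smallest root of a*t^2 + b*t + c = 0 inside [0, 1],
// or nothing if there is no such root (i.e. no contact this frame).
std::optional<float> lowestRoot01(float a, float b, float c)
{
    if (a == 0.0f) {                          // degenerate: linear equation
        if (b == 0.0f) return std::nullopt;
        float t = -c / b;
        return (t >= 0.0f && t <= 1.0f) ? std::optional<float>(t) : std::nullopt;
    }
    float disc = b * b - 4.0f * a * c;
    if (disc < 0.0f) return std::nullopt;     // complex roots: no contact

    // q keeps the same sign as b, so b + copysign(...) never cancels.
    float q = -0.5f * (b + std::copysign(std::sqrt(disc), b));
    float r0 = (q != 0.0f) ? c / q : 0.0f;    // the two roots are q/a and c/q
    float r1 = q / a;
    if (r0 > r1) std::swap(r0, r1);

    if (r0 >= 0.0f && r0 <= 1.0f) return r0;
    if (r1 >= 0.0f && r1 <= 1.0f) return r1;  // r0 was negative
    return std::nullopt;
}
```

A wrong set of coefficients fed into a correct solver still gives wrong contact times, of course; this only covers the root-finding half of the job.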
  7. Sorry to reply so late; it's been a busy week moving house. And we had no cooling... so I've been avoiding using this furnace (laptop) for long.   And sorry, but your approach just isn't the right one. It doesn't address several of the things I've said are required -- such as a complete set of sliding plane normals, and normals that change as abruptly as the mesh does -- but I gave it attention anyway because you have been claiming it's definitely "the" accurate way to achieve what the thread asks; accurate, and validated by reality. You've stated that with complete certainty, and even described it as a model upon which approximations should be based.   But it simply isn't validated the way you claim it is. It isn't "the" accurate way to do the thing I'm asking; there isn't a real version of it that we can validate against. Your replies to my challenges on this appear to have had little thought put into them. You can't just pluck out the word "compression" by association and use it to carry your claim. Intersection doesn't appear in collision algorithms because of real-life compression; it's honestly a little disrespectful that you replied with such a shallow response and expected me to take it as an explanation. Intersection in a collision algorithm was never a model for approximation; it is the approximation. We tolerate intersection because it simplifies.   Similarly, you can't just say that since collision exists in real life, any non-real parts of a collision handling algorithm are automatically validated by real life. You have to actually check simulated scenarios against real ones first, and if you had, you would have realised that even for a basic concave corner, reality simply doesn't generate the infinite different collision normals that your approach does. In fact, that non-discrete output you value so much (and understandably) is the biggest giveaway. Reality doesn't give us any interpolation for a ball with a fixed velocity smashing into a concave corner.
Collision directions are based on the surfaces' gradients, and the transitions between them are abrupt by definition. Where you dismiss my approach (not saying it's a stable idea, just making a point) with the flower example, it's actually abruptness you're identifying, not jitter. I'm well aware of jitter concerns; your comments make it appear that you've assumed I'm not, but what you identified as a problem is not jitter, it's the intended abruptness.   At this point you might re-explain or redefine what you're suggesting, but ultimately, if you don't put a constraint on how that intersection position comes to be, you've got infinite possible normals from a fixed incoming velocity. That's the continuous yet unreal range of outputs that you value, and that I don't. And if you do put constraints on those intersecting positions, you've missed the topic of the thread: ignored back-face collisions leading to any intersection imaginable.   What a confusing mess, right? It is definitely annoying having to deconstruct what someone means when it's not clearly described, and I'm not going into further detail with the above paragraphs because I shouldn't be the one having to pull apart quick suggestions before I can tell if they even apply to the thread's question. That should be checked before those suggestions are typed, and the connection to the question made clear. I still really do not know exactly what application you have in mind that your suggestion applies to. But I am very, very sure that it is something. Your suggestion applies to something, and I imagine that it is indeed accurate at calculating some piece of that thing. But that application is not this application, and an accurate calculation of some piece in that application does not equate to reality validating intersection response. In this thread, I really think you've tried to make the question fit an answer you already have, instead of the other way around.
In case it sheds any light on things for you, I don't use a posteriori collision detection in the application the thread describes. Maybe that helps, maybe not. I'm guessing that's what you're thinking about but I could be wrong. You're welcome to really clarify what your suggestion applies to, but I know now it's not what I want. If you do, demonstrations, or linked sources, or clear mathematical trails would be better than just more claims without clear reasoning, because it's that absence of specifics that let us go so long thinking X was talking about Y.     And my approach? I found an example in it I didn't like, so I'm rethinking it. 
  8.   That justification is easy to make: In physics, either you do it correctly and it will work, or it will fail (jitter, tunneling) - there is not much space in between.   That's why I am pointing out that this isn't using physics any more than the moving portal uses physics. The reason there's next to no confusion between working and failing when simulating actual physics is that we simply have reality to give us validation. We don't have that here. Reality won't validate any behaviour you might give to a bowling ball and a diamond slab that occupy the same space at the same time, any more than it will validate ways to push a box through a moving portal.   So what validation are you using? You are claiming that one particular behaviour (defining a collision normal the way you do) is validated; is simply the "accurate" behaviour. What causes you to believe this?   Meanwhile I think it's easy to argue that the behaviour goals of unreal situations are at the developer's whim. That's why I'm skeptical. I need to see something that shows why this situation shouldn't fall into the same category as moving portals, jelly blocks in Minecraft, or surfing in Team Fortress 2; the category of behaviour goals for unreal situations that the developer invents because that's what the developer likes.   ---   Regarding the image, thanks for giving it a look. I see it hitting 1 "edge" (the top corner) and 1 face (the bottom line). You have called this less satisfying, but I am not sure whether you're saying that because of that example, or just as your general opinion. In that example I am perfectly satisfied with that result. That's what the OP was all about: find any example that doesn't give a satisfying set of normals.   Importantly, bear in mind that we need all those individual normals for restricting sliding motion; that is unavoidable.
Regardless of whether we desire a collision normal or not, we aren't saved from having to calculate those sliding planes; we absolutely need them. I believe, then, that the only thing your approach saves us from is 3 lines of code that find a simple average. Compare those 3 lines to your code, altered for 3D, then altered further to handle completely arbitrary shapes instead of cubes.   For all that extra complication, if one is to claim that this is in fact the way things should be done, then something needs to validate that claim that isn't just personal preference.   These are my personal preference factors for the averaging approach (unless my approach breaks somewhere):
* The collision normal comes from the surfaces involved, not the depth.
* I don't want penetration slightly altering collision normals.
* The player will know that the planes along which they feel movement being blocked are going to define their jump, and they can predict this jump by feel, without having to look at the ground.
* Collision normals match those of the equivalent non-embedded situation (as in the 3 examples shown earlier).
* A character embedded deep in the ground, but only a tiny bit in the wall, will kick off the wall as effectively as a character only touching the ground and only touching the wall.
Both our approaches are consistent, predictable, and have something that the player can pick up intuitively, but I wouldn't call yours or mine "the" way it is supposed to be done... unless I hear of something that supports that claim. Currently I would be comfortable writing in a guide: "If you want depth to affect your collision normal as well, here's an approach for that."
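For reference, the "3 lines of code that find a simple average" amount to something like this sketch (a made-up minimal Vec3; a real codebase would use its own math types):

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Sum the selected collision normals and normalise the result.
// Assumes the list is non-empty and the sum isn't the zero vector
// (perfectly opposing normals are a case to decide on separately).
Vec3 averageNormal(const std::vector<Vec3>& normals)
{
    Vec3 sum{0.0f, 0.0f, 0.0f};
    for (const Vec3& n : normals) { sum.x += n.x; sum.y += n.y; sum.z += n.z; }
    float len = std::sqrt(sum.x * sum.x + sum.y * sum.y + sum.z * sum.z);
    return Vec3{sum.x / len, sum.y / len, sum.z / len};
}
```

Because the sum is normalised, a ground normal and a wall normal contribute equally no matter how deeply embedded each is, which is exactly the "kick off the wall as effectively" property described above.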
  9. I don't want to just poop on that idea, but I think that calculating the centres of possibly quite complex shapes, multiple times per frame, is bordering on overkill. It's not that it's complicated (it is, though); it's that it's complicated without something in reality to justify the approach. If it were based on a real-life physics situation then I would embrace any complications required to do things the "correct" way, but I don't believe there is one correct way, any more than there is a correct way to solve the moving portal problem.   http://9gag.com/gag/adj90GB/moving-portal-problem   If the developers had done it one way, players would have said "Yeah, that feels right." If the developers had done it the other way, players would have said "Yeah, that feels right." And people argue both ways using classical physics until the cows come home.   Since tennis balls don't slide around inside bricks in real life, if someone were to put volume-centre calculation in a guide, readers would question its justification.   So in my opinion the goal is just something that feels consistent and acceptable to the user, but from there I say the developer really has free rein. Averaging, weighting, penetration depths, volume centres... they're all going to give a normal of some kind, and all can feel fine.   Personally, averaging is fine to me for what I have in mind. I don't find value in embedded situations giving different results to their non-embedded equivalents, nor in asking the user to factor in penetration when trying to intuit the direction they're going to jump in. But maybe you do. Perhaps you're imagining a scenario where penetration definitely *will* factor into the user's intuition. And that's cool too. (But on that note, I don't think your volume-centering idea gives anything to the user that the weighting/penetration idea didn't already.)
By all means, if there's some real-life justification I'm missing that could convince a user one way is more "correct", I would be happy to hear it.   The discussion on what to do with normals is welcome; it's making me think. As long as the thread's topic of how to select them also gets closer to closure. And possibly it has now, because you actually seem quite confident about the normal selection process. We describe it with different language (you say Voronoi, I say "hey guys I've got this 7 step diet routine") but it looks like we are thinking the same way.   The image I posted last (the pyramid) actually produces an edge normal, but I think you called it a vertex not because we disagree, but simply because the image isn't clear. I tried to find good angles, but it looks like I failed hehe. Here it is again, from a new angle.   [attachment=30564:Pyramid2.png]   So edge, yeah? Between the purple and blue.   A final one, if you want, about selecting normals:   [attachment=30566:Contrived.png]
  10.   I don't think that makes it a bad idea, because the abrupt changes are no more abrupt than changes in the mesh itself. It seems odd to fear sudden changes in incline while embedded, when they're already fine while not embedded (such as if the sphere in example 1's realistic situation were to travel to the right), if that is simply what the mesh dictates.   I'm not eager to smooth over all non-embedded changes in direction. I prefer the collision response to be as sharp as the mesh. That said, the way you suggest weighting things is a good idea I'd surely use if I feel differently about smoothing later.   To clarify something also: I wouldn't be averaging anything for moving through the mesh. I would be restricting to all valid sliding planes found (i.e., using all selected normals). Averaging was what I suggested for things like jumping off of it.   And just to make it explicit again, the topic is about asking "how to select normals", not "what do I do with normals once selected". I'm not sure if you've noticed, but your suggestion also rejected normals, and we happen to have agreed on which ones. It could be that you're only thinking in terms of face normals, so you have rejected an embedded *edge collision's* normal intuitively. In example 1, we've both ignored the edge intersection (the bottom corner, since the image is in 2D) that the right-hand triangle would register.   That's the kind of thing I'm actually looking for agreement/disagreement with in this thread.   In case that is confusing ("Why would the right-hand triangle register an edge intersection?"), it's because the collision detection algorithm checks triangles one by one.   But those 3 examples are only simple ones. Here is an example that has no face intersections:   [attachment=30559:Pyramid.png]   The orange lines are simply guidelines displaying extrusions of each tri's face along its normal, to show more clearly that the sphere's centre does not project onto any face.
The question continues to be: Which normals would you choose, and is there any problem in the process I wrote for selecting them?       Actually the whole goal of this is to have no such collision resolution trying to put the sphere "on" a surface. If the sphere is embedded, it stays embedded and moves just as comfortably as a non-embedded sphere. Besides, as you mentioned, cheeky displacements like that are just asking for trouble. 
  11. Hmm, after a quick read this seems to be a different thing. Probably because my opening post was rushed, so I didn't spell out exactly what I do and don't have already.   Finding the closest point on any triangle is something I already assume is done. Convex, concave, it doesn't matter.   The task that I think I have solved, and am asking for comments on, is to devise an algorithm that knows which normals to accept, and which normals to ignore, from the set of closest points found in any given embedded situation. An embedded situation can involve multiple tris, and thus multiple collision normals, some of which should be ignored.
  12. Thanks for those terms, I'll look them up. That's really helpful.   Regarding crossing surfaces: part of the process of ignoring back-facing collisions is ignoring the tri completely once the sphere's centre goes behind its surface. So I believe that situation is ok.   I'm not sure why you say convex though... my examples are concave. Perhaps I should have made it clearer in the images. The black lines are facing up... as if they're the ground.   Regarding finding the closest features between a point and a polyhedron, that is something I am already able to do, convex or concave.
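The "ignore the tri once the centre goes behind its surface" test mentioned above is just a signed-distance sign check. A small sketch, with made-up minimal types (`n` is assumed to be the tri's front-facing unit normal):

```cpp
struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// True when the sphere's centre lies behind the triangle's plane,
// i.e. the whole triangle should be skipped as a back-facing situation.
// p0 is any vertex of the triangle; n its front-facing unit normal.
bool isBackFacing(const Vec3& centre, const Vec3& n, const Vec3& p0)
{
    Vec3 toCentre{centre.x - p0.x, centre.y - p0.y, centre.z - p0.z};
    return dot(n, toCentre) < 0.0f;
}
```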
  13. I plan to write a new guide to sphere-mesh collision handling, since I haven't been able to find anything that doesn't come with a few problems. (The next best thing is Fauerby's guide at http://www.peroxide.dk/papers/collision/collision.pdf ).   Part of the approach I'm taking is to support one-way collisions. That is, to ignore back-facing collisions not just for a performance gain, but for practical application as well. The desired result is for the sphere to be able to find itself embedded inside a triangle and still produce "expected" motion and collisions. But not just one triangle; many triangles, at the same time, in perhaps complicated ways.   By "expected" motion, I mean that it slides only in ways that make sense, it doesn't fall through, and if required to push away from the collision, it pushes away along 1 overall normal that also makes sense. (For example, the user might simply press jump.)   Some intersections might be in front of a tri's face, while some are off to the side, while some triangles register the sphere as closest to one edge, while others find it closest to a vertex shared by earlier triangles, etc etc etc., all at once.   Much words. Such confuse.    I spent entire minutes putting together this example of 3 relatively simple embedded situations. On the left is an embedded situation; on the right is a similar realistic situation that my brain thinks matches the embedded one. The goal is for the sphere in the embedded situation to behave as if it were in the realistic situation. These 3 examples are okay demonstrations of this idea, I believe.   [attachment=30539:Embedded situations 2.png]   Bear in mind these are just 2D; 3D situations must be handled as well.     I believe I have found a process to get a correct set of normals from any situation at all. But mathematical proofs are tough, and I wouldn't know how to begin unit testing this. 
So instead, I'll describe my solution/algorithm and challenge any readers who might be really keen to dream up an embedded situation that breaks it. Hopefully someone will find holes in someone else's ideas. You can use as many triangles as you want. If it's not obvious, specify which side is the front of a tri.

My solution. Things we know:
* Sphere's centre
* Sphere's radius
* Position of every vertex of every triangle
* Face intersection == the sphere's centre projects onto the face of a tri that it intersects.
* Edge intersection == the sphere's centre projects onto an edge of a tri that it intersects, but doesn't project onto the face of that tri.
* Vertex intersection == the sphere's centre overlaps a vertex of a tri that it intersects, but doesn't project onto the face of that tri, nor either of that vertex's two edges.
* Intersection point == closest point on any given intersected triangle to the sphere's centre
* Collision normal (per intersection type) == direction from intersection point to the sphere's centre

[attachment=30540:Intersections.png]

Here's that algorithm:
1. Test every triangle to see if it is intersecting the sphere. If it isn't, discard it. Ignore any back-facing situations (sphere's centre is behind triangle's plane).
2. If a triangle is intersecting the sphere, determine whether it is a face intersection, edge intersection, or vertex intersection.
3. Face intersection? Add its collision normal to "the list" of collision normals discovered so far.
4. Edge intersection? If that edge is not involved in any face intersections, add its collision normal to the list. Else, ignore it.
5. Vertex intersection? If that vertex is not involved in any edge intersections, nor face intersections, add its collision normal to the list. Else, ignore it.
6. To slide the sphere with a new velocity, remove from that velocity any vector component that goes into any collision normal in the list. Then slide.
7. Or, to push away from the embedded position, average all the normals' directions and use that as an overall collision normal.

To help the brain think in 3D, recognise that this image is showing 1 face intersection, 2 edge intersections, and 1 vertex intersection. In this example, only the face intersection's normal would be used, not just because it already involves all intersected edges and vertices, but also because the sphere's centre is behind the other 3 triangles' planes.

[attachment=30541:4 ways.png]
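The sliding step of the algorithm described above can be sketched in a few lines (minimal made-up Vec3; normals assumed unit length). One hedge: a single pass like this can, in tight corner cases, push the velocity back into a plane handled earlier, so a robust version might iterate until no normal in the list is violated:

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// For each collision normal the velocity moves into, remove the
// component of the velocity along that normal (clip it to the plane).
Vec3 slideVelocity(Vec3 v, const std::vector<Vec3>& normals)
{
    for (const Vec3& n : normals) {
        float into = dot(v, n);
        if (into < 0.0f) {            // moving into this sliding plane
            v.x -= into * n.x;
            v.y -= into * n.y;
            v.z -= into * n.z;
        }
    }
    return v;
}
```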
  14. About the only thing you'll find that describes how to do it (mostly) is the peroxide.dk link provided above. However, it does have a few errors. Depending on the geometry you use, you might not run into them. I've actually been writing a new (and hopefully improved) guide to what you want, but it won't be ready for a couple of weeks still, I'd say. So for now, Fauerby's guide (the peroxide link) is the best I've been able to find. I don't really like recommending it because of a few misunderstandings it creates, but yeah, it's the best I've seen, so he deserves credit for that.   If you do use it, things to look out for:
* While it does ignore back-facing collisions, it doesn't actually handle back-facing entering of triangles. It only ignores back-facing collisions as a performance boost, and *not* for practical application. If you actually apply it practically and push your sphere into a triangle from its back face and then move it around, the resulting behaviour is not something you'll want.
* If you intend to know the character's resulting velocity at the end of each frame, that code won't let you (the velocity vector isn't actually velocity, it's displacement).
* You might notice the logic for padding a sphere away from a triangle it has collided with isn't quite right. Leave it as is.
* You might notice the sliding plane isn't calculated correctly when the sphere gets too close to a triangle. Leave it as is. These last 2 issues actually combine to dodge a problem.
The document includes code at the bottom. You definitely can whack that code straight into your project and it will work well enough. Just do not try to use one-way collisions at all, and if you want to keep a record of inertia, you'll have to find another way.   I believe I know the process inside out, so I'll answer questions if I see them.
  15. Something else annoying!   http://arxiv.org/ftp/arxiv/papers/1211/1211.0059.pdf   It's a paper that declares its intent to look at areas of Fauerby's code that aren't robust. It's presented formally, but I get the impression it was written for academic assessment.   One problem is on page 4, where it says:   Jeff Linahan goes on from there to state that when the sphere meets its 2nd plane during a frame, it loses a 2nd degree of freedom, only allowed to travel along the crease that the 2 planes form. Then if it hits a 3rd plane, it has lost 3 degrees of freedom and, therefore, all motion entirely. Linahan uses that chain of reasoning to limit the number of recursions to 3, instead of Fauerby's 5 (which is already just an arbitrary safe number), and then to claim that his changes result in better performance.   And it's just bonkers.   It's not the performance that I'm concerned with, either; it's that Linahan is up and redefining the function's job, then working incorrectly from his redefinition. Simply, Linahan believes the quote above. He believes the sphere's motion has to be constrained to the plane of any triangle it touches. His reasoning why is given in this image:   [attachment=30419:restrict.png] The sphere's velocity is pushing it downwards, but as a result of the angled plane, it is sliding along the purple line. As soon as it reaches the lower plane, the flat one, it "should" stop completely.   This does make some sense. It simply says the sphere should only ever move in response to the original velocity direction (the green line), and never the redirected velocity it changes to along its journey. So in the pic above, as soon as it hits that flat plane, it obviously has no sideways velocity, so it should stop. Fair enough.   But this is as far as Linahan looks. From that example alone he declares that the sphere's motion is restricted to the first plane it was moving along. Why? No reason! Just one example and we're good to go!
So what if that flat plane had been replaced by an angled plane of only slightly weaker incline than the first? Of course the sphere shouldn't stick to the first plane. It's just silly. And from there, the whole 3-degrees-of-freedom thing is simply wrong. Sure, if the sphere's motion *was* restricted to the planes it touches, it would fit. But it isn't.   What bothers me more with Linahan's paper is that despite its claim to focus on robustness, it misses the 2 bona fide mistakes in Fauerby's method. It misses the problem of the sphere padding itself away from a collision *point*, instead of the collision *triangle*; plus, despite all Linahan's focussing on the sliding plane calculations, it misses the fact that Fauerby quite simply got them wrong any time the sphere is too close to the collision point.   Linahan also misses the most obvious not-a-robust-use-of-floats example in the whole thing. On page 38 of Fauerby's paper, we have this: if (normalDotVelocity == 0.0f) { ... This is a check to see if the sphere is pushing into the plane at all. If the dot product of the velocity and the plane's normal is zero, the sphere is moving parallel to the plane, so no collision is possible. And anyone can see the problem here, especially in the context of "numerical robustness". Float comparison. Again it's just silly; silly that this is missed. If the sphere travels parallel to the plane, in real mathematics that dot product will always be zero. In the ugly world of float comparisons, that dot product will never be exactly zero. Always a tiny bit over or a bit under, positive or negative. It's the last thing in all the code you'd want to rely on (I've tested this; it definitely is borked), so why does Fauerby's code get away with it? Because whenever the comparison fails, it just checks for a collision as always anyway.
And as always, if a collision is found, it accidentally slips a little closer to the triangle, and as always, if it gets too close to the tri, the sliding plane accidentally shunts the sphere back up and away from it.    In other words, because it literally doesn't matter.   Anyway, back to Linahan's paper. Linahan's own conclusion says it all really.   They solved something that wasn't broken, missed the things that were, and wrapped it all up with "Hmm, gets stuck on edges often, doesn't it?"     So yeah. For all its formality and initial appearance of focus on robustness, Linahan's paper appears to be a rushed job.
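For what it's worth, the usual fix (if one bothered, given that here it happens not to matter) is a tolerance instead of exact equality. The threshold below is illustrative only and would need scaling to the units in play:

```cpp
#include <cmath>

// Replacement for `if (normalDotVelocity == 0.0f)`: treat near-parallel
// motion as parallel. EPSILON is a made-up value; with a unit-length
// plane normal, it bounds the speed component into the plane.
bool movingParallelToPlane(float normalDotVelocity)
{
    const float EPSILON = 1e-6f;
    return std::fabs(normalDotVelocity) < EPSILON;
}
```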