About Defend

  1. Looks really good, thanks.
  2. I want to be able to model one stick of a nunchuk being moved about freely under user control, with the other stick reacting/swinging in a way that's believable. I'm not even sure what to google; my searches keep running into talk about martial arts in general... and the Nintendo Wii. Could it be modelled with a single pendulum? A single pendulum and a joint, maybe? These are things I'd have to learn, and ultimately the goal is something that looks clean and feels convincing "enough", so any tips are welcome, including suggestions on how to smooth things out should pure equations produce jittery results. Many thanks
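A driven pendulum is roughly the shape of this problem: treat the free stick as a pendulum whose pivot is the held stick's tip, and let the pivot's acceleration drive the swing. A minimal sketch in C++, with damping added to keep the result from looking jittery (all names and constants here are illustrative, not from any particular engine or reference):

```cpp
#include <cassert>
#include <cmath>

// Hypothetical sketch: the free stick as a single pendulum whose pivot
// is the user-controlled stick's tip.
struct PendulumState {
    double angle;   // radians from straight down
    double angVel;  // radians per second
};

// Semi-implicit Euler step. pivotAccelX is the horizontal acceleration of
// the held stick's tip; it is what drives the swing. The damping term
// bleeds off energy so the motion settles instead of jittering forever.
void stepPendulum(PendulumState& s, double pivotAccelX, double dt,
                  double length = 0.15, double gravity = 9.81,
                  double damping = 2.0) {
    double angAccel = (-gravity * std::sin(s.angle)       // gravity restoring
                       - pivotAccelX * std::cos(s.angle)  // pivot being yanked
                      ) / length
                      - damping * s.angVel;               // smoothing
    s.angVel += angAccel * dt;   // integrate velocity first (semi-implicit)
    s.angle  += s.angVel * dt;   // then position, for better stability
}
```

Swinging the handle hard corresponds to feeding large pivotAccelX values in for a few frames; with zero input the free stick just settles back under gravity.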
  3. I actually disliked that 'remove degrees of freedom' approach in Linahan's article. I disliked the article on the whole, finding almost as many issues with it as I found with Fauerby's. I rambled with displeasure here: http://www.gamedev.net/topic/674540-annoying-things-about-fauerbys-and-nettles-sphere-mesh-collision-detection-guides/

Since you say it was used in Doom, perhaps I simply didn't recognise the proper application for the restricting-degrees-of-freedom approach. Perhaps what I had in mind was something it didn't cater to. I can't remember why I was whining about that particular thing. I'm sure I look stupid in my other thread hehe.

I believe Fauerby's quadratic coefficients for detecting edge intersections are wrong too. They are a straight copy of Schroeder's coefficients (found in the August 2001 issue of GDmag, at http://www.gdcvault.com/gdmag - you're welcome, people on the internet trying to source that). And Schroeder's coefficients are likely wrong, because Akenine-Moller says they are (in Real-Time Rendering), and Akenine-Moller's coefficients are definitely correct.

Since Akenine-Moller used the exact same equations but got different results, and since none of these authors ever shows how they get from equations to coefficients, Schroeder's are most likely wrong, which means Fauerby's are too.

The reason I say Akenine-Moller's coefficients are definitely correct is that Ericson (Real-Time Collision Detection) produces the same coefficients, and while Ericson - like every other author - doesn't show how he got from equations to coefficients, Ericson's approach actually can be proven on paper. I believe the others required Mathematica; at least, Schroeder did. But Ericson's coefficients are definitely legit, and Akenine-Moller's are the same, even though Ericson uses a different approach (no simultaneous equations, no dot-product-equals-zero trick).

So: Nettle's guide = big mistakes, Fauerby's guide = several mistakes, Schroeder's guide = wrong coefficients.

Interestingly, those three authors are the three who wrote guides. Ericson and Akenine-Moller didn't write guides, they wrote books. Says something, doesn't it? BUT having said that, Ericson also totally supports Nettle's completely incorrect approach, now listed as such on his website. Ericson also oddly describes his approach as "the one suggested in Schroeder with corrections in Akenine", when it simply isn't. That difference is the only reason I was able to work it out and confirm it as correct.
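Whichever author's coefficients you end up trusting, they all feed the same final step: solve A*t^2 + B*t + C = 0 for the time of contact and keep the smallest root in range. A sketch of that shared helper (it mirrors the getLowestRoot() routine in Fauerby's paper, though the exact signature here is my own):

```cpp
#include <cassert>
#include <cmath>
#include <utility>

// Solve a*t^2 + b*t + c = 0 and return the smallest root in (0, maxT).
// Assumes a != 0, which holds for the sweep tests in question whenever
// the relative motion isn't degenerate.
bool lowestRootInRange(double a, double b, double c, double maxT,
                       double& root) {
    double disc = b * b - 4.0 * a * c;
    if (disc < 0.0) return false;          // no real roots: no collision
    double sqrtDisc = std::sqrt(disc);
    double r1 = (-b - sqrtDisc) / (2.0 * a);
    double r2 = (-b + sqrtDisc) / (2.0 * a);
    if (r1 > r2) std::swap(r1, r2);        // ensure r1 <= r2
    if (r1 > 0.0 && r1 < maxT) { root = r1; return true; }
    if (r2 > 0.0 && r2 < maxT) { root = r2; return true; }
    return false;                          // both roots outside the sweep
}
```

So even with correct coefficients, a wrong A, B, or C silently produces a plausible-looking but wrong contact time here, which is why the provenance of those coefficients matters.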
  4. Sorry to reply so late; it's been a busy week moving house, and we had no cooling... so I've been avoiding using this furnace (laptop) for long.

And sorry, but your approach just isn't the right one. It doesn't address several of the things I've said are required -- such as a complete set of sliding plane normals, and normals that change as abruptly as the mesh does -- but I gave it attention anyway because you have been claiming it's definitely "the" accurate way to achieve what the thread asks; accurate, and validated by reality. You've stated that with complete certainty, and even described it as a model upon which approximations should be based.

But it simply isn't validated the way you claim it is. It isn't "the" accurate way to do the thing I'm asking; there isn't a real version of it that we can validate against. Your replies to my challenges on this appear to have had little thought put into them. You can't just pluck out the word "compression" by association and use it to carry your claim. Intersection doesn't appear in collision algorithms because of real-life compression; it's honestly a little disrespectful that you replied with such a shallow response and expected me to take it as an explanation. Intersection in a collision algorithm was never a model for approximation; it is the approximation. We tolerate intersection because it simplifies.

Similarly, you can't just say that since collision exists in real life, any non-real parts of a collision handling algorithm are automatically validated by real life. You have to actually check simulated scenarios against real ones first, and if you had, you would have realised that even for a basic concave corner, reality simply doesn't generate the infinite different collision normals that your approach does. In fact, that non-discrete output you value so much (and understandably) is the biggest giveaway. Reality doesn't give us any interpolation for a ball with a fixed velocity smashing into a concave corner. Collision directions are based on the surfaces' gradients, and the transitions between them are abrupt by definition. Where you dismiss my approach (not saying it's a stable idea, just making a point) with the flower example, it's actually abruptness you're identifying, not jitter. I'm well aware of jitter concerns; your comments make it appear that you've assumed I'm not, but what you identified as a problem is not jitter, it's the intended abruptness.

At this point you might re-explain or redefine what you're suggesting, but ultimately, if you don't put a constraint on how that intersection position comes to be, you've got infinite possible normals from a fixed incoming velocity. That's the continuous yet unreal range of outputs that you value, and that I don't. And if you do put constraints on those intersecting positions, you've missed the topic of the thread: ignored back-face collisions leading to any intersection imaginable.

What a confusing mess, right? It is definitely annoying having to deconstruct what someone means when it's not clearly described, and I'm not going into further detail with the above paragraphs, because I shouldn't be the one having to pull apart quick suggestions before I can tell whether they even apply to the thread's question. That should be checked before those suggestions are typed, and the connection to the question made clear. I still really do not know exactly what application you have in mind that your suggestion applies to. But I am very, very sure that it is something. Your suggestion applies to something, and I imagine that it is indeed accurate at calculating some piece of that thing. But that application is not this application, and an accurate calculation of some piece of that application does not equate to reality validating intersection response. In this thread, I really think you've tried to make the question fit an answer you already have, instead of the other way around.

In case it sheds any light on things for you, I don't use a posteriori collision detection in the application the thread describes. Maybe that helps, maybe not; I'm guessing that's what you're thinking about, but I could be wrong. You're welcome to really clarify what your suggestion applies to, but I know now it's not what I want. If you do, demonstrations, linked sources, or clear mathematical trails would be better than more claims without clear reasoning, because it's that absence of specifics that let us go so long thinking X was talking about Y.

And my approach? I found an example in it I didn't like, so I'm rethinking it.
  5. "That justification is easy to make: In physics, either you do it correctly and it will work, or it will fail (jitter, tunneling) - there is not much space in between."

That's why I am pointing out that this isn't using physics any more than the moving portal uses physics. The reason there's next to no confusion between working and failing when simulating actual physics is that we simply have reality to give us validation. We don't have that here. Reality won't validate any behaviour you might give to a bowling ball and a diamond slab that occupy the same space at the same time, any more than it will validate ways to push a box through a moving portal.

So what validation are you using? You are claiming that one particular behaviour (defining a collision normal the way you do) is validated; is simply the "accurate" behaviour. What causes you to believe this?

Meanwhile, I think it's easy to argue that the behaviour goals for unreal situations are at the developer's whim. That's why I'm skeptical. I need to see something that shows why this situation shouldn't fall into the same category as moving portals, jelly blocks in Minecraft, or surfing in Team Fortress 2: the category of behaviour goals for unreal situations that the developer invents because that's what the developer likes.

---

Regarding the image, thanks for giving it a look. I see it hitting 1 "edge" (the top corner) and 1 face (the bottom line). You have called this less satisfying, but I am not sure whether you're saying that because of that example, or just as your general opinion. In that example I am perfectly satisfied with that result. That's what the OP was all about: find any example that doesn't give a satisfying set of normals.

Importantly, bear in mind that we need all those individual normals for restricting sliding motion; that is unavoidable. Regardless of whether we desire a collision normal or not, we aren't saved from having to calculate those sliding planes; we absolutely need them. I believe, then, that the only thing your approach saves us from is 3 lines of code that find a simple average. Compare those 3 lines to your code, altered for 3D, then altered further to handle completely arbitrary shapes instead of cubes.

For all that extra complication, if one is to claim that this is in fact the way things should be done, then something needs to validate that claim that isn't just personal preference.

These are my personal-preference factors for the averaging approach (unless my approach breaks somewhere):
* The collision normal comes from the surfaces involved, not the depth.
* I don't want penetration slightly altering collision normals.
* The player will know that the planes along which they feel movement being blocked are going to define their jump, and they can predict this jump by feel, without having to look at the ground.
* Collision normals match those of the equivalent non-embedded situation (as in the 3 examples shown earlier).
* A character embedded deep in the ground, but only a tiny bit into the wall, will kick off the wall as effectively as a character only touching the ground and only touching the wall.

Both our approaches are consistent, predictable, and have something that the player can pick up intuitively, but I wouldn't call yours or mine "the" way it is supposed to be done... unless I hear of something that supports that claim. Currently I would be comfortable writing in a guide: "If you want depth to affect your collision normal as well, here's an approach for that."
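For concreteness, those "3 lines of code that find a simple average" amount to something like this (Vec3 is a stand-in for whatever vector type is in use; the function name is mine):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Minimal stand-in vector type for the sketch.
struct Vec3 { double x, y, z; };

// The averaging approach argued for above: the overall push-off normal is
// just the normalised average of the selected surface normals, so the
// penetration depth never skews it.
Vec3 averageNormal(const std::vector<Vec3>& normals) {
    Vec3 sum{0.0, 0.0, 0.0};
    for (const Vec3& n : normals) {
        sum.x += n.x; sum.y += n.y; sum.z += n.z;
    }
    double len = std::sqrt(sum.x * sum.x + sum.y * sum.y + sum.z * sum.z);
    // Opposed normals can cancel each other out; callers must decide what a
    // degenerate (zero) result should mean for them.
    if (len < 1e-9) return Vec3{0.0, 0.0, 0.0};
    return Vec3{sum.x / len, sum.y / len, sum.z / len};
}
```

Note that because the result is normalised at the end, a normal contributed by a deeply embedded triangle weighs exactly as much as one contributed by a barely touched triangle, which is precisely the "kick off the wall" property listed above.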
  6. I don't want to just poop on that idea, but I think that calculating the centres of possibly quite complex shapes, multiple times per frame, borders on overkill. It's not that it's complicated (it is, though); it's that it's complicated without something in reality to justify the approach. If it were based on a real-life physics situation then I would embrace any complications required to do things the "correct" way, but I don't believe there is one correct way, any more than there is a correct way to solve the moving portal problem.

http://9gag.com/gag/adj90GB/moving-portal-problem

If the developers had done it one way, players would have said "Yeah, that feels right." If the developers had done it the other way, players would have said "Yeah, that feels right." And people argue both ways using classical physics until the cows come home.

Since tennis balls don't slide around inside bricks in real life, if someone were to put volume-centre calculation in a guide, readers would question its justification.

So in my opinion the goal is just something that feels consistent and acceptable to the user, but from there I say the developer really has free rein. Averaging, weighting, penetration depths, volume centres... they're all going to give a normal of some kind, and all can feel fine.

Personally, averaging is fine for what I have in mind. I don't find value in embedded situations giving different results from their non-embedded equivalents, nor in asking the user to factor in penetration when trying to intuit the direction they're going to jump in. But maybe you do. Perhaps you're imagining a scenario where penetration definitely *will* factor into the user's intuition. And that's cool too. (But on that note, I don't think your volume-centring idea gives anything to the user that the weighting/penetration idea didn't already.)

By all means, if there's some real-life justification I'm missing that could convince a user one way is more "correct", I would be happy to hear it.

The discussion on what to do with normals is welcome; it's making me think. As long as the thread's topic of how to select them also gets closer to closure. And possibly it has now, because you actually seem quite confident about the normal selection process. We describe it with different language (you say Voronoi, I say "hey guys I've got this 7 step diet routine"), but it looks like we are thinking the same way.

The image I posted last (the pyramid) actually produces an edge normal, but I think you called it a vertex not because we disagree, but simply because the image isn't clear. I tried to find good angles, but it looks like I failed hehe. Here it is again, from a new angle.

[attachment=30564:Pyramid2.png]

So edge, yeah? Between the purple and blue.

A final one, if you want, about selecting normals:

[attachment=30566:Contrived.png]
  7. I don't think that makes it a bad idea, because the abrupt changes are no more abrupt than changes in the mesh itself. It seems odd to fear sudden changes in incline while embedded when they're already fine while not embedded (such as if the sphere in example 1's realistic situation were to travel to the right), if that is simply what the mesh dictates.

I'm not eager to smooth over all non-embedded changes in direction. I prefer the collision response to be as sharp as the mesh. That said, the way you suggest weighting things is a good idea I'd surely use if I feel differently about smoothing later.

To clarify something also: I wouldn't be averaging anything for moving through the mesh. I would be restricting to all valid sliding planes found (i.e., using all selected normals). Averaging was what I suggested for things like jumping off of it.

And just making it explicit again, the topic is about asking "how to select normals", not "what do I do with normals once selected". I'm not sure if you've noticed, but your suggestion also rejected normals, and we happen to have agreed on which ones. It could be that you're only thinking in terms of face normals, so you have rejected an embedded *edge collision's* normal intuitively. In Example 1 we've both ignored the edge intersection (the bottom corner, since the image is in 2D) that the right-hand triangle would register.

That's the kind of thing I'm actually looking for agreement/disagreement with in this thread.

In case that is confusing ("Why would the right-hand triangle register an edge intersection?"), it's because the collision detection algorithm checks triangles one by one.

But those 3 examples are only simple ones. Here is an example that has no face intersections:

[attachment=30559:Pyramid.png]

The orange lines are simply guidelines displaying extrusions of each tri's face along its normal, to show more clearly that the sphere's centre does not project onto any face.

The question continues to be: Which normals would you choose, and is there any problem in the process I wrote for selecting them?

Actually, the whole goal of this is to have no such collision resolution trying to put the sphere "on" a surface. If the sphere is embedded, it stays embedded and moves just as comfortably as a non-embedded sphere. Besides, as you mentioned, cheeky displacements like that are just asking for trouble.
  8. Hmm, after a quick read this seems to be a different thing. Probably because my opening post was rushed, so I didn't spell out exactly what I do and don't have already.

Finding the closest point on any triangle is something I already assume is done. Convex, concave, it doesn't matter.

The task that I think I have solved, and am asking for comments on, is to devise an algorithm that knows which normals to accept, and which to ignore, from the set of closest points found in any given embedded situation. An embedded situation can involve multiple tris, and thus multiple collision normals, some of which should be ignored.
  9. Thanks for those terms, I'll look them up. That's really helpful.

Regarding crossing surfaces: part of the process of ignoring back-facing collisions is ignoring the tri completely once the sphere's centre goes behind its surface. So I believe that situation is ok.

I'm not sure why you say convex though... my examples are concave. Perhaps I should have made it clearer in the images. The black lines are facing up... as if they're the ground.

Regarding finding the closest features between a point and a polyhedron, that is something I am already able to do, convex or concave.
  10. I plan to write a new guide to sphere-mesh collision handling, since I haven't been able to find anything that doesn't come with a few problems. (The next best thing is Fauerby's guide at http://www.peroxide.dk/papers/collision/collision.pdf .)

Part of the approach I'm taking is to support one-way collisions. That is, to ignore back-facing collisions not just for a performance gain, but for practical application as well. The desired result is for the sphere to be able to find itself embedded inside a triangle and still produce "expected" motion and collisions. But not just one triangle; many triangles, at the same time, in perhaps complicated ways.

By "expected" motion, I mean that it slides only in ways that make sense, it doesn't fall through, and if required to push away from the collision, it pushes away along 1 overall normal that also makes sense. (For example, the user might simply press jump.)

Some intersections might be in front of a tri's face, while some are off to the side, while some triangles register the sphere as closest to one edge, while others find it closest to a vertex shared by earlier triangles, etc. etc. etc., all at once.

Much words. Such confuse.

I spent entire minutes putting together this example of 3 relatively simple embedded situations. On the left is an embedded situation; on the right is a similar realistic situation that my brain thinks matches the embedded one. The goal is for the sphere in the embedded situation to behave as if it were in the realistic situation. These 3 examples are okay demonstrations of this idea, I believe.

[attachment=30539:Embedded situations 2.png]

Bear in mind these are just 2D; 3D situations must be handled as well.

I believe I have found a process to get a correct set of normals from any situation at all. But mathematical proofs are tough, and I wouldn't know how to begin unit testing this. So instead, I'll describe my solution/algorithm and challenge any readers who might be really keen to dream up an embedded situation that breaks it.

Hopefully someone will find holes in someone else's ideas. You can use as many triangles as you want. If it's not obvious, specify which side is the front of a tri.

My solution. Things we know:
* Sphere's centre
* Sphere's radius
* Position of every vertex of every triangle
* Face intersection == the sphere's centre projects onto the face of a tri that it intersects
* Edge intersection == the sphere's centre projects onto an edge of a tri that it intersects, but doesn't project onto the face of that tri
* Vertex intersection == the sphere's centre overlaps a vertex of a tri that it intersects, but doesn't project onto the face of that tri, nor onto either of that vertex's two edges
* Intersection point == closest point on any given intersected triangle to the sphere's centre
* Collision normal (per intersection type) == direction from the intersection point to the sphere's centre

[attachment=30540:Intersections.png]

Here's the algorithm:

1. Test every triangle to see if it is intersecting the sphere. If it isn't, discard it. Ignore any back-facing situations (sphere's centre is behind the triangle's plane).
2. If a triangle is intersecting the sphere, determine whether it is a face intersection, edge intersection, or vertex intersection.
3. Face intersection? Add its collision normal to "the list" of collision normals discovered so far.
4. Edge intersection? If that edge is not involved in any face intersections, add its collision normal to the list. Else, ignore it.
5. Vertex intersection? If that vertex is not involved in any edge intersections, nor face intersections, add its collision normal to the list. Else, ignore it.
6. To slide the sphere with a new velocity, remove from that velocity any vector component that goes into any collision normal in the list. Then slide.
7. Or, to push away from the embedded position, average all the normals' directions and use that as an overall collision normal.

To help the brain think in 3D, recognise that this image shows 1 face intersection, 2 edge intersections, and 1 vertex intersection. In this example, only the face intersection's normal would be used, not just because it already involves all intersected edges and vertices, but also because the sphere's centre is behind the other 3 triangles' planes.

[attachment=30541:4 ways.png]
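A sketch of the accept/reject rules (steps 3-5) in code, assuming each intersected triangle has already been classified and tagged with mesh-wide feature ids; all struct and function names here are my own, not from the post:

```cpp
#include <array>
#include <cassert>
#include <set>
#include <vector>

enum class HitType { Face, Edge, Vertex };

// One intersected, front-facing triangle, pre-classified (step 2 done).
// Ids are mesh-wide feature ids so shared edges/vertices compare equal.
struct TriHit {
    HitType type;
    std::array<int, 3> edgeIds;               // this tri's three edges
    std::array<int, 3> vertIds;               // this tri's three vertices
    int hitEdge = -1;                         // intersected edge (Edge hits)
    std::array<int, 2> hitEdgeVerts{-1, -1};  // that edge's endpoints
    int hitVert = -1;                         // intersected vertex (Vertex hits)
    int normalIdx = -1;                       // index of this hit's normal
};

// Keep face normals; keep edge normals only when the edge isn't part of a
// face-intersected tri; keep vertex normals only when the vertex isn't
// involved in any face or edge intersection.
std::vector<int> selectNormals(const std::vector<TriHit>& hits) {
    std::set<int> faceEdges, faceVerts, edgeVerts;
    for (const TriHit& h : hits) {
        if (h.type == HitType::Face) {
            faceEdges.insert(h.edgeIds.begin(), h.edgeIds.end());
            faceVerts.insert(h.vertIds.begin(), h.vertIds.end());
        } else if (h.type == HitType::Edge) {
            edgeVerts.insert(h.hitEdgeVerts.begin(), h.hitEdgeVerts.end());
        }
    }
    std::vector<int> picked;
    for (const TriHit& h : hits) {
        switch (h.type) {
        case HitType::Face:
            picked.push_back(h.normalIdx);
            break;
        case HitType::Edge:
            if (!faceEdges.count(h.hitEdge)) picked.push_back(h.normalIdx);
            break;
        case HitType::Vertex:
            if (!faceVerts.count(h.hitVert) && !edgeVerts.count(h.hitVert))
                picked.push_back(h.normalIdx);
            break;
        }
    }
    return picked;
}
```

One judgment call baked in here: a vertex is treated as "involved in an edge intersection" whenever it is an endpoint of any edge-intersected edge, even one that was itself rejected in step 4; in that case the vertex is covered by the face intersection anyway, so the result is the same.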
  11. About the only thing you'll find that describes how to do it (mostly) is the peroxide.dk link provided above. However, it does have a few errors; depending on the geometry you use, you might not run into them. I've actually been writing a new (and hopefully improved) guide to what you want, but it won't be ready for a couple of weeks still, I'd say. So for now, Fauerby's guide (the peroxide link) is the best I've been able to find. I don't really like recommending it because of a few misunderstandings it creates, but yeah, it's the best I've seen, so he deserves credit for that.

If you do use it, things to look out for:

* While it does ignore back-facing collisions, it doesn't actually handle back-facing entering of triangles. It only ignores back-facing collisions as a performance boost, and *not* for practical application. If you actually apply it practically and push your sphere into a triangle from its back face and then move it around, the resulting behaviour is not something you'll want.
* If you intend to know the character's resulting velocity at the end of each frame, that code won't let you (the velocity vector isn't actually velocity, it's displacement).
* You might notice the logic for padding a sphere away from a triangle it has collided with isn't quite right. Leave it as is.
* You might notice the sliding plane isn't calculated correctly when the sphere gets too close to a triangle. Leave it as is. These last 2 issues actually combine to dodge a problem.

The document includes code at the bottom. You definitely can whack that code straight into your project and it will work well enough. Just do not try to use one-way collisions at all, and if you want to keep a record of inertia, you'll have to find another way.

I believe I know the process inside out, so I'll answer questions if I see them.
  12. Something else annoying!

http://arxiv.org/ftp/arxiv/papers/1211/1211.0059.pdf

It's a paper that declares its intent to look at areas of Fauerby's code that aren't robust. It's presented formally, but I get the impression it was written for academic assessment.

One problem is on page 4, where Jeff Linahan states that when the sphere meets its 2nd plane during a frame, it loses a 2nd degree of freedom, only allowed to travel along the crease that the 2 planes form. Then if it hits a 3rd plane, it loses a 3rd degree of freedom and therefore all motion entirely. Linahan uses that chain of reasoning to limit the number of recursions to 3, instead of Fauerby's 5 (which is already just an arbitrary safe number), and to then claim that his changes therefore result in better performance.

And it's just bonkers.

It's not the performance that I'm concerned with, either; it's that Linahan is up and redefining the function's job, then working incorrectly from his redefinition. Simply, Linahan believes the sphere's motion has to be constrained to the plane of any triangle it touches. His reasoning why is given in this image:

[attachment=30419:restrict.png]

The sphere's velocity is pushing it downwards, but as a result of the angled plane, it is sliding along the purple line. As soon as it reaches the lower plane, the flat one, it "should" stop completely.

This does make some sense. It simply says the sphere should only ever move in response to the original velocity direction (the green line), and never the redirected velocity it changes to along its journey. So in the pic above, as soon as it hits that flat plane, it obviously has no sideways velocity, so it should stop. Fair enough.

But this is as far as Linahan looks. From that example alone he declares that the sphere's motion is restricted to the first plane it was moving along. Why? No reason! Just one example and we're good to go! So what if that flat plane had been replaced by an angled plane of only slightly weaker incline than the first? Of course the sphere shouldn't stick to the first plane. It's just silly. And from there, the whole 3-degrees-of-freedom thing is simply wrong. Sure, if the sphere's motion *was* restricted to the planes it touches, it would fit. But it isn't.

What bothers me more about Linahan's paper is that despite its claim to focus on robustness, it misses the 2 bona fide mistakes in Fauerby's method. It misses the problem of the sphere padding itself away from a collision *point* instead of the collision *triangle*, and despite all Linahan's focus on the sliding plane calculations, it misses the fact that Fauerby quite simply got them wrong any time the sphere is too close to the collision point.

Linahan also misses the most obvious not-a-robust-use-of-floats example in the whole thing. On page 38 of Fauerby's paper, we have this:

if (normalDotVelocity == 0.0f) { ...

This is a check to see if the sphere is pushing into the plane at all. If the dot product of the velocity and the plane's normal is zero, the sphere is moving parallel to the plane, so no collision is possible. And anyone can see the problem here, especially in the context of "numerical robustness": float comparison. Again it's just silly; silly that this is missed. If the sphere travels parallel to the plane, in real-life real mathematics that dot product will always be zero. In the ugly world of float comparisons, that dot product will never be exactly zero; it's always a tiny bit over or under, positive or negative. It's the last thing in all the code you'd want to rely on (I've tested this, it definitely is borked), so why does Fauerby's code get away with it? Because whenever the comparison fails, it just checks for a collision as always anyway. And as always, if a collision is found, it accidentally slips a little closer to the triangle, and as always, if it gets too close to the tri, the sliding plane accidentally shunts the sphere back up and away from it.

In other words, because it literally doesn't matter.

Anyway, back to Linahan's paper. Linahan's own conclusion says it all, really. They solved something that wasn't broken, missed the things that were, and wrapped it all up with "Hmm, gets stuck on edges often, doesn't it?"

So yeah. For all its formality and initial appearance of focus on robustness, Linahan's paper appears to be a rushed job.
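For reference, the robust version of that check compares against a small epsilon rather than testing a float for exact zero (the epsilon value here is a typical choice, not something Fauerby's paper prescribes):

```cpp
#include <cassert>
#include <cmath>

// "Parallel enough" counts as parallel: instead of normalDotVelocity == 0.0f,
// accept anything within a small tolerance of zero. The tolerance is an
// illustrative value; the right size depends on your units and scale.
bool movingParallelToPlane(double normalDotVelocity, double eps = 1e-6) {
    return std::fabs(normalDotVelocity) < eps;
}
```

With this check in place, a sphere that is mathematically parallel to a plane but numerically a hair off still takes the "no collision possible" early-out, rather than falling through to the full test.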
  13. I have honestly always gone the other way with this advice, seemingly opposing the rest of the planet. But it worked for me.

I think the important question is how you look at coding: is the idea of programming itself something you enjoy? Or is writing code something you see more as a means to an end, where it's really about making games?

Since your post sounds more like the latter, then yes, the advice others have offered here is probably what I'd suggest too.

So, just sharing my personal take here: I jumped straight into C++, and I have to tell you I am always happy that I did. I am always thankful that C++ is "the thing I got used to", rather than "the thing that's more complex and irritating than the thing I got used to." However, yes, disclaimer: I was just interested in/excited by any coding at all, and learning "the hard one" was itself the motivation for me. I only carry on about this perspective because, having learnt C++ first, every other language I've picked up since has been a real doddle.

(Nothing wrong with having Python under your belt either; it definitely has its advantages over C++, especially when you want to throw things together. But from your OP, I'd agree C# is probably what you want.)
14. Just felt like adding a reply to mention something else I just remembered.

Fauerby's approach also doesn't use the sphere's velocity to check for collisions, but rather its translation. To put it another way, the function doesn't ask "How fast, for how long?", but simply, "How far?". The TL;DR of this is that it means you can't keep track of inertia.

It's not obvious at first, because he uses a variable named "vel" in the collideAndSlide() function and a variable named "velocity" in the CollisionPacket object. The checkTriangle() function even does some calculations with time intervals, so it looks like it really is using velocities. With all the t0 and t1 floating around his code, I was confused for quite a while. But ultimately you can see that none of his functions, nor the CollisionPacket structure, receive a time value to work with, even though the sphere's final position is being determined from the "velocity" variables. By the time any of Fauerby's code begins, the sphere's velocity vector has already been converted to a distance vector. I don't think that's made very clear to the reader.

The calculations in the checkTriangle() function are actually using a parameterised equation for the sphere's position. Page 11 has this:

    C(t) = basePoint + t * velocity,  t ∈ [0, 1]

Which is the same as saying:

    end position = start position + t * max travel distance

When t == 0, end position == start position. When t == 1, end position == start position + max travel distance. The variable t isn't time; it's just the equation's parameter, which must fall between 0 and 1.

Or you can look at page 46, where a collision-free frame returns:

    // If no collision we just move along the velocity
    if (collisionPackage->foundCollision == false) {
        return pos + vel;
    }

As can be seen, "vel" is simply added straight to "pos".

What's the problem, though? Asking "How fast, for how long?" is pretty much the same thing as "How far?". And it seems simpler, since it uses fewer variables.
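To make that concrete, here's roughly what the caller must have already done before any of Fauerby's code runs. This is a sketch with my own names (`Vec3`, `displacementFor`, `positionAt`, `frameDt`); only the C(t) equation itself comes from the paper:

```cpp
struct Vec3 { float x, y, z; };

// Fauerby's "velocity" is really a per-frame displacement: the caller
// has already multiplied the true velocity (units per second) by the
// frame duration before collideAndSlide() is ever called.
Vec3 displacementFor(Vec3 trueVelocity, float frameDt)
{
    return { trueVelocity.x * frameDt,
             trueVelocity.y * frameDt,
             trueVelocity.z * frameDt };
}

// Page 11's parameterised position: C(t) = basePoint + t * velocity,
// with t in [0, 1].  t is the equation's parameter, not a time.
Vec3 positionAt(Vec3 basePoint, Vec3 displacement, float t)
{
    return { basePoint.x + t * displacement.x,
             basePoint.y + t * displacement.y,
             basePoint.z + t * displacement.z };
}
```

Once the conversion in `displacementFor()` has happened, every "t" downstream ranges over distance fractions, and the actual frame duration is gone.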
But imagine the final collision in a frame...

Say the sphere only has a small distance left to travel; maybe just 5% of the total distance it moved that frame. The sphere hits a wall at some angle, adjusts its end position to be some distance along that wall, and then never collides again (in that frame). Call this Case A.

Now imagine the same collision, with the same velocity, but happening half a frame earlier, so the sphere still has somewhere around 50% of its total distance (for that frame) left to travel. As before, it hits the wall at the same angle, adjusts its end position to be some distance along that wall, and never collides again (in that frame). Call this Case B.

Obviously, the end position for Case B is going to be much further along the wall than the end position for Case A. That makes sense; Case B hit earlier, so of course it moved further.

But (ignoring friction) Case A and Case B both still finish with the same velocity. Fauerby's code has no way of knowing this. If the final deflected distance moved in a frame is very small, you can't tell whether that means the final velocity was very small, or the final period of movement was very small. You could muck around with checking those deflection angles, or comparing final distances to initial distances, or some other set of calculations, but these would all be a much bigger mess than just using velocity in the first place.

Or you could make it the caller's job instead, comparing the direction of motion at the beginning of a frame to the direction of motion at the end of a frame (using the final sliding plane normal). But this wouldn't know the difference between a gentle, many-collision, 90° turn (preserving much of the velocity) and a sharp, one-collision, 90° turn (preserving absolutely none of it). Two-hit or three-hit collisions in a single frame aren't too uncommon, and this would stuff that up.
So that's why Fauerby's guide also doesn't allow the programmer to keep track of the sphere's inertia. It's not too difficult to fix (if you were to use his code), but you would have to add a time parameter to the CollisionPacket object at the very least, and, I believe, be a bit careful when switching to the parameterised equations. (I don't use them at all in my own code.)
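As a rough illustration of the idea (all names here are mine, not Fauerby's; his CollisionPacket has no time member), carrying the frame duration and the hit parameter lets the caller turn a slid distance back into an actual velocity:

```cpp
struct Vec3 { float x, y, z; };

// If the final collision happened at parameter tHit (0..1 through the
// frame), the sphere slid along the wall for the remaining
// (1 - tHit) * frameDt seconds.  Dividing the slid distance by that
// remaining *time*, rather than by the whole frame, recovers the same
// velocity whether the hit came late (Case A) or early (Case B).
Vec3 recoverVelocity(Vec3 slidDisplacement, float tHit, float frameDt)
{
    float remainingDt = (1.0f - tHit) * frameDt;
    return { slidDisplacement.x / remainingDt,
             slidDisplacement.y / remainingDt,
             slidDisplacement.z / remainingDt };
}
```

A real fix would also need tHit (or the remaining time) to survive the recursive slide calls, which is why the time parameter has to live in the CollisionPacket at minimum.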
15. When I write about Paul Nettle's guide, that will be along the lines of "I think this is wrong, do you guys agree?"

But that's not what I was doing when writing about Fauerby's guide. Your post is kind of taking incorrect guesses at things my post already explained. That's ok; I admit mine is a long post, and it's hard to fully explain the problems with a 48-page guide. But I was stating, not guessing, when I said:

The veryCloseDistance value is not a projection; it's just a constant that always gets applied along the velocity vector only. There is no 'normal-based' version of it. That was one of the first fixes I thought of, but if it were a projection, I can show how it would lead to another error of colliding well before reaching a triangle.

Also, colliding gradually is exactly what the first error (the veryCloseDistance ... finger-infinitely-close-to-your-desk error) causes if the second error (the bogus sliding plane normal) isn't happening as well. I say that from having toyed with different veryCloseDistance values to try to fend off the flaw. It literally makes things feel soggy as you move around.

Thanks for taking the interest, though. I wanted to email the authors, but these things are so old it'd feel like emailing Patrick Stewart about Star Trek. So I vented here.

I'm working on my own version of this collision detection that doesn't use padding at all, and is instead insensitive to embedded collisions. It avoids the two mistakes in Fauerby's guide. Plus, Fauerby's guide always runs vertex sweep tests before edge sweep tests, instead of recognising that edge sweep tests will often make vertex sweep tests redundant (when the whole process is all about avoiding unnecessary tests as much as possible).