About Defend

  1. Thanks all for your replies, especially those links you guys have thought of. _Silence_, I found that article in particular quite interesting. The pointers to simple engines should also be helpful, since while I've asked about guiding structures for writing one, I do practise by recognising the structures of other engines. Yep, exactly the kind of thing I'm wanting to be better at. I've had a glance at all the articles & posts linked. They all get my interest, but I'll prioritise those around engine structure advice. (Regarding the side topic of "Don't make an engine if you want to make a game", the disclaimer I wrote is just to keep such suggestions at bay. They're good suggestions, but there's already a lot of the internet covering it.)
  2. Obligatory disclaimer: yes, this is for academic purposes, not for making actual games. I've been employed as a Software Engineer for 2 years but still feel like a beginner when it comes to writing a game engine (and much of coding in general). What little experience I have with writing 3D software from scratch is from super rough university projects (which we called an "engine", but that's definitely debatable). Meanwhile, my current job doesn't really take me down this line of learning.

This thread is to ask for advice; mainly pointers to good guides, or comments on which structural approaches are considered good ones. I'm looking for something that teaches me how to structure a game engine so that:
  • it's good for learning about writing an engine,
  • it's not completely devoid of modern techniques,
  • it will be usable for later feature learning: networking, AI, unit testing, etc. (I imagine this criterion is typically a given anyway.)

Some things I'm aware of so far (you may want to comment on any of these):
  • https://www.gamasutra.com/blogs/MichaelKissner/20151027/257369/Writing_a_Game_Engine_from_Scratch__Part_1_Messaging.php — I also have the book Kissner recommends: Game Engine Architecture by Jason Gregory. From what little I've read of it, it appears to be more advice than structural guide.
  • ECS was a buzzword back when I was at uni, but the more I google it the more it appears to have become a dirty word.
  • Unreal Engine's source.

Regarding ECS: for all the flak it appears to generate nowadays, it is at least a structure. If ECS isn't recommended, I'm interested in what other general structural approaches there are. Regarding Unreal: I'd love to jump in and learn how Unreal Engine's structure works, but it might be biting off more than I can chew. Language of choice is C++. Thanks
  3. Frob, I too haven't heard an argument that the namespace approach is worse (or not) than the class approach, but that's because it is practically impossible to find discussion on that particular question at all. Any search I can find related to 'how to singleton' in C++ produces the class approach. Any search with the word 'singleton' at all results in replies all too keen to launch into thoughts on the pattern and/or globals. I don't disagree with them at all, but they drown out any related specific questions such as this one. Thank you all, though, for confirming for me that I'm not just missing something obvious in C++. I think Hodgman's suspicion is a good one. Seraph, your comment was something I hadn't thought of, so it feels like I've finally found some closure! Many thanks.
  4. Not asking about singletons here (nor advocating them). With that clarified: if we assume someone wants a global, unique object, why isn't a namespace always the preferred approach in C++, over implementing a singleton class? I've only seen the namespace approach encouraged when there aren't statics being shared. E.g., from Google's style guidelines: But why not have non-member functions that share static data, declared in an unnamed namespace? And why isn't this generally suggested as a better alternative to writing a singleton class in C++?
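To make the question concrete, here's a minimal sketch of the namespace approach being asked about: free functions sharing state that lives in an unnamed namespace within one translation unit. The names (logger, log, lineCount) are invented purely for illustration.

```cpp
// Sketch of the "namespace instead of singleton class" idea: free
// functions share state held in an unnamed namespace. In a real
// project the state and function bodies would live in a .cpp file,
// with only the declarations in the header.
#include <cassert>
#include <string>
#include <vector>

namespace logger {
namespace {
    // Internal linkage: only this translation unit can touch the state.
    std::vector<std::string> g_lines;
}

void log(const std::string& msg) { g_lines.push_back(msg); }
std::size_t lineCount() { return g_lines.size(); }
}
```

Callers just write `logger::log("hello")`; there's no instance to fetch and no `getInstance()`, and the shared state has internal linkage so nothing outside the file can touch it. (Initialisation order and lazy construction are the usual points where this differs from a function-local-static singleton.)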
  5. I've had a Google around for this but haven't yet found any solid advice. There is a lot of "it depends", but I'm not sure on what. My question is: what's a good rule of thumb to follow when it comes to creating/using VBOs & VAOs? As in, when should I use multiple and when should I not? My understanding so far is that if I need a new VBO, then I need a new VAO. So when it comes to rendering multiple objects I can either:
  * make lots of VAO/VBO pairs and flip through them to render different objects, or
  * make one big VBO and jump around its memory to render different objects.
I also understand that if I need to render objects with different vertex attributes, then a new VAO is necessary in that case. If that "it depends" really is quite variable, what's best for a beginner with OpenGL, assuming that better approaches can be learnt later with better understanding?
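For the "one big VBO" option, the core is really just bookkeeping: append each mesh's vertices into one shared buffer and remember a first/count pair per mesh, so each object is later drawn with something like glDrawArrays(GL_TRIANGLES, first, count) while the same VAO stays bound. Here's a hedged sketch of just that bookkeeping; no real GL calls are made, and Vertex, DrawRange, and packMesh are invented names.

```cpp
// Bookkeeping for packing several meshes into one shared VBO.
// The shared buffer is uploaded once (e.g. with glBufferData), and
// each mesh remembers where its vertices start within it.
#include <cassert>
#include <vector>

struct Vertex { float x, y, z; };

struct DrawRange {
    int first;  // index of this mesh's first vertex in the shared VBO
    int count;  // number of vertices belonging to this mesh
};

// Appends a mesh's vertices to the shared buffer and returns the
// range to pass to a glDrawArrays-style call later.
DrawRange packMesh(std::vector<Vertex>& sharedBuffer,
                   const std::vector<Vertex>& mesh) {
    DrawRange r{static_cast<int>(sharedBuffer.size()),
                static_cast<int>(mesh.size())};
    sharedBuffer.insert(sharedBuffer.end(), mesh.begin(), mesh.end());
    return r;
}
```

The appeal of this layout is that switching between objects costs no buffer or VAO rebinds at all, as long as they share the same vertex format.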
  6. Looks really good, thanks.
  7. I want to be able to model one stick of a nunchaku being moved about freely via user control, with the other stick reacting/swinging in a way that's believable. I'm not even sure what to google; I'm running into lots of talk about martial arts in general... and the Nintendo Wii. Could it be modelled with a single pendulum? A single pendulum and a joint, maybe? These are things I'd have to learn, and ultimately the goal is something that looks clean and feels convincing "enough", so any tips are welcome. Including suggestions on how to smooth things out, should pure equations produce jittery results. Many thanks
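As a starting point, the swinging stick can be treated as a single damped pendulum whose pivot is the held stick's tip; the pivot's horizontal acceleration drives the swing when the user moves. This is only a sketch of that assumption (all names and constants are illustrative); semi-implicit Euler keeps it stable at a fixed timestep, and the damping term is what smooths out jitter.

```cpp
// Damped pendulum sketch for the free-swinging stick. theta is the
// angle from vertical; the pivot is assumed to be the tip of the
// user-controlled stick.
#include <cassert>
#include <cmath>

struct Pendulum {
    double theta = 1.0;   // angle from vertical (radians)
    double omega = 0.0;   // angular velocity (radians/second)
};

// One fixed-timestep update. pivotAccelX is the horizontal
// acceleration of the held stick's tip, which drives the swing.
void step(Pendulum& p, double dt, double pivotAccelX) {
    const double g = 9.81;      // gravity
    const double length = 0.3;  // stick length in metres (illustrative)
    const double damping = 0.5; // fakes friction at the joint
    double accel = -(g / length) * std::sin(p.theta)
                 - (pivotAccelX / length) * std::cos(p.theta)
                 - damping * p.omega;
    p.omega += accel * dt;      // semi-implicit Euler: velocity first...
    p.theta += p.omega * dt;    // ...then position uses the new velocity
}
```

A second joint (double pendulum) would make the motion richer but also chaotic; for something that just "feels right", the single damped pendulum with a driven pivot is probably the easier place to start.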
  8. I actually disliked that 'remove degrees of freedom' approach in Linahan's article. I disliked the article on the whole, finding almost as many issues with it as I found with Fauerby's. I rambled with displeasure here: http://www.gamedev.net/topic/674540-annoying-things-about-fauerbys-and-nettles-sphere-mesh-collision-detection-guides/

Since you say it was used in Doom, perhaps I simply didn't recognise the proper application for the restricting-degrees-of-freedom approach. Perhaps what I had in mind was something it didn't cater to. I can't remember why I was whining about that particular thing. I'm sure I look stupid in my other thread hehe.

I believe Fauerby's quadratic coefficients for detecting edge intersections are wrong as well. This is because they are a straight copy of Schroeder's coefficients (found in the August 2001 issue of GDmag, at http://www.gdcvault.com/gdmag - you're welcome, people on the internet trying to source that). And Schroeder's coefficients are likely wrong, because Akenine-Moller says they are (in Real-Time Rendering), and Akenine-Moller's coefficients are definitely correct.

Since Akenine-Moller used the exact same equations but got different results, and since none of these authors ever show how they get from equations to coefficients, Schroeder's are most likely wrong, which means Fauerby's are too.

The reason I say Akenine-Moller's coefficients are definitely correct is that Ericson (Real-Time Collision Detection) produces the same coefficients as Akenine-Moller, and while Ericson - like every other author - doesn't show how he got from equations to coefficients, Ericson's approach actually can be proven on paper. I believe the others required Mathematica; at least, Schroeder did. But Ericson's coefficients are definitely legit, and Akenine-Moller's are the same, even though Ericson uses a different approach (no simultaneous equations, no dot-product-equals-zero trick).

So: Nettle's guide = big mistakes, Fauerby's guide = several mistakes, Schroeder's guide = wrong coefficients. Interestingly, these 3 authors are the 3 who wrote guides. Ericson and Akenine-Moller didn't write guides, they wrote books. Says something, doesn't it? BUT having said that, Ericson also totally supports Nettle's completely incorrect approach, now listed as such on his website. Ericson also oddly describes his approach as "the one suggested in Schroeder with corrections in Akenine", when it simply isn't. The difference is the only reason I was able to work it out and confirm it as correct.
  9. Sorry to reply so late; it's been a busy week moving house. And we had no cooling... so I've been avoiding using this furnace (laptop) for long.

And sorry, but your approach just isn't the right one. It doesn't address several of the things I've said are required -- such as a complete set of sliding plane normals, and normals that change as abruptly as the mesh does -- but I gave it attention anyway because you have been claiming it's definitely "the" accurate way to achieve what the thread asks; accurate, and validated by reality. You've stated that with complete certainty, and even described it as a model upon which approximations should be based.

But it simply isn't validated the way you claim it is. It isn't "the" accurate way to do the thing I'm asking; there isn't a real version of it that we can validate against. Your replies to my challenges on this appear to have had little thought put into them. You can't just pluck out the word "compression" by association and use it to carry your claim. Intersection doesn't appear in collision algorithms because of real-life compression; it's honestly a little disrespectful that you replied with such a shallow response and expected me to take it as an explanation. Intersection in a collision algorithm was never a model for approximation; it is the approximation. We tolerate intersection because it simplifies.

Similarly, you can't just say that since collision exists in real life, any non-real parts of a collision handling algorithm are automatically validated by real life. You have to actually check simulated scenarios against real ones first, and if you had, you would have realised that even for a basic concave corner, reality simply doesn't generate the infinite different collision normals that your approach does. In fact, that non-discrete output you value so much (and understandably) is the biggest giveaway. Reality doesn't give us any interpolation for a ball with a fixed velocity smashing into a concave corner. Collision directions are based on the surfaces' gradients, and the transitions between them are abrupt by definition. Where you dismiss my approach (not saying it's a stable idea, just making a point) with the flower example, it's actually abruptness you're identifying, not jitter. I'm well aware of jitter concerns; your comments make it appear that you've assumed I'm not, but what you identified as a problem is not jitter, it's the intended abruptness.

At this point you might re-explain or redefine what you're suggesting, but ultimately, if you don't put a constraint on how that intersection position comes to be, you've got infinite possible normals from a fixed incoming velocity. That's the continuous yet unreal range of outputs that you value, and that I don't. And if you do put constraints on those intersecting positions, you've missed the topic of the thread: ignored back-face collisions leading to any intersection imaginable.

What a confusing mess, right? It is definitely annoying having to deconstruct what someone means when it's not clearly described, and I'm not going into further detail with the above paragraphs because I shouldn't be the one having to pull apart quick suggestions before I can tell if they even apply to the thread's question. That should be checked before those suggestions are typed, and the connection to the question made clear. I still really do not know exactly what application you have in mind that your suggestion applies to. But I am very, very sure that it is something. Your suggestion applies to something, and I imagine that it is indeed accurate at calculating some piece of that thing. But that application is not this application, and an accurate calculation of some piece of that application does not equate to reality validating intersection response. In this thread, I really think you've tried to make the question fit an answer you already have, instead of the other way around.

In case it sheds any light on things for you, I don't use a posteriori collision detection in the application the thread describes. Maybe that helps, maybe not. I'm guessing that's what you're thinking about, but I could be wrong. You're welcome to really clarify what your suggestion applies to, but I know now it's not what I want. If you do, demonstrations, linked sources, or clear mathematical trails would be better than just more claims without clear reasoning, because it's that absence of specifics that let us go so long thinking X was talking about Y.

And my approach? I found an example in it I didn't like, so I'm rethinking it.
  10. "That justification is easy to make: In physics, either you do it correctly and it will work, or it will fail (jitter, tunneling) - there is not much space in between."

That's why I am pointing out that this isn't using physics any more than the moving portal uses physics. The reason there's next to no confusion between working and failing when simulating actual physics is that we simply have reality to give us validation. We don't have that here. Reality won't validate any behaviour you might give to a bowling ball and a diamond slab that occupy the same space at the same time, any more than it will validate ways to push a box through a moving portal.

So what validation are you using? You are claiming that one particular behaviour (defining a collision normal the way you do) is validated; is simply the "accurate" behaviour. What causes you to believe this?

Meanwhile, I think it's easy to argue that the behaviour goals of unreal situations are at the developer's whim. That's why I'm skeptical. I need to see something that shows why this situation shouldn't fall into the same category as moving portals, jelly blocks in Minecraft, or surfing in Team Fortress 2: the category of behaviour goals for unreal situations that the developer invents because that's what the developer likes.

---

Regarding the image, thanks for giving it a look. I see it hitting 1 "edge" (the top corner) and 1 face (the bottom line). You have called this less satisfying, but I am not sure whether you're saying that because of that example, or just as your general opinion. In that example I am perfectly satisfied with that result. That's what the OP was all about: find any example that doesn't give a satisfying set of normals.

Importantly, bear in mind that we need all those individual normals for restricting sliding motion; that is unavoidable. Regardless of whether we desire an overall collision normal or not, we aren't saved from having to calculate those sliding planes; we absolutely need them. I believe then that the only thing your approach saves us from is 3 lines of code that find a simple average. Compare those 3 lines to your code: altered for 3D, then altered further to handle completely arbitrary shapes instead of cubes.

For all that extra complication, if one is to claim that this is in fact the way things should be done, then something needs to validate that claim that isn't just personal preference.

These are my personal-preference factors for the averaging approach (unless my approach breaks somewhere):
* the collision normal comes from the surfaces involved, not the depth,
* I don't want penetration slightly altering collision normals,
* the player will know that the planes along which they feel movement being blocked are going to define their jump, and they can predict this jump by feel, without having to look at the ground,
* collision normals match those of the equivalent non-embedded situation (as in the 3 examples shown earlier),
* a character embedded deep in the ground, but only a tiny bit in the wall, will kick off the wall as effectively as a character only touching the ground and only touching the wall.

Both our approaches are consistent, predictable, and have something that the player can pick up intuitively, but I wouldn't call yours or mine "the" way it is supposed to be done... unless I hear of something that supports that claim. Currently I would be comfortable writing in a guide: "If you want depth to affect your collision normal as well, here's an approach for that."
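For what it's worth, those "3 lines of code that find a simple average" look something like this sketch (Vec3 is a stand-in type; it assumes the list is non-empty and the summed normals don't cancel to zero):

```cpp
// Overall collision normal as the simple average of the selected
// per-intersection normals: sum them, then renormalise.
#include <cassert>
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };

Vec3 averageNormal(const std::vector<Vec3>& normals) {
    Vec3 sum{0.0, 0.0, 0.0};
    for (const Vec3& n : normals) { sum.x += n.x; sum.y += n.y; sum.z += n.z; }
    double len = std::sqrt(sum.x * sum.x + sum.y * sum.y + sum.z * sum.z);
    return {sum.x / len, sum.y / len, sum.z / len};  // assumes len > 0
}
```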
  11. I don't want to just poop on that idea, but I think that calculating the centres of possibly quite complex shapes, multiple times per frame, is bordering on overkill. It's not that it's complicated (it is, though); it's that it's complicated without something in reality to justify the approach. If it were based on a real-life physics situation then I would embrace any complications required to do things the "correct" way, but I don't believe there is one correct way, any more than there is a correct way to solve the moving portal problem.

http://9gag.com/gag/adj90GB/moving-portal-problem

If the developers had done it one way, players would have said "Yeah, that feels right." If the developers had done it the other way, players would have said "Yeah, that feels right." And people argue both ways using classical physics until the cows come home.

Since tennis balls don't slide around inside bricks in real life, if someone were to put volume-centre calculation in a guide, readers would question its justification.

So in my opinion the goal is just something that feels consistent and acceptable to the user, but from there I say the developer really has free rein. Averaging, weighting, penetration depths, volume centres... they're all going to give a normal of some kind, and all can feel fine.

Personally, averaging is fine to me for what I have in mind. I don't find value in embedded situations giving different results to their non-embedded equivalents, nor in asking the user to factor in penetration when trying to intuit the direction they're going to jump in. But maybe you do. Perhaps you're imagining a scenario where penetration definitely *will* factor into the user's intuition. And that's cool too. (But on that note, I don't think your volume-centring idea gives anything to the user that the weighting/penetration idea didn't already.)

By all means, if there's some real-life justification I'm missing that could convince a user that one way is more "correct", I would be happy to hear it.

The discussion on what to do with normals is welcome; it's making me think. As long as the thread's topic of how to select them also gets closer to closure. And possibly it has now, because you actually seem quite confident about the normal selection process. We describe it with different language (you say Voronoi, I say "hey guys, I've got this 7-step diet routine") but it looks like we are thinking the same way.

The image I posted last (the pyramid) actually produces an edge normal, but I think you called it a vertex not because we disagree, but simply because the image isn't clear. I tried to find good angles, but it looks like I failed hehe. Here it is again, new angle.

[attachment=30564:Pyramid2.png]

So edge, yeah? Between the purple and blue.

A final one, if you want, about selecting normals:

[attachment=30566:Contrived.png]
  12. I don't think that makes it a bad idea, because the abrupt changes are no more abrupt than changes in the mesh itself. It seems odd to fear sudden changes in incline while embedded, when they're already fine while not embedded (such as if example 1, the realistic situation, were to travel to the right), if that is simply what the mesh dictates.

I'm not eager to smooth over all non-embedded changes in direction. I prefer the collision response to be as sharp as the mesh. That said, the way you suggest weighting things is a good idea I'd surely use if I feel differently about smoothing later.

To clarify something also: I wouldn't be averaging anything for moving through the mesh. I would be restricting to all valid sliding planes found (i.e., using all selected normals). Averaging was what I suggested for things like jumping off of it.

And just to make it explicit again, the topic is about asking "how to select normals", not "what do I do with normals once selected". I'm not sure if you've noticed, but your suggestion also rejected normals, and we happen to have agreed on which ones. It could be that you're only thinking in terms of face normals, so you have rejected an embedded *edge collision's* normal intuitively. In example 1, we've both ignored the edge intersection (the bottom corner, since the image is in 2D) that the right-hand triangle would register. That's the kind of thing I'm actually looking for agreement/disagreement with in this thread. In case that is confusing ("Why would the right-hand triangle register an edge intersection?"), it's because the collision detection algorithm checks triangles one by one.

But those 3 examples are only simple ones. Here is an example that has no face intersections:

[attachment=30559:Pyramid.png]

The orange lines are simply guidelines displaying extrusions of each tri's face along its normal, to show more clearly that the sphere's centre does not project onto any face. The question continues to be: which normals would you choose, and is there any problem in the process I wrote for selecting them?

Actually, the whole goal of this is to have no such collision resolution trying to put the sphere "on" a surface. If the sphere is embedded, it stays embedded and moves just as comfortably as a non-embedded sphere. Besides, as you mentioned, cheeky displacements like that are just asking for trouble.
  13. Hmm, after a quick read this seems to be a different thing. Probably because my opening post was rushed, so I didn't spell out exactly what I do and don't have already.

Finding the closest point on any triangle is something I already assume is done. Convex, concave, it doesn't matter.

The task that I think I have solved, and am asking for comments on, is to devise an algorithm that knows which normals to accept, and which normals to ignore, from the set of closest points found in any given embedded situation. An embedded situation can involve multiple tris, and thus multiple collision normals, some of which should be ignored.
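For readers who don't already have that closest-point-on-triangle test: a well-known version (essentially the one in Ericson's Real-Time Collision Detection, restated here with a minimal Vec3) classifies the query point into a vertex, edge, or face Voronoi region of the triangle, which is exactly the vertex/edge/face intersection distinction this thread relies on.

```cpp
// Closest point to p on triangle abc via Voronoi-region tests.
// Each early return is a vertex or edge region; falling through to
// the end means p projects onto the face interior.
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };
static Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 operator*(float s, Vec3 v) { return {s * v.x, s * v.y, s * v.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

Vec3 closestPtPointTriangle(Vec3 p, Vec3 a, Vec3 b, Vec3 c) {
    Vec3 ab = b - a, ac = c - a, ap = p - a;
    float d1 = dot(ab, ap), d2 = dot(ac, ap);
    if (d1 <= 0.0f && d2 <= 0.0f) return a;               // vertex region a

    Vec3 bp = p - b;
    float d3 = dot(ab, bp), d4 = dot(ac, bp);
    if (d3 >= 0.0f && d4 <= d3) return b;                 // vertex region b

    float vc = d1 * d4 - d3 * d2;
    if (vc <= 0.0f && d1 >= 0.0f && d3 <= 0.0f)           // edge region ab
        return a + (d1 / (d1 - d3)) * ab;

    Vec3 cp = p - c;
    float d5 = dot(ab, cp), d6 = dot(ac, cp);
    if (d6 >= 0.0f && d5 <= d6) return c;                 // vertex region c

    float vb = d5 * d2 - d1 * d6;
    if (vb <= 0.0f && d2 >= 0.0f && d6 <= 0.0f)           // edge region ac
        return a + (d2 / (d2 - d6)) * ac;

    float va = d3 * d6 - d5 * d4;
    if (va <= 0.0f && (d4 - d3) >= 0.0f && (d5 - d6) >= 0.0f)
        return b + ((d4 - d3) / ((d4 - d3) + (d5 - d6))) * (c - b); // edge bc

    float denom = 1.0f / (va + vb + vc);                  // face region
    return a + (vb * denom) * ab + (vc * denom) * ac;
}
```

The sphere intersects the triangle when the returned point is within one radius of the sphere's centre, and which branch fired tells you whether it was a face, edge, or vertex intersection.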
  14. Thanks for those terms, I'll look them up. That's really helpful.

Regarding crossing surfaces: part of the process of ignoring back-facing collisions is ignoring the tri completely once the sphere's centre goes behind its surface. So I believe that situation is ok.

I'm not sure why you say convex, though... my examples are concave. Perhaps I should have made it clearer in the images. The black lines are facing up... as if they're the ground.

Regarding finding the closest features between a point and a polyhedron, that is something I am already able to do, convex or concave.
  15. I plan to write a new guide to sphere-mesh collision handling, since I haven't been able to find anything that doesn't come with a few problems. (The next best thing is Fauerby's guide at http://www.peroxide.dk/papers/collision/collision.pdf .)

Part of the approach I'm taking is to support one-way collisions. That is, to ignore back-facing collisions not just for a performance gain, but for practical application as well. The desired result is for the sphere to be able to find itself embedded inside a triangle and still produce "expected" motion and collisions. But not just one triangle; many triangles, at the same time, in perhaps complicated ways. By "expected" motion, I mean that it slides only in ways that make sense, it doesn't fall through, and if required to push away from the collision, it pushes away along 1 overall normal that also makes sense. (For example, the user might simply press jump.)

Some intersections might be in front of a tri's face, while some are off to the side, while some triangles register the sphere as closest to one edge, while others find it closest to a vertex shared by earlier triangles, etc. etc. etc., all at once. Much words. Such confuse.

I spent entire minutes putting together this example of 3 relatively simple embedded situations. On the left is an embedded situation; on the right is a similar realistic situation that my brain thinks matches the embedded one. The goal is for the sphere in the embedded situation to behave as if it were in the realistic situation. These 3 examples are okay demonstrations of this idea, I believe.

[attachment=30539:Embedded situations 2.png]

Bear in mind these are just 2D; 3D situations must be handled as well.

I believe I have found a process to get a correct set of normals from any situation at all. But mathematical proofs are tough, and I wouldn't know how to begin unit testing this. So instead, I'll describe my solution/algorithm and challenge any readers who might be really keen to dream up an embedded situation that breaks it. Hopefully someone will find holes in someone else's ideas. You can use as many triangles as you want. If it's not obvious, specify which side is the front of a tri.

My solution. Things we know:
* Sphere's centre
* Sphere's radius
* Position of every vertex of every triangle
* Face intersection == the sphere's centre projects onto the face of a tri that it intersects.
* Edge intersection == the sphere's centre projects onto an edge of a tri that it intersects, but doesn't project onto the face of that tri.
* Vertex intersection == the sphere's centre overlaps a vertex of a tri that it intersects, but doesn't project onto the face of that tri, nor onto either of that vertex's two edges.
* Intersection point == closest point on any given intersected triangle to the sphere's centre.
* Collision normal (per intersection type) == direction from the intersection point to the sphere's centre.

[attachment=30540:Intersections.png]

Here's the algorithm:

1. Test every triangle to see if it is intersecting the sphere. If it isn't, discard it. Ignore any back-facing situations (sphere's centre is behind the triangle's plane).
2. If a triangle is intersecting the sphere, determine whether it is a face intersection, edge intersection, or vertex intersection.
3. Face intersection? Add its collision normal to "the list" of collision normals discovered so far.
4. Edge intersection? If that edge is not involved in any face intersections, add its collision normal to the list. Else, ignore it.
5. Vertex intersection? If that vertex is not involved in any edge intersections, nor face intersections, add its collision normal to the list. Else, ignore it.
6. To slide the sphere with a new velocity, remove from that velocity any vector component that goes into any collision normal in the list. Then slide.
7. Or, to push away from the embedded position, average all the normals' directions and use that as an overall collision normal.

To help the brain think in 3D, recognise that this image is showing 1 face intersection, 2 edge intersections, and 1 vertex intersection. In this example, only the face intersection's normal would be used, not just because it already involves all intersected edges and vertices, but also because the sphere's centre is behind the other 3 triangles' planes.

[attachment=30541:4 ways.png]
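The sliding step ("remove from that velocity any vector component that goes into any collision normal in the list") can be sketched like this; Vec3 and the function name are stand-ins, and the normals are assumed to be unit length.

```cpp
// Clip a velocity against every collision normal in the list so the
// sphere can only slide. For each normal the velocity moves into
// (negative dot product), subtract that into-surface component.
#include <cassert>
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };

double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

Vec3 clipVelocity(Vec3 v, const std::vector<Vec3>& unitNormals) {
    for (const Vec3& n : unitNormals) {
        double d = dot(v, n);
        if (d < 0.0) {          // moving into this sliding plane
            v.x -= d * n.x;     // remove the into-plane component
            v.y -= d * n.y;
            v.z -= d * n.z;
        }
    }
    return v;
}
```

One caveat with a single pass like this: with two or more normals, clipping against a later plane can push the velocity back into an earlier one, so implementations commonly either repeat the pass until no plane is violated or project the velocity onto the crease line shared by a pair of planes.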