Seer

Member
  • Content Count

    85
  • Joined

  • Last visited

Community Reputation

278 Neutral

1 Follower

About Seer

  • Rank
    Member

Personal Information

  • Interests
    Programming


  1. Currently, if I were to program a game using C++ with SFML or Java with LibGDX, I would render game objects by calling "object.render()" on each game object. Although this makes it easy to access the information necessary to render the game object, it also couples rendering to the game logic, which is something I would like to move away from. How can rendering be implemented so that it is decoupled from the game objects? I wish to know how this can be done in the standard object-oriented paradigm, so please don't suggest that I use an ECS. Thank you.
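One common non-ECS arrangement for the decoupling asked about here (a minimal sketch with hypothetical names of my own, not from SFML or LibGDX) is to keep drawable data on the objects and move all drawing decisions into a separate Renderer:

```java
import java.util.ArrayList;
import java.util.List;

// Game objects expose only the data a renderer needs; they do not draw themselves.
class GameObject {
    float x, y;
    String spriteId; // which sprite to draw, not how to draw it

    GameObject(float x, float y, String spriteId) {
        this.x = x;
        this.y = y;
        this.spriteId = spriteId;
    }
}

// The Renderer owns all drawing decisions (ordering, batching, draw calls).
class Renderer {
    // A real version would issue SFML/LibGDX draw calls; this one just
    // records what it would draw so the sketch stays self-contained.
    List<String> drawLog = new ArrayList<>();

    void render(List<GameObject> objects) {
        for (GameObject o : objects) {
            drawLog.add(o.spriteId + "@" + o.x + "," + o.y);
        }
    }
}

public class DecoupledRenderingDemo {
    public static void main(String[] args) {
        List<GameObject> objects = new ArrayList<>();
        objects.add(new GameObject(10, 20, "player"));
        objects.add(new GameObject(30, 40, "enemy"));
        Renderer renderer = new Renderer();
        renderer.render(objects);
        System.out.println(renderer.drawLog);
    }
}
```

With this split, the game logic never touches drawing code; the Renderer can later reorder, cull, or batch without the objects changing at all.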
  2. I suppose I'm thinking of the bounding rectangle of the polygon. When a polygon object is made, vertices are placed around the polygon according to the sin and cos of the given rotation and its half width and half height. I just want to make it so that, no matter the rotation affecting the initial placement of the vertices, the width and height of the polygon (along the X and Y axes) are the same as those given in the constructor. Can you see a way to do this?
  3. Does anyone have any ideas as to how to fix the width and height of the polygons when their vertices are given an initial rotation?
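One possible approach to the width/height question above, sketched with plain arrays rather than Processing's PShape (the class and method names here are mine): place the vertices at the requested rotation, measure the actual extents, then rescale and recentre the offsets so the bounding box matches the requested size exactly.

```java
public class PolygonExtents {
    // Returns {x, y} offsets for n >= 3 vertices whose axis-aligned bounding
    // box is exactly w by h, centred on the polygon's position.
    static float[][] vertexOffsets(int n, float w, float h, float rotation) {
        float[][] off = new float[n][2];
        float minX = Float.MAX_VALUE, maxX = -Float.MAX_VALUE;
        float minY = Float.MAX_VALUE, maxY = -Float.MAX_VALUE;
        for (int i = 0; i < n; ++i) {
            double a = rotation + i * 2.0 * Math.PI / n;
            off[i][0] = (float) Math.cos(a);
            off[i][1] = (float) Math.sin(a);
            minX = Math.min(minX, off[i][0]);
            maxX = Math.max(maxX, off[i][0]);
            minY = Math.min(minY, off[i][1]);
            maxY = Math.max(maxY, off[i][1]);
        }
        // Rescale the measured extents to span exactly w and h, and recentre
        // so the bounding box is symmetric about the position.
        float cx = (minX + maxX) / 2, cy = (minY + maxY) / 2;
        for (int i = 0; i < n; ++i) {
            off[i][0] = (off[i][0] - cx) * w / (maxX - minX);
            off[i][1] = (off[i][1] - cy) * h / (maxY - minY);
        }
        return off;
    }

    public static void main(String[] args) {
        // A triangle requested as 200 wide by 140 high, rotation 0.
        for (float[] v : vertexOffsets(3, 200, 140, 0)) {
            System.out.println(v[0] + ", " + v[1]);
        }
    }
}
```

This keeps the shape (the vertices stay at their equal angular spacing) while forcing the bounding box to the constructor's dimensions regardless of the initial rotation.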
  4. If you would like to see the full picture (three .pde files), you can view the code at my GitHub repository here. If interested, you can also run the program yourself if you have Processing installed. Here, though, is the code which implements the SAT to detect collisions and create the MTV. Please tell me if you see any mistakes.

     class CollisionHandler {
         PVector detectCollision(PShape shape1, PShape shape2) {
             float magnitude = 999999;
             PVector direction = new PVector();
             ArrayList<PVector> axes = new ArrayList<PVector>();
             axes.addAll(getAxes(shape1));
             axes.addAll(getAxes(shape2));
             for (int i = 0; i < axes.size(); ++i) {
                 PVector axis = axes.get(i);
                 PVector p1 = project(shape1, axis);
                 PVector p2 = project(shape2, axis);
                 if (isOverlap(p1, p2)) {
                     float overlap = getOverlap(p1, p2);
                     if (overlap < magnitude) {
                         magnitude = overlap;
                         direction = axis;
                     }
                 } else {
                     return null;
                 }
             }
             return direction.mult(magnitude);
         }

         ArrayList<PVector> getAxes(PShape shape) {
             ArrayList<PVector> axes = new ArrayList<PVector>();
             for (int i = 0; i < shape.getVertexCount(); ++i) {
                 PVector v1 = shape.getVertex(i);
                 PVector v2 = shape.getVertex(i + 1 == shape.getVertexCount() ? 0 : i + 1);
                 PVector edge = v2.sub(v1);
                 PVector axis = new PVector(-edge.y, edge.x);
                 axis.normalize();
                 axes.add(axis);
             }
             return axes;
         }

         PVector project(PShape shape, PVector axis) {
             float min = axis.dot(shape.getVertex(0));
             float max = min;
             for (int i = 1; i < shape.getVertexCount(); ++i) {
                 float projection = axis.dot(shape.getVertex(i));
                 if (projection < min) {
                     min = projection;
                 } else if (projection > max) {
                     max = projection;
                 }
             }
             return new PVector(min, max);
         }

         boolean isOverlap(PVector p1, PVector p2) {
             return p1.x < p2.y && p1.y > p2.x;
         }

         float getOverlap(PVector p1, PVector p2) {
             return p1.x < p2.y ? p1.y - p2.x : p2.y - p1.x;
         }
     }
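For comparison, the textbook interval-overlap formulation used in SAT writes each projection as a {min, max} pair and computes the shared length symmetrically. A minimal standalone version (plain Java, no Processing types; not the code above) looks like this:

```java
// Projections are written here as plain {min, max} floats rather than PVectors.
public class IntervalOverlap {
    static boolean overlaps(float minA, float maxA, float minB, float maxB) {
        return minA < maxB && maxA > minB;
    }

    // Symmetric textbook formula: the shared length of two overlapping intervals.
    static float overlap(float minA, float maxA, float minB, float maxB) {
        return Math.min(maxA, maxB) - Math.max(minA, minB);
    }

    public static void main(String[] args) {
        System.out.println(overlap(0, 5, 3, 10));  // [0,5] and [3,10] share [3,5]
        System.out.println(overlap(0, 10, 2, 4));  // [2,4] lies inside [0,10]
    }
}
```

Note that this symmetric form also handles the case where one interval fully contains the other, which per-branch formulas can get wrong.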
  5. Here are two videos showing the collision handling between polygons. The collisions between the hexagon and the triangle are resolved as you would expect, with the hexagon being moved back by the MTV so that it rests against the triangle. The collisions between the two triangles, however, are not resolved correctly: the moving triangle jumps back a small distance upon collision resolution. I would like some help understanding why this happens and how it might be resolved.

Also attached is an image showing two rectangles, both specified as being 200 pixels wide and 140 pixels high. One is made using the Processing rect() function and the other is a Polygon object constructed as described in my last post. As you can see, there is a noticeable disparity between their respective dimensions, with the Polygon object having a smaller width and height. I would like some help understanding how to create the polygon such that its width and height are those specified by the parameters of its constructor, regardless of initial vertex rotation. Thank you.

polygoncollision1.mp4 polygoncollision2.mp4
  6. I have programmed an implementation of the Separating Axis Theorem to handle collisions between 2D convex polygons. It is written in Processing and can be viewed on GitHub here. There are a couple of issues with it that I would like some help in resolving.

In the construction of Polygon objects, you specify the width and height of the polygon and the initial rotation offset by which the vertices will be placed around the polygon. If the rotation offset is 0, the first vertex is placed directly to the right of the object. If higher or lower, the first vertex is placed clockwise or counter-clockwise, respectively, around the circumference of the object by the rotation amount. The rest of the vertices follow at a consistent offset of TWO_PI / number of vertices. While this places the vertices at the correct angles around the polygon, the problem is that if the rotation is anything other than 0, the width and height of the polygon are no longer the values specified. They are reduced because the vertices are placed around the polygon using the sin and cos functions, which often return values other than 1 or -1. Of course, when the half width and half height are multiplied by a sin or cos value other than 1 or -1, they are reduced.

This is my issue. How can I place an arbitrary number of vertices at an arbitrary rotation around the polygon, while maintaining both the intended shape specified by the number of vertices (triangle, hexagon, octagon) and the intended width and height of the polygon as specified by the parameter values in the constructor?
The Polygon code:

     class Polygon {
         PVector position;
         PShape shape;
         int w, h, halfW, halfH;
         color c;
         ArrayList<PVector> vertexOffsets;

         Polygon(PVector position, int numVertices, int w, int h, float rotation) {
             this.position = position;
             this.w = w;
             this.h = h;
             this.halfW = w / 2;
             this.halfH = h / 2;
             this.c = color(255);
             vertexOffsets = new ArrayList<PVector>();
             if (numVertices < 3) numVertices = 3;
             shape = createShape();
             shape.beginShape();
             shape.fill(255);
             shape.stroke(255);
             for (int i = 0; i < numVertices; ++i) {
                 PVector vertex = new PVector(position.x + cos(rotation) * halfW,
                                              position.y + sin(rotation) * halfH);
                 shape.vertex(vertex.x, vertex.y);
                 rotation += TWO_PI / numVertices;
                 PVector vertexOffset = vertex.sub(position);
                 vertexOffsets.add(vertexOffset);
             }
             shape.endShape(CLOSE);
         }

         void move(float x, float y) {
             position.set(x, y);
             for (int i = 0; i < shape.getVertexCount(); ++i) {
                 PVector vertexOffset = vertexOffsets.get(i);
                 shape.setVertex(i, position.x + vertexOffset.x, position.y + vertexOffset.y);
             }
         }

         void rotate(float angle) {
             for (int i = 0; i < shape.getVertexCount(); ++i) {
                 PVector vertexOffset = vertexOffsets.get(i);
                 vertexOffset.rotate(angle);
                 shape.setVertex(i, position.x + vertexOffset.x, position.y + vertexOffset.y);
             }
         }

         void setColour(color c) {
             this.c = c;
         }

         void render() {
             shape.setFill(c);
             shape(shape);
         }
     }

My other issue is that when two polygons with three vertices each collide, they are not always moved out of collision smoothly by the Minimum Translation Vector returned by the SAT algorithm. The polygon moved out of collision by the MTV does not rest against the other polygon as it should; it instead jumps back a small distance. I find this very strange, as I have been unable to replicate this behaviour when resolving collisions between polygons of other vertex counts, and I cannot find the flaw in the implementation, though it must be there.
What could be causing this incorrect collision resolution, which from my testing appears to only occur between polygons of three vertices? Any help you can provide on these issues would be greatly appreciated. Thank you.
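One common SAT pitfall worth checking against the jumping behaviour described above (though it may or may not be the cause here): a separating axis can point either way along its line, so many implementations orient the MTV relative to the two shapes' centres before applying it. A minimal sketch with plain arrays and hypothetical names of my own:

```java
// A small, self-contained sketch of orienting an MTV so it always pushes
// shape A away from shape B, whichever way the separating axis happened
// to point. Positions and vectors are plain {x, y} float arrays.
public class MtvDirection {
    // Flip mtv if it points from A towards B rather than away from B.
    static float[] orientMtv(float[] mtv, float[] centerA, float[] centerB) {
        float abX = centerB[0] - centerA[0];
        float abY = centerB[1] - centerA[1];
        float dot = mtv[0] * abX + mtv[1] * abY;
        if (dot > 0) {
            return new float[] { -mtv[0], -mtv[1] };
        }
        return mtv;
    }

    public static void main(String[] args) {
        // The axis happened to point towards B, so the MTV gets flipped.
        float[] oriented = orientMtv(new float[] { 1, 0 },
                                     new float[] { 0, 0 },
                                     new float[] { 5, 0 });
        System.out.println(oriented[0] + ", " + oriented[1]);
    }
}
```

Without a check like this, the resolved shape can be pushed the wrong way on some axes and appear to jump rather than rest against the other shape.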
  7. Seer

    Issue re-arranging equation

    Thanks very much for the help, everyone. The problem was that I had never heard of the notion of reciprocals. After looking into them it all became clear. It really is amazing just how many ways you can manipulate expressions in mathematics. Below are the full workings.

    Our initial equation:

        x = (4 / t) * sin(theta)^2

    To bring t over to the left-hand side, multiply both sides by t. This results in:

        t * x = 4 * sin(theta)^2

    To get rid of the x on the left-hand side, divide both sides by x. This initially results in:

        t = (4 * sin(theta)^2) / x

    This is where I was stuck before, but after looking into reciprocals I understood that the expression could be rewritten as:

        t = 4 * sin(theta)^2 * (1 / x)

    Thanks to the associative and commutative properties of multiplication, this can be rewritten as:

        t = (4 * (1 / x)) * sin(theta)^2

    Which is the same as our end goal:

        t = (4 / x) * sin(theta)^2

    Thanks again everyone, I really appreciate your help.
  8. Seer

    Issue re-arranging equation

    Okay, I understand reordering terms when multiplying, I just didn't know what you meant by associative. However, I don't understand how you got from:

        t = (4 * sin(theta)^2) / x

    to:

        t = 4 * sin(theta)^2 * (1 / x)

    Let's just take the right-hand side. I understand that 4 / x is equal to 4 * (1 / x), because:

        4 * (1 / x) = (4 / 1) * (1 / x) = (4 * 1) / (1 * x) = 4 / x

    However, the right-hand side of the equation isn't 4 / x; if it was, it would already be solved. sin(theta)^2 is part of the numerator. This must be where my knowledge fails, because as I understand it, everything in the numerator must be divided by the denominator. This would mean that both 4 and sin(theta)^2 must each be divided by x. Am I incorrect in thinking that? Can you pick and choose which term in the numerator to divide by the denominator?
  9. Seer

    Issue re-arranging equation

    Would you mind explaining how? Obviously I'm wrong but to my mind t = (4 * sin(theta)^2) / x is the same as t = (4 / x) * (sin(theta)^2 / x), since both 4 and sin(theta)^2 are divided by the x. This makes sense to me, but I'm not making the logical connection between this and the equation. I don't understand. Would you mind explaining further? Maths is not my strong point. It was just recently that I learned how to solve for variables in an equation and to re-arrange equations. It's incredible really, I feel like I have unlocked so many doors with this knowledge. No longer is it a mystery to me how expressions like "y = sin(theta) * radius" are arrived at. I had always understood through SOHCAHTOA that sin(theta) = y / radius, but I never understood how it was that people could re-arrange that to the equation above. It really is a great feeling when you make logical connections in your brain and finally understand what you did not before. Hopefully you can help me to make the connections to demystify my mystified brain on this issue.
  10. Seer

    Issue re-arranging equation

    Ah, sorry, you're right, that's not very clear. It should be:

        x = (4 / t) * sin(theta)^2 is the same as t = (4 / x) * sin(theta)^2

        x = (4 / t) * sin(theta)^2
        t * x = (4 / t) * t * sin(theta)^2      (Multiply both sides by t to bring t over to the LHS)
        t * x = 4 * sin(theta)^2                (The ts on the RHS cancel out)
        (t * x) / x = (4 * sin(theta)^2) / x    (Now divide both sides by x to remove the x from the LHS)
        t = (4 * sin(theta)^2) / x              (This is what I have ended up with)
  11. In the Introduction section of Ian Millington's book "Game Physics Engine Development", he assumes that you have a certain level of mathematical knowledge. An example of what you are expected to understand is that x = 4 / t * sin(theta)^2 is the same as t = 4 / x * sin(theta)^2. I have tried to work this out, but I cannot seem to resolve it at the last step. Here are my workings:

        x = 4 / t * sin(theta)^2
        t * x = 4 / t * t * sin(theta)^2    (Multiply both sides by t to bring t over to the LHS)
        t * x = 4 * sin(theta)^2            (The ts on the RHS cancel out)
        t * x / x = 4 * sin(theta)^2 / x    (Now divide both sides by x to remove the x from the LHS)
        t = 4 * sin(theta)^2 / x            (This is what I have ended up with)

What I have ended up with is the whole of the right-hand side of the equation over x. Ian Millington ends with only the 4 being divided by x. What am I misunderstanding here? My current understanding is that what is done to one side of an equation must be done to the other in order for it to remain true. Therefore, if all of the left-hand side is divided by x, then all of the right-hand side must be divided by x, not just the 4. If that is true, does it not mean that sin(theta)^2 must also be divided by x? If so, how can this be resolved? Thanks. Apologies for the formatting; I cannot figure out how to use the formatter.
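The claimed equivalence can also be sanity-checked numerically. This short sketch (my own, not from the book) plugs arbitrary sample values into both forms and confirms they agree:

```java
public class RearrangementCheck {
    public static void main(String[] args) {
        double theta = 0.7, t = 2.5;   // arbitrary sample values
        double s2 = Math.pow(Math.sin(theta), 2);
        double x = 4 / t * s2;         // original:    x = 4 / t * sin(theta)^2
        double tBack = 4 / x * s2;     // rearranged:  t = 4 / x * sin(theta)^2
        System.out.println(Math.abs(tBack - t) < 1e-9); // prints true
    }
}
```

Substituting x = 4 sin(theta)^2 / t into the rearranged form gives 4 sin(theta)^2 * t / (4 sin(theta)^2) = t, so the round trip recovers t exactly (up to floating-point error).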
  12. What books, tutorials or other resources would you recommend for learning good architecture design for 2D games, where the concepts are well explained with clear implementation details and if possible where an actual game is developed using these design principles, being gradually built up in a step-by-step manner?
  13. Thank you for addressing my questions, frob. However, I am still curious: could you successfully develop games with the scope of those I suggested above, such as Pokemon or Golden Sun or some typical GBA-level 2D RPG, using suboptimal or plain bad design practices, such as having objects refer to resources and having their own render function? Would it be possible? Would it be painful? I am not saying that I would want to wilfully program a game poorly, or advocate doing so if you have the capability not to, but if your skill is such that you don't know how to properly decouple rendering from game objects or implement an event bus (like me), would it be possible?

At this point, what I have mainly taken away from this discussion is to not allow objects to hold references to resources and instead have systems hold the resources, to not give game objects their own render function, and to set up some kind of messaging system using an event bus or observer-like pattern to keep objects decoupled from systems. If all of that is correct, then what I now need is to know how to do all of that. This means having implementation details explained to me. If you don't want to demonstrate how such an architecture would be implemented, I understand, as I'm sure it would be quite a lot to go through. In that case, can you suggest any tutorials, books or other material that go through developing a quality architecture for 2D games of the kind I am interested in?

I would like to stress, if I may, that I am only interested in 2D, not 3D. There may be resources out there which explain and step you through implementing fine architectures for 3D games that are also applicable to 2D games, but I would like the focus to be exclusively on 2D if possible. The reason is that learning these concepts and how to implement them is difficult enough, and I don't want the added complexities of 3D on top of that. I hope that's not being too demanding. Thank you.
  14. At what point is a game no longer considered small enough to implement simplified patterns, such as objects referencing resources? Would games with the scope of GBA-era titles such as Pokemon Emerald, Golden Sun or Dragonball Z: The Legacy of Goku 2 be considered sufficiently small in this regard? It is only up to the scope of games such as these that I am interested in making for now. I have no interest in making 3D games. I care about how best to handle sprite sheets, textures extracted from sprite sheets, animations made using those textures, sound effects, background music and fonts. That's basically it. I have a few questions related to these:

    • Should a given sprite sheet containing sets of animations, such as character animations, have an accompanying data file specifying values such as X position, Y position, width, height, X offset from origin and Y offset from origin for every frame in the sheet, so that you can properly extract the frames with the necessary data when loading the sprite sheet?

    • Should sprite sheets be loaded individually? I ask because I have heard of something called an atlas, which packs sets of individual sprite sheets into one big sprite sheet, and supposedly it is what you should use.

    • To be absolutely clear, if a resource manager is what loads and initially holds the various resources, should it only hold resources in separate containers per type of resource? For example, all textures would go in a container holding only textures, all animations in a container holding only animations, and so on for every type of resource. Then, when a system needs the resources its domain governs, it just takes a flat container of those resources from the resource manager and works from there.

    • With resource loading, generally what I do is load all resources at the start of the program so that they are all available and ready to be used. Only once everything has loaded does the program continue.
For the scope of the kind of game I am interested in making, is this an acceptable approach? If so, at what scope, if any, does it become unacceptable or problematic?

Would you recommend the observer pattern for sending messages from objects to systems? If so, would you recommend implementing it with a kind of one-for-all approach, where every observing system acts on the same function call, such as notify(Event, Object), differing in how they respond based only on the Event enum which is passed? This way, each system's notify() definition would consist of a switch over the Events which are relevant to it. Object would be passed so that any type of data that needs to be sent can be wrapped in an Object and sent that way. Object could be replaced by Data, passing derived Data types instead, I think. If not that, would you recommend using a set of disparate Listener interfaces instead, where the type of Listener can differ, with more pertinent, sculpted function names and signatures for the relevant system? I have not used this type of observer pattern before, as I have never been able to get my head around it, being so used to doing it the way I described above, so if this is how you would recommend implementing the observer pattern, would you mind explaining how?

On the issue of ordering objects for rendering, I can see how a dedicated Renderer system would be good, because it can take care of rendering the objects in a certain order, as specified by whatever value or values are meaningful to you. For example, in 2D games where sprites don't take up an entire tile for themselves and are able to stand behind one another, you would probably want to order the objects by their Y position value, so that those who are "above" are rendered before those who are "below" and appear behind those below them should they enter the same space.
Even in this case, however, the Renderer, once finished ordering the objects, can still simply render them by doing something like:

    for (GameObject gameObject : gameObjects) {
        gameObject.render(...);
    }

In this case the Renderer has properly ordered all the objects and all it then does is tell them to render themselves. Is that still not a good solution?
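For what it's worth, the "disparate Listener interfaces" variant asked about above is often sketched like this (all class and method names here are hypothetical, my own):

```java
import java.util.ArrayList;
import java.util.List;

// Each kind of event gets its own small interface with a sculpted signature,
// instead of one notify(Event, Object) call with a switch inside.
interface DamageListener {
    void onDamage(String targetId, int amount);
}

// The subject knows only the listener interface, not which systems implement it.
class GameObjectSubject {
    private final List<DamageListener> damageListeners = new ArrayList<>();

    void addDamageListener(DamageListener l) {
        damageListeners.add(l);
    }

    void takeDamage(String targetId, int amount) {
        for (DamageListener l : damageListeners) {
            l.onDamage(targetId, amount);
        }
    }
}

// A system implements only the listener interfaces relevant to its domain.
class SoundSystem implements DamageListener {
    String lastPlayed = "";

    public void onDamage(String targetId, int amount) {
        lastPlayed = "hurt.wav"; // would play a sound in a real game
    }
}

public class ObserverDemo {
    public static void main(String[] args) {
        GameObjectSubject subject = new GameObjectSubject();
        SoundSystem sound = new SoundSystem();
        subject.addDamageListener(sound);
        subject.takeDamage("player", 5);
        System.out.println(sound.lastPlayed);
    }
}
```

The point of the sculpted signatures is that each system receives exactly the data it needs with full type safety, with no casting from Object and no switch over an Event enum.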
  15. May I please have some feedback?