About TravisGesslein

  1. Don't use a Sprite class as the representation for your player. Sprites are usually general-purpose 2D elements. Instead, create an "Entity" or "Actor" class which inherits from Sprite or contains a sprite (either way is fine) and use that class to represent your player. That way you can reuse your Sprite class for other drawing-related things too, like the tiles on your map. As to your question:

class World {
    Actor player;

    public void attachPlayer(Actor p) {
        player = p;
        p.belongsTo(this);
    }
}

class Actor {
    private World world;

    public void belongsTo(World w) {
        world = w;
    }
}
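A minimal runnable sketch of the containment variant described above (the Sprite, Actor, and World classes here are illustrative placeholders, not from any particular engine):

```java
// Hypothetical minimal classes showing "Actor contains a Sprite"
// plus the two-way Actor/World attachment from the snippet above.
class Sprite {
    float x, y;                 // position used for drawing
    void draw() { /* render the sprite at (x, y) */ }
}

class World {
    Actor player;

    void attachPlayer(Actor p) {
        player = p;
        p.belongsTo(this);      // back-reference so the actor knows its world
    }
}

class Actor {
    private World world;
    private final Sprite sprite = new Sprite();  // composition instead of inheritance

    void belongsTo(World w) { world = w; }
    World getWorld() { return world; }
    Sprite getSprite() { return sprite; }
}
```

With composition, the same Sprite class can still be used directly for tiles or UI elements, while Actor adds game-specific state on top.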
  2. I'm guessing you currently do something like this:

class GraphicsManager {
    std::vector<Player> players;
    //...
    void takeCareOfGraphicsForNPC(Player *player);
    void drawAll();
    //...
};

This is backwards: the player relies on your graphics device to function, not the other way around. So instead, do it like this:

class Drawable {
    //...
};

class GraphicsManager {
    //...
    void draw(const Drawable &d);
    //...
};

class Player : public Drawable {
    //...
    void draw(const GraphicsManager &graphics);
    //...
};

class PlayerManager {
    std::vector<Player*> players;
    void takeCareOfPlayer(Player *p);
};

and so forth. This is just a rough idea, of course. Player wouldn't inherit directly from Drawable; it might inherit from an Actor class first, and you wouldn't write an entire PlayerManager just to take care of your Player objects, you'd write some kind of generic Actor manager. Of course there are alternative approaches: a Player might not know how to draw itself at all and just contain its data, and the PlayerManager would be responsible for extracting that data, interpreting it, putting it into a Drawable, and handing it over to the GraphicsManager. But that's personal taste imo; it doesn't really change anything in terms of maintenance effort. I can't go into more specifics since this depends on what you're trying to do, but I think you get my point.
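The same inversion sketched in Java (the language most of these posts use); every class name here is illustrative, and the "draw" just records a string so the wiring can be demonstrated without a real graphics device:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: the GraphicsManager knows how to draw Drawables;
// game objects depend on it, not the other way around.
interface Drawable {
    String describe();          // stand-in for real vertex/texture data
}

class GraphicsManager {
    final List<String> drawn = new ArrayList<>();  // records draws for demonstration

    void draw(Drawable d) {
        drawn.add(d.describe()); // a real manager would issue GL calls here
    }
}

class Player implements Drawable {
    public String describe() { return "player"; }
}

class PlayerManager {
    final List<Player> players = new ArrayList<>();

    void drawAll(GraphicsManager graphics) {
        for (Player p : players) graphics.draw(p);
    }
}
```

The point of the design is that GraphicsManager never needs to know what a Player is; anything implementing Drawable can be handed to it.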
  3. What are you trying to do after you've checked the color of a pixel? If you just want to modify its color or alpha value, you should look into pixel shaders.
  4. Hi. I want to use the stencil buffer to draw 2D shadows, but I'm having some problems figuring out how to render things into the stencil buffer in the correct order, and how to set up the glStencilFunc and glStencilOp calls. What I want to achieve: I have several circle-shaped light sources, geometry somewhere on the screen, and then additional geometry for the shadows that I calculate. (In general the shadows work fine, but sometimes they overlap the geometry they are cast from; that shouldn't matter, though, since I never draw them directly and only stencil them.) I'm having immense trouble figuring out how, and in what order, I need to draw things into the stencil buffer. Has one of you implemented something like this already and can give me a general rundown on how to render the scene into the stencil buffer properly?
  5. TravisGesslein

    OpenGL Problems with drawing shadows (2D)

    What else, then? None of the ones that I tried produced different results.
  6. TravisGesslein

    OpenGL Combining two blending functions

    Thanks for the advice! I'm rendering to a texture now; I didn't think it would be that simple.
  7. Hi. The following function (Java) calculates the shadow which a shape casts depending on the position of the light. It isn't required to understand my problem, but I'll post it anyway for completeness:

public Shape calculateShadow(Drawable drawable) {
    Shape shadow = new Shape();
    shadow.setColor(new Color(0, 0, 0));

    HashMap<Integer, Vector2f> required_vertices = new HashMap<Integer, Vector2f>();
    Vector<Vector2f> edge_normals = drawable.getEdgeNormals();
    Vector<Vector2f> vertices = drawable.getVertices();
    int size = vertices.size();

    Vector2f to_light = new Vector2f(0.0f, 0.0f);
    Vector2f vertex;

    for (int i = 0; i < size; ++i) {
        vertex = vertices.get(i);
        to_light.x = position.x - vertex.x;
        to_light.y = position.y - vertex.y;

        if (edge_normals.get(i).dotProduct(to_light) <= 0.0f) {
            required_vertices.put(i, vertex);
            if (i + 1 < size) {
                required_vertices.put(i + 1, vertices.get(i + 1));
            } else {
                required_vertices.put(0, vertices.get(0));
            }
        }
    }

    Collection<Vector2f> c = required_vertices.values();
    Iterator<Vector2f> it = c.iterator();
    while (it.hasNext()) {
        vertex = it.next();
        shadow.addPoint(vertex);
        shadow.addPoint(Vector2f.multiply(Vector2f.subtract(vertex, this.position), 1000.0f));
    }

    for (int i = 0; i < shadow.getVertices().size(); ++i) {
        System.out.println(i + " " + shadow.getVertices().get(i));
    }
    return shadow;
}

To sum it up: the function goes through each edge of a given shape and determines whether it faces away from the light or towards it. If it faces away from the light, the algorithm adds both vertices attached to that edge to the shadow's shape, as well as two additional vertices which are projections of those corner vertices away from the light over a long distance (so that they're certainly offscreen). I'm using a HashMap in there to guarantee that each necessary vertex appears only once in the final shadow shape (it simply makes drawing faster).
Now my problem: the algorithm adds the vertices to the shape simply in the order of their appearance in the Vector<Vector2f> returned by getVertices. That means if it detects that corner vertices 0, 2 and 3 (as well as their projections) are needed for the shadow, they are saved in this order: vertex_0, projection_from_vertex_0, vertex_2, projection_from_vertex_2, vertex_3, projection_from_vertex_3.

I'm using GL_TRIANGLE_STRIP to draw the shadow, but because of the way the strip is then drawn, it sometimes (when the light is to the right of or below the shape) draws the shadow over my shape, which is obviously not supposed to happen. To give you a better understanding of what I mean, here's a super fancy drawing [image missing]: the yellow dot is the light source, the red rectangle is my shape casting the shadow, and the green dots are the vertices added to the shadow's shape. The numbers next to them represent the order in which they are saved inside the shape's Vector<Vector2f>.

Now, you all know how GL_TRIANGLE_STRIP works: it takes the array and draws a triangle from vertices 0,1,2, then from vertices 1,2,3, then 2,3,4, and so on. In the configuration above this results in a perfectly fine shadow, since drawing 0,1,2, then 1,2,3, then 2,3,4, etc. creates a shape that lies entirely outside the red rectangle (sorry for the crappy MS Paint quality) [image missing].

But now in this setup, where the light is to the right of the rectangle [image missing], the vertices are put into the shadow's shape in the same order. When OpenGL renders the thing, it draws triangles 0,1,2 and 1,2,3 (which are fine), but triangles 2,3,4 and a couple of others end up drawn over the rectangle, which is bad [image missing].

How can I fix this problem? One thing you have to take for granted, which I can't explain right now: I can't simply draw the shadows first, for various reasons.
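For reference, the triangle list that GL_TRIANGLE_STRIP produces from an n-vertex array can be enumerated mechanically; a small sketch (not from the original post) makes it easy to check on paper which triangles end up covering the rectangle for a given vertex order:

```java
import java.util.ArrayList;
import java.util.List;

// Enumerates the triangles GL_TRIANGLE_STRIP generates for n vertices:
// triangle k uses indices (k, k+1, k+2).
class StripTriangles {
    static List<int[]> expand(int vertexCount) {
        List<int[]> tris = new ArrayList<>();
        for (int i = 0; i + 2 < vertexCount; ++i) {
            tris.add(new int[] { i, i + 1, i + 2 });
        }
        return tris;
    }
}
```

For the six shadow vertices in the example this yields four triangles, (0,1,2), (1,2,3), (2,3,4), (3,4,5); the later ones are the ones that land on top of the rectangle when the saved vertex order doesn't match the light's position. (The strip's winding also alternates per triangle, which OpenGL compensates for internally.)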
  8. Hello. I have a Sprite class which renders parts of its texture invisible using ordinary blending (note: this is Java, using LWJGL, which is why GL11 is in front of everything):

GL11.glBlendFunc(GL11.GL_SRC_ALPHA, GL11.GL_ONE_MINUS_SRC_ALPHA);

(I'm manually setting the alpha of certain pixels in the texture to 1.0.) However, I'd now like to extend this blending by also somehow factoring in the alpha values which are already stored in the color buffer. To be more precise, I'm rendering an invisible light into the RGBA buffer using only alpha values. It starts with a vertex in the middle of the light, with full light intensity as its alpha value, and draws the points around it in a triangle fan with alpha 0. This way, OpenGL interpolates the alpha values for me, creating a smooth decrease in light intensity from the middle of the light to its edge:

GL11.glBegin(GL11.GL_TRIANGLE_FAN);
GL11.glColor4f(0.0f, 0.0f, 0.0f, m_intensity);
GL11.glVertex3f(m_position.x, m_position.y, m_depth);
GL11.glColor4f(0.0f, 0.0f, 0.0f, 0.0f);
for (int i = 0; i < m_vertices.size(); ++i) {
    Vector2f pos = m_vertices.get(i);
    GL11.glVertex3f(m_position.x + pos.x, m_position.y + pos.y, m_depth);
}
GL11.glEnd();

My lights are drawn before everything else on the screen. I basically fill the buffer with alpha values, then render everything else on top of it using this kind of blending:

GL11.glBlendFunc(GL11.GL_DST_ALPHA, GL11.GL_ONE);

This also works fine: everything drawn where the light is gets brightened according to its distance from the center of the light. However, this way of blending geometry and the way my sprites are rendered now conflict; I can only choose one of the two, but I'd like to do both at the same time. Is there a way to combine blending functions?
I guess it would also be possible to render the sprite's quad twice: once with ordinary blending to mask out certain parts of the texture, and then again with DST_ALPHA blending to add the light intensities to the color. But that sounds really painful, especially since immediate-mode draw calls with glBegin/glEnd are limited (as far as I know, even top-notch modern cards can't handle more than a couple of thousand per frame at 60 fps, due to all the overhead). This would really help me out a lot, thank you.
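For intuition, the per-channel arithmetic behind the two blend modes described above can be written out; this is just the standard OpenGL blend equation (result = src * srcFactor + dst * dstFactor), not code from the post:

```java
// Per-channel blend math for the two modes used above.
// GL_SRC_ALPHA / GL_ONE_MINUS_SRC_ALPHA (the sprite pass):
//   out = src * srcAlpha + dst * (1 - srcAlpha)
// GL_DST_ALPHA / GL_ONE (the light pass):
//   out = src * dstAlpha + dst
class BlendMath {
    static float srcAlphaBlend(float src, float dst, float srcAlpha) {
        return src * srcAlpha + dst * (1.0f - srcAlpha);
    }

    static float dstAlphaAdditive(float src, float dst, float dstAlpha) {
        return src * dstAlpha + dst;  // brightens dst in proportion to stored alpha
    }
}
```

With the destination alpha interpolated from m_intensity at the light's center down to 0 at its edge, dstAlphaAdditive adds more of the incoming color the closer a fragment is to the center, which matches the brightening behaviour described in the post.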