Fingers_

Members
  • Content count: 688

Community Reputation: 410 Neutral

About Fingers_

  • Rank: Advanced Member
  1. VBO based GUI

    I would recommend vertex arrays as the first step. They're simpler to deal with than VBOs (because you're just keeping an array of vertex data in RAM) and should be available in all flavors of OpenGL, including ES. The performance differences between the different methods are insignificant.

    In my current GUI code, I use various "draw sprite" commands that output into several vertex arrays, one array per texture. The list of vertex arrays is sorted so that text is on top and window borders are on the bottom. The window border graphics are in an atlas with the default widget graphics, so they end up in the same array (but widgets are added after the window frame, so they draw on top).

    I didn't have ordering problems from custom graphics; the main sorting issue was that drawing new windows on top of previously drawn things could add the new polygons to an existing array, which could be drawn before the new window's frame or widgets. So now I "freeze" the vertex arrays when drawing a new window and create new arrays, so that in the end they're all drawn in the correct order. (I use the painter's algorithm because things can be translucent.)

    For example, the list of vertex arrays might look like this (back to front): (Wnd1 border+widget frames) (Wnd1 custom widgets) (Wnd1 text) (Wnd2 border+widget frames) (Wnd2 custom widgets) (Wnd2 text)
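    For illustration, here's a minimal sketch of that batching scheme, using legacy OpenGL client-side vertex arrays. The struct and function names are made up for the example, and it's untested:

    ```cpp
    #include <vector>
    #include <GL/gl.h>

    struct Vertex { float x, y, u, v; unsigned char r, g, b, a; };

    // One client-side vertex array per texture, kept in back-to-front order.
    struct Batch {
        GLuint texture;
        bool frozen;                  // frozen batches no longer accept sprites
        std::vector<Vertex> verts;
    };

    std::vector<Batch> batches;

    // Append a quad to the last unfrozen batch with this texture,
    // or start a new batch at the end (i.e. on top).
    void AddSprite(GLuint tex, const Vertex quad[4]) {
        for (Batch& b : batches)
            if (!b.frozen && b.texture == tex) {
                b.verts.insert(b.verts.end(), quad, quad + 4);
                return;
            }
        batches.push_back(Batch{tex, false, std::vector<Vertex>(quad, quad + 4)});
    }

    // Call when a new window starts drawing, so none of its polygons can
    // land in an array that gets drawn underneath earlier windows.
    void FreezeBatches() {
        for (Batch& b : batches) b.frozen = true;
    }

    // Painter's algorithm: draw the arrays in list order, back to front.
    void DrawBatches() {
        glEnableClientState(GL_VERTEX_ARRAY);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        glEnableClientState(GL_COLOR_ARRAY);
        for (const Batch& b : batches) {
            glBindTexture(GL_TEXTURE_2D, b.texture);
            glVertexPointer(2, GL_FLOAT, sizeof(Vertex), &b.verts[0].x);
            glTexCoordPointer(2, GL_FLOAT, sizeof(Vertex), &b.verts[0].u);
            glColorPointer(4, GL_UNSIGNED_BYTE, sizeof(Vertex), &b.verts[0].r);
            glDrawArrays(GL_QUADS, 0, (GLsizei)b.verts.size());
        }
        batches.clear();              // start fresh next frame
    }
    ```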
  2. OpenGL Sorting for transparency

    Quote: Original post by way2lazy2care
        Quote: Original post by Atrix256
            Quote: tried this and it did not work. if objects in front are rendered first, objects behind still don't show through.
        Are you sure you turned off Z writes? (but left on Z reads) cause seriously... this should work, and i've done it before many times. the only reason they wouldnt show through is because the Z test failed - which means that z writes are still on :/
    I'm stupid. I apparently was very tired and disabled blending instead of the depth writing. I think it works now :D I'll let you all know if I run into any trouble later.

    You didn't mention what kind of blending you're using, but unless it's pure additive (GL_ONE, GL_ONE) blending for all the transparent objects, you will still need to draw them back to front for it to look correct. Otherwise, if a further-away object is drawn after a nearby object, its appearance will not be properly affected by being seen through the foreground object.
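    For reference, the usual two-pass setup looks something like this (a rough sketch; the Object struct and Draw() stand in for whatever you already have):

    ```cpp
    #include <algorithm>
    #include <vector>
    #include <GL/gl.h>

    struct Object { float distToCamera; /* ... */ };
    void Draw(Object* o);  // your existing per-object draw, defined elsewhere

    void DrawScene(std::vector<Object*>& opaque, std::vector<Object*>& transparent) {
        // Opaque pass: depth reads and writes on, no blending.
        glEnable(GL_DEPTH_TEST);
        glDepthMask(GL_TRUE);
        glDisable(GL_BLEND);
        for (Object* o : opaque) Draw(o);

        // Transparent pass: keep depth reads, turn off depth writes,
        // and draw back to front so the blending composites correctly.
        std::sort(transparent.begin(), transparent.end(),
                  [](const Object* a, const Object* b) {
                      return a->distToCamera > b->distToCamera;
                  });
        glDepthMask(GL_FALSE);
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        for (Object* o : transparent) Draw(o);
        glDepthMask(GL_TRUE);     // restore for the next frame
    }
    ```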
  3. Looking for a 3D model format

    Blender's .obj exporter is actually nice: it lets you specify exactly what information you want to export. Assuming you've created a model that has per-vertex normals, just make sure the "normals" button is pressed in the dialog that pops up when you export to "wavefront .obj".
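    For reference, a face exported with normals looks roughly like this in the .obj text; the "vn" lines only appear if that button is pressed, and the "f" entries then reference them as vertex//normal (or vertex/uv/normal if you also export UVs):

    ```
    v 1.0 0.0 0.0
    v 0.0 1.0 0.0
    v 0.0 0.0 1.0
    vn 0.577 0.577 0.577
    f 1//1 2//1 3//1
    ```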
  4. Yaw + Pitch = Roll

    This is correct, desirable behavior. A real spaceship or plane will do the same thing. Any attempt to "fix" it with Euler angles, etc., will result in unrealistic and annoying gimbal-lock behavior. Consider the case of the plane pitching up 90 degrees, then yawing right 90 degrees. Model it by holding a miniature plane in your hand if you'd like. The only realistic result is that the plane ends up rolled 90 degrees right and pointed horizontally to the right. If you used absolute yaw/pitch angles, changing the yaw angle when pointed up would result in the plane rolling rather than yawing from the pilot's point of view, making it feel like the plane is "stuck" pointing upward.
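    If you want to see this in code, here's a bare-bones sketch that composes incremental rotations in the ship's own frame with quaternions (the sign conventions are arbitrary and the code is untested):

    ```cpp
    #include <cmath>

    // Minimal quaternion, just enough to demonstrate local-axis rotation.
    struct Quat {
        float w, x, y, z;
        Quat operator*(const Quat& q) const {
            return { w*q.w - x*q.x - y*q.y - z*q.z,
                     w*q.x + x*q.w + y*q.z - z*q.y,
                     w*q.y - x*q.z + y*q.w + z*q.x,
                     w*q.z + x*q.y - y*q.x + z*q.w };
        }
    };

    Quat AxisAngle(float ax, float ay, float az, float radians) {
        float s = std::sin(radians * 0.5f);
        return { std::cos(radians * 0.5f), ax * s, ay * s, az * s };
    }

    // orientation * localRotation applies the rotation about the ship's
    // own axes. Pitch up 90 degrees, then yaw right 90 degrees, and the
    // ship ends up rolled 90 degrees relative to the world -- exactly
    // the behavior described above, with no gimbal lock anywhere.
    Quat orientation = {1, 0, 0, 0};

    void PitchUp(float a)  { orientation = orientation * AxisAngle(1, 0, 0, -a); }
    void YawRight(float a) { orientation = orientation * AxisAngle(0, 1, 0, -a); }
    ```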
  5. Defining a "nebula"

    Your example implementation produces a fuzzy sphere that can be used as a building block for nebulae, although you may want to modify the attenuation function so that the center isn't a sharp point.

    To build a nebula, you start by making one such "node" and then add more in random locations around it. Reject locations that are too close to an existing node (it's unnecessary to add density there) or too far from the nearest neighbor (you want the gradients around the nodes to "merge"). The nebular density at a point in space is the sum of the influence of all nodes that overlap the point.

    In a 2D game you may be able to just "draw" all the nodes into a density map, which makes it very easy to use in-game. My old game "Strange Adventures in Infinite Space" (now available for free) did just that. I used several different image-based density maps for the nebula nodes to make interesting features like swirls and bubbles in the nebula. It's rather like a 2D "metaballs" system. In the sequel, "Weird Worlds: Return to Infinite Space", I computed a lower-resolution density map on the fly, because the player can modify the nebula and the world size is variable.
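    A rough sketch of the node-placement and density-summing idea (the constants and the falloff curve are arbitrary choices for the example, not what the games above actually used):

    ```cpp
    #include <cmath>
    #include <cstdlib>
    #include <vector>

    struct Node { float x, y, radius; };

    // Influence of one node at (px,py): smooth falloff that is flat-ish
    // near the center, so the middle isn't a sharp point.
    float Influence(const Node& n, float px, float py) {
        float d = std::hypot(px - n.x, py - n.y) / n.radius;
        if (d >= 1.0f) return 0.0f;
        return (1.0f - d * d) * (1.0f - d * d);   // zero at the edge
    }

    // Density at a point is the sum over all overlapping nodes.
    float Density(const std::vector<Node>& nodes, float px, float py) {
        float sum = 0.0f;
        for (const Node& n : nodes) sum += Influence(n, px, py);
        return sum;
    }

    // Grow a nebula: place candidates between minDist and maxDist from a
    // random existing node, and reject ones too close to any other node.
    void GrowNebula(std::vector<Node>& nodes, int count,
                    float minDist, float maxDist, float radius) {
        nodes.push_back({0, 0, radius});          // seed node
        while ((int)nodes.size() < count) {
            const Node& base = nodes[std::rand() % nodes.size()];
            float ang = 6.2831853f * std::rand() / RAND_MAX;
            float len = minDist + (maxDist - minDist) * std::rand() / RAND_MAX;
            float cx = base.x + std::cos(ang) * len;
            float cy = base.y + std::sin(ang) * len;
            bool ok = true;
            for (const Node& n : nodes)
                if (std::hypot(cx - n.x, cy - n.y) < minDist) { ok = false; break; }
            if (ok) nodes.push_back({cx, cy, radius});
        }
    }
    ```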
  6. 3d point sprites

    Now, I have never used DX10 or geometry shaders, but I pasted "Can you convert a point list into a sprite list using the geometry shader with dx10?" into Google. The first hit contains the following sentence by someone from Microsoft discussing DX10 geometry shaders: "We can do things like take a point and generate a set of triangles around that point and expand it into a sprite." Are you saying that you did not do this 10-second action during the several hours between your two posts? Why?
  7. engine for big big map

    Rendering it like a Quake-derived engine (assuming you can fit it in RAM, etc.) wouldn't be the problem, because you're only rendering a small subset of the world at any one time. Your level editor is more likely to choke on it, because it'll probably show more of the level at once.

    A more subtle but serious limitation you'd run into is what the size of the world does to floating-point accuracy. As you get farther from the origin, floating-point coordinates get more and more "quantized", which results in strange artifacts. To keep this under control, the Quake engine limited world coordinates to +/- 4096 units, i.e. a cube about 200 meters across. Source allows world sizes up to 800 m (+/- 16384 units), which is probably very close to the threshold where the errors become visible. This is sizable for a first-person shooter but not huge in absolute terms, and it won't be enough to hold a level as large as you're describing.

    One solution to this world-size issue is using local coordinate systems within sections of the level. There's a very nice paper on the Dungeon Siege engine that describes how they created a continuous, detailed world of immense size like you're planning to do. I could only find a link to a PowerPoint presentation of it in a hurry, but a bit of googling should find the more detailed article. It also touches on other helpful subjects like streaming (only keeping the part of the world in memory that the player can see/interact with).
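    To make the quantization concrete, here's a tiny program that prints the spacing between adjacent representable floats at various distances from the origin, plus one hypothetical way to split an absolute position into a sector index and a small local offset:

    ```cpp
    #include <cmath>
    #include <cstdio>

    int main() {
        // The float "grid step" grows with distance from the origin:
        const float xs[] = {1.0f, 4096.0f, 16384.0f, 1048576.0f};
        for (float x : xs)
            std::printf("near %10.0f the grid step is %g units\n",
                        x, std::nextafterf(x, 2 * x) - x);

        // One way out: store positions as (sector, local offset) and only
        // hand the renderer/physics small local coordinates.
        const float SECTOR = 4096.0f;     // hypothetical sector size
        double worldX = 1234567.25;       // absolute position, high precision
        int   sector = (int)std::floor(worldX / SECTOR);
        float localX = (float)(worldX - sector * (double)SECTOR);
        std::printf("sector %d, local %.2f\n", sector, localX);
        return 0;
    }
    ```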
  8. Verlet constraint confusion

    Jakobsen is important in the sense that he introduced a huge number of game programmers to a technique that (a) requires no background in physics to understand and (b) with little work results in a stable and usable "physics simulation" that may not be realistic but looks good enough in a game. There's a large number of indie games that would not exist without the popularization of Verlet integration. Gish, World of Goo, and my own Soup du Jour all build on Jakobsen. After reading his article I had my first soft body physics system running in a couple of hours. It felt downright liberating to see an object stably resting on another one without me telling it to stop.
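    For anyone curious, the core of a Jakobsen-style system really is tiny. A rough, untested sketch:

    ```cpp
    #include <cmath>
    #include <vector>

    struct Particle { float x, y, oldx, oldy; };
    struct Stick    { int a, b; float rest; };   // distance constraint

    // Position Verlet: velocity is implicit in (pos - oldpos).
    void Integrate(std::vector<Particle>& p, float gravity, float dt) {
        for (Particle& q : p) {
            float vx = q.x - q.oldx, vy = q.y - q.oldy;
            q.oldx = q.x; q.oldy = q.y;
            q.x += vx;
            q.y += vy + gravity * dt * dt;
        }
    }

    // Jakobsen-style relaxation: repeatedly nudge particle pairs back
    // toward their rest distance. A handful of iterations is plenty.
    void Relax(std::vector<Particle>& p, const std::vector<Stick>& sticks,
               int iterations) {
        for (int i = 0; i < iterations; ++i)
            for (const Stick& s : sticks) {
                Particle& a = p[s.a];
                Particle& b = p[s.b];
                float dx = b.x - a.x, dy = b.y - a.y;
                float d = std::sqrt(dx * dx + dy * dy);
                if (d == 0.0f) continue;
                float push = 0.5f * (d - s.rest) / d;
                a.x += dx * push; a.y += dy * push;
                b.x -= dx * push; b.y -= dy * push;
            }
    }
    ```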
  9. I included basic joystick/gamepad support in Brainpipe, also written in SDL. This is more or less the bare minimum you'll need:

    1. (Using a mouse interface) the user picks the joystick/pad from a list.
    2. Before setting up anything else, the player must press a "fire" button so you can record it and use it for the rest of the setup.
    3. When setting up a joystick axis, ask the user to turn the stick left and press fire. Store the entire state of the joystick object. Ask him to turn the stick right and press fire. Store the state again. Then compare the two stored states and find which axis (or "hat") changed the most between the two button presses, and in which direction. (Continue to select the up/down axis the same way... I did them separately, but in retrospect probably shouldn't have, since you always need both.)

    My game saves the joystick configuration in a file named after the joystick... So if you use the same joystick again you won't have to remap it, and you can plug in different joysticks without overwriting the old configuration in case you want to switch back later.
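    As a sketch of step 3, comparing the two stored states might look like this with the SDL joystick API (error handling and hat support omitted for brevity):

    ```cpp
    #include <SDL.h>
    #include <cstdlib>
    #include <vector>

    // Snapshot of every axis on the stick.
    std::vector<Sint16> CaptureAxes(SDL_Joystick* joy) {
        SDL_JoystickUpdate();
        std::vector<Sint16> state(SDL_JoystickNumAxes(joy));
        for (int i = 0; i < (int)state.size(); ++i)
            state[i] = SDL_JoystickGetAxis(joy, i);
        return state;
    }

    // Given the snapshots taken at "hold left + fire" and "hold right +
    // fire", report the axis that moved the most and its direction.
    void FindAxis(const std::vector<Sint16>& left,
                  const std::vector<Sint16>& right,
                  int* axis, int* direction) {
        int best = 0;
        *axis = -1;
        for (int i = 0; i < (int)left.size(); ++i) {
            int delta = (int)right[i] - (int)left[i];
            if (std::abs(delta) > best) {
                best = std::abs(delta);
                *axis = i;
                *direction = (delta > 0) ? +1 : -1;
            }
        }
    }
    ```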
  10. OpenGL problem with 2D zoom

    If you're using texture clamping (GL_CLAMP), try using GL_CLAMP_TO_EDGE instead. This will prevent the border color from showing up.
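    That's a per-texture setting, e.g.:

    ```cpp
    glBindTexture(GL_TEXTURE_2D, tex);   // tex = your texture handle
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    ```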
  11. If you aren't using the stencil buffer for anything else, you may be able to use it for this purpose. Basically, draw your debug layer elements with depth check disabled and writing to the stencil buffer. When you're drawing the game graphics, use stencil check to make sure you're not overwriting the debug layer. (But yeah, if you have a GUI system it's probably best to just make this stuff GUI objects)
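    Roughly like this (DrawDebugLayer/DrawGame stand in for your own drawing code):

    ```cpp
    // Clear the stencil once per frame along with color/depth.
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);

    // Pass 1: debug layer, ignoring depth, marking the stencil.
    glEnable(GL_STENCIL_TEST);
    glDisable(GL_DEPTH_TEST);
    glStencilFunc(GL_ALWAYS, 1, 0xFF);          // always pass...
    glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);  // ...and write 1 where we draw
    DrawDebugLayer();

    // Pass 2: game graphics only where the stencil is still 0,
    // so the debug layer never gets overwritten.
    glEnable(GL_DEPTH_TEST);
    glStencilFunc(GL_EQUAL, 0, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    DrawGame();
    ```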
  12. Usually, if you have performance problems with 2D sprites, the two most likely causes are fill rate and state changes.

    If you have a ton of large sprites and are running at a high resolution, each one of the million-plus pixels on the screen may have dozens of writes per frame, and this will bog down the video card. To test whether this is the cause, try running at a low resolution; if the frame rate improves, then fill rate is the problem. In particle systems, it's common to see textures where the particle occupies a small area in the middle and most of the sprite is empty space. This empty space is still blended into the frame and eats up fill rate. If you can make the polygon half the width/height, you'll reduce fill rate usage by 75%.

    State changes are things like changing the texture or shader, which are time-consuming operations on the video card. A common beginner mistake in sprite rendering is to set the texture for each sprite and draw them one by one. If you set the texture/shader once and then draw every sprite that uses the same texture/shader in one batch, you can get a dramatic performance improvement. Modern video cards are optimized to display 3D models with thousands of polygons each, so the best performance in 2D is found by batching thousands of sprites and drawing them all at once. If you're using OpenGL immediate mode, switch to vertex arrays or VBOs. (IIRC, OpenGL ES on the iPhone doesn't even have immediate mode.)
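    A minimal example of the batching idea (untested; legacy GL with client-side vertex arrays, Sprite/Vert are made up for the example):

    ```cpp
    #include <algorithm>
    #include <vector>
    #include <GL/gl.h>

    struct Sprite { GLuint texture; float x, y, w, h; };
    struct Vert   { float x, y, u, v; };

    void DrawSprites(std::vector<Sprite>& sprites) {
        // Group state changes: sort by texture, then emit one vertex
        // array (and one glBindTexture) per texture, not per sprite.
        std::sort(sprites.begin(), sprites.end(),
                  [](const Sprite& a, const Sprite& b) { return a.texture < b.texture; });

        std::vector<Vert> verts;
        glEnableClientState(GL_VERTEX_ARRAY);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        for (size_t i = 0; i < sprites.size(); ) {
            GLuint tex = sprites[i].texture;
            verts.clear();
            for (; i < sprites.size() && sprites[i].texture == tex; ++i) {
                const Sprite& s = sprites[i];
                verts.push_back({s.x,       s.y,       0, 0});
                verts.push_back({s.x + s.w, s.y,       1, 0});
                verts.push_back({s.x + s.w, s.y + s.h, 1, 1});
                verts.push_back({s.x,       s.y + s.h, 0, 1});
            }
            glBindTexture(GL_TEXTURE_2D, tex);
            glVertexPointer(2, GL_FLOAT, sizeof(Vert), &verts[0].x);
            glTexCoordPointer(2, GL_FLOAT, sizeof(Vert), &verts[0].u);
            glDrawArrays(GL_QUADS, 0, (GLsizei)verts.size());
        }
    }
    ```

    Note that sorting by texture changes the draw order, so this is for sprites whose overlap order doesn't matter, or that are already grouped by layer.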
  13. What's the format and/or what's the game it's from? Most anything is documented somewhere on the Internet, but you need to be less vague about what you're looking for. Are you certain that the data is just xyz plus a mystery parameter? Could it be eight bytes with one byte per parameter for position, normal and 2d texture coordinates? I've seen one byte values used before, e.g. older id software engines stored a vertex in four bytes (xyz and a normal index).
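    As an illustration of how tight those old formats are, a four-byte vertex in that style could be declared like this (field names are mine; in Quake's MDL format the scale/translate used for decoding live in the model header):

    ```cpp
    #include <cstdint>

    // Four bytes per vertex: quantized position plus an index into a
    // shared table of precomputed unit normals.
    struct PackedVertex {
        uint8_t x, y, z;        // position, scaled into the model's bounding box
        uint8_t normalIndex;    // lookup into a table of unit normals
    };

    // Decoding (scale/translate come from the model header):
    // float wx = header.scale[0] * v.x + header.translate[0]; ...
    ```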
  14. Air flow caused by fans?

    The "fake" method will also make it more intuitive for the player because the results are far more predictable than if you used realistic fluid simulation.
  15. Are you sure you don't have a "glEnable(GL_LINE_SMOOTH);" somewhere? Have you tried simply disabling it? Also, why do you enable blending and use the glBlendFunc that's usually used with GL_LINE_SMOOTH? If you're just trying to draw opaque white lines, you should disable blending altogether.
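    In other words, for plain opaque lines:

    ```cpp
    glDisable(GL_LINE_SMOOTH);
    glDisable(GL_BLEND);
    glColor3f(1.0f, 1.0f, 1.0f);   // opaque white
    glBegin(GL_LINES);
    glVertex2f(0.0f, 0.0f);        // example endpoints
    glVertex2f(100.0f, 100.0f);
    glEnd();
    ```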