plainoldcj

Members
  • Content count: 51

Community Reputation

1068 Excellent

About plainoldcj

  • Rank: Member
  1. Hey, maybe reading this helps with your skewed texture problem: [url=http://www.xyzw.us/~cass/qcoord/]the q coordinate[/url]
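    Roughly, the idea behind that link: when a trapezoid is textured as two triangles, linearly interpolated 2D texture coordinates skew the image along the diagonal. Supplying 4-component coordinates of the form (s*q, t*q, 0, q), with q proportional to the trapezoid's width at each vertex, makes the per-fragment divide by q undo the skew. A minimal legacy-OpenGL sketch (the trapezoid and q values are made-up example numbers):

        #include <GL/gl.h>

        // Assumes a current GL context with a texture bound and texturing enabled.
        // The bottom edge is twice as wide as the top edge, so q = 2 at the bottom
        // vertices and q = 1 at the top vertices.
        void DrawTrapezoid()
        {
            const float q = 2.0f;
            glBegin(GL_QUADS);
            glTexCoord4f(0.0f, 0.0f, 0.0f, q);    glVertex2f(-2.0f, -1.0f); // bottom left
            glTexCoord4f(q,    0.0f, 0.0f, q);    glVertex2f( 2.0f, -1.0f); // bottom right
            glTexCoord4f(1.0f, 1.0f, 0.0f, 1.0f); glVertex2f( 1.0f,  1.0f); // top right
            glTexCoord4f(0.0f, 1.0f, 0.0f, 1.0f); glVertex2f(-1.0f,  1.0f); // top left
            glEnd();
        }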
  2. Okay, here's my code: view on github. I'm using the graph data structure and Voronoi algorithm of the LEDA library. The LEDA API is actually pretty good, so you should have no trouble reading the code, even if you are unfamiliar with the library. The output of LEDA's Voronoi algorithm is a planar graph. Every edge is labeled with the site on its left side, and every proper node is labeled with CIRCLE(a, b, c), where a, b, c are the sites closest to that node. The target nodes of unbounded edges are nodes "at infinity" that store degenerate circles. More information here.

    The idea for bounding the Voronoi diagram, i.e., intersecting it with a quad, is as follows. First, assume that we only want to get rid of unbounded regions. Then we can choose the quad so large that it only intersects unbounded edges of the Voronoi diagram. The basic idea is to traverse all unbounded edges and move their target nodes from infinity to the border of the quad. So, for each unbounded edge you compute the intersection point with the quad. That's pretty cheap, because the quad consists of only four segments. Also, you don't need to create new nodes or anything like that.

    You can efficiently find the next unbounded edge (in counter-clockwise order) by following the face cycle of the unbounded "backface" region. While traversing, you can simply connect the target nodes of two adjacent unbounded edges with a new edge. In addition, you have to add the four "corner nodes" of the bounding quad. In essence, I check for every pair of adjacent edges whether there is a quad corner between them, and if so, I add it. Because I traverse the backface in strictly counter-clockwise order, the quad corners are added in that order, too, so at any time there is only one candidate corner node I have to test.

    For my particular application, I needed to bound the Voronoi diagram with a fixed-size quad, which might intersect edges of proper regions (I wanted to render a fixed-size section of the diagram). Therefore, the code first loops over all nodes and edges. Every edge that intersects the quad is subdivided into two new edges: one new edge lies outside the quad and can be discarded; the other is treated like an additional unbounded edge. I implemented this algorithm because a naive approach using general segment-intersection algorithms turned out to be too slow.
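    To illustrate just the clipping step (this is not the LEDA-based code from the link; the types and the helper are made up), here is a small standalone C++ sketch that intersects one unbounded edge, given as a ray from its proper source node, with the bounding quad. The face-cycle traversal and the insertion of the four corner nodes are omitted:

        #include <array>
        #include <cmath>
        #include <cstdio>

        struct Vec2 { double x, y; };

        // Axis-aligned bounding quad given by its min/max corners.
        struct Quad { Vec2 min, max; };

        // Intersect the ray p + t*d (t >= 0) with the quad boundary and return the
        // first hit. For a source node inside the quad, the smallest positive t over
        // the four boundary lines is the exit point on the quad.
        static bool ClipRayAgainstQuad(const Vec2& p, const Vec2& d, const Quad& q, Vec2* hit)
        {
            double tBest = INFINITY;
            const double planes[4][3] = {   // (a, b, c) with a*x + b*y = c
                { 1, 0, q.min.x }, { 1, 0, q.max.x },
                { 0, 1, q.min.y }, { 0, 1, q.max.y },
            };
            for (const auto& pl : planes) {
                double denom = pl[0] * d.x + pl[1] * d.y;
                if (std::fabs(denom) < 1e-12) continue;  // ray parallel to this side
                double t = (pl[2] - pl[0] * p.x - pl[1] * p.y) / denom;
                if (t > 0.0 && t < tBest) tBest = t;
            }
            if (!std::isfinite(tBest)) return false;
            *hit = { p.x + tBest * d.x, p.y + tBest * d.y };
            return true;
        }

        int main()
        {
            Quad quad{ { -10, -10 }, { 10, 10 } };
            Vec2 source{ 0, 0 }, direction{ 1, 0.5 };    // unbounded edge: source node + direction
            Vec2 hit;
            if (ClipRayAgainstQuad(source, direction, quad, &hit))
                std::printf("new target node at (%g, %g)\n", hit.x, hit.y);
        }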
  3. I'm not entirely sure I understood your question correctly, but to me it sounds like you want a "bounded Voronoi diagram". Or, put differently, you want the intersection of the Voronoi diagram and a rectangle. If that's what you want, I'll gladly tell you how I've done it in one of my projects.
  4. Okay, I'm tired and maybe I'm missing something. However, it appears that you simply adopt the orientation of firePoint for the newly instantiated game object. I assume firePoint is the tip of the gun and it's a child of the player game object. So, I'd say you turn the player towards the mouse, the firePoint turns accordingly, and then you instantiate the bullet with that same orientation. What you want instead is to shoot the bullet in the direction (mousePos - firePoint.pos). If I recall the Unity API correctly, you could simply assign that direction to transform.forward. Alternatively, there is a method Quaternion.LookRotation.
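    In case it helps, here is the underlying vector math as a tiny standalone C++ sketch (plain math only, not Unity code; the positions are made-up examples):

        #include <cmath>
        #include <cstdio>

        struct Vec2 { float x, y; };

        int main()
        {
            // Hypothetical positions standing in for mousePos and firePoint.pos.
            Vec2 mousePos  = { 10.0f, 4.0f };
            Vec2 firePoint = {  2.0f, 1.0f };

            // Aim direction is target minus origin, not the gun's current orientation.
            Vec2 dir = { mousePos.x - firePoint.x, mousePos.y - firePoint.y };
            float len = std::sqrt(dir.x * dir.x + dir.y * dir.y);
            dir.x /= len;
            dir.y /= len;

            // In a 2D game this is the angle you would rotate the bullet by.
            float angleRad = std::atan2(dir.y, dir.x);
            std::printf("dir = (%g, %g), angle = %g rad\n", dir.x, dir.y, angleRad);
        }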
  5. Give me your Java and Python code

    Hey guys, great links, thanks! I find pygame.org especially useful, because it shows screenshots of the games.
  6. Hey, at the moment I'm heavily procrastinating, so I thought it's a good time to refresh my language knowledge. Therefore, I'm asking if one of you guys has a small game (maybe around 10k LOC) written in Java or Python that I can tinker with. Ideally, the game has only a few library dependencies. Also, the code should be "Java-ish" or "Python-ish" (as opposed to translated from, say, C), so I get to learn common idioms and standard libraries. Open-source games are fine, too. So, any suggestions?
  7. That's interesting, I haven't thought about it this way before. I don't know much about SC2, but I've played WC3 a lot (a lot). In WC3, there are research queues. Both games focus on micro-management of single units or small groups of units. That's a design decision that forces you to have little automation (including AI) of movement and casting, because that's how players compete. I like the idea that most of what happens is caused by direct player input. I agree that the player has to carry out a lot of tasks, but I don't think the designers want to keep the player busy clicking. The assignment of drones to minerals, for example, has been automated in SC2, simply because it's a boring task with no consequence for the outcome of a match.
  8. When to use pointers or objects in 'Composition'?

    I prefer method 2 for composition, although I sometimes use method 3, for the following reasons:
    1. Using pointers, you can forward-declare class Graphics in the header to reduce dependencies, which in turn reduces compile times when you modify class Graphics.
    2. Using pointers, you can put all the initialize() work in the constructor and you don't have to worry about your object being in an invalid state (in cases where you can't use initializer lists).
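    For illustration, a minimal sketch of the forward-declaration point from reason 1 (Game is a placeholder class name, Graphics is the class mentioned above):

        // Game.h
        class Graphics;                // forward declaration, no #include needed here

        class Game {
        public:
            Game();
            ~Game();
        private:
            Graphics* m_graphics;      // composition via pointer
        };

        // Game.cpp
        #include "Game.h"
        #include "Graphics.h"          // the full definition is only needed in this file

        Game::Game() : m_graphics(new Graphics()) { }   // fully initialized in the constructor
        Game::~Game() { delete m_graphics; }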
  9. It sounds like the weapon is rotated around its center and then translated, so maybe the order of transformations is the problem; you could try changing translate * rotate to rotate * translate.

    In general, you want the weapon to have a fixed position in camera space. Therefore, the easiest way is to not transform the weapon at all.

    If you transform all your objects by the same "world" matrix, however, you want to give the weapon a local transformation that cancels this effect, that is, the weapon matrix is the inverse of the world matrix. Try using a standard Invert() method first and see if it gives the desired results. After that you can try to replace the Invert() call with a product of translation and rotation matrices.
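    A rough sketch of the "cancel the camera transform" idea, using GLM here purely as an example math library (the offset values are arbitrary):

        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_transform.hpp>

        // Returns the model matrix for a first-person weapon that should stay fixed
        // relative to the camera. 'view' is the camera/view matrix applied to every
        // object; 'localOffset' places the weapon in front of the camera.
        glm::mat4 WeaponModelMatrix(const glm::mat4& view)
        {
            // Offset in camera space: a bit to the right, down, and in front.
            glm::mat4 localOffset =
                glm::translate(glm::mat4(1.0f), glm::vec3(0.3f, -0.25f, -0.8f));

            // inverse(view) cancels the camera transform, so after the usual
            // view * model multiplication the weapon ends up at localOffset in
            // camera space, no matter where the camera moves or looks.
            return glm::inverse(view) * localOffset;
        }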
  10. Maybe it's already sufficient for your application to sort the individual triangles back-to-front. You can store the vertices statically on the GPU and issue a sorted list of indices every frame, for example. It's not too bad.

    If you really need perfect transparency you may consider depth peeling. It's a multipass technique and runs on older hardware.

    The cheapest trick is to draw in any order and tell everyone it's correct. Most people don't notice alpha-blending errors anyway :)
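    A small sketch of the per-triangle sort (the data layout is made up; depth here is the squared distance of the triangle centroid from the camera):

        #include <algorithm>
        #include <cstdint>
        #include <vector>

        struct Vec3 { float x, y, z; };

        // Sorts triangle indices back-to-front so blended triangles can be drawn in
        // that order each frame; the vertex buffer itself never changes.
        void SortTrianglesBackToFront(const std::vector<Vec3>& positions,
                                      std::vector<uint32_t>& indices,   // 3 indices per triangle
                                      const Vec3& cameraPos)
        {
            struct Tri { uint32_t i0, i1, i2; float depth; };
            std::vector<Tri> tris;
            tris.reserve(indices.size() / 3);

            for (size_t i = 0; i + 2 < indices.size(); i += 3) {
                const Vec3& a = positions[indices[i]];
                const Vec3& b = positions[indices[i + 1]];
                const Vec3& c = positions[indices[i + 2]];
                Vec3 centroid = { (a.x + b.x + c.x) / 3.0f,
                                  (a.y + b.y + c.y) / 3.0f,
                                  (a.z + b.z + c.z) / 3.0f };
                float dx = centroid.x - cameraPos.x;
                float dy = centroid.y - cameraPos.y;
                float dz = centroid.z - cameraPos.z;
                tris.push_back({ indices[i], indices[i + 1], indices[i + 2],
                                 dx * dx + dy * dy + dz * dz });   // squared distance is enough
            }

            // Farthest triangles first.
            std::sort(tris.begin(), tris.end(),
                      [](const Tri& l, const Tri& r) { return l.depth > r.depth; });

            indices.clear();
            for (const Tri& t : tris) {
                indices.push_back(t.i0);
                indices.push_back(t.i1);
                indices.push_back(t.i2);
            }
        }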
  11. Think in terms of frustum planes, not view rays. You could try something like this, again using the right frustum plane as an example:
    1. Get the plane equation.
    2. Cast a ray R along the x-axis to find the object-plane distance.
    3. Move the camera along the x-axis so that this distance becomes 0.
    Depending on the object, or on which bounding geometry you use, you should sample the distance at every vertex and use the minimum, or something like that. I'm not really sure what you are trying to do. After the camera translation the object should still be outside the camera view, right?
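    A sketch of steps 1-3, taking the minimum distance over several bounding vertices (the plane and points are arbitrary example values, and the plane is assumed to move with the camera):

        #include <algorithm>
        #include <cstdio>
        #include <vector>

        struct Vec3 { float x, y, z; };

        // Plane in the form n.x*x + n.y*y + n.z*z + d = 0, with unit normal n.
        struct Plane { Vec3 n; float d; };

        static float SignedDistance(const Plane& p, const Vec3& v)
        {
            return p.n.x * v.x + p.n.y * v.y + p.n.z * v.z + p.d;
        }

        int main()
        {
            // Example right frustum plane and bounding-box corners of the object.
            Plane rightPlane = { { 0.8f, 0.0f, -0.6f }, 1.0f };
            std::vector<Vec3> corners = { { 3, 0, -5 }, { 4, 1, -6 }, { 3, -1, -7 } };

            // Minimum signed distance of the object to the plane.
            float minDist = SignedDistance(rightPlane, corners[0]);
            for (const Vec3& c : corners)
                minDist = std::min(minDist, SignedDistance(rightPlane, c));

            // Translating the camera (and with it the plane) by t along +x changes each
            // point's signed distance by -n.x * t, so t = minDist / n.x makes the
            // minimum distance zero, i.e. the plane just touches the object.
            float t = minDist / rightPlane.n.x;
            std::printf("move camera by %g along x\n", t);
        }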
  12. In the example above, the right plane of the viewing frustum should be a supporting plane of the object; that is, the object and the plane must intersect, and the object must lie on one side of the plane.
  13. Problems with glTranslated

    Does your problem look similar to this: https://dl.dropboxusercontent.com/u/56764397/screendump0.png ? When you talk about the camera losing pixels and objects moving into the screen, this could be a clipping problem.
  14. Orthographic projection causes artifacts

    Hmm. To my surprise, the problem disappears when I choose a sufficiently large negative value for the near clipping plane. Before, I set (znear, zfar) = (0.1, 100.0) for both the perspective and the orthographic projection matrix, and the error occurs. When I choose (znear, zfar) = (-100.0, 100.0) for the orthographic matrix, everything looks fine. I have no idea why. Since setting znear = -100.0 essentially doubles the viewing volume, I would have guessed the error would get even worse!
  15. Hey guys, I observe strange artifacts in my application when I switch from perspective projection to orthographic projection. Here's an image of the problem: https://dl.dropboxusercontent.com/u/56764397/projection_bug.png It looks to me like some sort of z-fighting.

    Now, I'm using depth peeling for transparency, and the artifacts disappear when rendering with another transparency technique, so maybe the issue is how I read and write the depth textures. It's pretty strange, though, because the projection matrix is not referenced in any fragment shaders.

    Any ideas what I can do to isolate the problem?