Best Way to Control Depth (Z-Order) (2D)

Started by GenuineXP
4 comments, last by Zahlman 17 years, 5 months ago
I'm not sure how to control depth in my little game engine. It's 2D, but I'm implementing it with SDL/OpenGL and DirectX. I'm working on the GL implementation first because I know nothing about DX yet. :-) Anyway, I need to control the order in which graphics are drawn, or more specifically the depth of drawing operations. For instance, I want to be able to draw graphics to the screen at any time but also set their depth. One method for this looks like this:
paint(Graphic* g, Vector2<float> Pos, int Depth)
What's the best way of doing this? Right now I have a class called Canvas that is used for such higher-level drawing operations (as well as drawing primitives and binding brushes). I also have a small class called PaintJob. PaintJob contains a Texture pointer, a pointer to a VertexGroup (a small vertex buffer), and a depth. Each drawing request is stuffed into a PaintJob and stored in a list. Calling Canvas::update() sorts this list by depth and then paints everything.

This seems a bit inefficient to me. Is there a better way? Can I somehow employ the depth buffer for this? This way seems to offer some flexibility and a lot of API independence, but I thought I'd ask if anyone has any better suggestions. :-) Thanks!

[Edited by - GenuineXP on November 6, 2006 9:59:53 PM]
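For reference, here is a minimal sketch of the PaintJob/Canvas approach described above, using std::sort; the exact type names and member signatures are made up for illustration and aren't the real engine code:

#include <algorithm>
#include <vector>

struct Texture;       // stand-ins for the engine's real types
struct VertexGroup;

struct PaintJob {
    Texture*     texture;
    VertexGroup* vertices;
    int          depth;
};

// Sort back-to-front: lower depth values are drawn first.
bool byDepth(const PaintJob& a, const PaintJob& b) {
    return a.depth < b.depth;
}

class Canvas {
    std::vector<PaintJob> jobs;
public:
    void paint(Texture* t, VertexGroup* v, int depth) {
        PaintJob job = { t, v, depth };
        jobs.push_back(job);          // just record the request
    }
    void update() {
        std::sort(jobs.begin(), jobs.end(), byDepth);
        for (std::size_t i = 0; i < jobs.size(); ++i) {
            // issue the actual GL/DX draw calls for jobs[i] here
        }
        jobs.clear();                 // start the next frame empty
    }
};

Every call to paint() only records a job; update() sorts the whole batch by depth once per frame and then draws it in order.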
There are several options:

1) You can use OpenGL/DirectX and just blit polygons with your sprites as textures. This lets you specify a Z value for each vertex and let the GPU's depth buffer do the sorting (a fun method, but it requires "newer" 3D hardware).

2) You can bubble/quick/whatever-sort every frame (or every time a z value changes) and render in that order. The C++ standard library provides std::sort() for this (qsort() is the C equivalent). Probably the simplest and least inefficient way to do what you want.

3) My personal favorite for 2D: implement a scene graph, preferably backed by a balanced binary search tree. This way you can iterate over the nodes and draw them without any comparisons per frame. The tree stays balanced every time you insert or remove a node, and it rebalances relatively quickly. Check out http://en.wikipedia.org/wiki/Self-balancing_binary_search_tree for more information. These can be relatively difficult to implement, but this is probably the most efficient method of the three (see the sketch at the end of this post).

Good luck!
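Not a full scene graph, but here's a minimal sketch of the ordered-container idea behind option 3: std::multimap is typically implemented as a self-balancing red-black tree, so you get the balancing and the in-depth-order iteration for free. The names here (DepthMap, renderAll, setDepth) are made up for illustration:

#include <map>
#include <utility>

// Stand-in for whatever your engine's drawable type is.
struct Sprite {
    void render() { /* draw the sprite (omitted) */ }
};

// Keyed by depth: insertion/removal are O(log n) and iteration already
// visits entries in key order, so no per-frame sort is needed.
typedef std::multimap<int, Sprite*> DepthMap;

void renderAll(DepthMap& sprites)
{
    for (DepthMap::iterator it = sprites.begin(); it != sprites.end(); ++it)
        it->second->render();         // back to front, lowest depth first
}

// Changing a sprite's depth is just erase + re-insert.
DepthMap::iterator setDepth(DepthMap& m, DepthMap::iterator it, int newDepth)
{
    Sprite* s = it->second;
    m.erase(it);
    return m.insert(std::make_pair(newDepth, s));
}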


correction: "least inefficient" -> "least efficient"
When you render sprites as textured quads, the third vertex coordinate is usable as the depth. The GPU provides a depth buffer and a per-pixel depth test. A caveat with this method is that the quad is always rasterized as a full quad: transparent pixels don't alter the color buffer, but they still alter the depth buffer when rendered. So you may notice that parts of a sprite become invisible when drawn behind the transparent pixels of a sprite that happened to be rendered earlier. To avoid this problem you also have to enable an alpha test, so that rendering (including the depth-buffer write) is suppressed entirely for transparent texels.

Another thing to consider when using the depth buffer is that two overlapping sprites with the same depth value will introduce z-fighting artifacts. So you have to avoid this situation, e.g. by guaranteeing unique depth values for your sprites.
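Putting both points together, here's a minimal sketch of that state setup, assuming fixed-function OpenGL (the helper function names are made up for illustration):

#include <GL/gl.h>

// Enable depth testing plus an alpha test so that transparent texels neither
// color the framebuffer nor write to the depth buffer.
void setupSpriteState()
{
    glEnable(GL_TEXTURE_2D);
    glEnable(GL_DEPTH_TEST);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glEnable(GL_ALPHA_TEST);
    glAlphaFunc(GL_GREATER, 0.0f);    // reject fully transparent texels outright
}

// Draw one sprite quad; the Z coordinate carries the sprite's depth.
void drawSprite(GLuint texture, float x, float y, float w, float h, float depth)
{
    glBindTexture(GL_TEXTURE_2D, texture);
    glBegin(GL_QUADS);
        glTexCoord2f(0.0f, 0.0f); glVertex3f(x,     y,     depth);
        glTexCoord2f(1.0f, 0.0f); glVertex3f(x + w, y,     depth);
        glTexCoord2f(1.0f, 1.0f); glVertex3f(x + w, y + h, depth);
        glTexCoord2f(0.0f, 1.0f); glVertex3f(x,     y + h, depth);
    glEnd();
}

With the alpha test enabled, fully transparent texels are discarded before they can touch the depth buffer, and giving each sprite a unique depth value avoids the z-fighting case.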
What advantages does a scene graph provide? It sounds interesting, and if it could offer some other useful features I may opt for that.

Using the 3rd vertex coordinate seems easiest, but will I lose support on older hardware? I have alpha blending set up, so I'd have to try it to test the results with overlaid transparent images.

I'll look into balanced binary trees for a scene graph. I'm not sure how to keep a binary tree balanced or how to iterate over its nodes.

Thanks for the replies.
You shouldn't have to be manually re-calculating Z positions all the time. You shouldn't *want* to manually set "depth" values, because you only open yourself to the possibility of mistakes, really.

One simple solution is to have your sprites in groups according to the 'layer' (e.g. background/foreground/HUD), and sort each layer according to an appropriate heuristic (e.g. for background and HUD you might not have any overlap and thus not need any sorting; for foreground, you typically sort by the y-coordinate of the *bottom* of the sprite). The sorting can easily be done with std::sort, as long as you can implement operator< for your sprite class, and/or a comparator object.

You might not need to sort everything every frame, of course. If objects do not change z-order very often, you might want to make it so that objects logically belong to a layer, and 'inform' the layer whenever they move. The layer can then remove the object from its internal container, and re-insert it in the appropriate Z-position. The net result is an insertion sort, but an amortized one.

// Lots of details omitted, simplifying assumptions made etc.
#include <algorithm>
#include <functional>
#include <vector>

class Layer;

class Sprite {
  int x, y, w, h;
  Layer* l;
public:
  void move(int dx, int dy);          // defined below, once Layer is complete
  void render() { /* draw the sprite (omitted) */ }
  bool operator<(const Sprite& rhs) const {
    // Order by the bottom edge of the sprite.
    return (y + h) < (rhs.y + rhs.h);
  }
};

class Layer {
  std::vector<Sprite> sprites;
public:
  void changeZOrder(const Sprite& s) {
    // 's' is assumed to be an element of 'sprites'; take a copy, remove it,
    // and re-insert it at its sorted position (the amortized insertion sort).
    Sprite copy = s;
    sprites.erase(sprites.begin() + (&s - &sprites[0]));
    sprites.insert(std::lower_bound(sprites.begin(), sprites.end(), copy), copy);
  }
  void render() {
    std::for_each(sprites.begin(), sprites.end(),
                  std::mem_fun_ref(&Sprite::render));
  }
};

void Sprite::move(int dx, int dy) {
  x += dx; y += dy;
  l->changeZOrder(*this);             // tell the layer our z-order may have changed
}

