Efficient GUI clipping on the GPU

Started by
6 comments, last by Hodgman 11 years, 6 months ago
While my question is quite simple, the answer is not turning out to be so straightforward: in OpenGL, given an arbitrary number of coplanar rectangular surfaces that overlap (e.g. a GUI of arbitrary dimensions, from something like 150x300px up to HD resolutions, potentially including multi-monitor setups), which of these would make the most sense:

1) do point-in-rect style tests per fragment in the shader
2) order elements on the CPU, keep depth testing off, ignore overdraw, and just brute-force everything
3) implement a minuscule z offset and let depth testing do the clipping while ignoring overdraw
4) there's no one-glove-fits-all solution: different platforms require different solutions

I currently have depth testing and masking disabled and I'm brute-forcing everything. It's simple, but not necessarily elegant.

Regarding option 3: I do want to implement some form of clipping and Z-ordering anyway, but I'm not sure if Z-offsetting controls is the solution here. I'm mildly curious whether point-in-rect testing (essentially a four-part if statement per fragment per intersecting element) might be the most viable, especially since I want to prepare for a potential software fallback in case of a non-accelerated GL ES implementation or similar.
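
To make option 1 concrete, here's a rough sketch of the kind of fragment shader I have in mind (the uMaskRects/uMaskCount uniforms and the cap of eight rects are placeholders for illustration, not a finished design):

```cpp
// Sketch of option 1: a GLSL fragment shader, embedded as a C++ string,
// that discards any fragment covered by an overlapping child rectangle.
const char* kClipFragmentShader = R"(
#version 330 core
uniform vec4 uMaskRects[8];   // xy = min corner, zw = max corner, in pixels
uniform int  uMaskCount;      // how many of the 8 slots are in use
out vec4 fragColor;

void main() {
    for (int i = 0; i < uMaskCount; ++i) {
        vec4 r = uMaskRects[i];
        if (gl_FragCoord.x >= r.x && gl_FragCoord.x <= r.z &&
            gl_FragCoord.y >= r.y && gl_FragCoord.y <= r.w)
            discard;          // this pixel lies under a child control
    }
    fragColor = vec4(1.0);    // the widget's normal shading goes here
}
)";
```

The cost of this scales with the number of intersecting rects per fragment, which is exactly the four-part if per element mentioned above.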

Since the question is somewhat abstract because of the scalability involved, I'd appreciate some thoughts! :)
For rectangle clipping, take a look at the scissor test.
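
Something along these lines, say (drawWidgetContents and the widget fields are placeholders, not a real API):

```cpp
// Minimal sketch: restrict a widget's draw calls to its own rectangle.
// Coordinates are window-space with a bottom-left origin, as glScissor expects.
glEnable(GL_SCISSOR_TEST);
glScissor(widget.x, widget.y, widget.width, widget.height);
drawWidgetContents(widget);   // fragments outside the rect are discarded
glDisable(GL_SCISSOR_TEST);
```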
A GUI implementation generally generates anywhere from several to tens or even hundreds of clip rectangles.

The scissor test is useful for clipping pixels that fall outside the draw rectangle; I need to clip regions that are inside it (e.g. not drawing under a button while drawing a toolbar).
You may be solving the wrong problem.

Generally, UI code solves this problem by just drawing the windows in Z order, back to front (omitting only completely obscured windows). Why? Because that's actually faster than faffing about...

Yes, it generates some overdraw, but that's still usually easier than trying to clip -- UI pixels are often fairly cheap (being things like flat or shaded fills of rectangles).

If you're actually running this on a 3D card, then the obvious answer is just to use the Z buffer to sort this problem out for you. That would be the second approach I'd try, but only if the overdrawing method proved slow.

Go for the simplest approach until you really find it won't work.

3) implement a minuscule z offset and let depth testing do the clipping while ignoring overdraw
Modern GPUs have "Hi-Z" capabilities: as well as performing the standard z-test for each pixel, they first perform a lower-resolution, less precise z-test that can discard whole blocks of pixels before they're shaded. As long as you render front-to-back, with decent z separation per layer, this hardware will help reduce your overdraw costs quite a bit.
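
A rough sketch of that setup, assuming a hypothetical Widget type, a frontToBack list and a drawWidgetAtDepth helper:

```cpp
// Draw the UI front-to-back with a per-layer z offset; the depth test
// (and the GPU's coarse Hi-Z rejection) then culls occluded fragments
// before they are ever shaded.
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);
glDepthMask(GL_TRUE);
glClear(GL_DEPTH_BUFFER_BIT);

float z = 0.0f;                      // nearest layer first
for (const Widget& w : frontToBack) {
    drawWidgetAtDepth(w, z);         // writes z into the depth buffer
    z += 0.01f;                      // decent separation between layers
}
```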

A GUI implementation generally generates anywhere from several to tens or even hundreds of clip rectangles.

The scissor test is useful for clipping pixels that fall outside the draw rectangle; I need to clip regions that are inside it (e.g. not drawing under a button while drawing a toolbar).

To be honest, I don't understand the issue you're having yet, so there's a chance that I'm heading in the wrong direction :) but...

A GUI is often organized as a tree, which gives you a natural drawing order. If you want to change this drawing order for elements on the same level (e.g. windows), you just change the node order.

Furthermore, once you need alpha blending, you have to render in order anyway, or you will get artifacts when relying only on the z-buffer.

To clip them on the GPU (i.e. to keep text within a given rect), use the scissor test.
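
For nested controls a common pattern is a scissor stack: each child is clipped against the intersection of its own rect with its parent's current scissor. A rough sketch, with a made-up Rect type (and assuming an OpenGL header is already included):

```cpp
#include <algorithm>
#include <vector>

struct Rect { int x, y, w, h; };     // hypothetical, window-space pixels

// Intersection of two rects (a degenerate result has w or h <= 0).
Rect intersect(const Rect& a, const Rect& b) {
    int x0 = std::max(a.x, b.x), y0 = std::max(a.y, b.y);
    int x1 = std::min(a.x + a.w, b.x + b.w);
    int y1 = std::min(a.y + a.h, b.y + b.h);
    return { x0, y0, x1 - x0, y1 - y0 };
}

std::vector<Rect> scissorStack;

void pushScissor(const Rect& r) {    // call when entering a child control
    Rect c = scissorStack.empty() ? r : intersect(scissorStack.back(), r);
    scissorStack.push_back(c);
    glScissor(c.x, c.y, std::max(c.w, 0), std::max(c.h, 0));
}

void popScissor() {                  // call when leaving the control
    scissorStack.pop_back();
    if (!scissorStack.empty()) {
        const Rect& c = scissorStack.back();
        glScissor(c.x, c.y, c.w, c.h);
    }
}
```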

For example, my in-game GUI is drawn in immediate mode (!) with several hundred, even thousands, of rects per frame (ranging from 1024x600 down to single 8x8 characters), using shaders, alpha blending, scissor tests, etc., with an acceptable impact on performance. And there's still quite some room left for optimizations (texture atlas/texture arrays, VBOs, etc.).
My concerns are mainly about unaccelerated/emulated systems and have to do with overdrawing controls that contain a lot of small text or antialiased splines/curves, which could benefit from early clipping. Katie's remark that the simplest solution is usually the best makes the most sense, though. I'll just go with proper Z-ordering and minute offsets for now.

Ashaman - I'm afraid I mean clipping in the reverse sense to what the scissor rect does (as in not drawing what lies behind child controls). I'm talking about low-performance systems (GL ES) that may or may not be GPU-accelerated.
I know of at least one AAA third-person adventure game whose occlusion culling system simply selected the five largest rectangular walls on screen and tested every object (that passed the frustum cull) to see if it was fully behind one of those walls. This seems ridiculously simple compared to PVS/octree/BVH/etc. systems, but it worked anyway.

Katie's comment about "omitting only completely obscured windows" makes sense in the same way -- instead of performing perfect clipping, you could just test small items for full occlusion by large occluders. E.g. before submitting a control with an expensive spline, check whether it is fully occluded by another window.
If you do all of this in one pass (e.g. for each object, for each occluder, test bounds), it should be pretty fast.
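
A rough sketch of that loop (Rect, the bounds member and the draw callback are made up for illustration):

```cpp
#include <vector>

struct Rect { int x, y, w, h; };

// True if 'inner' lies entirely within 'outer'.
bool contains(const Rect& outer, const Rect& inner) {
    return inner.x >= outer.x && inner.y >= outer.y &&
           inner.x + inner.w <= outer.x + outer.w &&
           inner.y + inner.h <= outer.y + outer.h;
}

// Draw only the widgets that are not fully hidden behind one of the
// few large occluders (e.g. the biggest opaque windows on screen).
template <typename Widget, typename DrawFn>
void drawVisibleWidgets(const std::vector<Widget>& widgets,
                        const std::vector<Rect>& occluders, DrawFn draw) {
    for (const Widget& w : widgets) {
        bool hidden = false;
        for (const Rect& occ : occluders)
            if (contains(occ, w.bounds)) { hidden = true; break; }
        if (!hidden)
            draw(w);
    }
}
```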

