Recommended Method for Rendering a GUI with VBOs

Started by
6 comments, last by Labrasones 12 years, 4 months ago
I'm looking for advice on implementing the [rendering portion] of my GUI using VBOs. I realize this is probably quite easy, technically speaking, but I've never used VBOs for obscenely trivial geometric tasks and I'm wary of doing so incorrectly. I'm aware of most of the simpler "tricks" for GUI rendering (e.g. restrained updating, render to texture, render only the updated widgets, etc.); it's how to use the VBO(s) as a replacement for immediate-mode rendering that is throwing me. I can't shake the idea that it is complete overkill.

Google searching turned up several options/solutions:
  • Use a monolithic VBO for the entire GUI, based on the reasoning that multitudes of ant-sized VBOs are not performant (followed by a large flamefest over the validity of that statement).
  • Use an army of small VBOs, where each VBO corresponds to a widget, based on the reasoning that dealing with a giant VBO isn't worth it.
  • Use a single VBO per geometry type (e.g. a quad) and transform it via (presumably) a shader.
  • Use QT (not an option, but it was suggested quite frequently).
  • Ignore the problem altogether by relying on OpenGL's compat-mode until absolutely forced to do otherwise. After plowing through a number of straight-OpenGL GUI libraries this appears to be the most popular, albeit unspoken, solution. I'd really prefer to forgo this as I'd like to avoid as much deprecated functionality as possible, but I'm open to an argument to the contrary.
Is there a general guideline for which is preferred or, in lieu of that, is there an alternate preferred method for using VBOs in GUIs?

I'm not using C/C++ currently but advice, examples, or "look at" libraries in any language are fine. Any help is greatly appreciated.
You can tackle this task in several steps, using a profiler to see what step you should take next, if any. You can start with one VBO per widget or geometry type; I am using one quad with dimensions 1 by 1. I then set a size uniform on a shader to scale this quad appropriately to the size of the widget, and also provide a translation uniform for position, of course ;).
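A minimal sketch of that unit-quad approach, assuming the poster's setup: the uniform names (`u_size`, `u_translate`) and the CPU helper are my own illustration, not from the post. The CPU function mirrors the shader math, which is also handy for hit-testing widgets.

```cpp
#include <array>

// Vertex shader in the spirit of the post: one shared 1x1 quad,
// scaled and translated per widget via uniforms. Uniform and
// attribute names are hypothetical.
static const char* kWidgetVS = R"(
#version 130
in vec2 a_pos;             // unit-quad corners: (0,0)..(1,1)
uniform vec2 u_size;       // widget size in NDC units
uniform vec2 u_translate;  // widget origin in NDC
void main() {
    gl_Position = vec4(a_pos * u_size + u_translate, 0.0, 1.0);
}
)";

// CPU mirror of the shader transform: maps a unit-quad corner to
// its final position for a widget of size (sx, sy) at (tx, ty).
std::array<float, 2> widget_vertex(float px, float py,
                                   float sx, float sy,
                                   float tx, float ty) {
    return { px * sx + tx, py * sy + ty };
}
```

With this scheme the per-widget state is just two `glUniform2f` calls and one shared quad VBO, at the cost of one draw call per widget.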


When you have got this working you will see a huge number of draw calls. What you can do then is either implement render-to-texture together with lazy updating, or switch to one big dynamic vertex buffer, which shouldn't be too bad either: whole characters get skinned through dynamic vertex buffers, so you have some bandwidth available.

For the first option you might want to make sure to share a depth buffer with your scene's rendering somehow, so you can chip off a few of those fragment shader executions; every fragment behind the HUD/GUI doesn't need to be lit or run other expensive calculations.

For the second option you probably first need a texture atlas, because it won't make much sense to use one big vertex buffer if you still have to switch textures and render only portions. Also, if you know your widgets won't be moving much, you might want to make an exception here and drop interleaved vertex formats in favor of separate position and UV streams. You are probably updating textures/UVs more often than positions, so this could save you some bandwidth.
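The "one big dynamic vertex buffer" option boils down to rebuilding a CPU-side vertex array each dirty frame and uploading it in a single `glBufferData(GL_ARRAY_BUFFER, ..., GL_DYNAMIC_DRAW)` call, then drawing everything at once. A sketch of the CPU-side batching step; the struct and field names are illustrative, not from the thread:

```cpp
#include <vector>

// One interleaved vertex: screen-space position plus atlas UV.
struct Vertex { float x, y, u, v; };

// Append one widget quad (two triangles, six vertices) to the batch.
// (x0,y0)-(x1,y1) is the widget rectangle; (u0,v0)-(u1,v1) is the
// widget's region in the texture atlas.
void push_quad(std::vector<Vertex>& batch,
               float x0, float y0, float x1, float y1,
               float u0, float v0, float u1, float v1) {
    batch.push_back({x0, y0, u0, v0});
    batch.push_back({x1, y0, u1, v0});
    batch.push_back({x1, y1, u1, v1});
    batch.push_back({x0, y0, u0, v0});
    batch.push_back({x1, y1, u1, v1});
    batch.push_back({x0, y1, u0, v1});
}
```

One `push_quad` per widget per frame, then one upload and one `glDrawArrays` for the entire GUI, which is exactly why the atlas is a prerequisite: without it you still have to break the batch at every texture switch.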
Excellent, thank you sir.

[quote]You can tackle this task in several steps using a profiler to see what step you should take next, if any. You can start with one vbo per widget or geometry type, i am using one quad with dimensions 1 by 1. Then i'm setting a size uniform to a shader to scale this quad appropiately to the size of the widget. Also providing a translation uniform for position ofcourse ;).[/quote]
Aye. My only defense (against the subtle accusation of pre-optimization) is I haven't written anything yet and was just looking to avoid doing anything egregiously stupid.

[quote]For the first option you might want to make sure to share a depth buffer with your scene's rendering somehow so you can chip off a few of those fragment shader executions. Every fragment behind hud/gui doesnt need to be lit or use other expensive calculations. For the second option you probably first need atlas textures because it wont make much sense to use one big vertex buffer if you still have to switch textures and only render portions. Also if you know your widgets wont be moving very much you might want to make an exception here and drop interleaved vertex formats for a separation between position and uv. You are probably switching textures/uvs more often than positions so this could save you some bandwidth.[/quote]
Will do, 'preciate the help.

[quote]I'm looking for advice on implementing the [rendering portion] of my GUI using VBOs. I realize this is probably quite easy, technically speaking, but I've never used VBOs for obscenely trivial geometric tasks and I'm wary of doing so incorrectly. I'm aware of most of the simpler "tricks" for GUI rendering (e.g. restrained updating, render to texture, render only the updated widgets, etc.) it's how to use the VBO(s) as replacements for immediate mode rendering that is throwing me. I can't shake the idea that it is complete overkill.[/quote]

While I can't answer your direct question (in fact, I would like to know the answer myself), I can certainly assert it's not overkill when you consider that OpenGL ES does not support immediate mode, so you are effectively forced to use VBOs. That covers the majority of devices running OpenGL out there today.

Stephen M. Webb
Professional Free Software Developer

There's another option that's often overlooked: don't use OpenGL to render the damn GUI. Or more precisely, composite your GUI into a texture CPU-side, and just use OpenGL to blit it to the screen.

Why would you even consider this? Because CPUs are really, really fast these days, 2D GUI compositing is a braindead-simple operation to perform on the CPU, and there are lots and lots of lovely libraries to help you (for example: cairo/skia). Not to mention that GUIs (and fonts in particular) are a real pain to efficiently render on the GPU.
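The CPU-compositing route reduces to ordinary source-over blending into an RGBA8 buffer (what cairo/skia do under the hood), which you then upload once per dirty frame with something like `glTexSubImage2D` and draw as a single fullscreen quad. A minimal sketch of the per-channel source-over blend, using non-premultiplied integer math; this is my own illustration of the idea, not code from any of those libraries:

```cpp
#include <cstdint>

// Source-over blend of one 8-bit channel: src drawn over dst with
// the given source alpha (0 = fully transparent, 255 = opaque).
// result = (src * a + dst * (255 - a)) / 255
uint8_t blend_channel(uint8_t src, uint8_t dst, uint8_t alpha) {
    return static_cast<uint8_t>(
        (src * alpha + dst * (255 - alpha)) / 255);
}
```

Run that over each R, G, B channel of every dirty pixel and the GPU's only job is one texture upload and one blit, regardless of how complicated the widgets are.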

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]

[quote]While I can't answer your direct question (in fact, I would like to know the answer myself) I can certainly assert it's not overkill if you consider OpenGL ES does not support immediate mode and you are forced to effectively use VBOs. That means the majority of computers running OpenGL out there today.[/quote]
Certainly.

That said, I was referring more to the idea than the physical item. VBOs were designed to pump out impressive amounts of geometry at breakneck speed. This makes their design (certainly their OpenGL syntax) somewhat antithetical to easy GUI rendering, especially since most GUIs aren't exactly speed-demon material. The slowdown from using immediate mode was balanced by the small amount of geometry (it's just a bunch of quads) and the need to switch textures frequently. It's not that I have a problem with VBOs; it's just that I've always been left with the impression that "GUI rendering" wasn't on the form that was filled out when they were put into the specification. If you're not doing anything exceptional, then switching is a reasonable amount of work for no really discernible gain.

On the subject of OpenGL ES - that is true for OpenGL ES 2.0. But on an actual computer (OpenGL ES is for embedded devices, although that is just semantics), where OpenGL is the means of transit for cross-platform purposes, immediate mode is still supported via the backwards-compatible profile (up through the current spec, which is 4.2, I believe). The only reason I mention that is because I'm not targeting embedded devices. Most of the GUI libraries I looked at (as in, source) seemed content to just let the issue lie for now. Not that I blame them; it was just mildly frustrating trying to find some examples where the GUI in question had been designed to use VBOs instead of having them back-ported in by someone named Batman with a single commit message that read "IF THIS BREAKS USE THE SIGNAL - BATMAN".

[quote]There's another option that's often overlooked: don't use OpenGL to render the damn GUI. Or more precisely, composite your GUI into a texture CPU-side, and just use OpenGL to blit it to the screen.

Why would you even consider this? Because CPUs are really, really fast these days, 2D GUI compositing is a braindead simple operation to perform on the CPU, and there are lots and lots of lovely libraries to help you (for example: cairo/skia). Not to mention that GUIs (and fonts in particular) are a really pain to efficiently render on the GPU. [/quote]
That is an excellent suggestion, actually. I don't have any particular interest in the benefits of the programmable pipeline, GUI-wise, which is a major source of my frustration with this.
You could also have a look at http://awesomium.com/. You can then make your UIs in HTML+friends while using JS to handle their logic.
@swiftcoder I'm interested in learning more about the method you suggested. However, I'm having trouble finding resources that teach the subject. Do you know where or what I should be looking for?

This topic is closed to new replies.
