# OpenGL Efficient GUI rendering

## Recommended Posts

This has probably come up before here, so feel free to link me straight to a good source of info .. but ..

Anyway, for various reasons I've written a little GUI system for a game I'm working on, and decided to have a go at fleshing it out to make it useful for writing tools or little apps, possibly cross-platform. Oh gawd, yet another reinvention of the wheel, I hear you cringe. Yes, well, can't say much on that...

Here's a screeny showing I have it working OK; it didn't take too long:

[img]http://i49.tinypic.com/2mxgbnq.jpg[/img]

For the game I was using DirectX to do the actual GUI rendering, but I've swapped over to OpenGL for general use. It's been a long time since I did much OpenGL, so I'm very out of date with it.

So my question is probably for people who have done this before: what did you find the most efficient way of rendering the GUI objects? I can see there are a few trade-offs involved. Until recently I was just doing everything in software, writing to one big software surface, then locking a big viewport-sized texture and copying the RGB data across. Not particularly elegant or fast, but simple, and it works.

That is, until I realised I wanted to have some OpenGL 3D viewports displaying 3D models inside the app, with GUI elements possibly overlaid (such as menus or dialog boxes). And to future-proof things it would be nice not to lock the entire viewport every time there is a little change, so it works at reasonable speed.

So my guesses for some alternative methods are:

1) As each 'widget' is changed, I render it to the big software texture, then lock and upload just that part of the texture (using glTexSubImage2D?). This isn't as simple as it could be, though, because it appears the source data can't have a 'pitch' (a row stride wider than the dirty rectangle, which it will be if the viewport is much wider), so I'd first have to copy the dirty region of the big software texture into a smaller temporary buffer before uploading to OpenGL.
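To illustrate the temporary-copy workaround, here's a minimal sketch (the function name and the 4-byte RGBA pixel format are my own assumptions, not from any particular library): each row of the dirty rectangle is copied out of the pitched surface into a tightly packed buffer that glTexSubImage2D can accept.

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Copy a dirty rectangle out of a large pitched software surface into a
// tightly packed temporary buffer, row by row, so it can be passed to
// glTexSubImage2D (which by default expects packed rows).
// Assumes 4 bytes per pixel (RGBA); all names here are illustrative.
std::vector<uint8_t> ExtractDirtyRect(const uint8_t* surface,
                                      int surfaceWidth,
                                      int rectX, int rectY,
                                      int rectW, int rectH)
{
    const int bpp = 4;
    std::vector<uint8_t> packed(static_cast<size_t>(rectW) * rectH * bpp);
    for (int row = 0; row < rectH; ++row)
    {
        // Source row starts rectX pixels in, on line (rectY + row) of the
        // full-width surface; destination rows are packed back to back.
        const uint8_t* src =
            surface + (static_cast<size_t>(rectY + row) * surfaceWidth + rectX) * bpp;
        std::memcpy(&packed[static_cast<size_t>(row) * rectW * bpp], src,
                    static_cast<size_t>(rectW) * bpp);
    }
    return packed;
}
```

The packed buffer would then be handed to glTexSubImage2D along with the rectangle's position and size.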

I could also keep a list of dirty rectangles that need uploading to opengl to avoid uploading the same area more than once in a frame.
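A dirty-rectangle list like that could be sketched as follows (purely illustrative, not from any existing GUI library). Merging overlapping rectangles by unioning their bounds keeps the list short, at the cost of occasionally uploading a few extra pixels:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Minimal dirty-rectangle accumulator. Overlapping rectangles are merged
// so the same area is never uploaded twice in a frame; the caller clears
// the list after the uploads are done.
struct Rect { int x0, y0, x1, y1; }; // half-open: [x0, x1) x [y0, y1)

static bool Overlaps(const Rect& a, const Rect& b)
{
    return a.x0 < b.x1 && b.x0 < a.x1 && a.y0 < b.y1 && b.y0 < a.y1;
}

void AddDirtyRect(std::vector<Rect>& list, Rect r)
{
    for (std::size_t i = 0; i < list.size(); )
    {
        if (Overlaps(list[i], r))
        {
            // Union the two rectangles, drop the old one, and restart,
            // since the grown rectangle may now overlap earlier entries.
            r.x0 = std::min(r.x0, list[i].x0);
            r.y0 = std::min(r.y0, list[i].y0);
            r.x1 = std::max(r.x1, list[i].x1);
            r.y1 = std::max(r.y1, list[i].y1);
            list.erase(list.begin() + i);
            i = 0;
        }
        else
        {
            ++i;
        }
    }
    list.push_back(r);
}
```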

2) Same as above, but keep a separate software surface for each 'widget'. That way it can be uploaded without fiddling. However, it makes resizing widgets potentially more problematic (as the software surface size will change), and it would be nicer to avoid all those unnecessary memory allocations (although I could use memory pools, I suppose).

3) Have a separate opengl texture for each widget. Probably faster for rendering but pretty ugly in terms of memory allocation / deletion.

4) Try and render everything directly on the 3d card, without using software textures.

So, does anyone know a standard good way of doing this? Anyone know what CEGUI, MyGUI, etc. do?
Cheers :)

##### Share on other sites
[quote name='lawnjelly' timestamp='1341420449' post='4955665']
what did you find the most efficient way of rendering the GUI objects?[/quote]The one that takes the least work to implement. I've had quite a bit of experience with GUI systems, and I can tell you for sure that performance has never been a problem for me. Never. They just are not performance paths.
But as a design rule, I'd try to minimize memory consumption rather than optimize runtime performance.

##### Share on other sites
[quote name='Krohm' timestamp='1341424113' post='4955675']
[quote name='lawnjelly' timestamp='1341420449' post='4955665']
what did you find the most efficient way of rendering the GUI objects?[/quote]The one that takes the least work to implement. I've had quite a bit of experience with GUI systems, and I can tell you for sure that performance has never been a problem for me. Never. They just are not performance paths.
But as a design rule, I'd try to minimize memory consumption rather than optimize runtime performance.
[/quote]
Well that sounds encouraging.

I'm beginning to think the idea of trying to render GUI items on top of the OpenGL 3D windows is overkill, and opens a can of worms. Instead I could just render the GUI in the background, then update the 3D window on top and assume nothing is overlaying it. And in the case where the user opens a menu or something, I could pause the 3D window, which kind of avoids the problem.

I kind of agree on memory consumption, and on simplicity versus performance; after all, I don't want to spend that much time on it and create the greatest kickass GUI known to man if I'm only going to use it for a couple of apps. I just didn't want to paint myself into a corner later on with early design decisions if it's not necessary.

At the moment I'm thinking of keeping the current approach with one big software surface, and just minimizing the amount of locking / the number of dirty rectangles uploaded to the OpenGL texture. It may well stall the 3D pipeline, but that shouldn't matter much if I'm not doing it every frame, and it's just for tools.

##### Share on other sites
GUI rendering can be slow because text rendering can be slow if there's a lot of text. So you could look into text rendering optimization.
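For what it's worth, one common text-rendering optimisation is a glyph atlas: rasterise each glyph once into a shared texture, then draw strings as runs of textured quads instead of re-rasterising every frame. A minimal sketch of the bookkeeping side (all names and the per-glyph data are my own illustration, not from any specific library):

```cpp
#include <string>
#include <unordered_map>

// Each glyph stores its UV rectangle in the atlas texture plus its
// horizontal advance in pixels; drawing a string is then just emitting
// one textured quad per glyph.
struct GlyphUV { float u0, v0, u1, v1; int advance; };

class GlyphAtlas
{
public:
    void Add(char c, GlyphUV uv) { m_glyphs[c] = uv; }

    // Sum of advances = pixel width of the string when drawn as quads.
    // Unknown glyphs are simply skipped here.
    int MeasureString(const std::string& s) const
    {
        int w = 0;
        for (char c : s)
        {
            auto it = m_glyphs.find(c);
            if (it != m_glyphs.end())
                w += it->second.advance;
        }
        return w;
    }

private:
    std::unordered_map<char, GlyphUV> m_glyphs;
};
```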

I know you didn't ask for comments on the reinventing thing, but it's pretty strange that you are reinventing the look of the standard Win32 GUI.
If you want to release stuff with this GUI, it will kick your ass. And it will kick the users' asses. A GUI that looks like the well-known Windows GUI but doesn't work like it is a major anti-user-experience.

##### Share on other sites
[quote name='Krohm' timestamp='1341424113' post='4955675']
The one that takes the least work to implement. I've had quite a bit of experience with GUI systems, and I can tell you for sure that performance has never been a problem for me. Never. They just are not performance paths.
But as a design rule, I'd try to minimize memory consumption rather than optimize runtime performance.
[/quote]

I'm guessing that you've never worked with Scaleform. :P

##### Share on other sites
[quote name='szecs' timestamp='1341427663' post='4955689']
GUI rendering can be slow because text rendering can be slow if there's a lot of text. So you could look into text rendering optimization.

I know you didn't ask for comments on the reinventing thing, but it's pretty strange that you are reinventing the look of the standard Win32 GUI.
If you want to release stuff with this GUI, it will kick your ass. And it will kick the users' asses. A GUI that looks like the well-known Windows GUI but doesn't work like it is a major anti-user-experience.
[/quote]

I agree about the user experience; there are some well-executed and user-friendly third-party GUIs, and some that just leave you scratching your head.

In this case I'd already written the basics of the GUI for a lightweight in-game menu system, so I thought, why not add regular application menus and try running with it for a simple app? It's all good experience. I'm just planning on using it for a simple internal 3D model editor for now, nothing fancy.

And of course not being tied to Win32 leaves you more options; a couple of friends and I have just released an app on iOS that used a simplified version of this.

##### Share on other sites
[quote name='MJP' timestamp='1341427897' post='4955691'] that you've never worked with Scaleform. :P[/quote]Of course not. I'm referring to the shitty systems I use here.

##### Share on other sites
Have you come across GWEN? It's a GUI system aimed at games.

It comes with renderers for GDI, Allegro, OpenGL, Direct2D, DirectX and SFML.

I've used it with SFML. I had to modify the SFML renderer slightly to suit my needs and because it's open source it was easy to do.

From the website:

[quote]
[b]Facts[/b]
[list]
[*]Coded in C++
[*]Fully Namespaced
[*]All standard window controls
[*]Behaves like you'd expect
[*]Lightweight
[*]Easy to integrate (comes with renderers for Windows GDI, Allegro, OpenGL, Direct2D, DirectX and SFML)
[*]Totally portable & cross platform
[*]Doesn't need RTTI
[/list]
Released under a "do whatever you want" MIT license.
[/quote]

##### Share on other sites
I've written this, though I'm not really sure it answers any questions; perhaps you will find something useful here ;)

I would say that what you do would largely depend on your end goals (and those may change as time progresses). By this I mean if it's only ever going to be for use in your own projects, or if you intend a more general purpose library for use by the masses.

I guess some of what I put here is more aimed at a public lib and so it may not apply if what you're doing is only ever going to be for use in your own stuff.

It has to be said that being completely general purpose and suitable for all possible scenarios is exceptionally difficult, if not impossible, to pull off. GUIs used for games can usually make assumptions and impose limitations that perhaps you cannot if your GUI is to be used for non-game things. Editors and such can fall into either category, and which category any given tool falls into depends on many things, IMO.

I suggest creating a brief “mission statement” that describes specific roles for your GUI, and sticking to it at all costs. Decide early on some limitations and stick to them rigidly. While “rigidity” sounds like bad advice, in the long run it will save you time and effort and keep your project lean, mean and on target at all times. Trying to be all things to all people is a slippery slope, and ultimately will split users into two camps: those that love the fact that you're willing to implement their feature request or modification, and those that think your lib sucks because it is complex or bloated. You cannot please all of the people all of the time ;)

Current CEGUI performs rendering at various levels. The base level caches the geometry for a window's rendering, which is reused unmodified until something changes on that window. By geometry I do not just mean four corners / two triangles, but the geometry for all imagery drawn; so if the window has a tiled background and text, the geometry for drawing all those quads is stored.

In addition to this, we also support rendering the base-level cached geometry to a texture, along with the rendering for all attached child elements. This is generally used either for special effects or for optimisation (if you have a page of text, rendering it to a texture first can be a massive performance boost, because drawing the two triangles needed to display the 'cache' texture is likely faster than drawing the 20,000 or more triangles needed for a page of text). CEGUI makes this cache texture optional, and under the user's control, since scenarios exist where always rendering to texture first will be slower: excluding special effects where it might always be needed, if some UI element is changing every frame, using a middle-man texture will slow things down instead of speeding them up.
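The cache-texture idea boils down to a dirty flag per widget. A toy sketch of the control flow (the names and the counter are mine, not CEGUI's API; the "expensive" path stands in for regenerating geometry or redrawing into the cache texture):

```cpp
// Toy model of per-widget caching: the expensive content rebuild only runs
// when the widget has been invalidated; every other frame just draws the
// cheap cached result (e.g. two triangles showing the cache texture).
struct CachedWidget
{
    bool dirty = true;         // starts dirty so the first draw builds the cache
    int expensiveRebuilds = 0; // counts how often the costly path ran

    void Invalidate() { dirty = true; }

    // Returns true if the expensive rebuild happened this frame.
    bool Draw()
    {
        bool rebuilt = dirty;
        if (dirty)
        {
            ++expensiveRebuilds; // e.g. re-render 20,000 text quads to a texture
            dirty = false;
        }
        // ...then draw the cached texture's two triangles (GL calls omitted).
        return rebuilt;
    }
};
```

As the post notes, if Invalidate() ends up being called every frame, the cache texture is pure overhead, which is why it is optional.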

We give the user access to various parts of this pipeline so they can do their own thing, such as rendering 3D models. Having said that, it is also possible for the user to render the 3D model to a texture and then use that texture to put the rendered model into a window. For our purposes that can be beneficial, since it allows the user to cleanly update that model without having to hook into the CEGUI rendering process directly.

TL;DR: Keep it as simple as possible. Avoid mission creep. Don't let other people pull you off of your original vision. There is no one right answer and only you can decide what is best for your project!

CE

##### Share on other sites
Thanks for the advice guys, and especially thanks to Eddie, as you are probably one of the most qualified guys to give answers, having done it all! :)

Yes I am sure trying to make a GUI for others to use is a whole other kettle of fish. You can end up trying to be all things to all people, but it's difficult because there are always trade-offs, as you say adding all the features that users demand, versus bloating the code and making it more complex to maintain.

I expect like other aspects of games (3d rendering, physics etc) the 'best' solution for your particular constraints only becomes apparent after several entire redesigns.

After a little investigation it turns out there is a way to easily upload just part of a software surface (a canvas, I'm calling it) to OpenGL, using

[i]glPixelStorei(GL_UNPACK_ROW_LENGTH, uiWidth);[/i]
prior to calling
[i]glTexSubImage2D[/i]

which deals with the problem of the pitch of the source surface, although it may not work on OpenGL ES (I'll cross that bridge when I come to it).
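To make the unpack arithmetic concrete: with GL_UNPACK_ROW_LENGTH set to the full surface width (and GL_UNPACK_SKIP_ROWS / GL_UNPACK_SKIP_PIXELS optionally selecting the dirty rectangle instead of manual pointer arithmetic), GL reads each sub-image row directly out of the pitched surface. Below is a small sketch of the byte offset GL ends up reading row `row` from, assuming 4-byte RGBA pixels; also remember to reset GL_UNPACK_ROW_LENGTH to 0 afterwards so later uploads aren't affected.

```cpp
#include <cstddef>

// Byte offset of sub-image row 'row' inside the big surface, as addressed
// by glTexSubImage2D when GL_UNPACK_ROW_LENGTH = rowLength and
// GL_UNPACK_SKIP_ROWS / GL_UNPACK_SKIP_PIXELS select the dirty rectangle.
// Purely illustrative arithmetic; bpp = 4 assumes an RGBA8 surface.
std::size_t SourceRowOffset(int rowLength, int skipRows, int skipPixels,
                            int row, int bpp = 4)
{
    return (static_cast<std::size_t>(skipRows + row) * rowLength + skipPixels)
           * bpp;
}
```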

So I'll go with the keep-it-simple-stupid approach for now: do it all in software, and just upload dirty rectangles where the pixel data changes.

And render-to-texture does sound like it may offer extra options for the problems of integrating 3D windows with the rest of the GUI. Good points too about having a cached surface for things like text, and about it not being faster in all cases.
