Efficient GUI rendering

Started by lawnjelly · 8 comments, last by lawnjelly 11 years, 9 months ago
This has probably come up before here, so feel free to link me straight to a good source of info .. but ..

Anyway, for various reasons I have written a little GUI system for a game I'm working on, and decided to have a go at fleshing it out to make it useful for writing tools or little apps, possibly cross-platform. Oh gawd, "yet another reinvention of the wheel" I hear you cringe... yes, well, can't say much on that ...

Here's a screeny showing I have it working ok, it didn't take too long:

[screenshot of the GUI in action]

For the game I was using DirectX to do the actual GUI rendering, but I've swapped over to OpenGL for general use, and it's a long time since I did much OpenGL, so I'm very out of date with it.

So my question is mainly for people who have done this before: what did you find the most efficient way of rendering the GUI objects? I can see there are a few trade-offs involved. Until recently I was just doing everything in software, writing to one big software surface, then locking a viewport-sized texture and copying the RGB data across. Not particularly elegant or fast, but simple, and it works.

That is, until I realised I wanted to have some OpenGL 3D viewports displaying 3D models inside the app, with GUI elements possibly overlaid (such as menus or dialog boxes). And to future-proof things it would be nice not to lock the entire viewport every time there is a little change, so it works at a reasonable speed.

So my guesses for some alternative methods are:

1) As each 'widget' is changed, render it to the big software texture, then lock and upload just that part of the texture (using glTexSubImage2D?). This isn't as simple as it could be though, because it appears the source data can't have a 'pitch' to jump across on each line (if the viewport is much wider than the 'dirty rectangle'), so I'd have to first copy that region of the big software texture into a temporary, smaller buffer before uploading it to OpenGL (there's a rough sketch of what I mean after this list).

I could also keep a list of dirty rectangles that need uploading to OpenGL, to avoid uploading the same area more than once in a frame.

2) Same as above, but keep a separate software surface for each 'widget'. That way it can be uploaded without fiddling. However, it makes changing the size of widgets potentially more problematic (as the software surface size will change), and it would be nicer to avoid all those unnecessary memory allocations (although I could use memory pools, I suppose).

3) Have a separate OpenGL texture for each widget. Probably faster for rendering, but pretty ugly in terms of memory allocation / deletion.

4) Try to render everything directly on the 3D card, without using software textures.
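For what it's worth, the workaround I have in mind for option 1 looks roughly like this (an untested sketch; Rect and the function name are just for illustration, and it assumes a 32-bit RGBA canvas the same size as the GUI texture):

// Rough sketch of option 1: copy the dirty rectangle into a tightly packed
// temporary buffer, because glTexSubImage2D (by default) expects tightly
// packed source rows, then upload just that sub-rectangle.
#include <cstdint>
#include <cstring>
#include <vector>
#include <GL/gl.h>   // or your platform's OpenGL header

struct Rect { int x, y, w, h; };

void UploadDirtyRectViaTempCopy(GLuint tex, const uint8_t* canvas,
                                int canvasWidth, const Rect& r)
{
    // Copy the dirty rows into a temp buffer with no extra pitch.
    std::vector<uint8_t> temp(static_cast<size_t>(r.w) * r.h * 4);
    for (int row = 0; row < r.h; ++row)
    {
        const uint8_t* src = canvas +
            (static_cast<size_t>(r.y + row) * canvasWidth + r.x) * 4;
        std::memcpy(&temp[static_cast<size_t>(row) * r.w * 4], src,
                    static_cast<size_t>(r.w) * 4);
    }

    // Upload only the changed region of the GUI texture.
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, r.x, r.y, r.w, r.h,
                    GL_RGBA, GL_UNSIGNED_BYTE, temp.data());
}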

So, does anyone know if there is a standard good way of doing this? Anyone know what CEGUI, MyGUI etc. do?
Cheers :)

what did you find the most efficient way of rendering the GUI objects?
The one that takes the least work to implement. I've dealt with GUI systems quite a bit, and I can tell you for sure that performance has never been a problem for me. Never. They just are not performance-critical paths.
But, as a design rule, I'd try to minimize memory consumption rather than optimize runtime performance.

Previously "Krohm"


[quote]
what did you find the most efficient way of rendering the GUI objects?
The one that takes the least work to implement. I've dealt with GUI systems quite a bit, and I can tell you for sure that performance has never been a problem for me. Never. They just are not performance-critical paths.
But, as a design rule, I'd try to minimize memory consumption rather than optimize runtime performance.
[/quote]
Well that sounds encouraging. :)

I'm beginning to think the idea of trying to render GUI items on top of the OpenGL 3D windows is overkill, and opens up a can of worms. Instead I could just render the GUI in the background, then update the 3D window on top and assume nothing is overlaying it. And in the case where the user opens a menu or something, I could pause the 3D window, kind of avoiding the problem.

I kind of agree on the memory consumption, and also simplicity versus performance; after all, I don't want to spend that much time on it and create the greatest kickass GUI known to man if I'm only going to use it for a couple of apps. I just didn't want to paint myself into a corner later on with early design decisions if it's not necessary.

At the moment I'm thinking in terms of keeping the current approach with one big software surface, then just optimizing the amount of locking / the number of dirty rectangles uploaded to the OpenGL texture. It may well stall the 3D pipeline, but that shouldn't matter much if I'm not doing it every frame, and it's just for tools.
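For the dirty rectangle tracking I'm picturing something simple along these lines (an untested sketch; the Rect type and names are only for illustration): each new dirty area gets merged into any rectangle it overlaps, so the same pixels aren't uploaded twice in a frame.

#include <algorithm>
#include <vector>

struct Rect { int x, y, w, h; };

static bool Overlaps(const Rect& a, const Rect& b)
{
    return a.x < b.x + b.w && b.x < a.x + a.w &&
           a.y < b.y + b.h && b.y < a.y + a.h;
}

static Rect Union(const Rect& a, const Rect& b)
{
    int x0 = std::min(a.x, b.x);
    int y0 = std::min(a.y, b.y);
    int x1 = std::max(a.x + a.w, b.x + b.w);
    int y1 = std::max(a.y + a.h, b.y + b.h);
    return { x0, y0, x1 - x0, y1 - y0 };
}

void AddDirtyRect(std::vector<Rect>& dirty, const Rect& r)
{
    for (Rect& existing : dirty)
    {
        if (Overlaps(existing, r))
        {
            // Grow an existing entry instead of adding a new one.
            // Simplistic: the grown rect isn't re-checked against the others,
            // which should be fine for a handful of widgets.
            existing = Union(existing, r);
            return;
        }
    }
    dirty.push_back(r);
}

// At the end of the frame: upload each rect in 'dirty' with glTexSubImage2D,
// then clear the list.
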

GUI rendering can be slow because text rendering can be slow if there's a lot of text, so you could look into text rendering optimization.

I know you didn't ask for comments on the reinventing thing, but it's pretty strange that you are really reinventing the look of the standard win32 GUI.
If you want to release stuff with this GUI, it will kick your ass. And it will kick the users' ass. A GUI that just looks like the well-known Windows GUI but doesn't work like it is a major anti-experience for the user.

The one that takes the least work to implement. I've dealt with GUI systems quite a bit, and I can tell you for sure that performance has never been a problem for me. Never. They just are not performance-critical paths.
But, as a design rule, I'd try to minimize memory consumption rather than optimize runtime performance.


I'm guessing that you've never worked with Scaleform. :P

GUI rendering can be slow because text rendering can be slow if there's a lot of text, so you could look into text rendering optimization.

I know you didn't ask for comments on the reinventing thing, but it's pretty strange that you are really reinventing the look of the standard win32 GUI.
If you want to release stuff with this GUI, it will kick your ass. And it will kick the users' ass. A GUI that just looks like the well-known Windows GUI but doesn't work like it is a major anti-experience for the user.


I agree about the user experience; there are some well-executed and user-friendly third-party GUIs, and some that just leave you scratching your head.

In this case I'd already written the basics of the GUI for a lightweight in-game menu system, so I thought why not add regular application menus and try running with it for a simple app. It's all good experience. I'm just planning on using it for a simple internal 3D model editor for now, nothing fancy.

And of course not being tied to win32 leaves you more options; a couple of friends and I have just released an app on iOS that used a simplified version of this.

I'm guessing that you've never worked with Scaleform. :P
Of course not. I'm referring to the shitty systems I use here.

Previously "Krohm"

Have you come across GWEN? It's a GUI system aimed at games.

It comes with renderers for GDI, Allegro, OpenGL, Direct2D, DirectX and SFML.

I've used it with SFML. I had to modify the SFML renderer slightly to suit my needs and because it's open source it was easy to do.

Here's the link:
http://code.google.com/p/gwen/

From the website:


Facts

  • Coded in C++
  • Fully Namespaced
  • All standard window controls
  • Behaves like you'd expect
  • Lightweight
    • No XML readers, no font loaders/renderers, no texture loaders - your engine should be doing all this stuff!
  • Easy to integrate (comes with renderers for Windows GDI, Allegro, OpenGL, Direct2D, DirectX and SFML)
  • Totally portable & cross platform
  • Doesn't need RTTI
  • Released under a "do whatever you want" MIT license.

I've written this, but I'm not really sure it answers any questions; perhaps you will find something useful here ;)

I would say that what you do would largely depend on your end goals (and those may change as time progresses). By this I mean whether it's only ever going to be for use in your own projects, or whether you intend a more general-purpose library for use by the masses.

I guess some of what I put here is more aimed at a public lib and so it may not apply if what you're doing is only ever going to be for use in your own stuff.

It has to be said that to be completely general purpose and suitable for all possible scenarios is exceptionally difficult, if not impossible, to pull off. GUIs used for games and such can usually make assumptions and impose limitations that perhaps you cannot if your GUI is to be used for non-game things. Editors and such can fall into either category, and which category any given tool falls into depends on many things IMO.

I suggest creating a brief “mission statement” that describes specific roles for your GUI, and sticking to that at all costs. Decide early on some limitations and stick to them rigidly. While “rigidity” sounds like bad advice, in the long run it will save you time and effort and keep your project lean, mean and on target at all times. Trying to be all things to all people is a slippery slope, and ultimately will split users into two camps: those that love the fact that you're willing to implement their feature request or modification, and those that think your lib sucks because it is complex or bloated. You cannot please all of the people all of the time ;)

Current CEGUI performs rendering at various levels. The base level caches geometry for a window's rendering, which remains unmodified and thus is reused until something changes on that window. By geometry, I do not mean four corners, two triangles or whatever, but geometry for all imagery drawn; so if the window has a tiled background and text, the geometry for drawing all those quads is stored.
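Boiled right down, the base-level cache is just "rebuild the vertex list only when the window is dirty", something like this (a simplified sketch of the idea only, not CEGUI's actual code; the Vertex type and names are invented for the example):

#include <vector>

struct Vertex { float x, y, u, v; unsigned int colour; };

class CachedWindowGeometry
{
public:
    void Invalidate() { m_dirty = true; }

    // Called every frame; the vertex list is only rebuilt when the window changed.
    const std::vector<Vertex>& GetGeometry()
    {
        if (m_dirty)
        {
            m_vertices.clear();
            RebuildQuads(m_vertices);   // emit quads for the frame, background tiles, text glyphs...
            m_dirty = false;
        }
        return m_vertices;              // unchanged windows reuse last frame's geometry
    }

protected:
    // Widget-specific: append two triangles per image or glyph drawn.
    virtual void RebuildQuads(std::vector<Vertex>& out) = 0;

private:
    std::vector<Vertex> m_vertices;
    bool m_dirty = true;
};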

In addition to this, we also support rendering the base-level cached geometry to a texture, along with the rendering for all attached child elements. This is generally used either for special effects or for optimisation purposes (if you have a page of text, rendering it to a texture first can be a massive performance boost, because drawing the two triangles needed to display the 'cache' texture is likely faster than drawing the 20,000 or more triangles needed for a page of text). CEGUI makes this cache texture optional, and under the user's control, since scenarios exist where always rendering to texture first will be slower: if we exclude special effects where it might always be needed, then if some UI element is changing every frame, using a middle-man texture will slow things down instead of speeding them up.
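In terms of plumbing, the render-to-texture part is conceptually just a texture plus framebuffer object per cached element, along these lines (purely an illustrative sketch assuming GL 3.0-style FBOs and an extension loader already being in place; it is not CEGUI's actual code):

#include <GL/glew.h>   // assumes some extension loader is already set up

struct CachedSurface
{
    GLuint fbo = 0;
    GLuint texture = 0;
    int width = 0, height = 0;
    bool dirty = true;       // set when the widget subtree changes
};

// (Re)create the cache texture and FBO when the element is first cached or resized.
void EnsureCache(CachedSurface& cache, int w, int h)
{
    if (cache.texture != 0 && cache.width == w && cache.height == h)
        return;

    if (cache.texture) glDeleteTextures(1, &cache.texture);
    if (cache.fbo)     glDeleteFramebuffers(1, &cache.fbo);

    glGenTextures(1, &cache.texture);
    glBindTexture(GL_TEXTURE_2D, cache.texture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glGenFramebuffers(1, &cache.fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, cache.fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, cache.texture, 0);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);

    cache.width  = w;
    cache.height = h;
    cache.dirty  = true;
}

// Per frame: if dirty, bind cache.fbo, redraw the widget subtree into it and
// clear the flag; otherwise just draw cache.texture as a single textured quad.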

We give the user access to various parts of this pipeline so they can do their own thing, such as rendering 3D models. Having said that, it is also possible for the user to render the 3D model to a texture and then use that texture to put the rendered model into a window. For our purposes that can be beneficial, since it allows the user to cleanly update that model without having to hook into the CEGUI rendering process directly.

TL;DR: Keep it as simple as possible. Avoid mission creep. Don't let other people pull you off of your original vision. There is no one right answer and only you can decide what is best for your project!

CE

Thanks for the advice guys, and especially thanks to Eddie, as you are probably one of the most qualified people to give answers, having done it all! :)

Yes, I am sure trying to make a GUI for others to use is a whole other kettle of fish. You can end up trying to be all things to all people, but it's difficult because there are always trade-offs, as you say: adding all the features that users demand versus bloating the code and making it more complex to maintain.

I expect that, like other aspects of games (3D rendering, physics etc.), the 'best' solution for your particular constraints only becomes apparent after several complete redesigns.

After a little investigation it turns out there is a way to easily upload just part of a software surface (a canvas, as I'm calling it) to OpenGL: call

glPixelStorei(GL_UNPACK_ROW_LENGTH, uiWidth);

prior to calling glTexSubImage2D, which deals with the problem of the pitch of the source surface, although it may not work on OpenGL ES (I'll cross that bridge when I come to it).
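So the upload routine ends up looking something like this (an untested sketch, the names are mine; it assumes a 32-bit RGBA canvas the same size as the GUI texture):

#include <cstddef>
#include <cstdint>
#include <GL/gl.h>   // or your platform's OpenGL header

void UploadDirtyRectWithRowLength(GLuint tex, const uint8_t* canvas,
                                  int canvasWidth,
                                  int x, int y, int w, int h)
{
    glBindTexture(GL_TEXTURE_2D, tex);

    // Source rows are canvasWidth pixels wide, not w pixels wide, so the
    // dirty rectangle can be read straight out of the big canvas with no
    // temporary copy.
    // Note: GL_UNPACK_ROW_LENGTH isn't available in OpenGL ES 2.0 (it was
    // added in ES 3.0).
    glPixelStorei(GL_UNPACK_ROW_LENGTH, canvasWidth);

    // Point at the first pixel of the dirty rectangle; GL skips the rest of each row.
    const uint8_t* src = canvas + (static_cast<std::size_t>(y) * canvasWidth + x) * 4;
    glTexSubImage2D(GL_TEXTURE_2D, 0, x, y, w, h,
                    GL_RGBA, GL_UNSIGNED_BYTE, src);

    // Restore the default so later uploads aren't affected.
    glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
}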

So I'll go with the keep-it-simple-stupid approach for now and do it all in software, and just upload dirty rectangles where pixel data changes.

And render-to-texture does sound like it may offer extra options for any problems of integrating 3D windows with the rest of the GUI. Good points too about having a cached surface for things like text, but it not being faster in all cases.

