
Designing a GUI system, hints appreciated.



#1 Krohm   Crossbones+   -  Reputation: 3044


Posted 22 June 2010 - 07:57 AM

At the end of yesterday I figured out I truly needed a far more complete and less headache-inducing approach to managing my simulated GUI. I couldn't just keep on hacking pixels together.
I had some experience with "real" GUIs in the past and, while I am surely not a wizard like many others, I have a few scars. I recalled reading an article on Immediate Mode GUIs and decided to take a look at them.

The next part of the post is essentially a wrap-up of a day spent understanding IMGUI; I post it here for other readers in the hope it saves them some time.
 


Just to aggregate the few resources I've considered:
probably the first original discussion on the topic
the original video by Casey Muratori on the IMGUI paradigm
Game Developer magazine, September 2005, The Inner Product, by Sean Barrett.
I also quite liked this SDL-based tutorial.

At this point, however, I was still not completely sold. Some of the so-called "cons" of retained-mode GUIs didn't apply to the system I had in mind, as I had additional machinery at my disposal that I could use to keep various things under control.
My goal was to figure out the shortest path to the minimal GUI I need that is also "The Right Way". I figured out that I might have to consider a fully fledged retained GUI in the future. That seems very reasonable, and Unity seems to be following that same path, so why not?

For those not familiar with the paradigm, here are the starting points from which an IMGUI system works:


  1. The most important thing is that the user will interact with a single control at a time. Every one of us probably knows that; kudos to Mr Muratori for formalizing it and proving that a less involved GUI system is possible!

  2. The second important thing is accepting that a GUI is closely tied to the data it represents. A checkbox will generally represent a boolean property. Radio buttons give you a choice among the few options provided. Callbacks and listeners (as in Swing) can be fine if you need to manage non-trivial behaviour, but those can be bolted on by the application when needed, keeping the paradigm simpler.
    As such, have the GUI controls work directly on application data (which is then the GUI's state as well). The problem of connecting the app to the GUI is now solved. Leave any non-trivial mappings to the application.


Now that those things are clear, IMGUI takes the form of a very nice, synchronous-looking call such as doButton(blah); it's almost like going back to the old console print-and-read functions. This doButton function will return true if the state of the drawn button changed this tick. End of story.
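
Just to make the shape of the API concrete, here's a minimal sketch of the idea. This is only my illustration of the paradigm, not code from any of the libraries above; doButton and doCheckBox are hypothetical entry points.

// Hypothetical IMGUI entry points, in the spirit described above.
bool doButton(const char *label);                  // true only on the tick it was clicked
bool doCheckBox(const char *label, bool &value);   // toggles 'value' in place, true on change

struct AppState {
    bool  soundEnabled;
    float volume;
};

// Called every frame: the GUI is re-declared each tick, directly against app data.
void doFrameGUI(AppState &app)
{
    doCheckBox("Enable sound", app.soundEnabled);  // the bool *is* the checkbox state

    if (doButton("Reset volume"))                  // fires exactly on the frame of the click
        app.volume = 0.8f;
}
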
Now coming to the point...
 


As I would like to figure out a good way to do this which won't require me to rewrite everything... or indeed anything, for a while, I would like to have some brainstorming on what you wished your GUI could do (or is doing).
I know "more or less" what I need in the near future, but I'm asking for help in drawing a bigger picture.




#2 swiftcoder   Senior Moderators   -  Reputation: 9846


Posted 22 June 2010 - 09:16 AM

Quote:
Original post by Krohm
I would like to have some brainstorming on what you wished your GUI could do (or is doing).
- Unicode support. My currently in-development GUI toolkit does this via cairo/pango, but I don't know of any others which can manage this, and it makes foreign localisation one hell of a lot easier.

- Automatic layout. IMGUI's are a bit easier to layout manually, but I really don't want to have to hand-configure (in code, no less) the GUI for each container at each resolution/aspect-ratio.

- Theming. There is no point adopting a 3rd party GUI toolkit if I can't theme it to completely match the rest of my application. This doesn't just mean custom images, but custom metrics, layout, etc.

Tristam MacDonald - Software Engineer @Amazon - [swiftcoding]


#3 owl   Banned   -  Reputation: 364


Posted 22 June 2010 - 09:27 AM

Ease of implementation.

GUI.Init();                             // one-time setup

Window = GUI.AddWindow();               // create a top-level window
Button = Window.AddControl(BUTTON);     // add a button control to it
Button.OnClick(OnButtonClickFunc);      // register a click callback

GUI.Update();                           // per frame: process input and events
GUI.Draw();                             // per frame: render the whole GUI


#4 ApochPiQ   Moderators   -  Reputation: 14891


Posted 22 June 2010 - 09:42 AM

Full control over the rendering pipeline. I want to be able to completely integrate GUI element rendering into my own graphics engine, to do things like render GUI elements to textures rather than the framebuffer, optimize batching, and so on. This is tricky to get right but makes your GUI toolkit incredibly powerful, especially if you want to do nontrivial stuff with your UI.

Prototyping tools that allow me to edit and preview UI animations etc. outside the game are also invaluable. Ideally, a WYSIWYG editor should be available.

#5 Grafalgar   Members   -  Reputation: 544


Posted 22 June 2010 - 10:18 AM

I'm in the process of building a GUI authoring tool + API that I'm hoping addresses most of the UI headaches I, and most others, have encountered. If anyone is interested in giving it a go, please let me know, I'm looking for some critique =)

That said, though, I've found there's a massive difference between application UI's and game UI's. Most people I've seen who start off with a game UI try to mimic application UI's. I.e., the controls they focus on are checkboxes, radio buttons, lists, group boxes, and so on. This is mostly overkill though - most games I've seen mostly use three types of controls: sprites, labels and buttons. Though my UI API supports it, an elaborate control hierarchy may not even be necessary, since most game UI's I've seen are barely two levels deep.

The games that do benefit from application-UI's are data-intensive ones - like MMO's or sports games (since they need to display statistics). For everything else, simplicity is key :)

#6 Krohm   Crossbones+   -  Reputation: 3044


Posted 22 June 2010 - 08:39 PM

Thank you everybody for your suggestions. Reading them truly helped me focus. I'll reply to them in the second part of this message; for this first part, I'd like to continue "the story" from where I left off in my original message.



In my first message I wrote that I was "still not completely sold" on the IMGUI paradigm. What truly scared me was layout management for non-trivial widgets.
To solve this, I started looking at available libraries and yesterday I stumbled on nvidia-widgets. NVWidgets, as I'll call it from now on, looked just right to me.

The first thing I noticed is that the need to generate control IDs is gone in nvWidgets. I had long wondered why IMGUI controls needed an ID, as their existence lasts a moment and they're almost completely identified by their evaluation. Taking out the unique ID helped me enormously in closing the circle.

The second thing nvWidgets does is take it easy on layout. Hey, most controls are just rectangles! I was amazed to realize I hadn't noticed this despite all my effort. Shame on you, complicated RMGUI paradigms, deforming my line of thinking to the point that a control is necessarily a difficult, non-trivial thing. It's abstraction lost... this time for good. The layout algorithm in nvWidgets is driven by a couple of enums and bitmasks, somewhat ugly if you ask me, but it works fine for the simple applications they need to build (have a look at the NVSDK 10.5 for GL). I'll probably need something slightly more complicated, but at least I now know I was not keeping it simple enough.
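
To show what "most controls are just rectangles" can boil down to, here is a toy layout cursor. The names and the flag set are mine, not nvWidgets' actual API; it just stacks requested rectangles vertically, with a small bitmask standing in for the enum/flag-driven placement mentioned above.

struct Rect { int x, y, w, h; };

enum LayoutFlags { ALIGN_LEFT = 0, ALIGN_RIGHT = 1 };

struct LayoutCursor {
    Rect panel;     // the area being filled
    int  nextY;     // next free vertical position inside the panel
    int  spacing;   // gap between consecutive widgets

    // Each widget simply asks for the next rectangle of the size it wants.
    Rect place(int w, int h, unsigned flags = ALIGN_LEFT) {
        int x = (flags & ALIGN_RIGHT) ? panel.x + panel.w - w : panel.x;
        Rect r = { x, nextY, w, h };
        nextY += h + spacing;
        return r;
    }
};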

The third thing is that nvWidgets tries to apply some separation between the logic and the rendering in a way that is a bit easy-going but nice in concept. More on that later.

The key thing is that it accumulates input in a buffer, not very differently from how a typical GUI manager would, and then flushes it ASAP to the focused control.
I am still not 100% sure of how they manage their focus; it looks to me like some odd cases (that is, a control taking the place of the focused one, overlapping the focus point) would cause the focus to be transferred "transparently" without actually causing a loss of focus. It's probably far less bad than it sounds; it seems rare, and it would be questionable UI design from the start anyway.

A thing that made me somewhat skeptical was the way they manage their frames. Their frames are easy-going, being basically just a border with no fill. This makes them order-independent. I wanted to support at least some blending, for which I would have to enforce a specific drawing order. This implied I had to truly cache some controls instead of drawing them immediately as the IMGUI paradigm suggests, and this is the first thing I had never figured out so far. Completely decoupling the rendering from the logic means I can work on the true hotspots right away (Model) and draw later (View), after all the controls' states are known. This still has the same layout issues, as clicks would have already been resolved, but it's just so much better!
I am positive that putting an IMGUI on top of a retained-mode API must be done that way. No matter what, nobody can afford a vertex buffer Map per control, tens of times per frame. The alternative, uniform fetching, didn't look much better. Even if I could afford the GUI drawing itself that way, it would completely screw parallelism. It's just a no-go.
With decoupling, all of this is gone and the manager is left with simple comparisons of input vectors. The geometry is updated only on change. There are some issues with persistence, but I'll figure out something.
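
For what it's worth, here is a rough sketch of that decoupling, using names I've made up (hitTest, mouseReleasedThisFrame, currentLayer and submitToRenderer are assumed helpers, nothing real): widget calls resolve their logic immediately, but only append a draw command to a list that is sorted and submitted once per frame, after every control's state is known.

#include <algorithm>
#include <string>
#include <vector>

struct GuiRect { float x, y, w, h; };
struct DrawCmd { int layer; GuiRect rect; std::string text; };

// Assumed helpers, declared only so the sketch hangs together.
bool hitTest(const GuiRect &r);
bool mouseReleasedThisFrame();
int  currentLayer();
void submitToRenderer(const std::vector<DrawCmd> &cmds);

static std::vector<DrawCmd> g_drawList;   // filled during widget calls, flushed once per frame

bool doButton(const GuiRect &r, const std::string &label)
{
    bool clicked = hitTest(r) && mouseReleasedThisFrame();   // Model: logic resolved right now
    g_drawList.push_back({ currentLayer(), r, label });      // View: drawing deferred
    return clicked;
}

void endFrameGUI()
{
    // Enforce a back-to-front order so blending works, then batch into few draw calls.
    std::sort(g_drawList.begin(), g_drawList.end(),
              [](const DrawCmd &a, const DrawCmd &b) { return a.layer < b.layer; });
    submitToRenderer(g_drawList);
    g_drawList.clear();
}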



Now coming to your replies...
Quote:
Original post by swiftcoder
Unicode support. My currently in-development GUI toolkit does this via cairo/pango, but I don't know of any others which can manage this, and it makes foreign localisation one hell of a lot easier.
My app is Unicode-enabled from the ground up. I've even tested some Hebrew, Hindi and Arabic. I still have some issues in correctly guessing the caret insertion positions. For the time being, I won't support arbitrary caret positions, because they interact with some additional machinery I plan to deploy (see below).
Quote:
Original post by swiftcoder
- Automatic layout. IMGUI's are a bit easier to layout manually, but I really don't want to have to hand-configure (in code, no less) the GUI for each container at each resolution/aspect-ratio.
I'll be honest: I still have no idea how to correctly resize windows/controls to deal with aspect ratio.
Right now I am thinking about assuming a 4:3 ratio, putting the coordinate-system origin where it would be if the monitor were 4:3.
That is, on a 4:3 monitor the coordinate system would have its origin in the lower-left corner (resolution independent). On a 16:9 or 16:10 screen the origin would be arranged so that the normal 4:3 area is centered, meaning that I would get some "good pixels" for x < 0.0 and x > 1.0, with windows popping up "more or less in the center". This is just wrong for many cases, for which an alternative solution is definitely necessary, with x = 0 at the lower-left corner depending on needs.
The method used for layout in the above-mentioned nvWidgets is rather minimal but somewhat OK for very simple prototypes. I'm not sure that little will be enough for me.
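
For clarity, this is the arithmetic I have in mind for the centered-4:3 mapping, as a sketch (all names are mine): virtual coordinates (0,0)..(1,1) cover a 4:3 region centered on the screen, so wider screens simply expose extra valid space at x < 0 and x > 1.

struct VirtualMapping {
    float screenW, screenH;   // physical resolution in pixels

    // width, in pixels, of the virtual 4:3 region (the full screen height is always used)
    float virtualWidthPx() const { return screenH * (4.0f / 3.0f); }

    // pixel x of virtual coordinate vx; the 4:3 region is centered horizontally
    float toPixelX(float vx) const {
        float w43     = virtualWidthPx();
        float originX = (screenW - w43) * 0.5f;   // 0 on a 4:3 screen, > 0 on wider ones
        return originX + vx * w43;
    }

    // pixel y of virtual coordinate vy, origin at the lower-left corner
    float toPixelY(float vy) const { return screenH - vy * screenH; }
};

On a 1920x1080 screen, for example, the 4:3 region is 1440 pixels wide, so the full screen spans roughly x = -0.167 to x = 1.167 in these virtual coordinates.
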
Quote:
Original post by swiftcoder
- Theming. There is no point adopting a 3rd party GUI toolkit if I can't theme it to completely match the rest of my application. This doesn't just mean custom images, but custom metrics, layout, etc.
That seems a far more advanced feature than I will need for quite a while. It also makes little sense for the time being, as I don't plan to release this into the wild any time soon.
Thank you anyway; I'll take a memo about that.
I would appreciate it if you could point out some "extreme" theming examples. I always considered Gnome themes to be rather nice, but they're also somewhat basic.
Quote:
Original post by ApochPiQ
things like render GUI elements to textures rather than the framebuffer
This rationale has been there since day one. I have no idea how much time and effort I have spent on this up to now, but I still think it's a primary requirement nowadays. There will also be special systems to remap input events, such as driving the mouse cursor on an arbitrary surface. Well, not quite arbitrary: probably just quads and maybe cylindrical and spherical sections.
Quote:
Original post by ApochPiQ
optimize batching
Of the GUI itself, you mean? For complex GUIs that would be incredibly difficult given my current architecture. Interestingly enough, the roadmap would sooner or later take me to an architecture that optimizes this transparently, but it's too soon to say.
Quote:
Original post by ApochPiQ
Prototyping tools that allow me to edit and preview UI animations etc. outside the game are also invaluable. Ideally, a WYSIWYG editor should be available.
I'll be honest, I still haven't figured out how to mix IMGUI with data-driven design. Not in the details, at least.
My best bet is that if I provide a doButton(ButtonDescriptor&, bool&) call, I might eventually pool those descriptors as resources, maybe localized resources, and then, in code, do things like

ButtonDescriptor *bd = dynamic_cast<ButtonDescriptor*>(
    resourceManager.GetLocalizedResource(FINAL_CHAINSAW_MASSACRE_CONTINUE_BUTTON_DESCRIPTOR));
if(!bd) throw BadButtonResource(blah);       // throw by value rather than by pointer
if(doButton(*bd, persistentStateFlag)) {
    doLabel("You hit that button!");
    // or
    // doLabel(LabelDescription&) similarly to the above
}

This will happen in the future, hopefully soon, just not now.
Quote:
Original post by Grafalgar
The games that do benefit from application-UI's are data-intensive ones - like MMO's or sports games (since they need to display statistics). For everything else, simplicity is key :)
This is the driving principle of IMGUI! Have a look at this paradigm!

#7 Grafalgar   Members   -  Reputation: 544


Posted 23 June 2010 - 05:43 AM

My concern with IMGUI, per my understanding of it, is that presentation and logic are very much tied together. That makes it difficult for an artist and a programmer to work together on a UI, as the artist becomes entirely dependent on the programmer to 'hook up' changes to the layout before they can continue. There may be other implementations that serve to alleviate this, but the few cases I've come across rely entirely on someone writing both the logic and the presentation simultaneously.

In my approach the UI authoring tool is all data. Really, it's nothing more than a key-framed sprite animation system. The UI backend loads it up and you simply play animations at the appropriate times. Simple C#-like event lists 'react' to button presses, mouse movements, and so on.

I've gone to great lengths to make the separation between data and functionality extremely clear so that an artist or designer can work independently from a programmer, prototyping their UI before it ever makes it into the game.

#8 Krohm   Crossbones+   -  Reputation: 3044


Posted 23 June 2010 - 06:02 AM

I also considered going for a similar approach; only a few days ago this was my choice. It required a lot, a lot of extra machinery (not that IMGUI simply plugs in, but it requires far less).

The main concern about the data-driven model you also ended up with is linking callbacks. I suppose it's much easier in C# than it is in C++, but I simply had to jump through hoops to make an RMGUI work for generic data, with generic code polling arbitrary widgets. Too many variables for now, bordering on the realm of deep magic.

The IMGUI examples I've seen are also very simple, but I don't agree on the need to tightly couple logic and presentation; in fact, I just trashed a few hours of work in order to implement this separation. It is worth it: as I've noted in my previous message, logic and presentation must truly be temporally separated in my opinion, unless you don't mind locking buffers/uploading uniforms potentially hundreds of times per frame.

#9 Grafalgar   Members   -  Reputation: 544


Posted 23 June 2010 - 06:24 AM

I'm certainly curious to see how you separate logic/presentation using IMGUI :) To me the approach seems tightly woven together, in that you specify a button to be rendered and then action if that button is to be interacted with. So, per my understanding, if an artist wants to add an additional button they need to create the resource and ask the programmer to add it in, before it can be visible on screen.

Now, it's easy enough to create a scripting language to do exactly that (I think Torque uses the same approach? Or am I thinking of Unity?), but from my experience artists *really* don't like writing code. They much prefer laying things out in an editor and letting the programmer handle the hooks.

How does IMGUI handle complex animations? I admit that I am not intimately familiar with the approach aside from the handful of examples I've seen.

#10 swiftcoder   Senior Moderators   -  Reputation: 9846


Posted 23 June 2010 - 01:36 PM

Quote:
Original post by Grafalgar
To me the approach seems tightly woven together, in that you specify a button to be rendered and then action if that button is to be interacted with.
And how is this functionally any better in a RMGUI? Sure, the artists can add a button to an RMGUI in the GUI designer, but until the programmer writes a function for the button to call, it doesn't do anything.
Quote:
So, per my understanding, if an artist wants to add an additional button they need to create the resource and ask the programmer to add it in, before it can be visible on screen.
And that brings us to the meat of the issue: why in hell is your artist adding buttons willy-nilly? The artist doesn't know what functionality is required, that would be the programmer's responsibility.

You are thinking too much in an RMGUI workflow, where the artist designs the UI, and the programmer makes it work, typically requiring some cycles of artist<->programmer before it is correct.

With an IMGUI, the programmer figures out functionality, stubs out the GUI hooks *beforehand*, and then the artist modifies the layout and themes GUI based on the stub.

Quote:
How does IMGUI handle complex animations? I admit that I am not intimately familiar with the approach aside from the handful of examples I've seen.
The exact same way an RMGUI does: you run a timer, which updates certain attributes of the GUI elements.

Simple IMGUI's (like the nvidia sample) obviously can't do this, because they don't persist any state behind the scenes, but an IMGUI of any complexity may retain just as much information as a RMGUI - the key element being that the user doesn't have to deal with it.
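
As a concrete illustration of "retained behind the scenes", here is a rough sketch (the names are made up on the spot, not taken from my toolkit, and hitTest, mouseReleasedThisFrame and drawButton are assumed helpers): a hover animation lives entirely inside the library, keyed by a widget ID the library already has, and the caller never touches it.

#include <algorithm>
#include <cstdint>
#include <unordered_map>

struct GuiRect { float x, y, w, h; };

// Per-widget data the library keeps between frames; the caller never sees it.
struct WidgetAnim { float hoverFade; };   // 0 = idle, 1 = fully highlighted

static std::unordered_map<uint64_t, WidgetAnim> g_anim;   // keyed by widget ID

// Assumed helpers.
bool hitTest(const GuiRect &r);
bool mouseReleasedThisFrame();
void drawButton(const GuiRect &r, const char *label, float highlight);

bool doButton(uint64_t id, const GuiRect &r, const char *label, float dt)
{
    WidgetAnim &a = g_anim[id];                    // created on first use, retained afterwards

    bool  hovered = hitTest(r);
    float target  = hovered ? 1.0f : 0.0f;
    a.hoverFade  += (target - a.hoverFade) * std::min(1.0f, 10.0f * dt);   // timer-driven fade

    drawButton(r, label, a.hoverFade);             // presentation uses the retained value
    return hovered && mouseReleasedThisFrame();
}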

Tristam MacDonald - Software Engineer @Amazon - [swiftcoding]


#11 Grafalgar   Members   -  Reputation: 544


Posted 23 June 2010 - 06:28 PM

Quote:
Original post by swiftcoder
And how is this functionally any better in a RMGUI? Sure, the artists can add a button to an RMGUI in the GUI designer, but until the programmer writes a function for the button to call, it doesn't do anything.


Exactly - the artist can lay out the visuals without programmer intervention. In fact, an artist can add twenty buttons, sprites popping in and out, and replace the 'face' of the UI without needing the programmer at all. To make the buttons do something the programmer needs to step in, of course, but ideally a simple visual change would not require a code change.

With IMGUI once the layout & code are tied together, an artist can't simply add/remove buttons/sprites without modifying the code. So the artist, who really only cares about visuals, now needs to have intimate knowledge of the underlying code as well. The separation between logic & presentation does not seem clear enough to allow either to work completely independently.

Now, granted, a tool could potentially merge the presentation and logic together allowing the programmer and artist to work independently, but I haven't seen anything like this. Most IMGUI's I've seen are very programmer-friendly (script file editing and such), and not very artist friendly.


Quote:
Original post by swiftcoder
And that brings us to the meat of the issue: why in hell is your artist adding buttons willy-nilly? The artist doesn't know what functionality is required, that would be the programmer's responsibility.


Usually artists and programmers, working together, have a close understanding of what's needed on screen. They will discuss the need to have a button here, a sprite there, a window yonder. The artist can start laying out screens w/ the needed buttons and whatever else and get it ready for the programmer to tie it all together.

Similarly, the programmer can work ahead based on an agreement between himself and the artist, knowing that (at some point) a screen will come with a button named "PLAY_GAME" or whatever, and when that button is pressed, the game will launch. Functionality can be built independently from the presentation, and vice versa. But, as I said, it does require that the artist and programmer are in agreement with what to expect in the UI.

Quote:
Original post by swiftcoder
With an IMGUI, the programmer figures out functionality, stubs out the GUI hooks *beforehand*, and then the artist modifies the layout and themes GUI based on the stub.


You can do the exact same with RMGUI. A programmer can literally fire up a tool, lay out some buttons and windows that he expects, hook up the functionality, and tell the artist to go hog-wild. I don't see where IMGUI has a strength in that regard ;) It's a process change, not a code/system change.

Quote:
Original post by swiftcoder
The exact same way an RMGUI does: you run a timer, which updates certain attributes of the GUI elements.


Coded by hand or does a tool do that? I don't think there's any reason NOT to have a tool do the bulk of an IMGUI system, but that the UI performs layout and logic simultaneously makes me curious how it would be done without turning an IMGUI solution into an RMGUI-like system.

#12 swiftcoder   Senior Moderators   -  Reputation: 9846


Posted 24 June 2010 - 02:48 AM

Quote:
Original post by Grafalgar
Quote:
Original post by swiftcoder
With an IMGUI, the programmer figures out functionality, stubs out the GUI hooks *beforehand*, and then the artist modifies the layout and themes GUI based on the stub.
You can do the exact same with RMGUI. I don't see where IMGUI has a strength in that regard ;) It's a process change, not a code/system change.
I never said it was a particular strength of IMGUI, but nor do I see it as a weakness.

There is also nothing stopping you from producing an IMGUI designer tool which allows the artist to add buttons, etc. and then produces the stubbed-out source code - just like all RMGUI designer tools do currently.

Quote:
Quote:
The exact same way an RMGUI does: you run a timer, which updates certain attributes of the GUI elements.
Coded by hand or does a tool do that?
The *exact* same way an RMGUI does. I even use the same source files between my older RMGUI and an IMGUI.

The only fundamental difference between an IMGUI and an RMGUI is the way in which elements are specified, and the way in which state is propagated to/from the application. Most of the IMGUIs you see are trivial examples, and thus don't implement the full feature set of an RMGUI - but they aren't in any way representative of any limitations of the method.

Tristam MacDonald - Software Engineer @Amazon - [swiftcoding]


#13 MoundS   Members   -  Reputation: 162


Posted 24 June 2010 - 10:15 AM

Quote:
Original post by Grafalgar

Exactly - the artist can lay out the visuals without programmer intervention. In fact, an artist can add twenty buttons, sprites popping in and out, and replace the 'face' of the UI without needing the programmer at all. To make the buttons do something the programmer needs to step in, of course, but ideally a simple visual change would not require a code change.

With IMGUI once the layout & code are tied together, an artist can't simply add/remove buttons/sprites without modifying the code. So the artist, who really only cares about visuals, now needs to have intimate knowledge of the underlying code as well. The separation between logic & presentation does not seem clear enough to allow either to work completely independently.


The simplicity and separation of concerns that you describe here are not a property of IMGUI vs RMGUI but rather the fact that your GUI is static.

A lot of UIs aren't that simple. They're dynamic. They have controls that operate on data in the app. They have widgets that appear and disappear depending on the state of the app. They have lists and hierarchies of widgets that reflect the structure of data in the app.

These are inherent properties of the UI, and you can't shield an artist from them. You can free them from having to work in code, but they still have to understand concepts like "there will be one of these panels for each item in this list" or "this panel will be displayed whenever this condition is true."

This is all true regardless of whether your GUI library has an IM or RM interface. I think the reason that you associate properties of static UIs with RMGUI is that RMGUI is really only good for dealing with static UIs.

With RMGUI, you give the library an initial definition of the UI, and the library holds onto it, assuming it will stay the same. If you want any dynamic behavior, you have to go out of your way to change the UI definition inside the library.

IMGUI APIs are designed for dynamic UIs. You write a function that gets called each frame to look at the state of your application and produce the UI for that frame. And you can use all the same control structures you normally use, like if statements and for loops, which map quite naturally to the dynamic properties I described above. In other words, instead of giving it a static definition of the UI, you give it parameterized definition that will yield different UIs depending on the state of the application.
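
A tiny sketch of what I mean by a parameterized definition (the do* calls and the Inventory type here are placeholders of mine, not a real library): the same function yields a different UI every frame, depending on the data.

#include <algorithm>
#include <string>
#include <vector>

struct Item      { std::string name; bool selected; };
struct Inventory { std::vector<Item> items; };

// Placeholder IMGUI calls.
bool doCheckBox(const std::string &label, bool &value);
bool doButton(const std::string &label);

void doInventoryUI(Inventory &inv)
{
    bool anySelected = false;

    for (Item &item : inv.items) {            // one checkbox per item, however many exist right now
        doCheckBox(item.name, item.selected);
        anySelected = anySelected || item.selected;
    }

    if (anySelected)                          // this button exists only while the condition holds
        if (doButton("Drop selected"))
            inv.items.erase(std::remove_if(inv.items.begin(), inv.items.end(),
                            [](const Item &i) { return i.selected; }), inv.items.end());
}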

Unfortunately, when a lot of people watch Casey's presentation, they seem to miss this point and come away with other less important points, like some of the assumptions he made in his particular implementation.

So yes, the simplest way to implement it is to have presentation and functionality tied together in the code. That's true for RMGUI as well, but it's not inherent to either concept. In either case, you can still get presentation information from external specifications and have tools for manipulating it.

Anyway, for your case, if you're willing to limit yourself to static UIs with only buttons, labels, and sprites, then the two approaches are essentially equivalent (the parameterized definition would have no parameters!), so I wouldn't worry about it, unless you're interested in eventually supporting more.


#14 Krohm   Crossbones+   -  Reputation: 3044


Posted 24 June 2010 - 11:39 PM

Quote:
Original post by swiftcoder
Quote:
Original post by Grafalgar
How does IMGUI handle complex animations? I admit that I am not intimately familiar with the approach aside from the handful of examples I've seen.
The exact same way an RMGUI does: you run a timer, which updates certain attributes of the GUI elements.

Simple IMGUI's (like the nvidia sample) obviously can't do this, because they don't persist any state behind the scenes, but an IMGUI of any complexity may retain just as much information as a RMGUI - the key element being that the user doesn't have to deal with it.
I couldn't figure out a way to do this, unless IDs are guaranteed to be "very unique". I think, however, that Grafalgar has a point here, because we're putting state in an IMGUI which was supposed to be almost state-free, or to leave that management to the application. The paradigm probably does not work equally well for complex widgets. I feel like it would be violating some kind of unwritten rule, conceptual integrity maybe. Driving those attributes from outside the GUI manager just doesn't feel right to me.
I cannot see how to justify the existence of a modifiable data blob for a control which I am guaranteeing to live for a single frame. It just doesn't feel right to me yet. Maybe I could declare in advance that some special widgets retain state, but this sounds too much like RMGUI to me; I'm not sure.

I don't find much of an issue in providing more complicated backends, as long as I can just fire doControl and forget about it. If the state is internally managed, that would be OK, but it's quite easy to require more. Just require a button to pulsate on hover and suddenly its state changes. A scripted GUI maybe? It probably makes sense at this point to just go retained mode.

Considering the design a bit more in detail, I see an IMGUI can get massively complicated. Layering must be done with considerably less information, for example. I'm starting to think that "complex" IMGUIs are probably not worth the effort, although they can still be extremely useful in the low-profile GUI field, very likely far better than any RMGUI system.
I am still positive presentation can be separated, but I am not yet well aware of the degree of the benefits. It probably allows only minor changes to occur, such as some theming.
Quote:
Original post by MoundS
A lot of UIs aren't that simple. They're dynamic. They have controls that operate on data in the app. They have widgets that appear and disappear depending on the state of the app. They have lists and hierarchies of widgets that reflect the structure of data in the app.
...
This is all true regardless of whether your GUI library has an IM or RM interface. I think the reason that you associate properties of static UIs with RMGUI is that RMGUI is really only good for dealing with static UIs.
I have to agree with this. I needed to do exactly that for a quasi-contract I got some months ago, and the pain involved in "dynamicizing" the GUI was absolutely surprising.

#15 MoundS   Members   -  Reputation: 162


Posted 25 June 2010 - 08:10 AM

Quote:
Original post by Krohm
I couldn't figure out a way to do this, unless IDs are guaranteed to be "very unique". I think, however, that Grafalgar has a point here, because we're putting state in an IMGUI which was supposed to be almost state-free, or to leave that management to the application. The paradigm probably does not work equally well for complex widgets. I feel like it would be violating some kind of unwritten rule, conceptual integrity maybe. Driving those attributes from outside the GUI manager just doesn't feel right to me.

I cannot see how to justify the existence of a modifiable data blob for a control which I am guaranteeing to live for a single frame. It just doesn't feel right to me yet. Maybe I could declare in advance that some special widgets retain state, but this sounds too much like RMGUI to me; I'm not sure.


IMGUI libraries can retain state behind the scenes. It's not a violation of any IMGUI principle. It's only a violation if you force the application to manage it. And of course, if you can retain state, you can do anything an RMGUI library can do (animations, etc).

The only problem is figuring out which data goes with which widget. I've found that trying to manually associate a unique ID with each widget is too big of a burden to place on the application developer, so my library automatically figures out the data associations using a combination of widget order (inside basic blocks), macros to track control flow, and IDs to keep track of lists where the items can change order. You can read a lot about this topic on the Molly Rocket IMGUI forum.
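
I won't paste my actual code, but a rough sketch of the sort of ID assembly I mean could look like this (all names below are just for illustration): hash the call site, captured with a macro so the user never writes an ID, and for lists mix in a key that is stable per item, so state follows the item even when the list is reordered.

#include <cstdint>
#include <cstring>

// FNV-1a, just to fold the pieces into a single 64-bit widget ID.
static uint64_t hashMix(uint64_t h, const void *data, std::size_t len)
{
    const unsigned char *p = static_cast<const unsigned char *>(data);
    for (std::size_t i = 0; i < len; ++i) { h ^= p[i]; h *= 1099511628211ull; }
    return h;
}

static uint64_t makeCallSiteId(const char *file, int line)
{
    uint64_t h = 1469598103934665603ull;
    h = hashMix(h, file, std::strlen(file));
    h = hashMix(h, &line, sizeof(line));
    return h;
}

// The macro hides the call-site part from the application programmer.
#define WIDGET_ID() makeCallSiteId(__FILE__, __LINE__)

// For widgets created in loops: mix in a per-item key that survives reordering
// (e.g. the item's own ID in the application), instead of the loop index or a storage address.
static uint64_t withItemKey(uint64_t callSiteId, uint64_t itemKey)
{
    return hashMix(callSiteId, &itemKey, sizeof(itemKey));
}

For a widget inside a list, the caller would pass something like withItemKey(WIDGET_ID(), item.uniqueId), so removing or reordering items doesn't shuffle state onto the wrong widget.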

Quote:
Considering the design a bit more in detail, I see an IMGUI can get massively complicated. Layering must be done with considerably less information, for example. I'm starting to think that "complex" IMGUIs are probably not worth the effort, although they can still be extremely useful in the low-profile GUI field, very likely far better than any RMGUI system.


Not sure what you mean by this. I agree that writing an IMGUI library is more complex than writing an RMGUI library, because an RMGUI library just gives up on trying to manage things and forces the application to do it. But that just forces the application code to be that much more complex. Why solve the problem over and over in every app, when you can just solve it once in a library?

Quote:
I am still positive presentation can be separated, but I am not yet well aware of the degree of the benefits. It probably allows only minor changes to occur, such as some theming.


If you go the extra step of assigning text labels to your widgets (at least, locally unique labels), then you can do more than just theming. Layout order doesn't necessarily have to be the same as execution order. But if they're different, then you need a way of matching them up. So, yes, you would lose some of the benefits of IMGUI, but it would still be a lot better than full-fledged RMGUI.


#16 __sprite   Members   -  Reputation: 461


Posted 25 June 2010 - 08:55 AM

A while ago I came across the Adobe Source Libraries and Adam and Eve, which are property-model and layout libraries respectively and use declarative languages to specify the GUI components. It's certainly pretty interesting from a design point of view (albeit the libraries are not being very actively developed atm).

Has anyone else come across these / have any view on this type of design?

#17 MoundS   Members   -  Reputation: 162


Posted 25 June 2010 - 09:21 AM

Quote:
Original post by sprite_hound
A while ago I came across the Adobe Source Libraries and Adam and Eve, which are property-model and layout libraries respectively and use declarative languages to specify the GUI components. It's certainly pretty interesting from a design point of view (albeit the libraries are not being very actively developed atm).

Has anyone else come across these / have any view on this type of design?


I've seen a video presentation of it that was very good, but unfortunately I can't find it anymore.

It looks like they're focused on data binding issues. A lot of those issues are solved by IMGUI anyway, since widgets directly work with application data, and you can sync up two widgets by just having them refer to the same application data. Adam & Eve seems to go a little further. You can say things like "constrain these two values to the same proportions if this check box is checked". But I suppose you could implement that in a layer on top of IMGUI.

What I'm curious about (and what I didn't see in the presentation), is how Adam & Eve handles the dynamic issues I mentioned above. If it doesn't address those, I couldn't see using it for serious work.


#18 Jason Z   Crossbones+   -  Reputation: 4901


Posted 25 June 2010 - 09:24 AM

For what it's worth, I implement my GUI completely the same as normal game objects. This means that they end up using the exact same material system as my other rendered objects, allowing complete customization - anything the engine can render can be done for a GUI too. All of the logic is handled by controllers being added to the objects that will respond to picking queries, click events, UI events and whatever else.

Try to find a way to do this - then all of your scripting tools, game editor tools, and any other tools that work with your game objects automatically can be applied to your GUI... Plus then you have no problem making a 3D GUI instead of a standard 2D version, and you have much less code to maintain since you are reusing everything!

#19 Burnhard   Members   -  Reputation: 100


Posted 25 June 2010 - 09:56 AM

I'm generally against automatic layout. However, it obviously depends on your use case. The last GUI I designed and implemented was for a fixed resolution screen (on a CE device), so automatic layout probably wouldn't have gained me any time. I'm a bit of a perfectionist when it comes to placing GUI components too, so I would have explicitly defined positions relative to borders and other components in any case.

The most important pattern I have in my code is a Listener/Observer pattern. It's very simple to implement and really is the only sensible way to do things with a GUI, as it keeps spaghetti to a minimum.
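
A minimal sketch of the idea, in case it helps (this is just the shape of it, not our production code, and the names are made up): each control keeps a list of listeners, and application code subscribes without the control ever knowing who is on the other end.

#include <vector>

class IClickListener {
public:
    virtual ~IClickListener() {}
    virtual void onClick() = 0;      // implemented by application code
};

class Button {
public:
    void addListener(IClickListener *l) { listeners.push_back(l); }

protected:
    // Called by the button's own input handling when it detects a click.
    void fireClick() {
        for (std::size_t i = 0; i < listeners.size(); ++i)
            listeners[i]->onClick();
    }

private:
    std::vector<IClickListener*> listeners;
};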

Bigger picture:

However, I find that the most important thing you can do before actually writing any code whatsoever is to design your GUI on paper. That is, you should know pretty much 75-90% of everything your GUI is going to be expected to do, and only then should you start to think about the components you'll need and how to display them (obviously people will chip in with new requirements/changes, hence you can never make it 100%, although big corporations try to, which is why any minor change usually takes them 6 months!).

In my last project I started with a requirements document that listed and numbered each requirement (I grouped them into logical sections according to what they did). From each requirement, I designed a GUI layout, broadly showing what each component would do/how it would work. This document ran to 80 or so pages and contained one screen layout for each dialog, plus a few for general layout and a description of how the basic components worked from a user interaction point of view (list box, scroll-bar, etc.).

The great thing about a design document is that it shows you've thought about what you're trying to do (!) and it allows other people to come in and critique it or point out things you've left out or that may not work from an interaction point of view. As your design is mostly graphical, it's easy for others to visualise how the software will work too, which is a big bonus over just a list of "things that will happen" when you interact with component A or B, etc.

Once the requirements and GUI design is done, you can get to work on actually developing your GUI components. I was lucky in that I had a clever guy working on the framework. He produced for me a simple windowing system, a modified AGG graphics library and a modified freetype font library that we kind-of developed in parallel with the actual application.

From the design document, it was easy to produce the GUI, because I didn't have to think too much about how to do things (I'd already thought about it). The project came in early. My manager said in the technical meeting when it was signed off that it was the first time in 20 years he could sit there and say everything was done on time!

So if I were to give advice about GUI development, it's to design the user interaction side first, rather than jump in thinking about the technical side from the off.








#20 Krohm   Crossbones+   -  Reputation: 3044


Posted 27 June 2010 - 08:27 PM

Thank you all for your posts. I took some time to think about what I should add right now; I considered expanding on this or that concept, but I won't, as that would probably end up killing the discussion.
What I did in those hours was experiment with the implementation I had produced and its ability to scale. I still cannot trust it, so I went back here and to MollyRocket's forums. I had only had a quick glance at them previously, but with my code getting grosser and grosser, I couldn't just walk past. I originally hoped to at least be able to communicate my decisions, but I realize this is just impossible, as I'm still very confused. I will try to pour out some thoughts with some degree of separation.

Use multi-frame GUI!


So far I've tried my best to work without the single-frame lag. I didn't do this because of the latency involved; I just didn't consider it useful. However, as I previously noted, I'll need to buffer the various draw requests anyway to guarantee some minimal performance scalability (with at least one draw call per widget, things would have turned slow FAST), so right now a lot of this machinery is already in place. Long story short, I was wrong and didn't correctly evaluate the benefits and drawbacks.

I was thinking about rewriting towards a dual-frame GUI; however, as you'll read below, I am now considering biting the bullet and going RMGUI instead.

Maybe finding nvWidgets wasn't so good


I was very impressed with this library because, with a simple implementation and no IDs to generate, everything felt like a dream.
However, I overlooked the fact that, as icastano notes: "The NVIDIA IMGUI does not use IDs, but instead it requires widgets to stay at the same screen location to retain the focus. It's not ideal, but works fine for our purposes and keeps the code simple". I'm not truly sure what he meant, and I'll save you from the (probably useless) flow of thoughts that resulted, but I now think the library made me overly optimistic. I am quite on the opposite line of thinking now, as I'm not even sure I'll be able to break this limitation.

By the way, while I'm at it, I just want to link to Zero Memory Widget; maybe someone will be able to see the light.

ID generation: not for the faint of heart


I stopped reading at about mid-2006 and suddenly figured out that, after all, given the amount of messages in four years, maybe IMGUI just isn't as well understood as I originally believed. To be completely honest, I now suspect the paradigm likely didn't deliver as expected. I can understand that many people like me will have discussed their troubles in their forums of choice instead of on MollyRocket's, but it seems there just hasn't been that much traffic, has there?

In particular, state retention requires some kind of ID-ing, which must be automatic - I agree with MoundS when he says it's too much of a burden to place on the library's user - and it turns out that ID generation is still being discussed.
Even worse than that, I'm now a bit confused about auto ID-ing. In general, state retention seems to be VERY complicated.

Current line of thinking


Today I will redesign to reach a few different goals. First of all, IMGUIs are not about efficiency but about ease of use. I see a parallel with scripting, and I have some script-ish machinery which could probably support the problem with some slight modifications. So, right now, I foresee different IMGUI systems.
The IMGUI I have developed up to now will become the native C++ IMGUI system. It will be nowhere near as complex as originally anticipated. No complex widgets, little to no state retention, no eye candy, no theming.
Its only goal is to reach sufficient sophistication to support components on top of it, and even this is now open to debate. I hope to never use it directly, except in the initial stages.

I then considered a managed, script-assisted IMGUI system. This is rather more lengthy, so it's discussed below.

The third system would be a managed, script-assisted RMGUI. I hoped I could delay this to the far future, but with more and more doubts arising I am considering just biting the bullet. As I've started thinking seriously about this only a few hours ago, I'm not even sure how far I want to go with it. Sadly, the very same problems of dynamic IMGUIs apply there, so how to link the systems (GUI<->application) is not quite clear to me besides having callbacks. Years ago I used GLUI, which allowed pouring data directly into the application's variables, and it looked quite nice to me. I thought that maybe reflection could somehow help. Say, for example, that I pass a structure with all the variables I need to fetch and the system figures out the layout itself... It's a very blurred picture for the time being.

Managed, script-only IMGUI


I considered this to be my goal for a couple of days, as it promised enough functionality to be a robust backbone. Ideally, the managed IMGUI system will be the preferred way of doing things for a long time. Being based on scripts, as I said, I can jump through a few hoops, one of which is automatic ID generation. I was thinking of automatically generated IDs like
{
    thisRef
    callOff
    {
        groupIndex?
        appResPtr?
    }
}

where

  • thisRef is the "this" pointer from which the doControl call originated - who created that control and owns it.

  • callOff comes from VM magic and is the doControl CALL instruction's offset in the "code segment". It is, for all purposes, a sub-line call identification, __LINE__ on steroids, telling where the control was created. This still doesn't quite work in loops, nor in recursive code, which is why I have

  • { groupIndex?, appResPtr? }: to discriminate controls born in the very same call, I've considered either a sequential group ID or a storage ID. Neither seems to work for me.
    The group index would be set to 0 by a BeginGroup call and would essentially count the widgets in the group until a corresponding EndGroup call is issued.


Let me build an example. Suppose I want to model a shop in which I can mark items (say swords and shields) to buy. Say the list of swords and shields can change, even during interaction, based on some metric. Suppose I draw some kind of themed checkbox which draws the object and a green tick and pulsates the background a la Win7 (so the "pulse" is the button-specific, uninteresting state). When done, the user goes to checkout.
I'd like to just do:
...
for(int i = 0; i < sword.length; i++)
    doCheckBox(buySword[i], ...); // because I have no choice but saving that myself
for(int i = 0; i < shield.length; i++)
    doCheckBox(buyShield[i], ...);
if(doButton("Check out"))
    CheckOut(sword, buySword, shield, buyShield);
...

Stuff like this will break {thisRef, callOff} in a big way, as it cannot figure out the loop. With a sequential index I could make it work anyway: if I find a duplicated {thisRef, callOff} pair, then I check the other parameters. There's a big issue here, though. If I want to hash by slot result, then I'm effectively asking the memory to stay persistent. This is all wrong.

Now, suppose that in frame N I have 5 swords and 3 shields. In the following frames, I mark buySword[3] and buyShield[0]. Say this takes us to frame M.
At frame M+1, sword[3] becomes unavailable (say some designer wanted the sword to be available only at 20:00, with time going on while interacting). The very same code will be executed, and what happens is that

  • If I am ID-ing on storage, then I'm... in trouble. There's no way the app can mark the state as invalid. For a start, doing so would admit there's state. What will happen is that GUI-internal state will be moved to the wrong button: the buy tick will move to the button originally associated with (frame N) buySword[4].
    This is more generally a very troublesome issue: no matter how smart the ID generation algorithm is, I just cannot see how it could deal with data shuffling, and this is a big problem.
    Hashing on the "source data pointer" (as opposed to the destination) just does not feel right either; it would imply, for example, that I cannot change a button's text description without invalidating its state. That's a no-go as well.

  • If I'm ID-ing on group index, then I'm free to reallocate the buyXXX arrays, which seems way more reasonable, but in this scenario the very same thing happens.


RMGUIs do not solve the above issues, but it seems there's light at the end of the tunnel: they say from the beginning that there will be state to manage.

In "ID Generation and data lifetime" (link above) tmadden notes in a very well thought message that "The key [for dynamic IMGUIs] is that IDs don't identify individual widgets, they identify entire trees of static code.", he proposed a way to deal with this involving an anchor and a dynamic_block. I still have to fully digest his message, but it looks it's just the object-oriented version (admittedly correct and exception-safe) than my BeginGroup and EndGroup calls.
Nonetheless, I feel like requiring the machinery he's suggesting coul be so complex one could just go for a standard RMGUI at this point!


So, summing up, even after making up my mind and rethinking my goals, I am now very confused.
I seriously appreciate what Burnhard wrote; it essentially boils down to "keep it real". The issue is that I cannot even tell whether the examples I make up in my mind make sense, whether the GUI I have in mind can work. The fantasy shop example above will break pretty much everything I can think of.
This is terrible.



