Thank you everybody for your suggestions. Reading them truly helped me focus. I'll reply to them in the second part of this message; in this first part, I'd like to continue "the story" from where I left off in my original message.
In my first message, I wrote that I was "still not completely sold" on the IMGUI paradigm. What truly scared me was the layout management for non-trivial widgets.
To solve this, I started looking at available libraries, and yesterday I stumbled on nvidia-widgets. NVWidgets, as I'll call it from now on, looked just right to me.
The first thing I noticed is that the need to generate a control ID is gone in nvWidgets. I had long wondered why IMGUI controls needed an ID, as their existence lasts only a moment and they're almost completely identified by their evaluation. Taking out the unique ID helped me enormously in closing the circle.
The second thing nvWidgets does is take it easy on layout. Hey, most controls are just rectangles! I was amazed I hadn't noticed this despite all my effort. Shame on you, complicated RMGUI paradigms, deforming my line of thinking to the point where a control is necessarily a difficult, non-trivial thing. It's abstraction lost... this time for good. The layout algorithm in nvWidgets is driven by a couple of enums and bitmasks (somewhat ugly, if you ask me, but it works fine for the simple applications they need to build; have a look at NVSDK 10.5 for GL). I'll probably need something slightly more complicated, but at least I now know I was not keeping it simple enough.
The third thing is that nvWidgets applies some separation between the logic and the rendering, in a way that is a bit easygoing but nice in concept. More on that later.
The key thing is that it accumulates input in a buffer, not very differently from how a typical GUI manager would, and then flushes it ASAP to the focused control.
I am still not 100% sure how they manage focus. It looks to me like some odd cases (say, a control taking the place of the focused one, overlapping the focus point) would cause the focus to be transferred "transparently" without actually causing a loss of focus. It's probably far less bad than it sounds: it seems rare, and it would be questionable UI design from the start anyway.
A thing that made me somewhat skeptical was the way they manage their frames. Their frames are easygoing, basically just a border with no fill. This makes them order-independent. I wanted to support at least some blending, for which I would have to enforce a specific drawing order. This implied I had to actually cache some controls instead of drawing them immediately as the IMGUI paradigm suggests, and this is the first thing I had never figured out until now.
Completely decoupling the rendering from the logic means I can just work on the true hotspots (Model), and draw later (View), after all the controls' states are known. It still has the same layout issues, but since clicks have already been resolved by draw time, it's just so much better!
I am positive that putting an IMGUI on top of a retained-mode API must be done this way. No matter what, nobody can afford a vertex buffer Map per control, tens of times per frame. The alternative, uniform fetching, didn't look much better: even if I could afford the GUI drawing itself, it would completely screw parallelism. It's just a no-go.
With decoupling, all of this is gone and the manager is left with simple comparisons of input vectors. The geometry is updated only on change. There are some issues with persistence, but I'll figure something out.
Now coming to your replies...
Quote:Original post by swiftcoder
Unicode support. My currently in-development GUI toolkit does this via cairo/pango, but I don't know of any others which can manage this, and it makes foreign localisation one hell of a lot easier.
My app is Unicode-enabled from the ground up. I've even tested some Hebrew, Hindi and Arabic. I still have some issues correctly guessing the caret insertion positions. For the time being, I won't support arbitrary caret positions, because they interact with some additional machinery I plan to deploy (see below).
Quote:Original post by swiftcoder
- Automatic layout. IMGUI's are a bit easier to layout manually, but I really don't want to have to hand-configure (in code, no less) the GUI for each container at each resolution/aspect-ratio.
I'll be honest: I still have no idea how to correctly resize windows/controls to handle aspect ratio.
Right now, I am thinking about assuming a 4:3 ratio, putting the coordinate system origin where it would be if the monitor were 4:3.
That is, on a 4:3 monitor the coordinate system would have its origin in the lower left corner (resolution independent). On a 16:9 or 16:10 screen the origin would be arranged so that the normal 4:3 region is centered, meaning I would get some "good pixels" for x<0.0 and x>1.0, with windows popping up "more or less in the center". This is just wrong for many cases, so an alternative solution is definitely necessary, such as putting x=0 at the actual lower left corner depending on needs.
The layout method used in the above-mentioned nvWidgets is rather minimal, but OK for very simple prototypes. I'm not sure I can get away with so little.
Quote:Original post by swiftcoder
- Theming. There is no point adopting a 3rd party GUI toolkit if I can't theme it to completely match the rest of my application. This doesn't just mean custom images, but custom metrics, layout, etc.
That seems far more advanced functionality than I'll need for quite a while. It also makes little sense for the time being, as I don't plan to release this to the wild any time soon.
Thank you anyway; I'll make a note of it.
I would appreciate it if you could point out some "extreme" theming examples. I've always considered Gnome themes rather nice, but they're also somewhat basic.
Quote:Original post by ApochPiQ
things like render GUI elements to textures rather than the framebuffer
This rationale has been there since day one. I have no idea how much time and effort I have spent on this so far, but I still think it's a primary requirement nowadays. There will also be special systems to remap input events, such as driving the mouse cursor on an arbitrary surface. Well, not quite arbitrary: probably just quads, and maybe cylindrical and spherical sections.
Quote:Original post by ApochPiQ
optimize batching
Of the GUI itself, you mean? For complex GUIs that would be incredibly difficult given my current architecture. Interestingly enough, the roadmap would sooner or later take me to a future architecture that optimizes this transparently, but it's too soon to say.
Quote:Original post by ApochPiQ
Prototyping tools that allow me to edit and preview UI animations etc. outside the game are also invaluable. Ideally, a WYSIWYG editor should be available.
I'll be honest: I still haven't figured out how to mix IMGUI with data-driven design. Not in detail, at least.
My best bet is that if I provide a doButton(ButtonDescriptor&, bool&) call, I might eventually pool those descriptors as resources (maybe localized resources) and then, in code, do things like:
ButtonDescriptor *bd = dynamic_cast<ButtonDescriptor*>(
    resourceManager.GetLocalizedResource(FINAL_CHAINSAW_MASSACRE_CONTINUE_BUTTON_DESCRIPTOR));
if (!bd)
    throw BadButtonResource(blah);
if (doButton(*bd, persistentStateFlag)) {
    doLabel("You hit that button!");
    // or doLabel(LabelDescription&), similarly to the above
}
This will happen in the future, hopefully soon, just not now.
Quote:Original post by Grafalgar
The games that do benefit from application-UI's are data-intensive ones - like MMO's or sports games (since they need to display statistics). For everything else, simplicity is key :)
That is exactly the driving principle of IMGUI! Have a look at the paradigm!