JasonBaldwin

Creating UI Library for games - How would you want it to be

Recommended Posts

I'm still new to programming games and I thought a good project would be to write a UI library to be used in games. I want it to stick to these constraints:
  • Easily customizable - thinking of making a visual editor
  • Rendered with either OpenGL or DirectX. (If not, is there some way of rendering without using those APIs, so that it doesn't matter which API the user is using?)
  • Easily extensible, so you can make extra controls or something to that extent
Also what controls would be useful to you? So far I've come up with:
  • Button
  • List Box
  • Check Box
  • Radio Box
  • Check Group
  • Radio Group
  • Progress Bar
  • Static Text
  • Static Image
  • Slider
If there are any others you want, just say. To implement all of this I've decided to use OOP and C++. I'm going to make each widget inherit from a "Widget" class. I'm not that good at making design choices, so any ideas on how to structure this library? How do I write it so you can customize things and extend them? Thanks in advance for your help.

Some form of component-based design (possibly with facets/mixins/properties instead of keeping all properties in a single component) could work better here than the general inheritance approach.
Anyway, this is the approach that I have pursued with my own little user-interface subsystem, and it looks quite easily extensible and flexible so far, although I have not implemented any of it yet (still working on higher-priority subsystems).

Quote:
Original post by JasonBaldwin
I'm still new to programming games and i thought a good project would be to write a UI library to be used in games.

I want it to stick to these constraints:

  • Easily customizable - thinking of making a visual editor

  • Rendered with either OpenGL or DirectX. (If not, is there some way of rendering without using those APIs, so that it doesn't matter which API the user is using?)

  • Easily extensible, so you can make extra controls or something to that extent



Don't bother with the visual editor. When you think about how a system is going to generalise to different resolutions and aspect ratios, you quickly lose most of the benefits that a visual editor gives you. Instead, think about how to make your components flow like a web page does. You can still provide a visual editor if you feel the need, but generally I don't see the point.

Also, don't render in OpenGL or DirectX. Many people will actually not use either of those directly, and will instead have their own rendering system. Ideally, you should be able to hook into that via an abstract interface. If you can break your GUI rendering down into either rendering images or rendering text, you can just have the user supply functions or classes for these. You can supply D3DX examples to show it working if you like.

Quote:
Also what controls would be useful to you?


Drop-Down List Box, Fixed List Boxes (with multiple-selection ability), List Views (i.e. multi-column, like Windows Explorer), perhaps TreeViews too.

It's also important to come up with a decent solution to scrolling things. I may want to fit a large panel into a small area and be able to scroll it up and down. I may also want scrollbars to appear automatically when the panel area is not big enough to show the content. (This implies you probably need some sort of Panel widget too.)

Quote:
To implement all of this I've decided to use OOP and c++. I'm going to make each widget inherited from a "Widget" class. I'm not that good at making design choices so any ideas on how to structure this library? How do i write it so you can customize things and extend them?


If you don't know what you're doing, it's usually best to start by programming a game, rather than a library for a game. It's quite hard to get libraries right until you have a good idea of what you're doing.

Anyway, a Widget base class is fine, but the other questions are rather vague. It depends what sort of customisation or extension you have in mind.

Quote:
Original post by Kylotan
Also, don't render in OpenGL or DirectX. Many people will actually not use either of those directly, and will instead have their own rendering system. Ideally, you should be able to hook into that via an abstract interface. If you can break your GUI rendering down into either rendering images or rendering text, you can just have the user supply functions or classes for these. You can supply D3DX examples to show it working if you like.


Exactly what do you mean by this? Will the user have to write their own classes or functions for rendering?

I thought of doing something like this.

I got pretty tired of hard-coding buttons in a project I am working on: deciding they need to be a couple of pixels to the left, compiling a new exe, then deciding they looked better the first way. So I had this idea that a GUI object could load its layout at runtime from XML. A GUI object would also have a Reload() method so you wouldn't even need to restart the application to see changes. You could also have methods like GUI::AddButton() so you don't need to mess around with XML if you don't want to.

I got a bit stuck on how to interface a GUI object with a rendering engine. The idea I had was to pass a renderer object that implements an interface with methods like Blit() and Text() in a call to GUI::Draw().


// An interface that you provide as part of your API
class GUIRenderer
{
public:
    virtual ~GUIRenderer() {}
    virtual void Blit() = 0;
    virtual void Text() = 0;
};

// The GUI object
class GUI
{
    // ...
};

// Elsewhere, the user's renderer inherits from your interface
class MyNormalRenderer : public GUIRenderer
{
    // implement Blit and Text!
};

// Somewhere else again
MyNormalRenderer myRenderer;
GUI gui("layout.xml");

// Then, while drawing a scene
gui.Draw(&myRenderer);




The user of your API then has to have a renderer object, inherit from your provided interface, and implement the necessary methods.

That was the rough idea I had anyway. I went as far as looking up the docs for CEGUI to see how that handled the renderer interface. It has a different interface for different graphics APIs. In the DirectX one, you pass in your IDirect3DDevice9*.

Edit: Woah this thread really moved while I was typing all that.

Quote:
Original post by JasonBaldwin
Quote:
Original post by Kylotan
Also, don't render in OpenGL or DirectX. Many people will actually not use either of those directly, and will instead have their own rendering system. Ideally, you should be able to hook into that via an abstract interface. If you can break your GUI rendering down into either rendering images or rendering text, you can just have the user supply functions or classes for these. You can supply D3DX examples to show it working if you like.


Exactly what do you mean by this? Will the user have to write their own classes or functions for rendering?


Really, a UI library is a subsystem written specifically for allowing the user to interact with the game via an on-screen interface. The rendering of the UI isn't really included in this.

The UI is a black box of code that:

- Receives keyboard, mouse and/or controller input events passed to it via a concise interface. It doesn't care about how these input events are caught.

- Feeds the input events into some component hierarchy. It might have a screen-management system, etc., so the user can switch 'pages' in the UI.

- Exposes an interface to which an external renderer can be hooked or bridged so that the UI can be drawn.

A UI is actually quite a small thing; it shouldn't do all the things that most people's do.

It's also important to abstract functionality properly in your component hierarchy. If you ask most people what a UIButton's functionality should be, they will say that it should have four images that change state when the user moves the mouse over it, etc. This is incorrect, and it's a perfect case where you need to be pedantic. A UIButton is a clickable region with no appearance. You should derive different types of button from this one. The more functionality you correctly abstract away, the more extensible your UI becomes.
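To make that concrete, here is a minimal sketch (all names are hypothetical, not from any particular library): the base button is nothing but a hit-testable region with a click callback, and appearance only arrives in subclasses.

```cpp
#include <functional>

// Base button: a clickable region with no appearance at all.
class UIButton
{
public:
    UIButton(int x, int y, int w, int h) : x(x), y(y), w(w), h(h) {}
    virtual ~UIButton() {}

    bool Contains(int px, int py) const
    {
        return px >= x && px < x + w && py >= y && py < y + h;
    }

    // Returns true if the click landed inside the region and was consumed.
    bool OnMouseDown(int px, int py)
    {
        if (!Contains(px, py))
            return false;
        if (onClick)
            onClick();
        return true;
    }

    std::function<void()> onClick;

protected:
    int x, y, w, h;
};

// Appearance is layered on in subclasses. This one would blit the image
// matching the current state (normal/hover/pressed) in its render method,
// which is omitted here since it depends on your renderer interface.
class ImageButton : public UIButton
{
public:
    using UIButton::UIButton;
};
```

The base class never touches the renderer, so swapping the visual style means swapping the subclass, not the behaviour.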

A couple of other things:

- For a UI to work, it doesn't need to work like Win32. It doesn't need all that functionality.

- There have been very few games with a good UI; I probably couldn't think of more than five that are good (imo). There is plenty to learn in this area.

Quote:
Original post by Kylotan
Don't bother with the visual editor. When you think about how a system is going to generalise to different resolutions and aspect ratios, and you quickly lose most of the benefits that a visual editor gives you. Instead, think about how to make your components flow like a web page does. You can still provide a visual editor if you feel the need, but generally I don't see the point.

The point is that UI artists can easily design and modify the interfaces without having to edit raw text files and then load the game just to see their changes. That's nowhere near user-friendly (to an artist no less) and increases iteration time immensely. If you want to support different resolutions and aspect ratios, build it into the tool.

In this particular case, I agree that the OP doesn't need a visual editor to start with (or at all, depending on how far the project does or doesn't come along). He/she should just focus on getting the UI system designed and working first. But unless I'm completely misinterpreting your post, I don't see how anyone could suggest that a visual editor isn't necessary for GUI design in general. Even if you had to use the Windows Forms designer and parse resource files, it's better than nothing.

Quote:
Original post by JasonBaldwin
Quote:
Original post by Kylotan
Also, don't render in OpenGL or DirectX. Many people will actually not use either of those directly, and will instead have their own rendering system. Ideally, you should be able to hook into that via an abstract interface. If you can break your GUI rendering down into either rendering images or rendering text, you can just have the user supply functions or classes for these. You can supply D3DX examples to show it working if you like.


Exactly what do you mean by this? Will the user have to write their own classes or functions for rendering?


Probably. The key point is that complex UI objects devolve into a series of sprites (and possibly text, and possibly render targets). If you build the complex objects out of the simple objects then the user just needs to supply an implementation of 'sprite' (and optional kin) in your interface.

There is a bit of coupling that needs to go on between the renderer and a flow-layout type tree.

For instance, if you're drawing text, it is usually the renderer that is going to be the source of the text-extent information required by the layout algorithm. As an example, things like font hinting and kerning are going to depend on your specific renderer, e.g. FreeType or GDI.

I wrote something like this but haven't quite been able to find a really good way to abstract this renderer/flow coupling in a completely satisfactory way.

I figure all GUIs run recursively, or at least that's how I designed mine.

Some components are windows and group boxes. I designed GUIs in C++, but never got farther than all the basic components. Oh yeah, and multi-column list boxes are a pain (at least if you design them the way .NET's work).

Make sure you design how the event system will work before coding much.

One base class, Component; all of the others are derived from it. Container components (i.e. windows and group boxes) contain arrays of Components. If two windows exist on the form, then when the user clicks on one, events are passed recursively through the top-level components downward. Components can have focus and can also be in a path of focus: clicking a button on a window gives the window path focus and the button focus as the event travels recursively from the top down. This makes sense because there are certain situations, like a window or other container, where a container needs to know if any of its children have focus.

The focused component is referenced from the GUIManager, which deals with the top-level components. Key input goes straight to the focused component, while mouse clicking and rollover are handled by the recursive design. Remember that for this to work, when a component is added to a container it must register all its events with its parent, and the parent registers them with its parent, and so on.

If your stuff gets very complex, then think about using buffers. Each component has a buffer the size of the component's width and height that acts as a bitmap. Rendering occurs from bottom to top based on state changes. I use this in my Flash GUI to get performance that would otherwise be impossible when redrawing every frame. The idea is that if a window hasn't changed, you just render its buffer. If something has changed, you render that object's buffer to its parent's buffer. Admittedly, complex GUIs with animations can thwart this optimization.

Oh yeah, and the GUIManager controls all top-level components and handles locked focus and such, which is common with dialog boxes: the user chooses something, it gives them a yes-or-no choice, and it won't let them go back to the window without hitting an option or an X. Window swapping is the easy part of the GUIManager's responsibilities, but make sure to code things to work with that idea. Also realize that input always goes to the GUIManager first. If the input "falls through" the GUI, it's handled by the game. So you might have things on mouse click or mouse move that look like:
if (!GUI.HandleMouse(mouseState)) {
    // game needs to handle it
}
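To make the recursive fall-through above concrete, here is a minimal sketch (all names hypothetical): each container forwards a click to its children first, and reports whether anything in its subtree consumed it, so unhandled input can drop through to the game.

```cpp
#include <memory>
#include <vector>

// One node in the component tree. HandleMouse returns true when the
// event was consumed somewhere in this subtree.
class Component
{
public:
    Component(int x, int y, int w, int h) : x(x), y(y), w(w), h(h) {}
    virtual ~Component() {}

    virtual bool HandleMouse(int px, int py)
    {
        // Outside our rectangle: nothing in this subtree can consume it.
        if (px < x || px >= x + w || py < y || py >= y + h)
            return false;
        // Offer the event to children first; stop at the first consumer.
        for (auto& child : children)
            if (child->HandleMouse(px, py))
                return true;
        // No child took it: this component handles (and consumes) it.
        return OnClick();
    }

    virtual bool OnClick() { return true; }

    void Add(std::unique_ptr<Component> c) { children.push_back(std::move(c)); }

protected:
    int x, y, w, h;
    std::vector<std::unique_ptr<Component>> children;
};
```

A GUIManager would hold the top-level Components and call HandleMouse on each; a false return from all of them is the "fell through to the game" case.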

Here is a pretty nice vector-based, DirectX GUI for C#, released under the Creative Commons Attribution NonCommercial license. It's modelled after the Winforms UI, with events/event handling implemented like regular Winforms. Worth taking a look at, since you'd get an idea of how someone else wrote a GUI.

http://www.avengersutd.com/wiki/Odyssey_UI

Cheers.

I've decided to:

  • Write a rendering system in OpenGL and DirectX

  • Create an interface for the user to do his own rendering

  • Create a visual editor that works in percentages to eliminate the problem of different resolutions and aspect ratios

  • Require the user to provide input



Thanks to everyone for their help

Quote:
Original post by JasonBaldwin
Create a visual editor that works in percentages to eliminate the problem of different resolutions and aspect ratios

Sadly, it isn't enough to work in percent, as that will lead to stretched controls on widescreen or portrait-orientation monitors. Instead you have to specify each coordinate in terms of both pixels and percentages, so that the designer can explicitly define some dimensions and allow others to shrink or grow as necessary.
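One hedged way to encode such mixed coordinates (the names here are hypothetical): each dimension carries a fraction of the parent's extent plus a fixed pixel offset, so the designer can pin some dimensions in pixels and let others scale with the parent.

```cpp
// A dimension expressed as a fraction of the parent's extent plus a pixel
// offset. For example, {0.5f, -100} means "half the parent minus 100 px",
// and {0.0f, 32} means "exactly 32 px regardless of resolution".
struct Dim
{
    float fraction;  // 0.0 .. 1.0 of the parent extent
    int   pixels;    // fixed offset, may be negative

    int Resolve(int parentExtent) const
    {
        return static_cast<int>(fraction * parentExtent) + pixels;
    }
};
```

Resolving against the parent at layout time means the same layout file works at any resolution without stretching fixed-size controls.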

If you want any chance of producing something useful and practical, you'll need to stand on the shoulders of the giants that have tackled this task before.

For class design, look into existing libraries: the Windows API and its recent wrappers in .NET, Java's AWT/Swing, Eclipse SWT, Qt, perhaps various Linux layout and window managers.

For layout, look into CSS. It's well understood by developers, and is designed in a device/resolution agnostic manner.

Regardless of which you look at, all have their downsides, and all are incredibly complex, with many quirks and problems.

UI is not a trivial matter. To provide something designers are comfortable with, something that can do practical things that directly convey their thoughts, you'll need to cover what CSS/LaTeX/PostScript do. In other words, really a lot of stuff.

If you just design something trivial, you'll limit your potential user base to a small number of *coders* who just need something to display their boxes and buttons. You won't, however, appeal to designers.


Your general design will be based around one single entity: Entity.

These entities are then laid out in some way or another. Experience has shown that tree-like layout is very limiting, especially when it comes to non-trivial designs.

This implies that you'll need to handle some form of hierarchical containment, as well as arbitrary positioning.

The next problem will be relative positioning. In general, absolute units, whether percent, pixels or centimeters, are useless on their own. Entities are laid out relative to something: at a basic level relative to the borders of the parent, later relative to each other.

The next problem will be enforcing min/max constraints. Here there is no perfect solution. Ultimately some component will simply be too large and will start breaking out of its desired bounds. How to properly handle this remains the $6 million question. Even browsers today have trouble with it.

Last, and most trivial, is the problem of implementing the entity. Likely this will look trivial, something like:
 virtual void render(Canvas& c) = 0;


Canvas is your particular API implementation: DX, OGL, .... You then provide basic primitives, such as renderTexture(), drawText(), drawLine(), and so on.

Each different type of Entity is nothing more than a readily provided implementation. Here you can re-use: a base class for all ListBoxes, then specializations for different behaviours. And so on.

You'll also need to abstract the event pump. There are several choices; in general, a single UI thread, either the application's main thread or a separate, guarded thread for UI dispatching.

But as said, designing a useful UI toolkit is far from trivial, and is a huge task.

Thanks for the idea of using CSS. I'm definitely going to look into it.
Quote:

Your general design will be based around 1 single entity - Entity.

These entities are then laid out in some way or another. Experience has shown that tree-like layout is very limiting, especially when it comes to non-trivial designs.


Exactly what do you mean by this? Could you please give an example of how not to limit my design?

Thanks

I skipped the better half of the thread, but I was very disturbed by what I read from a few people about "displaying images and text"... I think the *best* solution is actually to use triangles and whatnot to build your GUI: basically, don't rely on anything resolution-dependent. You'll have a lot more flexibility with, say, SVG + Cairo, though I can't speak for the speed (I know Cairo surfaces can be rendered by OpenGL, though).

I may be a bit biased because of what I've seen. I'm SURE that if done right, raster-based GUI systems can work fine and look great, but what I've seen, especially from CEGUI, is ugly images that don't work correctly together. Far worse, if you use a higher resolution than what was intended, the images end up so that one pixel in the GUI image actually covers 16 square "screen pixels" or more, and then you get to see the texture-filtering method plain as day, and believe me... it's ugly...

I didn't sleep last night so I may not be the clearest, but my point is simply: take a vector-based approach, not a raster-based one.

Quote:
Original post by JasonBaldwin
Thanks for the idea of using CSS. I'm definitely going to look into it.
Quote:

Your general design will be based around 1 single entity - Entity.

These entities are then laid out in some way or another. Experience has shown that tree-like layout is very limiting, especially when it comes to non-trivial designs.


Exactly what do you mean by this? Could you please give an example of how not to limit my design?

Thanks


What *is* a component?

All entities/components share one thing: a layout outline. Button, checkbox, list... as far as layout is concerned, they are all rectangles. They might have some elaborate border inside, but from a layout perspective they are rectangles.

Each of these components will act as event producer, and possibly consumer. When user clicks something, a message is sent to all registered listeners. Who sent it or who received it is not important at this level.
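As a minimal sketch of that decoupling (all names hypothetical): the component only broadcasts to whatever listeners were registered with it, without knowing anything about who they are.

```cpp
#include <functional>
#include <vector>

// An event carries no knowledge of sender or receiver at this level;
// here it just identifies which component fired.
struct UIEvent
{
    int componentId;
};

class EventProducer
{
public:
    using Listener = std::function<void(const UIEvent&)>;

    void AddListener(Listener l) { listeners.push_back(std::move(l)); }

    // Called by the component itself when e.g. the user clicks it.
    void Fire(const UIEvent& e)
    {
        for (auto& l : listeners)
            l(e);
    }

private:
    std::vector<Listener> listeners;
};
```

A Button, Checkbox or ListBox would each own (or inherit) such a producer; game code subscribes without the widget ever knowing what is on the other end.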

Quote:
I think the *best* solution is actually to use triangles and whatnot to build your GUI


Implementation detail. The UI talked about here is definitely vector-based, not raster. I don't think anyone was proposing the use of raster.

When it comes to units, there's nothing wrong with using some absolute units. Saying x pixels doesn't in any way limit flexibility. Sometimes, you *want* to have a window 32x32 pixels.

Raster is far from undesirable. If you look in Windows, you have a choice of different DPI settings for the desktop. By specifying raster images in absolute pixels, you can scale them effectively through DPI modification. DPI exists for this very reason, since it provides a natural mapping between raster and vector space.

One does not exclude the other. It's unrelated to the problem of layout though, it's just an implementation detail.

And triangles are awful for any practical design. You get an incredible amount of redundant information, need to look at mundane details such as winding, and need to worry about overlap and intersection...

The only practically acceptable option is to keep this device-specific. By saying renderTexture(), the renderer will choose what it likes best: either two triangles, or a single rectangle, or rasterized scanline conversion.

Quote:
Original post by Zipster
Quote:
Original post by Kylotan
Don't bother with the visual editor. When you think about how a system is going to generalise to different resolutions and aspect ratios, and you quickly lose most of the benefits that a visual editor gives you. Instead, think about how to make your components flow like a web page does. You can still provide a visual editor if you feel the need, but generally I don't see the point.

The point is that UI artists can easily design and modify the interfaces without having to edit raw text files and then load the game just to see their changes. That's nowhere near user-friendly (to an artist no less) and increases iteration time immensely. If you want to support different resolutions and aspect ratios, build it into the tool.


I think a visual editor can be useful, but unfortunately it's common for people to use the existence of an editor to build entirely the wrong approach simply because it doesn't take any longer to do it when the tool does the hard work for you. Building different resolutions and aspect ratios into the tool is one example of what I think is completely the wrong approach - these problems should be addressed by the underlying formatting scheme. Of course, if you have a tool that lets you preview a GUI and quickly generate the output for it, that's fine. But that should not be what is foremost when you set out to "write a UI library to be used in games", as the original poster has said. Get the underlying representation right first and then write tools for that if necessary.

Quote:
I don't see how anyone could suggest that a visual editor isn't necessary for GUI design in general. Even if you had to use the Windows Forms designer and parse resource files, it's better than nothing.


It's quite easy to knock up HTML pages or XUL dialogs without any real need for real-time visibility of how it looks, though it does help a small amount. I think the perceived need for a visual editor comes from the dark days of struggling with 'Visual' C++ or Visual Basic which have virtually no real control over layout except X,Y positioning. Still, also bear in mind I am approaching this from the perspective of someone coding a UI library, and such a person shouldn't look to tools to paper over the cracks in the implementation. Layout is one of those things that should make sense when you look at the specification, because merely testing that it looks ok in your own preview mode doesn't mean it'll be ok on everybody else's system.

Quote:
Original post by Ademan555
I skipped the better half of the thread, but I was very disturbed by what I read from a few people about "displaying images and text"... I think the *best* solution is actually to use triangles and whatnot to build your GUI: basically, don't rely on anything resolution-dependent. You'll have a lot more flexibility with, say, SVG + Cairo, though I can't speak for the speed (I know Cairo surfaces can be rendered by OpenGL, though).


Yet most of your ingame assets are likely to be raster images. That's just how a lot of stuff will work, with textures, etc. I expect many GUI artists will be able to generate vector graphics, but even they will be implemented in terms of bitmaps.

Quote:
I may be a bit biased because of what I've seen, I'm SURE that if done right raster based GUI systems can work fine and look great, but what i've seen, especially from CEGUI, is ugly images that don't work correctly together [...]


What about the thousands of professional games? I expect 99% of them just use raster images, not vectors.

Quote:
I didn't sleep last night so I may not be the clearest, but my point is simply: take a vector-based approach, not a raster-based one.


Vectors are better in theory, but I'm not sure they're very compatible with the way most games are going to be rendered, or the way the assets are going to be generated.

Either way, my point was to have the GUI work with 'images and text', and that how you actually render those things shouldn't be an issue. The idea would be that someone could switch from a raster/bitmap system to vectors at any point and the GUI library wouldn't need to change at all.

I'd suggest checking out the tutorials on this site regarding writing a GUI system in DirectX. Using the non-renderer-specific design, I was able to write such a library using SDL, and later ported it to OpenGL.

http://www.gamedev.net/reference/articles/article994.asp
http://www.gamedev.net/reference/articles/article999.asp
http://www.gamedev.net/reference/articles/article1000.asp
http://www.gamedev.net/reference/articles/article737.asp

Best of luck,

I did this using the non-renderer-specific approach.
I have the base class GUI_widget, which is inherited by GUI_surface (a blank surface used to store others), GUI_button, GUI_window and, soon, GUI_radiobutton. Each of these stores a container full of child objects, so when I hide a window or an entire surface, everything lower down in the hierarchy is hidden as well.

The renderer is an abstract base class containing only pure virtual functions. I've created an OpenGL GUI renderer as part of the main renderer class.

I agree with Antheus on using CSS for layout. Furthermore, I recommend XForms.

First off, it is a standard. You get the advantage of being able to use already-available tools instead of having to create your own (as mentioned above). Additionally, it follows the widely used MVC pattern for UI architecture. Using a proven design methodology for an established problem prevents you from re-inventing the wheel. XForms utilizes XPath, XHTML, and XML, all of which already have great libraries to support them, further reducing the amount of work you would have to exert.
