Quick Question re: Good Practice for this Game UI I'm designing...


(Sorry if this gets overlong...)

I have a fairly clear idea in my head of what I want to achieve, but I've been struggling and just can't quite come up with something that I'm happy isn't needlessly sloppy or verbose, so I thought I'd ask here for some advice.

Basically, I'm making a 2D strategy game, and I envision most of the "action" taking place on a main map screen, with the ability to switch to other screens that show various types of info etc. by pressing UI buttons. Those screens would take the place of the map, but the UI buttons would probably remain constant.

What I mean can be illustrated by something like Ports of Call on the Amiga:

http://www.videospielgeschichten.de/bilder/artikel/amiga/amiga_080.png

You can see the map and the three buttons along the right side, as well as UI elements along the bottom. My idea would look something like that, and if you pressed a button then, in most cases, the corresponding screen would render to the map's area and keep everything else the same. However, it's perfectly conceivable I might want some option to draw to the entire screen and not use the UI buttons at all.

What would be a good architecture to accomplish something like that?

My game currently has a system of gamestates where the game renders based on the render() function of the currently active gamestate, and handles input events similarly etc. So I took that idea and figured I could keep everything in the main gameplay state (which seems to make sense to me, given that we're just displaying different information), then do something like have a UI_Screen class and render etc. based on the active UI_Screen class using polymorphism.
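
To be a bit more concrete, I mean something roughly like this (a very rough sketch of the kind of thing I have in mind, not my actual code):

class UI_Screen
{
public:
   virtual ~UI_Screen() = default;
   virtual void handleInput() = 0;
   virtual void render() = 0;
};

class GameplayState   // the main gamestate
{
public:
   void handleInput() { activeScreen->handleInput(); }
   void render()      { activeScreen->render(); }       // polymorphic dispatch

private:
   UI_Screen* activeScreen = nullptr;   // e.g. points at a UI_Screen_Map
};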

That's fine, but what about the common UI elements, i.e. the fact that most (if not all, in practice) UI_Screens will use the "default" UI buttons for switching between screens?

My first thought was that I just make the UI_Screen objects render to the world map area and handle their unique input events, and have the UI buttons always active and displayed at the edges of the screen. That could work, but as I said it might be a poor choice in case I wanted to remove the UI entirely.

So we'd need the option not to use those UI elements, but also have some single place where those elements were defined and a simple way of making a UI_Screen use them.

That's where I'm having trouble. Of course I could simply say that I'll always have a certain common set of UI elements regardless of the UI_Screen object that's active, but even as a learning exercise I'd like to figure out how to allow more flexibility in a way that makes sense.

Should I have some kind of top-level UI_Manager class, that contains objects of classes derived from UI_Screen (e.g. UI_Screen_Map, or UI_Screen_Player) and also contains some functions to set up the default UI using a UI_Button class for example? Then I could have UI_Screen_Map call UI_Manager::drawDefaultUI() etc. but not do that for a different UI_Screen class that didn't need those UI buttons?

Hope I've explained things clearly enough.


I use a UI manager approach. There is a lot to making a UI beyond visibility and I put most of that responsibility in the UI manager.

Basics of my approach:

  • The game loop gathers up all the inputs and translates them into a common form for my engine. These inputs are used by both the UI and the game.
  • The UI manager gets first chance to process the inputs. If the UI decides it wants an input such as a keypress or button press, it marks the input consumed so that the game will not also respond to it (sketched just after this list).
  • UI elements are based on a common class that has the functionality the UI manager needs to interact with them.
  • UI elements form a hierarchy.
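
To make the first two bullets concrete, here is a minimal sketch of the consumed-flag idea; InputEvent, UIManager, and the function names are placeholders rather than my actual engine code:

#include <vector>

// Each frame's raw input gets translated into events like this.
struct InputEvent
{
   enum Type { KeyPress, CursorMove, ButtonPress } type;
   int key = 0;
   int x = 0, y = 0;
   bool consumed = false;             // set by the UI if it handles the event
};

struct UIManager
{
   void handleInput(InputEvent& e)
   {
      // ...hit-test against UI elements; if one wants the event,
      // set e.consumed = true so the game ignores it.
   }
};

struct Game
{
   void handleInput(const InputEvent& e) { /* gameplay response */ }
};

void pumpInput(UIManager& ui, Game& game, std::vector<InputEvent>& frameEvents)
{
   for (InputEvent& e : frameEvents)
   {
      ui.handleInput(e);              // UI gets first chance at every event
      if (!e.consumed)
         game.handleInput(e);         // game only sees what the UI didn't take
   }
}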

I gave my base element functions that I found handy for implementing the UI behaviors. The manager figures out things like whether the cursor is over an element and notifies the elements of events like OnCursorEnter, OnCursorLeave, and OnCursorMove. If buttons are clicked there are functions for that too. Typical stuff that lets you build up functionality for specific types of controls. A button control might change colors on OnCursorEnter and go back to a regular state on OnCursorLeave, stuff like that.
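
As a rough illustration of the shape of it (Rect, Color, and the field names are made up for the example):

struct Rect  { int x, y, w, h; };
struct Color { unsigned char r, g, b, a; };

class UIElement
{
public:
   virtual ~UIElement() = default;

   // Called by the manager when it detects these conditions:
   virtual void onCursorEnter() {}
   virtual void onCursorLeave() {}
   virtual void onCursorMove(int x, int y) {}
   virtual void onClick() {}

   virtual void render() {}

   Rect bounds{};
   bool visible = true;
};

class Button : public UIElement
{
public:
   void onCursorEnter() override { color = hoverColor; }    // highlight on hover
   void onCursorLeave() override { color = normalColor; }   // back to normal

private:
   Color normalColor{ 200, 200, 200, 255 };
   Color hoverColor { 255, 255, 255, 255 };
   Color color = normalColor;                               // current draw color
};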

The manager maintains a list of top-level items. These I consider attached to the "screen". The manager decides what will get drawn based on a visibility flag on the items. You don't have to worry about calling specific functions like you mentioned above; you just let the manager do its job. If an item is marked visible, it will be drawn. I let the elements decide whether they want to draw their children or not, though.
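
Showing just the render side of the manager (again a sketch, building on the element sketch above):

#include <memory>
#include <vector>

class UIManager
{
public:
   void render()
   {
      for (auto& element : topLevel)
      {
         if (element->visible)        // the manager only checks the flag...
            element->render();        // ...the element decides about its children
      }
   }

private:
   // Top-level items, i.e. the ones attached to the "screen".
   std::vector<std::unique_ptr<UIElement>> topLevel;
};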

In the cases you mentioned a system like this lets you control it in various ways:

  • Don't want to draw any UI? You could choose not to call the Render function on the manager.
  • Don't want to draw specific buttons? Set their visibility flag, or inactive flag, or whatever makes sense for your UI.
  • You could have your UI grouped: make a parent element with some child buttons; hiding the parent will prevent any of the children from drawing.

Hmm... interesting. That should provide some useful ideas.



So in practice, does this mean that one logical way of going about switching from one menu screen to the other using this system might involve something like calling a function activatePlayerInfo() that simply sets the visibility flags of all UI elements to 0, except for the elements used by the player info screen?


Yes, that's one way to do it. Once you get a UI system in place you'll probably find there are a bunch of ways to tackle the same problem. It will be influenced by your UI's visual design too.

Your player info screen sounds like a collection of related controls. I would probably make them all children of some parent. The parent could be an element that does nothing, just organizational, or it could draw a background or borders to make the placement of the player info controls look better.

Does the player info cover the entire screen, and is it opaque? If so, it might make sense to do what you suggested: turn off everything and only show the player info.

Would it be better to just draw the player info window on top of what is already there but make it the only window you can interact with? The UI manager can control this sort of functionality since the elements get fed their events via the manager. It is for things like this that I like to keep the UI manager in charge rather than putting the code to recognize clicks and movements directly in the controls.
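
For example (purely illustrative, reusing the names from the sketches above), switching to the player info screen could be as simple as:

// mapGroup / playerInfoGroup are whatever parent elements you grouped
// those controls under.
void activatePlayerInfo(UIElement& mapGroup, UIElement& playerInfoGroup)
{
   mapGroup.visible        = false;   // hide the map screen's UI group
   playerInfoGroup.visible = true;    // show the player info group
}

// Or keep the map visible underneath and have the manager route input only
// to the player info group while it is open (a "modal" panel).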

An immediate-mode approach is typically good for simple, engineer-driven UI. You don't need to wrestle with an entire framework if you don't need one, and it's very quick to get everything up and running in a usable state. Simple example (excusing typos):

class GameUI : public UI::MenuState
{
public:
   GameUI()
   {
      menuAButton.parent = this;
      menuAButton.bounds = Rect(100, 100, 100, 50);
      menuAButton.text = "Menu A";

      menuBButton.parent = this;
      menuBButton.bounds = Rect(100, 200, 100, 50);
      menuBButton.text = "Menu B";

      menuCButton.parent = this;
      menuCButton.bounds = Rect(100, 300, 100, 50);
      menuCButton.text = "Menu C";
   }

   void render()
   {
      UI::renderButton(menuAButton);
      UI::renderButton(menuBButton);
      UI::renderButton(menuCButton);
   }
      
   void handleInput()
   {
      UI::handleInput(menuAButton);
      UI::handleInput(menuBButton);
      UI::handleInput(menuCButton);

      if (menuAButton.clicked)
      {
         // Do Menu A action
      }
      else if (menuBButton.clicked)
      {
         // Do Menu B action
      }
      else if (menuCButton.clicked)
      {
         // Do Menu C action
      }
   }

private:
   UI::ButtonState menuAButton;
   UI::ButtonState menuBButton;
   UI::ButtonState menuCButton;
};

Then call the render() and handleInput() methods from your game mode.
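
Something along these lines, for instance (GameplayState, renderWorld, and the gameUI member are placeholders for whatever your gameplay state actually looks like):

void GameplayState::handleInput()
{
   gameUI.handleInput();   // immediate-mode UI reads this frame's input
}

void GameplayState::render()
{
   renderWorld();          // map, units, etc.
   gameUI.render();        // UI drawn on top every frame
}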

A tip I learned years ago for UI design:

Write the user's manual first.

If you can make a one-page diagram that shows what the buttons do and how the UI works, then you've got a great map; just implement the code that gets you there. If you can make a two- or three- or five-page manual that shows how to do things, then great.

Sadly, when people start with code, the end result is a user manual that is 50 pages long and full of convoluted exceptions: "this button means sell in this context, but when this menu is showing it means give to another player, and when another menu is showing it means destroy the object."

With that in mind, it looks like you are building a visibility toggle. To implement a direct binary toggle, you are turning to polymorphism and inheritance trees. What does the user manual look like? I imagine "Toggle showing inactive quests on the minimap", "Toggle showing NPCs on the minimap", "Toggle showing shops on the minimap". That doesn't say inheritance trees to me; instead it says a collection of flags.

Depending on how you implement those flags in code, I see it as either if (iconVisibleFlags & target.iconTypeFlag) { /* render */ } or, very similarly, something along the lines of if (ShouldDrawIcon(target.iconType)) { /* render */ }. If you were to really push for it (and I don't imagine doing so, because of the cost of all the virtual functions), you could implement a visitor-pattern query inside each object, giving the object the collection of flags that should draw and asking it to tell you whether it should be drawn or not. Seems wasteful, though.
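
A minimal version of the flags approach might look like this (the icon types are made up, obviously):

#include <cstdint>

enum IconTypeFlag : std::uint32_t
{
   Icon_InactiveQuests = 1u << 0,
   Icon_NPCs           = 1u << 1,
   Icon_Shops          = 1u << 2,
};

std::uint32_t iconVisibleFlags = Icon_NPCs | Icon_Shops;   // current toggles

bool ShouldDrawIcon(std::uint32_t iconTypeFlag)
{
   return (iconVisibleFlags & iconTypeFlag) != 0;
}

// In the minimap render loop:
//    if (ShouldDrawIcon(target.iconTypeFlag)) { /* render the icon */ }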

Thanks for the ideas. Think I should be able to piece it together reasonably well now.

Ok, after thinking it through and trying to visualize things with a diagram as best I could, I've come to the following solution:

- Have 3 main classes:

  - UI_Manager: For managing everything.
  - UI_Screen: For example, UI_Screen_Map, representing a whole UI screen's layout and behaviour.
  - UI_Element: For creating buttons etc.

- Operate by having UI_Manager hold a pointer to the currently active UI_Screen, and call that object's render() function etc.

- The UI_Screen object determines the layout of the UI for a screen, and I feel as though a whole separate class would be appropriate for this, but maybe I'm wrong?

My reasoning is based on the need for unique code in UI screens that involve more than just placing buttons. e.g. In my game, UI_Screen_Map would display the standard sidebar buttons, perhaps some of its own unique buttons, and also display the world map (represented by its own WorldMap class that keeps track of the list of cities etc.).

No other screen would display the world map like that and other screens would have their own particularities, but does that justify a whole class or would it be better to keep things within UI_Manager and UI_Element more?

Should WorldMap inherit from UI_Element even though it's nothing like a button and actually holds the data for the cities in the game, just because it's to be drawn as part of a UI screen and allows us to treat it like a UI_Element (i.e. put it into some common list of UI_Elements to be drawn etc.)?

In which case, if not using polymorphism and a class for each screen, would simple enums be the obvious choice?

It just seems like having WorldMap inherit from UI_Element might be a bit hackish, and for some reason I tend to shy away from using enums for things like this, but maybe it would be cleaner and quicker.

- Either way, I figure the shared UI aspects like buttons on a main sidebar should be controlled by UI_Manager so they can be accessed from any screen that needs them. How should that be done?

I was thinking about a list of UI_Elements in the UI_Manager, perhaps a map indexed by a string id, and for the set of main UI buttons I could group them all into a single parent UI_Element node and add that to the list.

If I was doing that, I assume the right way would be to give UI_Manager some init() function, and in that function set up the UI_Element nodes that define the game's default UI setup and then put that into the list? At least that's the best I can come up with.
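
Roughly, I'm picturing something like this (very rough sketch, names are just placeholders):

#include <map>
#include <memory>
#include <string>
#include <vector>

struct UI_Element                      // stub, just enough to show the idea
{
   std::string label;
   std::vector<std::unique_ptr<UI_Element>> children;

   void addChild(std::unique_ptr<UI_Element> child)
   {
      children.push_back(std::move(child));
   }
};

class UI_Manager
{
public:
   void init()
   {
      // Build the default sidebar once, grouped under a single parent node,
      // and store it in the shared element map under a string id.
      auto sidebar = std::make_unique<UI_Element>();
      sidebar->addChild(makeButton("Map"));
      sidebar->addChild(makeButton("Player"));
      elements["default_sidebar"] = std::move(sidebar);
   }

private:
   static std::unique_ptr<UI_Element> makeButton(const std::string& label)
   {
      auto button = std::make_unique<UI_Element>();
      button->label = label;
      return button;
   }

   std::map<std::string, std::unique_ptr<UI_Element>> elements;
};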

You don't really need a manager. Take a look at my example. All UI state is owned and managed externally, by whatever needs UI, and only passed into the UI code when it immediately needs it to be rendered or interacted with. If you want common elements, you simply have a set of methods in the UI library like drawButton/handleInputButton, drawLabel, etc., that take a state object to customize their look (position, size, text, color, etc.), and by virtue of the fact that all your UI scenes call the same methods, all the buttons will look and behave the same. Even layout can be handled this way, with something like a calculateBounds method, if you allow the state objects to be parented to each other.
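
As a sketch of what I mean (the exact names don't matter, and Rect stands in for whatever rectangle type you use):

#include <string>

struct Rect { int x = 0, y = 0, w = 0, h = 0; };

// All state lives in plain structs owned by whoever needs the UI.
struct ButtonState
{
   const ButtonState* parent = nullptr;   // optional parent for relative layout
   Rect bounds;                           // relative to the parent if one is set
   std::string text;
   bool clicked = false;
};

namespace UI
{
   // Resolve absolute bounds by walking up the parent chain.
   Rect calculateBounds(const ButtonState& button)
   {
      Rect result = button.bounds;
      for (const ButtonState* p = button.parent; p != nullptr; p = p->parent)
      {
         result.x += p->bounds.x;
         result.y += p->bounds.y;
      }
      return result;
   }

   void renderButton(const ButtonState& button);   // draws at calculateBounds(button)
   void handleInput(ButtonState& button);          // sets button.clicked on a hit
}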

I also get the feeling you may be overengineering. I'm thinking the simple way to make a GUI is not to add a bunch of classes and managers holding data that you need to constantly keep updated.
There is some reading material on how to create a simple IMGUI: http://sol.gfxile.net/imgui/ and http://www.fysx.org/tag/imgui/

