GUI Design: Whose responsibility is rendering?

12 comments, last by Promit 20 years, 1 month ago
Ok, so I have a happy little GUI system, very nicely object oriented and all that jazz. Currently, to render something like a form you call the member function Render() and it will render itself; if it is a container, it will call Render() on all of its children. The rendering code resides within the code for the control itself, and the window manager is little more than an organized map of windows.

This is just wrong. I think that the window manager ought to be responsible for doing stuff like rendering the window's title bar, close button, etc. All of these tasks are currently done by the window itself. Now the question is, who renders the other controls, like labels and textboxes and all that stuff? Do those controls remain responsible for drawing themselves, or does the window manager do it? If the latter, how does the manager deal with new controls?

I'm most interested in following the GTK+ architecture of render responsibility, mainly because I like the fact that you can so easily theme GTK+ to look like whatever you want while everything retains a unified and consistent look. Obviously it'll be a little different since GTK+ is not OO code and mine is, but I'm looking for the same basic idea. I'd prefer to avoid having to sift through all the GTK+ code, which is rather large.

In general, I need to figure out how to set up render delegation properly. I'm not sure whether all rendering should be done by a central authority, whether that authority would be the window manager or something else, and how that authority would deal with new, custom controls.
SlimDX | Ventspace Blog | Twitter | Diverse teams make better games. I am currently hiring capable C++ engine developers in Baltimore, MD.
Well, this is a design tradeoff. If there's a reason for the controls not to render themselves, then I would give that responsibility to some other part (like the window manager). For instance, in the case of a 3D engine you might want to sort the polygons by texture, opaque/transparent, etc., which makes it unfit for objects to render themselves.

In the case of a GUI system, however, I think there's mostly no reason not to let the controls draw themselves; quite the opposite, it makes the GUI system more easily extendable. They should also handle the input (mouse over, mouse clicks, etc.) and, by using some observer pattern, be able to inform the user program of what's happening. This also increases encapsulation (which is a good thing™).
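For what it's worth, here is a minimal C++ sketch of that idea. The names (Control, Render, AddClickListener) are made up for illustration; the point is only that the control draws itself and reports input through registered callbacks instead of knowing anything about the game code.

#include <functional>
#include <vector>

class Control {
public:
    virtual ~Control() = default;

    // Each control knows how to draw itself.
    virtual void Render() = 0;

    // Observer hook: user code registers a callback instead of the
    // control knowing anything about the game logic.
    void AddClickListener(std::function<void(Control&)> cb) {
        listeners_.push_back(std::move(cb));
    }

protected:
    // Called by the control's own input handling when it detects a click.
    void NotifyClicked() {
        for (auto& cb : listeners_) cb(*this);
    }

private:
    std::vector<std::function<void(Control&)>> listeners_;
};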

However, another thing you might want to consider is whether the controls are allowed to position themselves freely or whether you'll use some sort of layout manager à la Java AWT/Swing. You may also want to allow a container to clip its children to a certain area of the window/screen; this should of course be fully transparent to the child controls.

[edited by - amag on March 18, 2004 6:54:39 PM]
This is a 3D engine. It's absolutely necessary for the window manager to be responsible for rendering windows in order for Z-ordering with multiple windows to work correctly, and also for scissor-test based clipping to work correctly (windows are completely ignorant of the resolution and as a result unable to specify scissor rectangles). That's what I'm doing right now.

The question is more about who is supposed to render the controls inside those windows, and how to deal with inherited types. For example, windows ultimately have the same base class as the controls inside those windows, so it could get awkward splitting rendering responsibilities. At the same time, if I derive a new control type, there's the problem that the window manager will not know how to render it. There's also the question of theming controls (think GTK+ theming or Qt theming), which becomes insanely difficult with rendering responsibility left to each individual control.



I think I know what to do. Please, continue answering and suggesting and all that, but I have an Idea™.
SlimDX | Ventspace Blog | Twitter | Diverse teams make better games. I am currently hiring capable C++ engine developers in Baltimore, MD.
The way I do it is that the control is responsible for describing what it wants to show, but the actual rendering is done by somebody above it.

Essentially, the GUI controls talk to whatever is above them at render time in a meta language (it could be mesh and texture information only, or a specification of what shape to draw and where), and the top guy does all the real work.

If your GUI is skinned, what you need is some sort of mapping between controls and their skin, and that mapping is included in the meta information.
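A rough sketch of how that meta language could look, assuming a simple command list. DrawCommand, Control and GuiRenderer are illustrative names, not anything from a real library.

#include <string>
#include <vector>

struct DrawCommand {
    enum class Kind { TexturedQuad, Text } kind;
    int   texture = -1;              // skin texture id, if any
    float x = 0, y = 0, w = 0, h = 0;
    std::string text;
};

class Control {
public:
    virtual ~Control() = default;
    // Controls append commands; they never touch the graphics API directly.
    virtual void Describe(std::vector<DrawCommand>& out) const = 0;
};

class GuiRenderer {
public:
    void RenderAll(const std::vector<Control*>& controls) {
        std::vector<DrawCommand> commands;
        for (const Control* c : controls)
            c->Describe(commands);
        // ...sort/batch by texture here, then issue the real API calls.
    }
};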
Well, to keep the Z order you simply call the render function of the controls in their Z order. You should also check whether it is actually necessary to render a control (for instance, it might not be in the invalidated area, or it could be hidden behind other controls).
To avoid rendering controls that will end up hidden once everything is drawn, you can construct a binary bitmap the size of the screen, with 1 symbolising a covered pixel. With this you can then check whether the control will be visible at all.
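Something like the following could implement that, assuming a screen-sized coverage mask: visibility is determined front-to-back, then the surviving windows are drawn back-to-front. All names are illustrative, and a real system would probably track coverage per rectangle rather than per pixel.

#include <algorithm>
#include <vector>

struct Rect { int x, y, w, h; };

struct Window {
    int  z = 0;          // higher z = closer to the viewer
    Rect bounds{};
    bool opaque = true;
    void Render();       // issues the actual draw calls (defined elsewhere)
};

// Visits every on-screen pixel of r; returns true if any was still uncovered,
// and marks the pixels as covered when 'mark' is set.
static bool ScanRect(std::vector<char>& mask, int sw, int sh,
                     const Rect& r, bool mark) {
    bool visible = false;
    int x0 = std::max(r.x, 0), x1 = std::min(r.x + r.w, sw);
    int y0 = std::max(r.y, 0), y1 = std::min(r.y + r.h, sh);
    for (int y = y0; y < y1; ++y)
        for (int x = x0; x < x1; ++x) {
            if (!mask[y * sw + x]) visible = true;
            if (mark) mask[y * sw + x] = 1;
        }
    return visible;
}

void RenderWindows(std::vector<Window*> windows, int sw, int sh) {
    // Front-to-back pass: keep only windows that contribute at least one pixel.
    std::sort(windows.begin(), windows.end(),
              [](const Window* a, const Window* b) { return a->z > b->z; });
    std::vector<char> covered(sw * sh, 0);
    std::vector<Window*> visible;
    for (Window* w : windows)
        if (ScanRect(covered, sw, sh, w->bounds, w->opaque))
            visible.push_back(w);
    // Back-to-front pass: draw so that nearer windows end up on top.
    for (auto it = visible.rbegin(); it != visible.rend(); ++it)
        (*it)->Render();
}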

Emil Johansen
- SMMOG AI division
http://smmog.com
Ok, I wasn't aware that your GUI was in a 3D engine. In that case I would go with what Gorg suggests: let the controls/widgets tell the renderer what they want to render, but leave the actual rendering to the renderer.
You could even make some sort of GUI-class that encapsulates what controls commonly want to draw.
Some pseudo-code:
class my_control : public control
{
public:
    virtual void draw(gui &g)
    {
        g.add_textured_quad(tex_id, pos, size);
        g.add_mesh(my_mesh);
    }
};
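To complete the picture, the gui object on the other side might simply queue those requests and flush them once per frame. This is a sketch only, with vec2, mesh and renderer standing in for whatever the engine already provides.

#include <algorithm>
#include <vector>

// Stand-ins for engine types; in a real engine these already exist.
struct vec2 { float x = 0, y = 0; };
struct mesh { /* vertex/index buffers ... */ };
struct renderer {
    void draw_quad(int tex_id, vec2 pos, vec2 size);
    void draw_mesh(const mesh& m);
};

class gui {
public:
    void add_textured_quad(int tex_id, vec2 pos, vec2 size) {
        quads.push_back({tex_id, pos, size});
    }
    void add_mesh(const mesh& m) { meshes.push_back(&m); }

    // Called once per frame after every control has had its draw() called.
    void flush(renderer& r) {
        // Sort quads by texture so the renderer can batch them.
        std::sort(quads.begin(), quads.end(),
                  [](const quad& a, const quad& b) { return a.tex_id < b.tex_id; });
        for (const quad& q : quads) r.draw_quad(q.tex_id, q.pos, q.size);
        for (const mesh* m : meshes) r.draw_mesh(*m);
        quads.clear();
        meshes.clear();
    }

private:
    struct quad { int tex_id; vec2 pos, size; };
    std::vector<quad> quads;
    std::vector<const mesh*> meshes;
};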
What I do is push my controls into the scene graph.
If your system is going to be object oriented, you can't have a single class that is responsible for defining the appearance of all the widgets. That would be silly, because each time you add a new widget, you'd need to modify that class.

It should be obvious that each widget should be responsible for its own appearance.

However, it should also be obvious that widgets should not be responsible for rasterisation. If they are, then you'll end up having widgets that are specialised for a particular rasterisation library (OpenGL, DirectX, whatever).

A problem with your question is the 'window manager', and the assumption that it is responsible for ordering the widgets that are drawn. An easier way to do it would be to have an abstract container widget which draws its members in order. Then the top level of your GUI is just a container. It isn't responsible for ordering all the windows, just its immediate members.

Unless your GUI is badly designed, it will be true that if 'container 1' is behind 'container 2', then all widgets within 'container 1' will be behind 'container 2'. So you don't need any 'central authority' to tell your widgets in which order to render themselves.
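A bare-bones sketch of that container idea, with hypothetical Widget/Container names: each container only orders its own children, and the top-level GUI is itself just a container.

#include <memory>
#include <vector>

class Widget {
public:
    virtual ~Widget() = default;
    virtual void Draw() = 0;
};

class Container : public Widget {
public:
    // Children are kept back-to-front; adding a child places it on top.
    void Add(std::unique_ptr<Widget> child) {
        children_.push_back(std::move(child));
    }
    void Draw() override {
        for (auto& child : children_)   // no central authority needed:
            child->Draw();              // order within the container is enough
    }
private:
    std::vector<std::unique_ptr<Widget>> children_;
};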

From the actual point-of-view of rendering, three obvious options spring to mind.

One is to have a few primitive widgets which are available in several themed varieties. More complex widgets are built out of these widgets. Using a factory pattern, when you construct a widget, a widget from the appropriate theme is picked (or a generic complex widget is created).

Another is to have a few primitive widgets which are not themed, but to associate with each widget a 'theme' object which is responsible for drawing the parts from which it is made. It seems to me that, in practice, this could allow for 'skinning' -- visual changes to a widget -- but not full-on behavioural changes. For example, it would be difficult to make a scrollbar that behaved like Athena scrollbars if the theme objects didn't already know about it.

A better solution may be to combine the two above solutions. Have themed widgets for the purposes of themed behaviour, but have a widget-renderer for the purposes of changing the appearance of a widget. This way, you get the best of both worlds.
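As a sketch of how the combination might look (all class names hypothetical): the widget owns its behaviour and the list of parts it is made of, while a per-theme WidgetRenderer decides how each part is drawn.

struct Rect { int x, y, w, h; };

// One concrete WidgetRenderer per theme; swapping it re-skins every widget.
class WidgetRenderer {
public:
    virtual ~WidgetRenderer() = default;
    virtual void DrawFrame(const Rect& r) = 0;
    virtual void DrawThumb(const Rect& r) = 0;
};

class Scrollbar {  // behaviour lives here (dragging, paging, ranges, ...)
public:
    explicit Scrollbar(WidgetRenderer& r) : renderer_(r) {}
    void Draw() {
        renderer_.DrawFrame(bounds_);
        renderer_.DrawThumb(ThumbRect());
    }
private:
    Rect ThumbRect() const { /* derived from value/range */ return bounds_; }
    Rect bounds_{0, 0, 16, 200};
    WidgetRenderer& renderer_;
};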

Another option, which is probably overkill for a game, is to define theme components, widgets and the GUI itself in a file. Then you read the file in and display it. libglade does things this way. If your game features a good scripting language, you might end up scripting your GUI with that, rather than doing it all in C++. You could allow the widgets themselves to be scripted.
CoV
quote:Original post by Mayrel
If your system is going to be object oriented, you can't have a single class that is responsible for defining the appearance of all the widgets. That would be silly, because each time you add a new widget, you'd need to modify that class.

Every game using an engine will want its GUI controls to look different. With the drawing code encapsulated in one single class, all that's necessary to change the appearance of every single control is to subclass that drawing class. With every control defining its appearance in its own class, you'd have to subclass every single control. That would certainly be more logical, meaning that the OO model of the code would make more sense, but encapsulating the drawing code in a single class is much easier and requires less work to change the appearance of controls.
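For clarity, this is roughly the single-drawing-class arrangement being described: a hypothetical Painter that every control delegates to, re-skinned by subclassing it once. Button, Label and TextBox stand in for the engine's existing control classes.

class Button; class Label; class TextBox;

class Painter {
public:
    virtual ~Painter() = default;
    // The default look for every control type lives here...
    virtual void DrawButton(const Button& b)   { /* engine's default look */ }
    virtual void DrawLabel(const Label& l)     { /* ... */ }
    virtual void DrawTextBox(const TextBox& t) { /* ... */ }
    // ...and every new control type means adding another method here,
    // which is the drawback pointed out in the reply below.
};

class MyGamePainter : public Painter {
public:
    void DrawButton(const Button& b) override { /* custom skin */ }
};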



- Christoph

---
Teamwork Software - Stuff That Does Something
quote:Original post by Captain Nuss
quote:Original post by Mayrel
If your system is going to be object oriented, you can't have a single class that is responsible for defining the appearance of all the widgets. That would be silly, because each time you add a new widget, you'd need to modify that class.

Every game using an engine will want its GUI controls to look different. With the drawing code encapsulated in one single class, all that's necessary to change the appearance of every single control is to subclass that drawing class. With every control defining its appearance in its own class, you'd have to subclass every single control. That would certainly be more logical, meaning that the OO model of the code would make more sense, but encapsulating the drawing code in a single class is much easier and requires less work to change the appearance of controls.


I refute this.

It's only easier if you somehow do less coding with the single-class approach. But, in reality, you don't. When a single class is responsible for defining the appearance of all the controls, that class must contain code to render all the controls. This is no less code than is required when the code to generate the appearance lives in the control itself.

The disadvantages are obvious -- you have to modify existing code if you want to add a new control to your system.

The only conceivable advantage of a single-class approach is in storing rendering data and functions that are common to all controls in one place -- a class that provides visual building blocks for controls, whilst the controls themselves state how they are put together. If you read the rest of my post, you'll notice that I advocated just such a technique.
CoV

