Usable GUIs

Started by nuvem
7 comments, last by Zanthos 19 years, 2 months ago
After hitting up the usual round of comics (which happens to include OK/Cancel), an odd idea sprang to mind. Currently, UIs are designed in terms of buttons, radio boxes, etc., all related directly to the physical layout and type of control used. What if, instead, we simply identified options (and their relations to other options, such as mutual exclusion for radio buttons) and actions? An example implementation might use XML for the GUI definition, and XSLT to transform that into an XML representation of the actual buttons and their layout. The idea, basically, is to separate the available actions, options, and selections, and their usage restrictions, from the actual layout. The layout could then be modified later to improve usability with no changes needed to the code. Of course, this doesn't fix problems that result from the usability model conflicting with the programming model, but it would certainly solve issues like OK/Cancel button placement.
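To make that concrete, here's a rough sketch of the XML + XSLT idea (all the element names are made up, and the "concrete" output is just another XML tree a renderer would consume):

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class AbstractGuiDemo {
    // Abstract description: only actions and option groups, no widgets.
    static final String GUI = """
        <dialog name="confirm-save">
          <action id="accept" label="Save"/>
          <action id="reject" label="Discard"/>
          <choice id="format" exclusive="true">
            <option value="txt"/>
            <option value="html"/>
          </choice>
        </dialog>
        """;

    // One possible "skin": actions become buttons, exclusive choices
    // become radio groups. Swapping this stylesheet re-lays-out the UI
    // without touching application code.
    static final String SKIN = """
        <xsl:stylesheet version="1.0"
            xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <xsl:template match="dialog">
            <window title="{@name}"><xsl:apply-templates/></window>
          </xsl:template>
          <xsl:template match="action">
            <button id="{@id}" text="{@label}"/>
          </xsl:template>
          <xsl:template match="choice[@exclusive='true']">
            <radiogroup id="{@id}"><xsl:apply-templates/></radiogroup>
          </xsl:template>
          <xsl:template match="option">
            <radiobutton value="{@value}"/>
          </xsl:template>
        </xsl:stylesheet>
        """;

    public static void main(String[] args) throws Exception {
        Transformer t = TransformerFactory.newInstance()
            .newTransformer(new StreamSource(new StringReader(SKIN)));
        StringWriter out = new StringWriter();
        t.transform(new StreamSource(new StringReader(GUI)),
                    new StreamResult(out));
        System.out.println(out); // concrete widget tree, ready for a renderer
    }
}
```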
You get similar benefits, without the added complexity of figuring out how to make the code respond to a "nonstructural" (for lack of a better word?) GUI, by using XUL or Glade or XAML (I think that's what it's called; Microsoft just came out with it in beta or something?).

With Glade, fiddling with a GUI is as easy as editing an XML file.
Avalon is using something like this to define how application UIs look.
Quote:Original post by Rebooted
Avalon is using something like this to define how application UIs look.
Avalon is the platform that accepts XAML code.
I had a similar idea a while back: instead of directly laying out GUI controls, you would in some way describe what information the GUI needed to present to, and get from, the user, and then the GUI layout would be created automatically, with user preferences altering the way it is constructed.

However, the question is: how do you actually describe what the GUI needs to present to the user, and what needs to be input by the user? If you restricted this to creating, say, dialog boxes, it wouldn't be too difficult, but if you want such a system to work with any application type you care to mention, the problem becomes more complex.
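For the dialog-box case, one rough way to phrase the description might be as plain data, something like this (a hypothetical sketch; all the type names are invented):

```java
import java.util.List;

// A sketch of describing *information needs* rather than widgets.
sealed interface Need permits Show, Ask, Choose {}

record Show(String text) implements Need {}                 // present to user
record Ask(String prompt, Class<?> type) implements Need {} // get from user
record Choose(String prompt, List<String> options,
              boolean exclusive) implements Need {}         // constrained pick

public class DialogNeeds {
    public static void main(String[] args) {
        // The application states what it needs; it never mentions widgets.
        List<Need> saveDialog = List.of(
            new Show("The document has unsaved changes."),
            new Choose("Format", List.of("txt", "html"), true),
            new Ask("File name", String.class));
        // A layout engine (with user preferences) would decide whether
        // Choose becomes radio buttons, a combo box, or a menu.
        saveDialog.forEach(System.out::println);
    }
}
```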

As for things such as XUL and XAML, they do make constructing GUIs easier; however, you're still doing pretty much the same thing as you did when you were creating a GUI using the Win32 C API: you're creating specific controls and laying them out in a certain way. It is a far simpler task in XAML than in the Win32 API to create a good, customisable, flexible GUI, but you're still thinking about GUIs in the same way.

My idea (and, if I'm interpreting what he's saying correctly, nuvem's idea) is to stop thinking about GUIs in terms of "what control should I use and where should I put it?" and to start thinking about them in terms of "here's what my GUI needs to do". You would then have tools to take this description and work out what controls are needed, how to lay them out, and so on. If this was done at run-time, you could have user preferences that control how the GUI is created and laid out; thus Word on one person's computer could look completely different from Word on another's. So instead of saying "I want a toolbar with these buttons; call these functions when they are clicked", you say "I want these actions available to the user". From the description a toolbar could be created, or perhaps a combo box, depending upon user preferences (see the sketch at the end of this post). You could also include something that alters the GUI depending on how a person uses it; here's a quote from Douglas Adams that I think is appropriate:

Quote:There's now a new generation of smarter office chair beginning to arrive that makes a virtue of doing away with all the knobs and levers. All the springing and bracing we learned about is still there, but it adjusts to your posture and movement automatically, without you having to tell it how to. All right, here's a prediction for you: when we have software that works like that, the world will truly be a better place.


Of course actually implementing such a system could prove to be a challenge. [grin]
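Here's the sketch I mentioned: one description of the available actions, rendered two different ways depending on a user preference (all names invented; a real system would emit actual widgets rather than print lines):

```java
import java.util.List;

// One action description, two renderings picked by user preference.
// The Runnable is the handler that would be wired to whatever widget
// the renderer decides to create.
record ActionDesc(String id, String label, Runnable onInvoke) {}

public class PreferenceDrivenRender {
    enum Style { TOOLBAR, COMBO_BOX }

    static void render(List<ActionDesc> actions, Style pref) {
        switch (pref) {
            case TOOLBAR -> actions.forEach(a ->
                System.out.println("[button] " + a.label()));
            case COMBO_BOX -> {
                System.out.println("[combo]");
                actions.forEach(a -> System.out.println("  - " + a.label()));
            }
        }
    }

    public static void main(String[] args) {
        List<ActionDesc> actions = List.of(
            new ActionDesc("save", "Save", () -> {}),
            new ActionDesc("print", "Print", () -> {}));
        render(actions, Style.TOOLBAR);   // one user's preference...
        render(actions, Style.COMBO_BOX); // ...another's
    }
}
```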
Quote:Original post by Monder
My idea (and if I'm interpreting what he's saying correctly nuvem's idea) is to stop thinking about GUIs in terms of what control should I use and where should I put it and to start thinking about them in terms of here's what my GUI needs to do.
Exactly.

Quote:Of course actually implementing such a system could prove to be a challenge. [grin]
"A challenge" is somewhat of an understatement :) It's a decidedly non-trivial task, but I'd be willing to wager it can be done.

One major issue is how deep to go. Do you attempt to classify the entirety of the program's functionality, or do you leave most of it hard-coded and attempt to generate only specific toolbars/dialog boxes?

The former is a tremendous amount of work, but the latter leaves questions as to how useful the system would end up being if the resulting applications are just as unusable outside those specific areas, or if the usability problems arise from the design and selection of those areas (it doesn't matter how usable your options dialog is if it's filled with an endless number of ridiculous and unrelated options).
Well, the latter case wouldn't be too difficult to do; if you were restricting your descriptions to toolbars and dialog boxes, I reckon it'd be relatively simple to implement. What you really want is some way to describe the entire GUI (i.e. the former case), and currently I only have very vague ideas of how you'd do that.
It has been done already. See my GUI at http://openglgui.sourceforge.net
Quote:Original post by HellRiZZer
It has been done already. See my GUI at http://openglgui.sourceforge.net


You didn't read it all, did you? [rolleyes]

For data retrieval (i.e., forms), this would definitely be less complicated than, say, an IDE :). Aside from the bonus of not having to pixel-position the components individually, there's also automatic consistency: there's no chance of the OK and Cancel buttons being swapped around accidentally, or what have you.
I suppose auto-positioning/resizing layouts (à la Swing) get us there to some extent, but we would still have to flesh out (possibly large parts of) the interface ourselves, and what might be functional isn't necessarily aesthetically pleasing.
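Something like the following, in Swing terms, is what I'm picturing for the forms case (a toy sketch, all names invented; real code would return the entered values, validate them, and so on):

```java
import java.awt.BorderLayout;
import java.awt.GridLayout;
import java.util.List;
import javax.swing.*;

// A form is described as fields only; labels, layout, and the OK/Cancel
// row are generated in one place, so they can never be inconsistent
// between dialogs.
public class FormGenerator {
    record Field(String label) {}

    static JDialog buildForm(String title, List<Field> fields) {
        JPanel grid = new JPanel(new GridLayout(fields.size(), 2, 4, 4));
        for (Field f : fields) {
            grid.add(new JLabel(f.label() + ":"));
            grid.add(new JTextField());
        }
        // Button order decided once, application-wide, not per dialog.
        JPanel buttons = new JPanel();
        buttons.add(new JButton("OK"));
        buttons.add(new JButton("Cancel"));

        JDialog d = new JDialog((JFrame) null, title, true);
        d.setLayout(new BorderLayout());
        d.add(grid, BorderLayout.CENTER);
        d.add(buttons, BorderLayout.SOUTH);
        d.pack();
        return d;
    }

    public static void main(String[] args) {
        SwingUtilities.invokeLater(() ->
            buildForm("New User", List.of(
                new Field("Name"), new Field("Email"))).setVisible(true));
    }
}
```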

Let's say we have a piece of software called X, in which exist a number of subsystems, each with its own user-configurable options. Additionally, they may have functions that can be performed at the will of the user. So we've got two different ways in which the user can interact with the system: settings and commands. The commands are exposed using a common interface for each subsystem, from which the main control system can extract them and populate a menu. Each menu item corresponds to a subsystem, e.g.:

File | Edit | Options | Help

And within each menu item, the individual commands for the subsystem are listed. Although, how do we know when to add aesthetics like dividers and icons? These could be specified by the subsystem, but that would mean we've hard-coded UI specifics, which is not the aim of the exercise :)

The options panel may require some hard-coding, or possibly a nudge in the right direction for the UI generation algorithm, perhaps by identifying the panel as a subsystem_options_panel or what have you. This might translate into two panes side by side, divided by a fixed bar: on the left-hand pane, the list of subsystems; on the right-hand pane, the automatically generated layout for inputting data. Validation rules would be easily extractable from the options exposed by the subsystems, such as data type, string length, enumerations, etc.
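As a rough sketch of the commands-and-settings idea (everything here is hypothetical; a real shell would build actual menus and option panels rather than print lines):

```java
import java.util.List;
import java.util.Map;

// Each subsystem exposes commands and typed options through a common
// interface; a generic shell derives the menu bar and the options panel
// from them, and validation rules fall out of the declared types.
interface Subsystem {
    String name();                   // becomes the menu title
    List<String> commands();         // become menu items
    Map<String, Class<?>> options(); // option name -> data type
}

public class Shell {
    // Validation derived from the declared option type.
    static boolean validate(Class<?> type, String raw) {
        if (type == Integer.class) {
            try { Integer.parseInt(raw); return true; }
            catch (NumberFormatException e) { return false; }
        }
        return true; // strings accept anything; enums, ranges, etc. similar
    }

    public static void main(String[] args) {
        Subsystem file = new Subsystem() {
            public String name() { return "File"; }
            public List<String> commands() { return List.of("Open", "Save"); }
            public Map<String, Class<?>> options() {
                return Map.of("Recent file count", Integer.class);
            }
        };
        // The shell, not the subsystem, decides presentation.
        System.out.println("Menu: " + file.name() + " -> " + file.commands());
        System.out.println("Valid? " + validate(Integer.class, "10"));  // true
        System.out.println("Valid? " + validate(Integer.class, "ten")); // false
    }
}
```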

