Does the interface of every graphic adventure game suck?


Hmm... At least some of that could be done these days via a physics engine: you can put things on top of objects that have a sufficiently flat top surface, and can't put very big things into very small things, for example.

However, for less physical elements, or elements less suited to physical simulation (cartoony worlds or magic, for example), I'm not sure of how to go about it save by attempting to list all possible types of interaction and appropriate parameters for them (for example, object A has the property "surface", which means that things may be placed on it, with the parameter "small", meaning that only objects of size "small" or below are accepted).
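For instance, a minimal sketch of that property-and-parameter scheme might look something like this (every name here is invented for illustration, not taken from any real engine):

```python
# Rough sketch of the property-and-parameter idea above; all names are made up.
from dataclasses import dataclass, field

SIZES = ["tiny", "small", "medium", "large"]  # ordered smallest to largest

@dataclass
class Thing:
    name: str
    size: str = "small"
    properties: dict = field(default_factory=dict)  # e.g. {"surface": "small"}

def can_place_on(item: Thing, target: Thing) -> bool:
    """True if target has a 'surface' property and item fits within its size limit."""
    limit = target.properties.get("surface")
    if limit is None:
        return False  # nothing can be placed on this object at all
    return SIZES.index(item.size) <= SIZES.index(limit)

table = Thing("table", size="medium", properties={"surface": "small"})
print(can_place_on(Thing("vase", size="small"), table))      # True
print(can_place_on(Thing("wardrobe", size="large"), table))  # False
```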

I do still think that the context-sensitive menu, as I and others have mentioned, is a potential middle-ground: each object specifies what actions may be taken with it, and contains logic by which it responds to those actions; in some cases that may allow for an inventory item to be specified, but in many the action alone--such as "eating" a plate of food--may make sense.
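As a rough sketch of what I mean by that (assuming nothing in particular about the engine, and with all names invented), each object could advertise the verbs it currently supports and handle them itself:

```python
# Sketch of a context-sensitive-menu object: each object lists the verbs it supports
# (optionally taking an inventory item) and contains the logic for responding to them.

class Interactable:
    def verbs(self):
        """Verbs to show in this object's context menu right now."""
        return []

    def act(self, verb, item=None):
        return "Nothing happens."

class PlateOfFood(Interactable):
    def __init__(self):
        self.eaten = False

    def verbs(self):
        return ["look at"] if self.eaten else ["look at", "eat"]

    def act(self, verb, item=None):
        if verb == "eat" and not self.eaten:
            self.eaten = True
            return "You wolf it down."
        if verb == "look at":
            return "An empty plate." if self.eaten else "A plate of food."
        return super().act(verb, item)

plate = PlateOfFood()
print(plate.verbs())     # ['look at', 'eat']
print(plate.act("eat"))  # 'You wolf it down.'
print(plate.verbs())     # ['look at']
```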

As to Infocom, to what extent did they have an actual object model, rather than simply--but exhaustively--filling in interaction combinations?

MWAHAHAHAHAHAHA!!!

My Twitter Account: @EbornIan



Hmm... At least some of that could be done these days via a physics engine: you can put things on top of objects that have a sufficiently flat top surface, and can't put very big things into very small things, for example.

Yeah, you're right, but full physics engines open their own can of worms. Some emergent behavior is good; too much emergent behavior and you can't construct a (traditional) adventure game, because you can't count on anything. I need a semantically stylized world, not an interactive physics simulation. The AAA games are exploring this space anyway, and they are sure as hell better at it than I am.



I do still think that the context-sensitive menu, as I and others have mentioned, is a potential middle-ground: each object specifies what actions may be taken with it, [...]

Oh, I mean, I agree ... I am just wondering (1) whether you could standardize some kind of model for specifying what can be done to what, and (2) whether you could expose an interface to this model that doesn't involve (literally) spelling out actions with text in (literal) menus.

As to Infocom, to what extent did they have an actual object model, rather than simply--but exhaustively--filling in interaction combinations?

Oh, no, they did it basically the way I am saying -- they had to: it was actually an optimization for space on the tiny computers of that time. Their software was way ahead of its time: the first object-oriented codebase sold commercially, and the first virtual machine sold commercially. This is worth a look if you are interested:

http://www.gdcvault.com/play/1020612/Classic-Game-Postmortem

Oh, no, they did it basically the way I am saying ...

Impressive! I haven't yet watched the talk you linked, but have put it aside for later: it sounds rather interesting indeed!

I am just wondering (1) whether you could standardize some kind of model for specifying what can be done to what, and (2) whether you could expose an interface to this model that doesn't involve (literally) spelling out actions with text in (literal) menus.

Funnily enough, I actually had a shot at something like that -- although I'm not sure that my experience is terribly useful here. In my case, I had a set of spells, and objects responded to those spells in a variety of ways; to a given spell, some objects might be inert, others had default behaviour, and others had custom behaviour. For example, given a "pull" spell, an anchored object might not respond at all; most non-anchored objects (such as rocks) might go with the default behaviour of moving towards the player; and a few objects might have custom behaviour, such as water rippling or a pendulum swinging.
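Roughly, the dispatch I had in mind looked something like this -- a reconstructed sketch with invented names, not my actual code:

```python
# Sketch of per-spell responses: objects can be inert to a spell, fall back on a
# shared default behaviour, or override it with custom behaviour.

DEFAULT_SPELL_BEHAVIOUR = {
    "pull": lambda obj: f"The {obj.name} slides towards you.",
}

class WorldObject:
    anchored = False

    def __init__(self, name):
        self.name = name

    def respond_to(self, spell):
        if self.anchored:
            return f"The {self.name} doesn't budge."   # effectively inert
        custom = getattr(self, f"on_{spell}", None)
        if custom:
            return custom()                            # custom behaviour
        default = DEFAULT_SPELL_BEHAVIOUR.get(spell)
        if default:
            return default(self)                       # default behaviour
        return "Nothing happens."

class Rock(WorldObject):
    pass

class Statue(WorldObject):
    anchored = True

class Pond(WorldObject):
    def on_pull(self):
        return "The water ripples towards you."

for obj in (Rock("rock"), Statue("statue"), Pond("pond")):
    print(obj.respond_to("pull"))
```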

Alas, my implementation didn't work out. (I recall that one problem was that players found it difficult to intuit what a given spell might do.) However, this may well not reflect on the central idea of such a system as you describe -- it's just an anecdote of my own experience.

MWAHAHAHAHAHAHA!!!

My Twitter Account: @EbornIan


In SCUMM, this would just have been "Use place mat on door, use letter opener on keyhole, pull place mat, pick up key, use key with door". Again, there's a loss of specificity -- you can no longer specify *exactly* what's done with the place mat -- but again that's not much of a loss.

Oh, just to follow up on this (I meant to write this earlier but forgot): the problem with the loss of specificity isn't that guessing verbs is good or that a "use" button is bad, it is that if you pare it down this much you open the door to the style of play in which you just try "use"-ing every object on every other object and every room-based interactable.

Having to be specific rules out this kind of guessing because of the combinatorial explosion of verbs and prepositions.
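To put purely illustrative numbers on it: with a single "use" verb, 10 inventory items, and 15 room hotspots, there are only 10 × 15 = 150 item-on-hotspot combinations, few enough to brute-force in an evening. With, say, 10 verbs and 5 prepositions over the same 25 nouns as direct and indirect objects, you are looking at something on the order of 10 × 5 × 25 × 24 = 30,000 possible phrasings, which nobody is going to try exhaustively.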

Prepositional phrases worked for modifying some objects if, I think, the object exposed a property for a slot associated with the preposition -- I'm guessing this is how it worked internally, anyway. For example, in The Hitchhiker's Guide to the Galaxy pretty much the whole solution to the Babel Fish puzzle involved prepositional phrases and the object model they induced: "Hang dressing gown on hook" (so the hook has an "on" slot). "Cover grating with towel" (the grating also has an "on" slot, which <cover> <with> knows to map to). "Place satchel near door." "Place junk mail on satchel."
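If I had to guess at it in code, it might look something like this -- purely a sketch of the slot idea, not how Infocom's ZIL actually spells it:

```python
# Sketch: objects expose named slots keyed by preposition, and verb+preposition
# pairs map onto those slots. Some pairs (like "cover X with Y") also swap which
# noun is the surface/container.

VERB_PREP_TO_SLOT = {
    ("hang", "on"): "on",
    ("put", "on"): "on",
    ("cover", "with"): "on",   # "cover grating with towel" -> towel goes in grating's "on" slot
    ("put", "in"): "in",
}
SWAPPED = {("cover", "with")}  # the second noun is the thing being placed

class Noun:
    def __init__(self, name, slots=()):
        self.name = name
        self.slots = {s: [] for s in slots}

def apply(verb, noun1, prep, noun2):
    slot = VERB_PREP_TO_SLOT.get((verb, prep))
    if slot is None:
        return "You can't do that."
    thing, place = (noun2, noun1) if (verb, prep) in SWAPPED else (noun1, noun2)
    if slot not in place.slots:
        return f"You can't put anything {slot} the {place.name}."
    place.slots[slot].append(thing)
    return f"The {thing.name} is now {slot} the {place.name}."

hook = Noun("hook", slots=("on",))
gown = Noun("dressing gown")
grating = Noun("grating", slots=("on",))
towel = Noun("towel")

print(apply("hang", gown, "on", hook))         # The dressing gown is now on the hook.
print(apply("cover", grating, "with", towel))  # The towel is now on the grating.
print(apply("put", towel, "in", hook))         # You can't put anything in the hook.
```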

I want a pure GUI that is as rich as the Infocom text UI. Ultimately, what I think is interesting about Infocom games is the object model rather than the pseudo-NLP. The object model allowed semi-emergent behavior: you want to put the letter opener on top of the clock? Okay, it is on top of the clock -- because the clock knows it can have stuff on it. You want to put the elephant on the clock? No, the clock says it can't have something that big on it.

This might be overstating the internal richness of the model. Having read the source of some of those games (not HHGG, though), and having programmed in similar systems in the MUD days, I don't recall this kind of relationship between complex user input and a complex object model, especially with respect to prepositions. What complexity the model did have (although I would call it "elegant" rather than complex or rich) wasn't dependent on the richness of the input, because by the time the engine is performing operations on the model the input complexity has been stripped down to nearly SCUMMy simplicity.

I think the reason the graphic adventure games didn't really support the more "emergent" possibilities of a text adventure was more a function of the "other" side of user interface: limitations on game feedback, rather than limitations on user input. When the results of user interaction with the world are expressed solely in text, that's easy, cheap, and flexible the way graphics aren't. You can put anything on anything (that makes sense), in anything (that makes sense), etc. because there's no real additional cost to generating the appropriate sentences. (Consider King's Quest IV, the most complex of the text-input King's Quest games. There's little in KQ4 like the freedom and emergent possibilities of an Infocom game, even though the parser is reasonably sophisticated. The constraining factor wasn't the input, but that results needed to be portrayed visually.)


[...] Having read the source of some of those games (not HHGG, though) [...]

How did you read the source of an Infocom game? Is there code in the public domain for one of them? I mean, for a modern personal computer: Zork was public domain, but that would be DEC FORTRAN.


I think the reason the graphic adventure games didn't really support the more "emergent" possibilities of a text adventure was more a function of the "other" side of user interface: limitations on game feedback, rather than limitations on user input. When the results of user interaction with the world are expressed solely in text, that's easy, cheap, and flexible the way graphics aren't.

Yes, I agree totally with this.

This is why I am saying that Infocom's natural-language thing is kind of just a lot of hype. What you would need to do in graphics, to do what they did in text, is make a "visual language" for representing a very simple object model, i.e. when something is "on" something it always looks like this; when something is "in" something it always looks like this. And then allow the user to interact with it, i.e. when the user "opens" something he always does this.

So in order to do this you would need some kind of stylized graphics, and I am thinking of maybe a mode in which you focus on a room object and the game kind of switches to first person with the visual-language interactability thing enabled. Sort of like when you "examine" an object in text.

I mean, I don't really have this all worked out but that is why I started this thread.

How did you read the source of an Infocom game? Is there code in the public domain for one of them? I mean, for a modern personal computer: Zork was public domain, but that would be DEC FORTRAN.

The original Muddle code for Zork is here (not sure of its licensing): http://retro.co.za/adventure/zork-mdl/ . You can run it, I think, on the Confusion engine. For a modern PC, I think there are also ports of some of them to more recent Inform versions, like 6 and 7. I couldn't say how accurate they are to the originals, but I believe the general execution model hasn't changed much. (Well, for 6. I have no idea what's going on in 7 'cuz I can't read it.)

This is why I am saying that Infocom's natural-language thing is kind of just a lot of hype. What you would need to do in graphics, to do what they did in text, is make a "visual language" for representing a very simple object model, i.e. when something is "on" something it always looks like this; when something is "in" something it always looks like this. And then allow the user to interact with it, i.e. when the user "opens" something he always does this.

So in order to do this you would need some kind of stylized graphics, and I am thinking of maybe a mode in which you focus on a room object and the game kind of switches to first person with the visual-language interactability thing enabled. Sort of like when you "examine" an object in text.

Ah, yeah, I get ya.

Here's an idea, maybe not exactly what you're thinking of but similar. There's a view mode where everything interactive in the scene is represented (maybe as itself, maybe as a label, maybe as a low-poly model of itself textured with an image of its label) a little "spread out" from its representation in the world. In between objects are lines that represent relationships ("in", "on", "belongs to", "near", etc.) Your inventory is all the objects that are connected to you by "belongs to". You change the world by changing these lines. To give a picture to Sam, you move the "belongs to" line from the picture to Sam rather than to you. You move the vase to the floor by moving the vase's "on" line so that it goes to the floor rather than the table. Etc.

This is weirdly artificial but I think it'd work if the game universe was also weirdly artificial (if, say, you're in a simulated reality and gain the ability to see the object model that underlies your world).
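As a rough sketch of the underlying data (invented names, obviously not a real engine), the whole thing could just be a set of labelled edges that the player is allowed to re-point:

```python
# Sketch of the relationship-graph idea: the world state is a set of
# (subject, relation, target) triples, and the player edits it by moving edges.

class RelationGraph:
    def __init__(self):
        self.edges = set()  # (subject, relation, target) triples

    def add(self, subject, relation, target):
        self.edges.add((subject, relation, target))

    def repoint(self, subject, relation, new_target):
        """Move subject's existing `relation` edge so it points at new_target."""
        self.edges = {(s, r, t) for (s, r, t) in self.edges
                      if not (s == subject and r == relation)}
        self.edges.add((subject, relation, new_target))

    def inventory(self, owner):
        return [s for (s, r, t) in self.edges if r == "belongs to" and t == owner]

world = RelationGraph()
world.add("picture", "belongs to", "player")
world.add("vase", "on", "table")

world.repoint("picture", "belongs to", "Sam")  # give the picture to Sam
world.repoint("vase", "on", "floor")           # move the vase to the floor

print(world.inventory("player"))  # []
print(world.inventory("Sam"))     # ['picture']
```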

