
high level game architecture...


transformation    140
Hello, I was thinking about the highest-level set of modules you can have in a game. Immediately I came up with input, simulator, and renderer. Now I'm trying to write down the responsibilities of each. I know responsibilities can vary a lot, but I just want to get the basic ones down.

For the input it's simple: read the input devices and report. But report to what? It definitely reports to the simulator so that the world can update itself. But does it also report to the renderer for GUI purposes? Or does the simulator do that instead?

The simulator is just the world: AI, physics, and collision detection (anything else?). I'm having trouble drawing a line between the renderer and the simulator, though. I think a fair definition of the renderer module is polygon organization. But does hidden surface removal go into the simulator or the renderer? How about lighting? The lighting calculations are definitely part of the simulator because they involve physics, but my understanding is that they're very tightly coupled with the renderer as well, especially nowadays since you have vertex and pixel shaders.

I'd like to get a discussion going on this. I'm sure it will help a lot of people understand games better. Thanks in advance.

A game typically begins in a menu state with several choices (new game, load, options) ... each of which triggers a transition to a new state. How do you plan on representing this structure given that all you have is input, simulator, and renderer?

Another example which will lead you down the same path is this:

Most games have a pause feature. This captures the simulator continuation, halts the simulator, and then later, when the player unpauses, the simulator is re-entered at the captured continuation point.

This 'pause' signal is completely out-of-band as far as the simulator is concerned. It has no effect on the simulation whatsoever ... it only affects the mapping between game time and real time.

So what does this mean for you? A game is always capturing input and providing output ... that's the nature of such an interactive system. But it is not always simulating (or for the pedantic: it is not always simulating the same system). The meta-game activities I mentioned (interacting with a menu, pausing) will cause you pain with your current architecture.
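The "pause only changes the time mapping" point can be sketched in a few lines. This is a minimal illustration, not anything from the thread; the class and field names are made up:

```python
class GameClock:
    """Maps real time onto game time; pausing is out-of-band for the sim."""

    def __init__(self):
        self.game_time = 0.0
        self.paused = False

    def tick(self, real_dt):
        # While paused, real time keeps flowing but game time stands still,
        # so the simulator itself never needs to know a pause happened.
        if not self.paused:
            self.game_time += real_dt
        return self.game_time
```

The simulator is simply stepped by `game_time` deltas, so a paused game is indistinguishable (to the sim) from a game where no time has passed.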

JohnBolton    1372
Quote:
Original post by transformation
I think a fair definition of the renderer module is polygon organization. Though does hidden surface removal go into the simulator or the renderer? How about lighting? The lighting calculations are definitely part of the simulator because they involve physics. But my understanding is that they're very tightly coupled with the renderer as well... especially nowadays since you have vertex and pixel shaders...

The simulator maintains the state of the game/world; the renderer displays it. The simulator should be fundamentally the same regardless of how the game is rendered (or even whether it is rendered at all), though there may be some differences due to optimization and practicality. In some cases, the work is even duplicated.

Unless hidden-surface removal somehow affects the state of the game (I can't see how it could), it should be in the renderer. Unless lighting affects the state of the game, it should be in the renderer (and lighting has nothing to do with physics).

A more difficult line to draw is the line between AI and animation.
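The split above can be illustrated with a toy sketch (Python; all names here are invented for illustration). The simulator owns and advances the world state; the renderer only reads it, so it can be swapped out, or dropped entirely, without touching the sim:

```python
class Simulator:
    """Owns world state and advances it; knows nothing about display."""

    def __init__(self):
        self.positions = {"player": (0.0, 0.0)}

    def step(self, dt):
        # A deliberately trivial bit of "physics": move the player right.
        x, y = self.positions["player"]
        self.positions["player"] = (x + 1.0 * dt, y)


class TextRenderer:
    """Reads simulation state and presents it; never mutates the sim."""

    def render(self, sim):
        return [f"{name} at {pos}" for name, pos in sim.positions.items()]
```

A GL renderer, a console renderer, or no renderer at all could sit on the same `Simulator` unchanged, which is exactly the test of whether the line was drawn in the right place.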

Kitt3n    468
>A more difficult line to draw is the line between AI and animation.

The 'logic/sim' should manage the state machine (i.e. idle, walk, attack, ...)
and send a signal about the current animation to the render system.
The logic should also be able to scan the animation for marker points (i.e.
at this point in the animation, play a footstep sound, or trigger a 'hit'
effect), independently of the render system (because the renderer might or
might not be there, and you don't want to re-implement this standard
behaviour for each renderer).

The AI will have an overview of what is in the world, and use the state
machine to achieve the desired behaviour.
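The marker-point idea above can be sketched in a few lines (hypothetical data and function names, Python for illustration): the logic scans the current animation's markers each tick and fires the events itself, with no renderer involved.

```python
# Markers for a hypothetical walk cycle: (normalized time, event name).
WALK_MARKERS = [(0.2, "footstep"), (0.7, "footstep")]


def fire_markers(markers, prev_t, now_t):
    """Return the events whose marker time falls in (prev_t, now_t]."""
    return [event for t, event in markers if prev_t < t <= now_t]
```

Because the scan keys off simulation time rather than rendered frames, the footstep sounds still fire correctly on a dedicated server or with rendering disabled.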

Separating the rendering from the logic is really interesting. In my current
project, we can disable rendering entirely, or we could do a 'console
renderer'; the rendering itself is more or less irrelevant.

In my case, we have a core library, a render library and a logic library
(both depending on the core library), a logic-AI lib, and finally the game
lib, which brings together the renderer and the logic and also handles
things like the GUI.
The advantage is that you can't make 'wrong' connections, e.g. the renderer
doing some kind of UI state switch. You are forced to do the UI state switch
in the game module, because you just can't access the GUI from the logic,
core, or render modules.

transformation    140
Quote:
A game typically begins in a menu state with several choices (new game, load, options) ... each of which triggers a transition to a new state. How do you plan on representing this structure given that all you have is input, simulator, and renderer?

Menu state would just be a form of input, would it not? That was one of my initial questions: does the input module report to the renderer? I think it should, for this very situation. And the pause: that would just stop the simulator and most of the input mechanisms... Maybe I didn't understand you properly, but is there any module other than simulator, input, and renderer (or, more precisely, presentation)?

Quote:
Unless hidden-surface removal somehow affects the state of the game (I can't see how it could), it should be in the renderer. Unless lighting affects the state of the game, it should be in the renderer (and lighting has nothing to do with physics).


That makes sense.

Quote:
A more difficult line to draw is the line between AI and animation.


But they'd both be part of the simulator, right? All the calculations, I mean. The positioning of a character is just another state of an object within the world, so where and how it's positioned would be determined within the simulator. And wouldn't the same go for AI?

Quote:
Original post by kitt3n
...


So the UI is left... Do you think it'd be fair to say it's part of the input module?

BeerNutts    4401
Quote:
I was thinking about the highest-level set of modules you can have in a game. Immediately I came up with input, simulator, and renderer.


I can beat that:
Module list: Game Module.
Do I win a prize?

FWIW, in my game, I had the modules you're talking about. The simplest module is input: it just passes the input to the sim/logic.

The sim/logic must then act on the input and update your character. Next it must update the other objects/characters in the game. Then it checks the interactions between the objects/characters and your character.

Finally, it reacts to those interactions and updates the game world stats (health changed, armor picked up, etc.).

When the logic is complete, it should have created a new world image that the renderer can easily take and display on the screen.

Repeat.
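That per-frame order can be sketched as a tiny Python loop. The `World` class and its methods here are placeholders I made up to show the sequencing, not a real engine API; this toy version just logs each phase so the order is visible:

```python
class World:
    """Stand-in world that records the update phases in order."""

    def __init__(self):
        self.log = []

    def apply_input(self, player_input):
        self.log.append("input")          # act on input, update your character

    def update_npcs(self):
        self.log.append("npcs")           # update the other objects/characters

    def resolve_interactions(self):
        self.log.append("interactions")   # collisions between them and you

    def update_stats(self):
        self.log.append("stats")          # health changed, armor picked up...

    def snapshot(self):
        return list(self.log)             # the "world image" for the renderer


def frame(world, player_input):
    world.apply_input(player_input)
    world.update_npcs()
    world.resolve_interactions()
    world.update_stats()
    return world.snapshot()
```

The renderer then consumes the snapshot and the whole thing repeats.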

Kitt3n    468
>does the input module report to the renderer?
No. The render module should do what the name suggests, "render things",
not process things. The renderer should NOT decide to move objects,
change menus, or anything like that.

The simulation should process things; the renderer takes the current
state of the simulation and displays it. User input is what makes the
simulation do something useful (it provides the input for your
simulation).

>So the UI is left... Do you think it'd be fair to say it's part of the
>input module?

If you see 'input' as capturing keystrokes/mouse clicks, I would
add a "base library" module with all kinds of useful functionality.

Then I would define a "game module" which, using the input functionality
of your base library, provides a thin wrapper to abstract these key/mouse
events (e.g. the 'w' key was pressed) into simulation events (e.g. move
forward). Furthermore, it will manage capturing input, advancing the
simulation, making the render call, and handling the UI parts/menus.
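That thin wrapper might look like this (a sketch with invented names; Python for illustration). The binding table is plain data, so remapping keys never touches the simulation code:

```python
# Hypothetical key-to-simulation-event bindings.
KEY_BINDINGS = {"w": "move_forward", "s": "move_back", "escape": "open_menu"}


def translate(raw_keys, bindings=KEY_BINDINGS):
    """Turn raw key events into simulation events, dropping unbound keys."""
    return [bindings[key] for key in raw_keys if key in bindings]
```

The simulator only ever sees events like `"move_forward"`, never the letter 'w'.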

Just my 2 ct :)

ApochPiQ    23061
IMHO your model is too simplistic. You need to take into account more aspects of the game. When you find yourself having trouble figuring out what part of your design something fits into, it's usually a good sign that the design itself is missing something. I'd suggest adding a few more areas, at which point you should find things much easier to work with.

The first thing to do is look at things from a very low level of abstraction. At the lowest level of abstraction we have basically three things: an input pump, an output pump, and a state loop that connects them. The input pump will read keyboard, mouse, joystick, and any other input as appropriate - but it is responsible only for reading it and packaging it in a way that makes sense to the rest of the game code. The output pump is similar, but in reverse - it gets a list of things to display, and displays them - nothing more, nothing less. Finally, the state loop basically just repeatedly takes input, produces output, and ends when the game exits. The state loop also serves as our connection point to higher levels of abstraction.
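The lowest layer described above might be sketched like this (Python, with made-up names; the pumps and the step function are injected so the loop itself stays trivial):

```python
def state_loop(input_pump, output_pump, step):
    """Repeatedly take input, produce output, end when the game exits."""
    running = True
    frames = 0
    while running:
        events = input_pump()              # read and package raw input
        running, drawables = step(events)  # higher layers decide what happens
        output_pump(drawables)             # display the list, nothing more
        frames += 1
    return frames
```

Everything interesting, menus, gameplay, UI, lives inside `step`, which is the connection point to the higher layers of abstraction.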

In the next layer of abstraction, we start to do useful stuff. Here we build the game state machine: intro, menus, gameplay, exit (or whatever). This layer consists of three primary components: user input dispatcher, user interface service logic, and plumbing logic. The input dispatcher pulls input from the input pump and acts on it, talking to other parts of the system as needed. The UI service logic takes care of things like drawing custom GUI elements, displaying prompts and messages to the user, and so on; this is the "framework" that the UI itself is built on. Finally, the plumbing logic acts as the connection between the UI code and game logic itself. The plumbing layer is usually quite thin.

With these two layers to build on, we get to the good stuff - game logic. The game logic is usually split into two aspects: internal game logic, and user interface logic. The internal logic is the actual gameplay. The UI logic is stuff like the GUI, menus, key mapping, and so on. The actual gameplay logic is likely to be the bulk of the game for most nontrivial games - this includes stuff like AI, collision detection, win-condition checking, etc. etc. In large games it is common for further layers of abstraction to be added on top of this simply to build the game logic in a more efficient manner.



Of course as with all design plans this is not 100% applicable to everything. In some cases, layers may be so thin that they barely even exist. Some layers can literally be removed in certain games. Sometimes a seemingly trivial or small element can account for a massive chunk of a game's code. Generalizations are usually inaccurate; YMMV; and other standard disclaimers apply.

