In Game I might have a World class which also has Update() and Draw() methods. Inside World there is a Player, but the Player class should be able to call a function in Graphics when a key is pressed. I can't do that in either Update() or Draw(), since each is passed only one of Input and Graphics. In this case I find it really useful for one of them to be global, since then I don't have to bother with this issue. I can see two options to solve this without the use of globals:
- Change Game::Update() and World::Update() to take a Graphics* parameter.
- Add a Graphics* mGraphics member variable to each class that needs it, and set it with SetGraphics(Graphics* pGraphics) during initialization (but what if I forget to set it?)
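To make the two options concrete, here is a minimal sketch of both; the Graphics stand-in and its drawSprite() method are hypothetical, invented purely for illustration:

```cpp
#include <cassert>

// Hypothetical minimal Graphics stand-in, for illustration only.
struct Graphics {
    int drawCalls = 0;
    void drawSprite() { ++drawCalls; }
};

// Option 1: thread Graphics through the call chain as a parameter.
struct WorldA {
    void Update(Graphics* pGraphics) { pGraphics->drawSprite(); }
};

// Option 2: store a Graphics* member, set once at initialization.
struct WorldB {
    Graphics* mGraphics = nullptr;  // stays null if SetGraphics is forgotten
    void SetGraphics(Graphics* pGraphics) { mGraphics = pGraphics; }
    void Update() {
        if (mGraphics)              // guard against a forgotten SetGraphics call
            mGraphics->drawSprite();
    }
};
```

Note that option 2 needs the null check (or an assertion) precisely because of the "what if I forget to set it?" problem.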
Maybe there are more options, what do you recommend doing?
Ah, but as I noted before, I would argue that the problem here is with the design whereby "the Player class should be able to call a function in Graphics when a key is pressed." That gives the player object responsibility over two very distinct systems (graphics and input), especially when the player appears to be nothing more in this design than an entity within the logical world. You are conflating, and thus coupling, your logical objects with your input and your rendering. With a bunch of singletons/globals this is less obvious, but when you remove them, as you have suggested here, you discover those dependencies.
Ideally all three of those (logic, rendering, input processing) should be independent of each other. Communication between them should occur via the higher level container for each subsystem (in this case, the game itself, so perhaps in your "game" class).
The input system collects a bunch of keyboard events, or whatnot. Your game translates those (maybe via some kind of action mapper). Then it asks the world for the currently-controlled entity and uses that entity's public interface to apply the actions. Once all that is done and the rest of the world goes through its simulation step, the game collects all the visible/renderable entities, constructs (or updates) rendering descriptions for them, and feeds those descriptions to the renderer object itself.
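That flow can be sketched as a single per-frame method on the game object. Everything below (KeyEvent, Entity, Sprite, the 'V' key binding, the animation indices) is an illustrative assumption, not code from the original design:

```cpp
#include <vector>

// Hypothetical stand-ins for the three subsystems; names are illustrative.
struct KeyEvent { char key; };

struct Entity {
    bool weaponsDrawn = false;
    void toggleWeapons() { weaponsDrawn = !weaponsDrawn; }
};

struct World {
    Entity controlled;
    Entity* controlledEntity() { return &controlled; }
    void simulate() { /* physics, AI, the rest of the world step */ }
};

struct Sprite { int animation; };

struct Renderer {
    std::vector<Sprite> submitted;
    void submit(const Sprite& s) { submitted.push_back(s); }
};

// The game ties the subsystems together; none of them knows about the others.
struct Game {
    World world;
    Renderer renderer;

    void frame(const std::vector<KeyEvent>& events) {
        // 1. Translate raw input into actions on the controlled entity.
        Entity* e = world.controlledEntity();
        for (const KeyEvent& ev : events)
            if (ev.key == 'V') e->toggleWeapons();

        // 2. Advance the simulation.
        world.simulate();

        // 3. Build render descriptions from entity state; the renderer
        //    only ever sees sprites, never entities or key codes.
        renderer.submit(Sprite{ e->weaponsDrawn ? 1 : 0 });
    }
};
```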
In this system:
- The "Player" class makes the most sense as the action mapper; all it holds is a pointer to the currently-controlled world entity, plus a bunch of methods to translate unmapped key codes into actions based on the entity's state (for example, making the "V" key either draw or stow the entity's weapons, depending on the entity's current "are weapons drawn" state). The responsibility of the player is to change entity state.
- World logic, input logic and rendering are all independent, tied together at the topmost level by the game itself, which uses a Player object to translate between the input and world systems.
- Similarly, the topmost game object also feeds input to the graphics by iterating all the active and renderable game entities and creating render descriptions (sprites, for example) using that data. The game determines whether to use animation sequence 0 (weapons stowed) or 1 (weapons drawn) for an entity based on that entity's state, but the graphics system only knows about and only has to see a bunch of sprites.
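A sketch of the Player-as-action-mapper idea from the first bullet; the Action enum, the 'V' binding, and the weapons flag are all hypothetical names chosen for the example:

```cpp
// Hypothetical entity with the state the mapper consults.
struct Entity {
    bool weaponsDrawn = false;
};

enum class Action { None, DrawWeapons, StowWeapons };

// The Player owns no graphics and no input device: it only translates
// raw key codes into actions based on the controlled entity's state,
// and applies those actions back to the entity.
struct Player {
    Entity* controlled = nullptr;

    Action mapKey(char key) const {
        if (key == 'V')
            return controlled->weaponsDrawn ? Action::StowWeapons
                                            : Action::DrawWeapons;
        return Action::None;
    }

    void apply(Action a) {
        if (a == Action::DrawWeapons) controlled->weaponsDrawn = true;
        if (a == Action::StowWeapons) controlled->weaponsDrawn = false;
    }
};
```

Because the Player touches only entity state, it can be unit-tested without a window, a renderer, or a real input device.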
You arrive at the quandary you posed about the player object by taking the OO mentality too far -- OO design is about modelling objects in real-world terms, but only to an extent. Your initial description of a Player models a human player very literally -- it is the human player who, however indirectly, causes graphics to change when they press keys. But the in-code version of that object shouldn't be designed that literally, because doing so leads to exactly the kind of dependency spaghetti you encountered.