Posted 24 October 2012 - 06:28 AM
Then each of your game objects can obtain one of those object types, register it somewhere, and then upload new data to it (like changing the texture). However, I don't think you should do GameObject.Draw(); rather, make the "somewhere" where the renderable objects are registered draw them all at once when requested.
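A minimal sketch of that "somewhere" (the names Renderable and RenderQueue are made up for illustration, not from any library):

```cpp
#include <vector>

// Anything drawable implements this.
struct Renderable {
    virtual ~Renderable() = default;
    virtual void draw() = 0;
};

// The "somewhere": game objects register their renderable here,
// and the queue draws everything in one pass when requested.
class RenderQueue {
public:
    void add(Renderable* r) { items.push_back(r); }
    void drawAll() { for (Renderable* r : items) r->draw(); }
    int count() const { return static_cast<int>(items.size()); }
private:
    std::vector<Renderable*> items;
};
```

The point is that nothing outside the queue ever calls draw() on an individual object; the game objects only push data in.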
That's how I would do it, at least.
class GameObject { // Moves data between a LogicalGameObject and a GraphicalGameObject. E.g. if it's a player, the LGO has the position, which the GameObject copies to the GGO so it renders in the correct place.
    LogicalGameObject lgo;   // handles the modeling side: physics etc. can work independently.
    GraphicalGameObject ggo; // makes a visual representation of lgo, using data passed to it from lgo by the GameObject.
};
(is this MVC by the way? looks like it...)
The benefit of this is that you can run the game logic without any of the graphics stuff, and it works just as it would with your monitor turned off. All the code linking the logical representation to the graphics etc. (sound? network replication?) lives in the GameObject, so if you want to change how the logical object is rendered, all you need to change is how the GameObject class is implemented.
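The glue role described above can be sketched like this (LogicalGameObject, GraphicalGameObject, and the update() signature are all illustrative, not a real API):

```cpp
struct Vec2 { float x, y; };

// Model side: physics runs on its own, knows nothing about rendering.
struct LogicalGameObject {
    Vec2 position{0, 0};
    void integrate(float dt, Vec2 velocity) {
        position.x += velocity.x * dt;
        position.y += velocity.y * dt;
    }
};

// Visual side: only knows where to draw, not why.
struct GraphicalGameObject {
    Vec2 drawPosition{0, 0};
};

// The GameObject is pure glue: it runs the logic, then copies
// the result across to the graphical representation.
class GameObject {
public:
    void update(float dt, Vec2 velocity) {
        lgo.integrate(dt, velocity);
        ggo.drawPosition = lgo.position;
    }
    LogicalGameObject lgo;
    GraphicalGameObject ggo;
};
```

Running the logic "with the monitor off" then just means never reading drawPosition.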
Let's say you have a network replication object (it replicates the object based on data from the network), so what you receive is the server's state of that object from some time ago, already outdated.
What you could do is have two LogicalGameObjects: one existing in a local World, and the network-replicated one in a ServerWorld. The GameObject would thus contain both what the server saw some time ago and what you see. The GameObject could then tell something (another class, ClientServerWorldsSmoothUnifier? xD) to try to make the local world smoothly match the server world, with prediction and all. (E.g. SmoothlyUnifyWorldATowardsB(A,B), or SmoothlyUnifyATowardsBSoIfIShootItTheServerWontTryToTellMeHeWas2MetersAway(A,B).)
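One simple way to "smoothly unify A towards B" is to nudge the local position a fraction of the way toward the server position each tick instead of snapping. This is just a sketch of the idea; the function name and the alpha factor are made up:

```cpp
struct Position { float x, y; };

// Move 'local' a fraction (alpha in [0,1]) of the remaining
// distance toward 'server'. Called once per tick, this converges
// on the server state without a visible teleport.
Position smoothlyUnifyATowardsB(Position local, Position server, float alpha) {
    return Position{
        local.x + (server.x - local.x) * alpha,
        local.y + (server.y - local.y) * alpha,
    };
}
```

Real client-side prediction does more (replaying unacknowledged inputs, etc.), but the smoothing step looks roughly like this.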
So the GameObject would observe the "physics" (the logical version) and so on, and pass the relevant information between the components so they all react correctly to each other; it also lets you remove a component by simply no longer passing data to it. (A bad way to do it would be to, say, play a sound directly inside the hit-detection code and draw the object from inside the physics code. Instead, you want to tell the owner of the physical representation when those events happen, so it can tell rendering and sound to do their thing.)
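The event flow in that parenthesis can be sketched like this (all names here are illustrative): physics never touches audio or rendering directly, it only reports "hit" to its owner, which forwards the event to whichever components care.

```cpp
#include <string>
#include <vector>

struct AudioComponent {
    std::vector<std::string> played;
    void onEvent(const std::string& e) {
        if (e == "hit") played.push_back("hit_sound");
    }
};

struct RenderComponent {
    std::vector<std::string> effects;
    void onEvent(const std::string& e) {
        if (e == "hit") effects.push_back("hit_flash");
    }
};

class GameObject {
public:
    // Physics calls this instead of touching audio/render itself.
    void notify(const std::string& e) {
        audio.onEvent(e);
        render.onEvent(e);
    }
    AudioComponent audio;
    RenderComponent render;
};
```

Dropping the audio component then just means the owner stops forwarding events to it; the physics code never changes.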
Controls (e.g. WASD for a character) are also a representation of the object, but a very shallow one. Pressing W, for example, says "the object tries to move forward", just like the graphical representation says "the area occupied by the object's surface looks like this from this position in the world", which is also shallow. Or the network says "the object's position was this an unspecified amount of time ago, and here's some data that might help you figure out what happened during that time."
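Concretely, the input layer can just translate keys into intents and leave the logical object to decide what "move forward" means (the Intent enum and mapping are made up for illustration):

```cpp
// Input as a shallow representation: keys map to intents,
// not directly to position changes or animations.
enum class Intent { None, MoveForward, MoveBack, MoveLeft, MoveRight };

Intent intentForKey(char key) {
    switch (key) {
        case 'w': return Intent::MoveForward;
        case 's': return Intent::MoveBack;
        case 'a': return Intent::MoveLeft;
        case 'd': return Intent::MoveRight;
        default:  return Intent::None;
    }
}
```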
Separate the logical (physics etc.), rendering, audio, network, etc. representations of an object, and have a higher-level class contain those components and pass relevant data from the components that produce it to the components that use it.
You should have renderable object types (grouped by how they're rendered) draw themselves using an abstract renderer interface, so you can implement it with OpenGL and later switch to DirectX. (Or perhaps you don't need a renderer with any functionality beyond render() if you use, say, SFML?)
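Such an interface might look like this; IRenderer, drawSprite, and the stub backend are illustrative stand-ins, not real OpenGL/DirectX calls:

```cpp
#include <string>

// The abstract interface renderable objects draw themselves through.
struct IRenderer {
    virtual ~IRenderer() = default;
    virtual void drawSprite(const std::string& textureId, float x, float y) = 0;
};

// A stub backend that just counts draw calls; a real OpenGL or
// DirectX backend would implement the same interface.
class CountingRenderer : public IRenderer {
public:
    int drawCalls = 0;
    void drawSprite(const std::string&, float, float) override { ++drawCalls; }
};

// A renderable type: it knows what it looks like, but issues all
// drawing through the interface, so swapping backends never touches it.
struct Sprite {
    std::string textureId;
    float x = 0, y = 0;
    void draw(IRenderer& r) const { r.drawSprite(textureId, x, y); }
};
```

Swapping OpenGL for DirectX is then just constructing a different IRenderer implementation and handing it to the same objects.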