DeafManNoEars

Looking for Advice On Design


Looking for opinions on how you handle this sort of thing in your own applications:

1) Do you prefer to have objects take care of themselves (i.e. a RenderableObject that holds a pointer to the render device and draws itself)?

2) Or would you prefer registering a RenderableObject with the render device and having the device take care of the rendering?

This is what I was thinking; please tell me how much this sucks, as I have no formal training and limited experience. Actually I have a lot of experience, but I have never really exposed my source code or designs to people for critiquing.

I want to have a GameObject that owns a pointer to a RenderableObject and a PhysicsObject. RenderableObject and PhysicsObject are abstract classes with one or two functions (Render( float fET, DWORD data ) for the former and Update( float fET ) for the latter). I was going to derive, say, a MeshObject from RenderableObject. So I derive a Player from GameObject, instantiate this Player, and register it with the RenderDevice and the PhysicsDevice. The render device keeps a list of all materials and textures used, as well as what items are currently in view, and calls GameObject->IRenderableObject->Render(...) on the appropriate objects, while the PhysicsDevice calls GameObject->IPhysicsObject->Update(...).

My concern is that I don't think I like casting the RenderableObject pointer in the GameObject/Player into a mesh for the player and then back to a RenderableObject for the render device. I also don't know whether I should pass a temporary pointer to the RenderDevice into the RenderableObject to aid in the rendering and loading functions, or whether the RenderableObject should just own a pointer to the device. The RenderDevice and PhysicsDevice take care of all memory management on all registered components. It is the GameObject's duty to release all non-physics and non-rendering data.

My GameObject maintains a flag indicating whether it has been registered and can clean up on its own if it hasn't been.

3) Is there a way I can force an object to be registered? (Should I have a factory function in my RenderDevice that creates the appropriate object and passes back a pointer to an IRenderableObject?)

4) Is it OK that I don't force registration and just keep track of whether an object has been registered? It seems improper to me.

5) Can somebody point me to some documentation that describes the pattern I am looking for, suggest a better idea, or explain what is flawed in my design?

6) How can I make GameObject a cleaner composition of derived RenderableObjects and PhysicsObjects?

I guess I am really just looking to hear how others take care of these things. I can always think of many different design solutions that work, but I don't have any good metrics to weed out the poor, breakable designs and select the more appropriate ones. Thank you for your time.

Seth

[Edited by - DeafManNoEars on August 13, 2007 3:51:44 PM]

Well I think you're on the right track with composition. Using a Renderable object and Physics object is preferable to some kind of messy multiple inheritance setup.

I personally would recommend keeping the renderable mesh object as more of a data store for the mesh data. Let the render device worry about what to do with that data. That way it can batch it, sort it, do whatever it wants with the data without the renderable having to worry about the rendering logic.

To clean up your GameObject what you might want to do is have a map of component objects. Each component could have a virtual execute function or something like that. When you call execute it would call update or render or any other logic which needs to get updated on a frame by frame basis.
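A minimal sketch of that idea, assuming names of my own invention (Component, execute, and the string key are not from the post, just one plausible shape):

```cpp
#include <map>
#include <memory>
#include <string>

// Hypothetical component base: each subsystem-facing piece of a GameObject
// derives from this and implements execute(), called once per frame.
class Component {
public:
    virtual ~Component() = default;
    virtual void execute(float elapsed) = 0;
};

// Example component: a render component that just counts frames drawn.
class RenderComponent : public Component {
public:
    void execute(float) override { ++framesDrawn; }
    int framesDrawn = 0;
};

class GameObject {
public:
    void addComponent(const std::string& name, std::unique_ptr<Component> c) {
        components[name] = std::move(c);
    }
    Component* getComponent(const std::string& name) {
        auto it = components.find(name);
        return it == components.end() ? nullptr : it->second.get();
    }
    // Per-frame update: execute every registered component.
    void executeAll(float elapsed) {
        for (auto& kv : components) kv.second->execute(elapsed);
    }
private:
    std::map<std::string, std::unique_ptr<Component>> components;
};
```

The map lets you look components up by role while still driving them all through one virtual call.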

Quote:
Original post by Dancin_Fool
Using a Renderable object and Physics object is preferable to some kind of messy multiple inheritance setup.


Yeah, I didn't want to mess around with keeping track of all that. I want a simple yet powerful interface to start with that I can expand easily.

Quote:
Original post by Dancin_Fool
I personally would recommend keeping the renderable mesh object as more of a data store for the mesh data. Let the render device worry about what to do with that data. That way it can batch it, sort it, do whatever it wants with the data without the renderable having to worry about the rendering logic.


And for the loading of the mesh. Would you suggest the GameObject pass the filename to the RenderDevice in the RegisterComponent method and allow the device to perform all loading tasks as well?

Quote:
Original post by Dancin_Fool
To clean up your GameObject what you might want to do is have a map of component objects. Each component could have a virtual execute function or something like that. When you call execute it would call update or render or any other logic which needs to get updated on a frame by frame basis.


That is almost exactly what I was thinking.

I picked up some bad habits through the typical design-as-you-code approach, instead of designing before coding. So I never really developed a taste for the different patterns and designs.

I want this project to be well thought out, simple and most importantly (GET COMPLETED).



I think I can summarise your problem: It sounds like you're struggling with the problem of cleanly interfacing your lower-level APIs (rendering and physics) with the higher-level ones (actual scene entities).

I would use a combination of the mediator pattern and state pattern to provide a clean separation of responsibilities and a loosely coupled link between high level scene objects and low level rendering/physics.

On the low level side:
The RenderDevice only knows how to draw polygons with textures etc
The PhysicsDevice only knows about bounding volumes and velocity etc

On the high level side:
The GameObject just contains data for everything it needs to know about itself [smile]


The RenderDevice and PhysicsDevice do not know anything about GameObjects...not a sausage.
GameObjects do not know anything about the RenderDevice and PhysicsDevice either.

So how do we get them to talk if they don't know the other exists?
Answer: the mediator and, to a minor extent, the state pattern.

The mediator is a class that lives somewhere between the high-level and low-level code; it knows about GameObjects and it knows about the Render/Physics devices. It is responsible for requesting data (state) from GameObjects and passing it to the devices.

What sort of data do we get from a GameObject? Well we request a state class.

When we want to render a GameObject we request a RenderChunk. This class contains the vertex data, texture data/handles, shaders, blend states; basically anything that's useful to rendering. All of this is renderable state.
The GameObject stores a RenderChunk instance; the mediator requests a pointer to it from the GameObject and passes it to the RenderDevice, which can use it to draw the object.

Same goes for physics: the mediator requests a pointer to a MoveableChunk (or whatever you want to call it) and then passes it to the PhysicsDevice for processing.

Each frame you just call render, or update, or process on your mediator. E.g. if your mediator was called SceneManager, then something like this would work:
sceneManager->process( linkedListOfVisibleGameObjects );

Your GameObjects might just look like this:

class GameObject
{
public:
    RenderChunk*   getRenderChunk()   { return &renderChunk; }
    MoveableChunk* getMoveableChunk() { return &moveableChunk; }

    // Other stuff
private:
    RenderChunk   renderChunk;
    MoveableChunk moveableChunk;
};


No need for icky multiple inheritance here [smile]

You may or may not want the RenderDevice or PhysicsDevice to cache (hold on to) the pointers to the state chunks, it depends on exactly how you implement things really.
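A sketch of how the mediator itself might be wired, following the naming in this post (the device method names draw/step and the counters are my own placeholders, not a definitive implementation):

```cpp
#include <vector>

// Minimal state chunks: stand-ins for real vertex/velocity data.
struct RenderChunk   { int vertexCount = 0; };
struct MoveableChunk { float velocity = 0.0f; };

struct GameObject {
    RenderChunk*   getRenderChunk()   { return &renderChunk; }
    MoveableChunk* getMoveableChunk() { return &moveableChunk; }
    RenderChunk   renderChunk;
    MoveableChunk moveableChunk;
};

// Low-level devices know only about chunks, never about GameObjects.
struct RenderDevice  { int chunksDrawn = 0; void draw(RenderChunk*)   { ++chunksDrawn; } };
struct PhysicsDevice { int chunksMoved = 0; void step(MoveableChunk*) { ++chunksMoved; } };

// The mediator: the only class that knows both sides exist.
class SceneManager {
public:
    SceneManager(RenderDevice& r, PhysicsDevice& p) : render(r), physics(p) {}
    void process(std::vector<GameObject*>& visible) {
        for (GameObject* go : visible) {
            physics.step(go->getMoveableChunk());  // pass physics state down
            render.draw(go->getRenderChunk());     // pass render state down
        }
    }
private:
    RenderDevice&  render;
    PhysicsDevice& physics;
};
```

Neither side holds a pointer to the other; only the SceneManager ties them together per frame.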

Edit:
A good design technique to code by is the 'Single Responsibility Principle': basically you don't give a class more than one responsibility.

To demonstrate how the design I suggested above follows this principle:
GameObject - Responsible for knowing itself (read: aggregating state chunks)
RenderChunk - Stores state useful for rendering
MoveableChunk - Stores state useful for physics processing
RenderDevice - Renders textured/shaded vertices as provided by a RenderChunk
PhysicsDevice - Updates and processes MoveableChunks
SceneManager - Mediates state-passing between GameObjects and the Render/Physics devices

[Edited by - dmatter on August 13, 2007 5:18:11 PM]

Quote:
Original post by DeafManNoEars

And for the loading of the mesh. Would you suggest the GameObject pass the filename to the RenderDevice in the RegisterComponent method and allow the device to perform all loading tasks as well?



No, I would recommend a separate class for loading mesh files. This object would know how to parse a mesh file: you pass in an empty mesh object and it populates it with the appropriate data. The mesh could then be added as a component to your game object.
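A sketch of that separation, with hypothetical names (MeshFileLoader and the stub triangle data are my own; a real loader would actually parse the file format):

```cpp
#include <string>
#include <vector>

// Placeholder mesh component; real code would hold vertices, indices,
// texture coordinates, and so on.
struct MeshObject {
    std::vector<float> vertices;
    bool loaded = false;
};

// Separate loader class: it knows the file format and nothing else.
class MeshFileLoader {
public:
    // Populate an empty mesh. A real loader would parse `filename`;
    // here we fake a single triangle to keep the sketch self-contained.
    bool load(const std::string& filename, MeshObject& out) {
        if (filename.empty()) return false;   // trivial validation stand-in
        out.vertices = {0.f, 0.f,  1.f, 0.f,  0.f, 1.f};
        out.loaded = true;
        return true;
    }
};
```

The GameObject never touches file parsing; it just receives a filled-in MeshObject as one of its components.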

There are a few good examples of component-based game engines out there. I personally don't recommend going completely component-based. I think every design has its place, and a good balance between composition and hierarchy is the way to go.

My engine consists of a micro-kernel which works like a small operating system. It controls the delegation of tasks and manages the various subsystems. I've found it to be a good design which scales well.

OK so,

Quote:
Original post by dmatter
I think I can summarise your problem: It sounds like you're struggling with the problem of cleanly interfacing your lower-level APIs (rendering and physics) with the higher-level ones (actual scene entities).


Yup, that is exactly it! ;)

Quote:
Original post by dmatter
... mediator pattern and state pattern to provide a clean separation of responsibilities ...

So how do we get them to talk if they don't know the other exists?
Answer: the mediator and, to a minor extent, the state pattern


That is what I am looking for. Thank you.

I think my real problem was the following thought process: "If some of my goals are encapsulation and loosely coupled design, then I couldn't possibly want GameObject to know about RenderDevice. If I don't want GameObject to know about RenderDevice, then I must not want anything except my main game class to know about both of them and tie them together."

So, even when a third-party "mediator" ties them together, their data and implementation are still independent of each other's. It also seems that if the mediator is dealing with their data and not their implementation, it can remain loosely coupled to either.

Quote:
Original post by dmatter
What sort of data do we get from a GameObject? Well we request a state class.

I can see this extending well to multithreading


I think I "get it".

Quote:
Original post by Dancin_Fool
No I would recommend another class for loading of mesh files.


This only seems natural now.

Thank you all. I am sure there are myriad possibilities; thank you for making one clear. This is where I was heading, and I just needed some guidance. I appreciate it.

rate-ups all around.

Quote:
Original post by DeafManNoEars
I think my real problem was the following thought process..."If some of my goals are Encapsulation and loosely coupled design, then I couldn't possibly want GameObject to know about RenderDevice. If I don't want GameObject to know about RenderDevice then I must not want anything to know about both of them, tying them together except my main gameclass."


Many people struggle with getting this part of their engine pipeline to work cleanly.
I think the problem stems from the fact that when dealing with C++ and OOP, people tend to lean towards idioms where the data in a class is only operated on by the class itself, so:
"We're disinterested in the internal state of a class; we care only about what functions or services the class provides for us."

This is typically a good approach because the internal data (or state) of a class is usually tightly linked to its implementation and hiding data and implementation is what encapsulation, abstraction and OOP is all about.

However, getting this particular part of the pipeline to work efficiently is difficult using that approach.
Instead we need a more state-oriented approach: we need to think of state as an object in itself, so that in a way the state is the interface and is not tightly linked to the implementation.

So:

A state chunk "is" the state
A GameObject "owns" state
A state loader "initialises" the state
A device "uses" the state

Each class has a single responsibility, and everything centers around the concept of state being an object itself, so it's all still OOP.

Quote:
So, even when a third party "mediator" ties them together, their data and implementation are still independent of eachother's. It also seems that if the mediator is dealing with their data and not their implementation this can extend to the mediator being loosely coupled to either.

Yes, exactly.
Since the state itself is in effect an object that interfaces between the GameObject and the devices, everything is loosely coupled, even the mediator [smile]

[Edited by - dmatter on August 13, 2007 5:23:01 PM]

Hi, I'm working on the same part of my "engine" and have a few questions/suggestions:


1:

One thing I'm having trouble with here is the term "Device". I know it's used quite a bit when dealing with DirectX, but I've only used OpenGL. I believe DirectX can render to several windows (devices) at the same time, while in OpenGL you have to switch the current context using something like wglMakeCurrent( HDC hdc, HGLRC hglrc ).

How much does the RenderDevice handle? Is it responsible for "allocating" a window to draw in (i.e. storing things like the RC and hWnd on Windows)? I realize this is implementation dependent, but one option must be better than the others, right? What about setting the camera position? I was thinking that a Camera class acting as a "utility class" under RenderDevice would be a good idea. The user could then create an instance of this and pass it to RenderDevice::SetCamera( Camera camera ).


2:

I would like the RenderChunk to be part of the RenderDevice code (almost like in our own rendering API). Maybe even have RenderChunk in the RenderDevice namespace so that it would be created with

RenderDevice::RenderChunk* data=new RenderDevice::RenderChunk;

In this "API" I would also like a class, perhaps named RenderChunkManager, with a method like

RenderChunk* LoadMesh(const char* filename)

This would load a mesh from disk and pack the data into a RenderChunk instance. It would then add this instance to a list (storing the filename alongside it in the list element) and return a pointer to that instance. So when the GameObject is asked for the data to render it, we just supply this pointer. This method would be called by our GameObject init code (e.g. when an entity spawns). The RenderChunkManager would first check whether the filename has already been loaded and, if so, find it in the list and return that pointer. This way the same data is only loaded once.
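A sketch of that filename-keyed cache (loadMesh, the `loads` counter, and the fake "parse" are my own illustrative choices, not a definitive design):

```cpp
#include <map>
#include <memory>
#include <string>

struct RenderChunk { std::string source; };   // stand-in for real mesh data

// Cache keyed by filename: each mesh file is loaded from disk at most once.
class RenderChunkManager {
public:
    RenderChunk* loadMesh(const std::string& filename) {
        auto it = cache.find(filename);
        if (it != cache.end())
            return it->second.get();          // already loaded: share it
        auto chunk = std::make_unique<RenderChunk>();
        chunk->source = filename;             // real code would parse the file
        RenderChunk* raw = chunk.get();
        cache[filename] = std::move(chunk);
        ++loads;                              // counts actual "disk" loads
        return raw;
    }
    int loads = 0;
private:
    std::map<std::string, std::unique_ptr<RenderChunk>> cache;
};
```

Two GameObjects spawned from the same mesh file end up pointing at the same chunk.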

Another benefit of having the same class (or set of classes, our "API") both store and render the data is that the other objects don't have to know what format we store the data in. For example, when using VBOs, the data stored in app memory (i.e. the data stored in RenderChunk) could be only a handle (or pointer; I haven't gotten into this VBO stuff yet) to the memory on the GPU.

Having GameObject store a pointer to a RenderDevice::RenderChunk object breaks the idea of it not knowing about the RenderDevice. But maybe this doesn't matter, since if you wanted to switch renderers you would only replace the pointer declaration?

What are your thoughts on this?


Again if anyone has a link to a good paper that would be awesome, the only thing I have is this


I have loads more things to say... but this will do for now ^^

Cheers
Thunder Sky

Quote:
Original post by Thunder Sky
One thing I'm having trouble with here is the term "Device". I know it's used quite a bit when dealing with DirectX, but I've only used OpenGL. I believe DirectX can render to several windows (devices) at the same time, while in OpenGL you have to switch the current context using something like wglMakeCurrent( HDC hdc, HGLRC hglrc ).

DirectX is actually a group of APIs for graphics, sound, input and networking; the graphics-specific part is called Direct3D (D3D for short).
In D3D a device is roughly equivalent to a graphics card: if you had two graphics cards (or GPU and monitor combinations) then you would need two devices.
A single device can render to multiple windows on the same GPU.
So a D3D device basically abstracts the abilities of the graphics card into a usable API. Since most desktop computers have a single graphics card, most of the time you only ever need one D3D device.

In practice we often create some sort of slightly higher abstraction over Direct3D or OpenGL and we can almost re-define what a 'device' means to us. Some people call it the 'Renderer', or unimaginatively just 'CGraphics'.

Quote:
How much does the RenderDevice handle? Is it responsible for "allocating" a window to draw in (ie. storing things like the RC and hWnd in Windows). I realize this is implementation dependant, but one option must be better than the others right?

Ahh, well you're right, this is all down to someone's implementation.
In some engines the device is responsible for the window it renders to; in others it's responsible for the windows it renders to (note the plural [wink]).
In my engine I decided that the concept of a 'window' is a specialisation of a generic 'render-target', so the device doesn't know anything about windows; it only knows about render-targets. The responsibility of looking after one or more windows belongs to the OSPlatform class. I've written about this over in the Software Engineering forum.

Quote:
What about setting camera position? I was thinking maybe making a Camera class that is a "utility class" under RenderDevice would be a good idea. The user could then create an instance of this and pass it to RenderDevice::SetCamera(Camera camera).

Yes, that sounds like a good idea.
It all comes down to the level at which you abstract a device, and this sort of level is perfectly healthy.
You might decide that a camera is actually nothing more than a matrix with some sugar, so arguably the RenderDevice need only know about matrices; you could then say the camera class is a higher-level component than the device and belongs with the GameObject side of things.

Quote:
I would like the RenderChunk to be part of the RenderDevice code (almost like in our own rendering API). Maybe even have RenderChunk in the RenderDevice namespace so that it would be created with

RenderDevice::RenderChunk* data=new RenderDevice::RenderChunk;

I half agree.
I do see the RenderChunk to be a component of the graphics library code rather than the higher-level scene library code.
I personally wouldn't put it within the RenderDevice (unless you gain some better data encapsulation this way).
I'd have a common namespace for both the device and the state chunk:

namespace Graphics
{
    class RenderChunk {..};

    class RenderDevice {..};
}

Graphics::RenderChunk myChunk;
Graphics::RenderDevice myDevice;

Quote:
In this "API" I would also like a class that could be named RenderChunkManager that had a method like

RenderChunk* LoadMesh(const char* filename)

this would load a mesh from disk and pack the data in a RenderChunk instance. It would then add this instance to a list (in the list element the filename would also be stored) and then return a pointer to that instance. So when the GameObject gets asked for the data to render it we just supply this pointer. This method would be called by our GameObject init code (e.g. when an entity spawns). The RenderChunkManager would then check if the filename has been loaded already and if that's the case find it in the list and return that pointer. This way the same data is only loaded once.

I wouldn't [smile]

Let me explain... the RenderChunk is quite simply all the data necessary to render something. It's purely a collection of state, and the 'concept' of a RenderChunk provides no knowledge of what this state represents.
However, in order to populate the state chunk with data we do need knowledge of what the state represents; this knowledge comes from the GameObject.
It is the GameObject that is responsible for 'knowing' itself, so it knows what the state represents.

We might have a MeshGameObject (which might inherit from GameObject; or in a component-based engine a mesh is a component of GameObject, but for the moment it doesn't matter). This MeshGameObject 'knows' that the data in a RenderChunk is a model, with textures and shaders etc.
We can now do something similar to what you described and have a MeshManager, perhaps with a method such as:
MeshGameObject LoadMesh(std::string filename);

But here's the nice bit: we could also have a NURBSObject which contains the functionality for calculating NURB-spline curves.
This class would still have a method called getRenderChunk(), but instead of just returning a RenderChunk filled with mesh data, it would calculate the curve and fill the chunk with curve coordinates.

There can be all sorts of things that will return a RenderChunk using a consistent interface: HeightMapGameObject, BezierPatchGameObject, WaterGameObject, CloudGameObject....

In short, the idea of mesh loading and mesh resource management once again belongs at a higher level than the RenderChunk; and by not making assumptions about what a RenderChunk will hold, we can get a wide range of different effects and functionality from our high-level scene objects, all fed through the same consistent interface.

Quote:
Another benefit of having the same class (or set of classes, our "API") store and render the data is that the other objects doesnt have to know in what format we store the data in. For example, when using VBO's the data stored in app memory (ie the data stored in RenderChunk) could be only a handle (or pointer, havnt gotten into this VBO stuff yet) to the memory on the GPU.

Here's my thought on this: the vertices (and indeed all data) held within the RenderChunk are API-agnostic and format-independent, meaning they are consistent no matter whether you're using OpenGL, Direct3D, VBOs, VARs, D3D vertex buffers, etc. All the data held (or referenced) in the state chunk lives in system-side memory (normal RAM, created using new[]).

The RenderDevice maintains the underlying GPU-side memory storage (VBOs etc). When a RenderChunk is to be rendered by the RenderDevice, its data must be transferred from system-side RAM to GPU-side VRAM. The simplest way to do this is a plain memcpy or sub-update, but that only works if the two formats are compatible; otherwise you might need to transform the data and feed it into VRAM manually.

Of course, copying data from the system side to the GPU side every frame is slow, and you'll likely find that the data doesn't change all that often, so copying it over again is pointless because it's the same data.
Textures and shaders are the best candidates not to change between frames; you can actually just create them once at startup and forget about them (probably how you do it at the moment, I bet). But vertices can change quite often, especially for dynamic objects, and for large scenes we need to re-use vertex buffers for different objects because there's simply not enough memory on the graphics card for all our vertices.
In this case the RenderChunk can keep a handle (an index or pointer) to the vertex buffer it was last copied (cached) to. When it comes to drawing a RenderChunk, the RenderDevice first checks whether the vertices are still cached: if they are, great, no copying needed; if the vertices have since been thrown out and replaced by another object's, then we need to copy them back into VRAM again.
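A sketch of that cache check, using a toy round-robin pool of two "GPU" buffers in place of real VRAM (all names and the eviction policy here are my own placeholders):

```cpp
#include <array>

struct RenderChunk {
    int cachedBuffer = -1;   // index of the GPU buffer this chunk last occupied
};

// Fake device with a tiny pool of reusable "GPU" vertex buffers.
class RenderDevice {
public:
    int uploads = 0;   // counts actual system-RAM -> VRAM copies

    void draw(RenderChunk& c) {
        // Cache hit: the chunk's data is still resident in its buffer.
        if (c.cachedBuffer >= 0 && owner[c.cachedBuffer] == &c) return;
        // Cache miss: evict whatever held the next buffer and copy the chunk in.
        int slot = nextSlot;
        nextSlot = (nextSlot + 1) % (int)owner.size();
        if (owner[slot]) owner[slot]->cachedBuffer = -1;   // evicted chunk forgets its slot
        owner[slot] = &c;
        c.cachedBuffer = slot;
        ++uploads;   // stand-in for the memcpy/sub-update into VRAM
    }
private:
    std::array<RenderChunk*, 2> owner{};   // which chunk occupies each buffer
    int nextSlot = 0;
};
```

Repeated draws of a resident chunk cost nothing; only evicted chunks pay the upload again.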

Who does the copying? Well, in some engines they create a Shader class with methods responsible for this (because a shader knows which vertex streams it needs, so we know it can copy the vertex data correctly). In my engine (which is still under development) I have a StreamUploader class with this responsibility. The StreamUploader lives alongside the RenderDevice.

Quote:
Having GameObject storing a pointer to a RenderDevice::RenderChunk object breaks the idea of it not knowing about the RenderDevice. But maybe this doesnt matter since if you wanted to switch renderer you would only replace the pointer declaration?

This is why I'd put them in a common namespace rather than within each other. However, if you did want to go with RenderDevice::RenderChunk, then you just accept that a GameObject will need to know about the RenderDevice.

Quote:
Again if anyone has a link to a good paper that would be awesome, the only thing I have is this

Huston Design Patterns

A renowned discussion for a similar system to the one described above << There are more, use the forum search feature [smile]

[Edited by - dmatter on August 14, 2007 6:32:45 AM]

Quote:
Original post by DeafManNoEars
Looking for Opinions and how you handle such stuff in your own applications.

1) Do you prefer to have objects taking care of themselves...(i.e. a RenderableObject that will hold a pointer to the render device and draw itself)?

2) Or would you prefer registering a RenderableObject with the render device and have the device take care of the rendering?


Neither one. I prefer that the RenderableObject (which contains all the geometry, textures, etc.) know nothing about the Renderer, and vice versa. Instead, there is an Effect class which takes a pointer to a RenderableObject and draws it with the current Renderer. Effect classes are queried at runtime to find the class capable of rendering the requested effect on the current renderer and hardware.
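A sketch of such a runtime query (the supports/draw methods and the string-based renderer tag are my own simplifications of the scheme described above):

```cpp
#include <string>
#include <vector>

struct RenderableObject { std::string geometry; };
struct Renderer { std::string api; };   // e.g. "GL" or "D3D"

// Hypothetical Effect: knows how to draw a RenderableObject on one renderer.
class Effect {
public:
    virtual ~Effect() = default;
    virtual bool supports(const Renderer& r) const = 0;
    virtual void draw(const RenderableObject& obj, Renderer& r) = 0;
};

class GLEffect : public Effect {
public:
    bool supports(const Renderer& r) const override { return r.api == "GL"; }
    void draw(const RenderableObject&, Renderer&) override { ++draws; }
    int draws = 0;
};

// Runtime query: find the first registered effect that can handle
// the current renderer and hardware.
Effect* findEffect(std::vector<Effect*>& effects, const Renderer& r) {
    for (Effect* e : effects)
        if (e->supports(r)) return e;
    return nullptr;
}
```

Neither RenderableObject nor Renderer references the other; the Effect bridges them, and new renderer backends just register new Effect subclasses.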

Just wanted to add: (and I hope it helps someone)
-->Sorry for all of the ranting. Just feel like putting this into words. I know it is off topic, deal :P.

I have been programming as a hobby for a while now and have made several working apps. That's it, though: they work. They are not cleanly built, they were not easy to build, they contain many hacks, and they undoubtedly contain a fair share of bugs.

I am an Engineer by nature and so I was uncontrollably interested in how everything works. I spent a lot of my time writing my own implementations of RBTrees, Lists, circular buffers and learning how things were implemented.

I never bothered to learn the different design patterns, strategies and techniques. I have read many books on improving C++ code, covering exception safety, RAII, the SC++L and the like, but never on design patterns. In my readings and in my own code I have seen many of the patterns and strategies, and actually used (maybe even correctly?) some of them. I started to think out my designs more.

In reality, though, I never really DESIGNED. Everything just evolved. I would start at the lowest level because that was where the work needed to be done; this is what I wanted to learn, so I would just dive right in. I was asking "What can I do? How do I do this? What next?" I would create components using some of the tools available to me (mostly the SC++L) and find ways to link them together using some of the patterns I had read about.

This would obviously lead me to crazy dependencies and a cluster-f*ck of code that was difficult to look at and even more difficult to maintain and evolve.
It was just a conglomeration of other people's ideas.

<SideNote> I design military and aerospace batteries for a living. We are on the 2 Mars Rovers right now.</sidenote>

I thought about how I design a battery. I don't start with the smallest components and work my way up without thinking about how it will all go together. On the contrary, I get a performance spec from the customer defining the volumetric, mass and performance requirements. I then design how the system should look, what its behavior should be, how to meet these requirements, etc. Only then do I go down to the lowest level and build to my design specifications.

I never extended any of that experience into designing C++ systems.

What I am starting to think now is that the lower-level code I am creating is really akin to the nuts, bolts, hardware and, of course, some Li-Ion cells. The managers, factories and the like are the torque driver and wrench that secure the nuts to the bolts.

But in the end, it is still ME that needs to use these tools and provide myself with my own performance spec. That is an important point that I was not grasping. These tools can't use themselves. I can still, at the highest level of all, use a screw driver (SceneManager) to screw a bolt (GameObjects->RenderData) into an aluminum plate (RenderDevice). Or something like that. Sorry for the bad analogy.

Finally, two questions I haven't asked myself before but plan on asking quite often from here on out:
"What do I ultimately need to be able to do? How do I ultimately want to use these tools?"

So from now on I think I will design my systems based on ease of use first, then reusability and encapsulation; and with that in mind, build them and then HELP them evolve to preserve those ultimate goals. Maybe that's more like molding, but anyway. :)

Thanks for bearing with me!!!!!
Whew. *wipes sweat off brow*
