
Renderer vs render component


schragnasher    120
I've begun refactoring my nasty object hierarchy into a component design. It's coming along well and I understand a lot of the basics, but I'm a little confused about how an entity would get rendered. My games are all 2D, and so far the "scene graph" is simply a vector. Obviously, for a more advanced game than I've been making, I need something better, with a tree-like parenting relationship. The question I have is how the render components in my entities and the scene graph should be set up. In the old system, entities drew themselves, so my first inclination was to have render components draw themselves. The problem was that they have no way of parenting to other render components, and they are so much more abstracted from the vector the entities are held in that I don't trust them to stay in the correct render order. My thought was to let the render component basically control a scene graph node. Is this headed in the right direction? How do I create a parent relationship between two render components? Thanks for any insight you can give.

Zipster    2359
The render component really shouldn't do the rendering itself; rather, it should control "renderable" objects which are handed out by the graphics engine and represent models, particle systems, meshes, etc. The render component then just adds and removes these objects from the scene, and the graphics engine handles all the actual rendering details. So for a 2D game, you could have a Sprite object, and the render component just does something like:
void RenderComponent::Initialize()
{
    // Ask the graphics engine for a renderable and register it with the scene.
    MySprite = new Sprite("Tony.dds");
    TheGameScene->Add(MySprite);
}

void RenderComponent::Shutdown()
{
    // Unregister the renderable from the scene before destroying it.
    TheGameScene->Remove(MySprite);
    delete MySprite;
}

chapter78    146
What language are you working in?

Drawable objects have a method called Render() that is called by the main render task. There's nothing to stop you making subsequent calls to Render() on other drawable objects from within the 'main' drawable object, creating a hierarchy of drawables based on relative positions.

A render frame might look like this:

RenderTask -> Begin iterating list (2 items)
RenderTask -> Space Background
RenderTask -> Planet -> Orbiting Moons (x4)
RenderTask -> Finished iterating

The planet object is responsible for drawing its moons. There are probably better ways of doing it, but this is quite basic and works for me.
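
Roughly, in C++, it might look like the sketch below (the Drawable class and its members are purely illustrative, not my actual code):

#include <cstddef>
#include <vector>

struct Drawable
{
    float localX, localY;              // position relative to the parent
    std::vector<Drawable*> children;   // e.g. the moons of a planet

    virtual ~Drawable() {}

    // Draw this object at its world position, then recurse so the
    // children inherit the parent's transform.
    void Render(float parentX, float parentY)
    {
        float worldX = parentX + localX;
        float worldY = parentY + localY;
        Draw(worldX, worldY);
        for (std::size_t i = 0; i < children.size(); ++i)
            children[i]->Render(worldX, worldY);
    }

    virtual void Draw(float x, float y) = 0; // API-specific drawing goes here
};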

kauna    2922
Quote:
Original post by schragnasher
What about parenting?
I have a planet with a moon. I need to parent the moon to the planet so the matrix changes pass down the chain.


I am not a master of this scene graph thing, but it may help to think of your planets as more abstract entities which own a graphical representation.

That way, the graphical representations don't need to know about the hierarchies of the actual objects.

I understand your approach of planets having "parents", i.e. objects they orbit around, and that fits the scene graph ideology. This parent-child link, however, is artificial in one sense: there are real physical forces which actually make the moon orbit the planet.

Consider the abstract entity owning a physical representation of the object too, one which interacts with the surrounding planets in whatever way you wish (i.e. simple orbiting vs. complex solar systems where planetary movements affect each other). The physical representation of a planet may be just a mathematical sphere.

In this arrangement, you can place your graphical representations (planets) in their own spatial structure, whichever suits rendering them best.

Second, you may place your physical representations in another spatial structure which helps solve the required physical calculations (you can still use simple orbits).

As for the abstract entities themselves, they may live in a single list which you update as often or as rarely as needed. How they show up or behave depends on the components they own.
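
A rough sketch of what I mean (the type names are purely illustrative):

struct GraphicalRepresentation { /* sprite or mesh handle; lives in the rendering structure */ };
struct PhysicalRepresentation  { /* e.g. a mathematical sphere; lives in the physics structure */ };

// The abstract entity knows nothing about rendering or physics internals;
// it merely owns the two representations.
struct Planet
{
    GraphicalRepresentation* graphics;
    PhysicalRepresentation*  physics;
};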


One example to explain the reason for this kind of arrangement: what if you have a spacecraft travelling through your system? Does it have a parent object? In my opinion it does not, even if it is orbiting a planet. It could well be orbiting two bodies by locating itself between a planet and its moon. So the visual spatial tree handles the visibility of the object, while the physical tree can be used to speed up calculating the effects of the planets' gravity on the ship.

edit:

Quote:

The planet object is responsible for drawing its moons. There are probably better ways of doing it, but this is quite basic and works for me.


My proposed setup argues against this kind of arrangement. A planet having moons doesn't make it responsible for drawing them; the moons orbit the planet according to the physical rules, and that's it.




These are just my ideas, not straight implementation suggestions.

Best regards!

[Edited by - kauna on March 11, 2010 10:29:19 AM]

haegarr    7372
My solution for parenting is this: A PlacementComponent provides the co-ordinate frame that relates the entity to the world. Parenting (as well as Orbiting, PathFollowing, ...) are specializations of Controller, a class that, well, controls co-ordinate frames. A parameter of Parenting is obviously the parental frame, itself usually provided by another PlacementComponent. Other parameters are the (local) position and orientation. It listens for changes of the parental frame and computes the new placement for its own frame when necessary.

schragnasher    120
Quote:
Original post by haegarr
My solution for parenting is this: A PlacementComponent provides the co-ordinate frame that relates the entity to the world. Parenting (as well as Orbiting, PathFollowing, ...) are specializations of Controller, a class that, well, controls co-ordinate frames. A parameter of Parenting is obviously the parental frame, itself usually provided by another PlacementComponent. Other parameters are the (local) position and orientation. It listens for changes of the parental frame and computes the new placement for its own frame when necessary.


So you parent the position components of the entities, basically?

haegarr    7372
Quote:
Original post by schragnasher
So you parent the position components of the entities, basically?
The PlacementComponent provides a co-ordinate frame. This means in particular a 4x4 homogeneous matrix, i.e. position and orientation, and if wanted also scaling (I don't use that). The interface of the component allows setting the position (as a vector) and the orientation (as a quaternion). These values are always to be understood as the world position and orientation, i.e. the PlacementComponent relates the object directly to the world.

Now, a Controller can be bound to the frame. The Parenting subclass of Controller is able to output both a position (vector) and an orientation (quaternion), so it can drive both parts of the frame. Binding a Parenting instance to the frame of a PlacementComponent hence means that the global position and orientation of the entity are no longer freely settable but driven by the controller.

To do so, the Parenting needs a reference to another frame that is used as the parental frame. The Parenting also provides a position (vector) and an orientation (quaternion), with an interface to set/get these values. Obviously, these values are the local position and orientation. In other words, an entity provides a local position and orientation if and only if a Parenting instance is bound.

That said, parenting an entity to another is an explicit set-up step, not something done implicitly by nesting (as in a scene graph). I've chosen this approach because parenting is just one of a dozen mechanisms usable to control co-ordinate frames. Think e.g. of PathFollowing as another Controller: PathFollowing has a spline (or some other) path as a parameter, and a Frenet frame wanders along it. The PathFollowing controller can then copy the position and orientation of the Frenet frame into the controlled frame. Or think of Aiming, a Controller that doesn't influence the position but only the orientation, so that the z axis of the controlled frame points towards a target that is a parameter of Aiming. Or ...
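
To make it concrete, here is a simplified sketch of the mechanism (not my actual implementation; the math helpers are reduced to the bare minimum):

struct Vector3    { float x, y, z; };
struct Quaternion { float w, x, y, z; }; // assumed unit length

Vector3 Add(const Vector3& a, const Vector3& b)
{
    Vector3 r = { a.x + b.x, a.y + b.y, a.z + b.z };
    return r;
}

Quaternion Mul(const Quaternion& a, const Quaternion& b)
{
    Quaternion q = {
        a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z,
        a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
        a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
        a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w
    };
    return q;
}

// Rotate a vector by a unit quaternion: v' = q * (0, v) * conj(q).
Vector3 Rotate(const Quaternion& q, const Vector3& v)
{
    Quaternion p = { 0.0f, v.x, v.y, v.z };
    Quaternion c = { q.w, -q.x, -q.y, -q.z };
    Quaternion r = Mul(Mul(q, p), c);
    Vector3 out = { r.x, r.y, r.z };
    return out;
}

// The co-ordinate frame a PlacementComponent provides: world position
// and world orientation.
struct Frame
{
    Vector3    position;
    Quaternion orientation;
};

struct Controller
{
    virtual ~Controller() {}
    virtual void Update(Frame& controlled) = 0; // drive the bound frame
};

// Parenting derives the controlled frame from a parental frame plus a
// local position/orientation.
struct Parenting : public Controller
{
    const Frame* parent;           // parental frame, usually another PlacementComponent's
    Vector3      localPosition;    // position relative to the parent
    Quaternion   localOrientation; // orientation relative to the parent

    virtual void Update(Frame& controlled)
    {
        controlled.position    = Add(parent->position,
                                     Rotate(parent->orientation, localPosition));
        controlled.orientation = Mul(parent->orientation, localOrientation);
    }
};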

aaron_ds    486
For what it's worth, I've been using the following solution in my projects. So far it has gotten me through Pong, Tetris, and Pac-Man.

When the scene graph is created, game objects are loaded with position components, geometry components, and texture components. The geometry components are API-independent; they literally contain a list of Vertex structs.

Later on in the loading process, the renderer, using the visitor pattern, traverses each game object in the scene graph. It reads the texture component and creates an API-specific texture component, and it reads the geometry component and creates an API-specific geometry component.

Each frame, the renderer traverses the scene graph and, for each game object, binds the API-specific texture component and draws the API-specific geometry.

My current code uses OpenGL, so textures are turned into OpenGL textures using glGenTextures and glTexImage2D, and then bound using glBindTexture. Generic geometry used to be processed into display lists, but now it is processed into VBOs using glGenBuffers and glBufferData, and drawn using glBindBuffer, glVertexPointer, glTexCoordPointer, and glDrawArrays.

There's no reason why, instead of being drawn directly, the game objects couldn't be batched by the renderer to eliminate state changes. I'm really happy with the setup, and except for the change from display lists to VBOs, the code has changed very little from Pong to Tetris to Pac-Man.
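
A simplified sketch of the loading/drawing split (assuming a GL header that exposes the 1.5 VBO entry points; the struct names are illustrative, not my actual code):

#include <GL/gl.h>
#include <vector>

struct Vertex { float x, y, u, v; };   // position + texture co-ordinate

struct GeometryComponent               // API-independent
{
    std::vector<Vertex> vertices;
};

struct GLGeometryComponent             // API-specific
{
    GLuint  vbo;
    GLsizei count;
};

// Done once at load time, when the renderer visits the game object.
GLGeometryComponent CreateGLGeometry(const GeometryComponent& geo)
{
    GLGeometryComponent gl;
    gl.count = (GLsizei)geo.vertices.size();
    glGenBuffers(1, &gl.vbo);
    glBindBuffer(GL_ARRAY_BUFFER, gl.vbo);
    glBufferData(GL_ARRAY_BUFFER, geo.vertices.size() * sizeof(Vertex),
                 &geo.vertices[0], GL_STATIC_DRAW);
    return gl;
}

// Done each frame for every game object in the scene graph.
void DrawGLGeometry(const GLGeometryComponent& gl)
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glBindBuffer(GL_ARRAY_BUFFER, gl.vbo);
    glVertexPointer(2, GL_FLOAT, sizeof(Vertex), (const GLvoid*)0);
    glTexCoordPointer(2, GL_FLOAT, sizeof(Vertex), (const GLvoid*)(2 * sizeof(float)));
    glDrawArrays(GL_TRIANGLES, 0, gl.count);
}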

theOcelot    498
If you need parenting, it should be done in your physics simulation (such as it is), not in your graphics. Fundamentally, whether two objects are connected is a property of the actual entities, not of their graphical representations, and it should be represented accordingly.

schragnasher    120
Thanks for the replies, but I think I'm more confused now. It seems everyone handles entity management differently. I'll go muddle through some more and see if I can get a workable system.

haegarr    7372
There are many ways to go, and the "component based entity system" in particular is not something that is clearly defined. So one only ever gets examples of how to do things, and in the end has to decide which way to go for oneself.

An entity is an explicit or implicit collection of components, so "a connection between two objects" should not be made between entities directly. Some people store the co-ordinate frame directly in the entity, reasoning that so many components require access to it that the probability of needing a co-ordinate frame at all is very high.

I'm using an explicit PlacementComponent. It is neither directly part of the graphics sub-system nor of the physics sub-system (assuming that physics deals with mass bodies), but simply of the placement sub-system, if you want to call it that. That it is its own Component underlines this fact. (Of course, many of the other sub-systems use the co-ordinate frame too, but primarily for read access.) So I second that placement, and parenting more than ever, is not "part" of the graphics sub-system.

The PlacementComponent defines nothing but the position/orientation in the world, and hence has a clear purpose. In its raw form, it does so by just providing storage for the appropriate values (and some conversion functions). You can set these values manually to place the entity in the world, and then it would stay there forever. That is not always what you want; you may want to alter the placement according to ... well, several imaginable conditions. That is what I use the Controller class for. Each concrete sub-class of Controller implements another way to control a co-ordinate frame (and hence the placement of an entity, if the frame is one stored in a PlacementComponent). Another way of setting the values is scripting, of course.

I personally define Controller not as a sub-class of Component but as a kind of extension with which the PlacementComponent (in this case) can be augmented. Doing so extends the static placement into a dynamic one. However, it is in principle also possible to derive Controller from Component.

schragnasher    120
So after some thinking and reading, I've gotten to this point.

I have an Entity tree and a render tree.

I add an Entity to the Entity tree and attach a PositionComponent to it; the PositionComponent creates and stores a TransformNode in the render tree.

Next I attach some kind of RenderComponent to the Entity. It obtains a pointer to the PositionComponent's TransformNode, attaches a RenderNode to it, and stores it.

Then I can attach a new Entity to the existing one in the tree; when I attach a PositionComponent to this entity, it obtains its parent's TransformNode and attaches a new TransformNode to it.

Does this sound workable?


haegarr    7372
It is a workable solution, yes. On the other hand, it seems to me to have some drawbacks, or at least to suffer from an unclean design (but maybe I'm wrong, because I don't know the entire picture).

The first impression I had from your description is that your entity tree starts to resemble a scene graph. The fact that nesting entities automatically leads to parenting gave me that impression. I found automatic parenting strange back in the days of scene graphs already. Obviously, parenting is often wanted in such a case, but there are enough cases where it is not. Moreover, other mechanisms like aiming, tracking, and so on show a discontinuity, because they break the parenting even when nesting is given.

Building a second tree of TransformNode instances is okay for expressing and computing the parenting (with the exception of the top, but that can be handled specially). However, the TransformNode should still not be part of the graphics sub-system. Let it be part of the "spatial sub-system", nothing else. It simply has too much meaning to the other sub-systems besides rendering.

At the least, I would definitely not attach a RenderNode to the TransformNode. That again resembles the old scene graph principles. Instead, all RenderComponent instances should be collected by the graphics sub-system. If an entity is recognized for rendering, then its reference to the TransformNode (or whatever) can be used to determine its placement in the world, i.e. let the dependency run from the renderable to the placement, not vice-versa.
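
For illustration, that dependency direction might look like this (simplified, names are just examples):

struct TransformNode
{
    float worldMatrix[16]; // owned by the spatial sub-system; knows nothing of rendering
};

struct RenderComponent
{
    const TransformNode* placement; // read-only reference, resolved at draw time
    // ... renderable data (sprite handle, material, and so on)
};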


BTW: I would not use PositionComponent as the name, because IMHO "position" is commonly used for the translational part only, but the orientation is as important as the position.

[Edited by - haegarr on March 12, 2010 10:34:03 AM]

schragnasher    120
Quote:
The first impression I had from your description is that your entity tree starts to resemble a scene graph.


Yeah, I had the same thought; it seemed redundant. I guess I should just cut out any thought of parenting actual entities and just hold a vector of them for updating, which was the original plan. Like you mentioned before, parenting makes more sense as something explicit... I think I'm starting to understand the idea. LOL, possibly.

"Parenting" as were calling it would be considered more a behavior, than some kind of implied relationship of entities. Like tracking and aiming, as you said, these three bahaviors are somewhat related.

So "Parenting" becomes a type of entity ControllerComponent, it keeps a parent entity and updates its transform/position/whatever i call it when the parent updates.

So each frame, I grab all the renderables in the vector of entities and dump them to the renderer?

haegarr    7372
Quote:
Original post by schragnasher
So each frame, I grab all the renderables in the vector of entities and dump them to the renderer?
That is a possible way, but another one may be more suitable. Between "may be rendered" and "to be rendered" lies visibility culling: not all renderables are actually rendered each frame.

IMHO it should happen like this: the graphics sub-system has access to all renderables, e.g. because it takes care of a collection of all RenderComponent instances. When it comes to rendering (i.e. all updates of the placements and so on are already done), it decides, using frustum culling and/or other visibility tests, which renderables are actually to be rendered. For each renderable that passes this test, a RenderJob is created and pushed into a RenderQueue. After this, the Renderer is invoked. The Renderer then has the possibility to sort the RenderJobs by necessity (e.g. transparency) and performance (e.g. cost of state switches) criteria. (The sorting may already have happened while adding the jobs to the queue; that doesn't contradict the principle.) That said, the Renderer is just one part of the graphics sub-system.
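
A rough sketch of that flow (simplified; the names are illustrative):

#include <algorithm>
#include <cstddef>
#include <vector>

struct RenderJob
{
    unsigned    sortKey;    // packs e.g. transparency flag, material/texture id
    const void* renderable; // whatever the graphics sub-system knows how to draw
};

bool LessKey(const RenderJob& a, const RenderJob& b)
{
    return a.sortKey < b.sortKey;
}

void Draw(const RenderJob& job)
{
    // Issue the actual API calls for this job (omitted).
    (void)job;
}

struct RenderQueue
{
    std::vector<RenderJob> jobs;

    // Called for each renderable that passes the visibility tests.
    void Push(const RenderJob& job) { jobs.push_back(job); }

    // The Renderer sorts by necessity/performance criteria, then draws.
    void Flush()
    {
        std::sort(jobs.begin(), jobs.end(), LessKey);
        for (std::size_t i = 0; i < jobs.size(); ++i)
            Draw(jobs[i]);
        jobs.clear();
    }
};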

schragnasher    120
Right, there is obviously more to it than just grabbing and rendering. Since my stuff is all 2D, it's not super complicated.

So, to simplify it for a first pass: when I make a RenderComponent, I send a copy to the render system; then each frame I render it. All the parenting behavior will be part of a separate component that the render system does not care about, as it gets all its transformations from the render component.

That sounds reasonable and workable to me, lol.
Thanks for all the help for a noob!

Zipster    2359
Ideally the graphics sub-system shouldn't know anything about "components", since that's specific to the design of your higher-level game engine and the lower-level graphics engine shouldn't be coupled to that. So instead of sending a RenderComponent to the graphics system, the RenderComponent requests a handle to a graphics resource (such as a Sprite) when it is first loaded or initialized. When it comes time to render, it just sends the handle to the graphics system.
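
For example (a sketch with made-up names):

typedef unsigned int SpriteHandle;

// The graphics engine owns the actual resources and hands out opaque handles.
class GraphicsSystem
{
public:
    SpriteHandle LoadSprite(const char* filename); // returns a handle, not a pointer
    void         Draw(SpriteHandle sprite);        // resolves the handle internally
};

// The component only knows the handle, never the graphics internals.
struct RenderComponent
{
    SpriteHandle sprite;

    void Initialize(GraphicsSystem& gfx) { sprite = gfx.LoadSprite("Tony.dds"); }
    void Render(GraphicsSystem& gfx)     { gfx.Draw(sprite); }
};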

