
Game State vs. Scene Organization


I'm designing the architecture for my next game, and I'm hoping for some input regarding game logic/state and scene organization. It's much more efficient if, when any components of the game world are updated, the scene organization is updated as well; that way the renderer can display the scene quickly. How much of a performance decrease would it incur, and how feasible is it to design the game logic to completely disregard any scene organization and have that accounted for at some later point? I have a few ideas for this method that I would like to test, but I would also like some input from others, especially anyone who has designed a game this way.

What I am trying to do is keep all of the game logic completely independent of any graphics rendering or engine specifics.

My idea at the moment is that when the state is first loaded, it is run through a partitioning algorithm (probably just a slightly modified octree for large outdoor areas), which produces something I've coined the "RenderingState". When the game logic runs (AI, position updates, physics, etc.), it produces a collection of changes to the GameState. These changes are run through the partitioning algorithm and the RenderingState is modified accordingly. Lastly, the GameState itself is modified using the collection of changes.
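In code, a frame would look something like this (just a sketch of the flow I have in mind; WorldChange and the Apply methods are placeholder names, not code I actually have):

#include <vector>

struct WorldChange { int entityId; float newPos[3]; };   // placeholder

struct GameState
{
    void Apply(const WorldChange& c) { /* update the logical position */ }
};

struct RenderingState   // the partitioned copy (e.g. the octree)
{
    void Apply(const WorldChange& c) { /* re-bucket the entity in the tree */ }
};

void Frame(GameState& game, RenderingState& rendering)
{
    std::vector<WorldChange> changes;
    // ... AI, position updates, physics, etc. append to 'changes' ...

    for (const WorldChange& c : changes)
        rendering.Apply(c);   // run each change through the partitioner
    for (const WorldChange& c : changes)
        game.Apply(c);        // then modify the GameState itself
}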

With this implementation, game logic and physics are completely independent of any scene organization or other graphical components.

The negative side effect of this is that changes are recorded and then applied to two states, whereas traditional game logic is aware of spatial organization and can handle everything in one pass.

Sorry if the initial post was misleading.

Input?

That's still pretty vague. I don't understand what engine specifics you are trying to keep your logic away from. What exactly makes you feel obligated to take extra steps to establish a relationship (segregated or not) with your renderer?

At the most basic level, it's preferable to have a simplified copy of any "solid" scene geometry handy for queries. A lot of different engines and objects could make good use of queries to this tree, so I consider it a separate component accessible through the engine. If that's something you want to avoid, I'd say you have a problem, since so many things in both the game and rendering logic may need the tree. For example, physics in the game logic would reject a bunch of polygons for collision checks, and the renderer would run visibility tests.
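Roughly the shape I mean (a sketch; SceneTree and the Query names are made up for illustration):

#include <vector>

struct Polygon {};
struct AABB {};      // box around a moving object
struct Frustum {};   // the camera's view volume

// One shared tree over the "solid" scene geometry.
class SceneTree
{
public:
    // Physics: cheaply reject polygons nowhere near the query box.
    std::vector<const Polygon*> Query(const AABB& box) const { return {}; }

    // Renderer: gather only the potentially visible geometry.
    std::vector<const Polygon*> Query(const Frustum& f) const { return {}; }
};

Both the game logic and the renderer just hold a reference to the same tree; neither side owns it.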

On a second read of the OP...

Quote:
how feasible is it to design the game logic to completely disregard any scene organization and have that accounted for at some later point?


I'd say don't be a fool. An entire game can be broken down into initialization, processing entities for a frame, and then rendering those updated entities. The only relationship I can think of that matters for the last two tasks is interpolating physics states with an additional time sample. With what you seem to be asking for, all the physics, AI, and other similar logic would be crammed into your renderer or some other obscure location (which would pass as game logic at that point). We compartmentalize code for the sake of sanity. Why would you want to relocate code away from a location that's just fine to begin with? What makes your game logic so special that it must be separated from your engine?

Brain me is mostly just trying to figure out how to create a sensible separation between graphics and everything else, which is certainly a good idea. You want to define how the entities interact, then worry about how to show it on the screen a little later.

There was a thread about this not too long ago in this forum. I'll try to find it, but the gist of it was that you just need to go ahead and use separate data structures for simulating and rendering, and keep them in sync with each other.

Quote:
Brain me is mostly just trying to figure out how to create a sensible separation between graphics and everything else, which is certainly a good idea. You want to define how the entities interact, then worry about how to show it on the screen a little later.


I got that. I don't see how it's any more complicated than what I mentioned here.

Quote:
An entire game can be broken down into initialization, processing entities for a frame, and then rendering those updated entities. The only relationship I can think of that matters for the last two tasks is interpolating physics states with an additional time sample.


That's what my framework runs on, and I have yet to run into problems.
This is really a decision the developer should make, though. I wouldn't go through the trouble of making a "Rendering State" or a "Game State", as I can't imagine a scenario where additional nodes like that are necessary. I've always been one to just... update, render (interpolating where needed to ensure sync), next please!
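In pseudo-C++, that loop is just the following (a bare sketch; CurrentTime, Running, Update and Render stand in for whatever your engine provides):

float CurrentTime();       // seconds; supplied by your platform layer
bool  Running();           // false once the player quits
void  Update(float dt);    // advance the simulation one fixed step
void  Render(float alpha); // draw, blending the last two states by alpha

void Run()
{
    const float step = 1.0f / 60.0f;   // fixed simulation timestep
    float previous = CurrentTime();
    float lag = 0.0f;

    while (Running())
    {
        float now = CurrentTime();
        lag += now - previous;
        previous = now;

        while (lag >= step)            // update...
        {
            Update(step);
            lag -= step;
        }

        Render(lag / step);            // ...render, interpolating with the
                                       // leftover time sample; next please!
    }
}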

Brain Me, you still seem to have a good idea of what you are doing, but you should probably whip out a pen and paper and plan your flow out before starting to implement things.

Spatial partitioning doesn't need to be associated with graphics; it's really just an indexing scheme for a spatial database.

You could represent your game world as a spatial database with an interface like:


interface SpatialDB
{
    List<Entity> GetEntitiesInAABB(AABB box);
    List<Entity> GetEntitiesOnLine(Point start, Point end);
    List<Entity> GetEntitiesInFrustum(Frustum frustum);
    /* ... etc ... */

    /* Updates to the spatial DB would either be done through the Entity
     * objects - possibly triggering the SpatialDB to reclassify via: */
    void EntityIsUpdated(Entity e);

    /* or through making the spatial DB responsible for getting/setting
     * entity position data, though that's a bit grody: */
    void SetEntityPosition(Entity e, Point p);
}


Implementations of SpatialDB can use whatever spatial partitioning scheme is appropriate, possibly even multiple schemes combined. You could also incorporate zyrolasting's different classes of geometry by adding flags to the query methods.

Even if nothing's being rendered, you still need spatial partitioning for efficient collision detection, AI, etc.
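For instance, an octree-backed implementation and an AI query through it could look like this (a C++-flavored sketch; OctreeSpatialDB and SenseNearby are illustrative names, not a prescription):

#include <vector>

struct Entity {};
struct AABB {};

class SpatialDB
{
public:
    virtual ~SpatialDB() = default;
    virtual std::vector<Entity*> GetEntitiesInAABB(const AABB& box) = 0;
    virtual void EntityIsUpdated(Entity* e) = 0;
};

class OctreeSpatialDB : public SpatialDB
{
public:
    std::vector<Entity*> GetEntitiesInAABB(const AABB& box) override
    {
        std::vector<Entity*> result;
        // walk only the octree nodes whose bounds intersect 'box',
        // collecting their entities into 'result'
        return result;
    }

    void EntityIsUpdated(Entity* e) override
    {
        // re-insert 'e' if it has moved outside its node's bounds
    }
    // private: octree nodes, entity-to-node map, etc.
};

// AI can use the same index with no renderer anywhere in sight:
std::vector<Entity*> SenseNearby(SpatialDB& db, const AABB& senseVolume)
{
    return db.GetEntitiesInAABB(senseVolume);
}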

Quote:
Original post by theOcelot
you just need to go ahead and use separate data structures for simulating and rendering, and keep them in sync with each other.


That was my plan, but I was wondering how efficient that is. With this method, changes would be made to a structure representing the game state (through AI, physics, etc.) that is independent of rendering. These changes would then be applied to the structure used for rendering.

For example, if this were how Entities were represented:


struct Entity
{
    GameStateRep gameRep;        // logical state: position, AI data, etc.
    GraphicsRep  graphicsRep;    // render-side state: scene node, mesh, etc.
    void Update( float dt );
};

void Entity::Update( float dt )
{
    gameRep.Update( dt );        // run the game logic first...
    graphicsRep.Update( dt );    // ...then apply the changes to the render rep
}



The game logic is completely independent of any graphics, which is what I'm aiming for, but how much of a performance hit would something like this produce compared to game logic that integrates scene organization (where, instead of just moving an entity's position in the game world to (100, 100, 100) for example, it is moved between scene nodes directly)?

You shouldn't organize your scenes spatially the same way you organize them for rendering; the two jobs have different goals.

Spatially, you may have a hierarchy of transforms, and it's common to want to query this structure to find nearby entities.

For rendering, you want your meshes organized by shader and material. You don't typically care about spatial relationships there (except maybe depth, for depth sorting); you just want to organize everything for efficient drawing by the GPU.
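For instance (a sketch; the exact key layout is just one common choice):

#include <algorithm>
#include <cstdint>
#include <vector>

struct DrawItem
{
    std::uint16_t shaderId;
    std::uint16_t materialId;
    std::uint16_t depth;   // quantized view-space depth
    // mesh handle, transform, etc.
};

// Shader in the high bits, then material, then depth: items sharing a
// shader (and then a material) end up adjacent after sorting, which
// minimizes state changes when submitting to the GPU.
std::uint64_t SortKey(const DrawItem& d)
{
    return (std::uint64_t(d.shaderId) << 32)
         | (std::uint64_t(d.materialId) << 16)
         |  std::uint64_t(d.depth);
}

void SortQueue(std::vector<DrawItem>& queue)
{
    std::sort(queue.begin(), queue.end(),
              [](const DrawItem& a, const DrawItem& b)
              { return SortKey(a) < SortKey(b); });
}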

Quote:
Original post by Brain me
Quote:
Original post by theOcelot
you just need to go ahead and use separate data structures for simulating and rendering, and keep them in sync with each other.


That was my plan, but I was wondering how efficient that is. With this method, changes would be made to a structure representing the game state (through AI, physics, etc.) that is independent of rendering. These changes would then be applied to the structure used for rendering.


Ah, here it is: Combining Scenegraph Hierarchy with Physics. I remember now that I had the same question as you. swiftcoder explains a lot of stuff near the end.
