Cross-system interfaces

Started by
5 comments, last by M4573R 13 years, 2 months ago
I'm creating my engine using simple component-based entities, with each system in a separate project built into a library. I want my systems to be usable alone, but of course to make a game they have to communicate at some point. I have a base application project that handles the window and game loop, so I'm thinking that's also where I'm going to put all the code that combines the libraries into something usable. I'm looking for ideas or guidance to head me in the right direction for doing this.

For example, physics will have its own motion and collision components wrapped around Bullet, and graphics will have its own drawable, transform, texture components, etc. They're both holding onto positions: graphics stores them in a hierarchy, and Bullet stores them however it does :D. The guidance I need is for how to write a clean interface so that, for instance, physics can share its position and rotation with graphics without the two libraries relying upon each other.

My first idea was to have each system expose the messages it accepts and sends out, and to create a sort of translator residing in the entities that is responsible for, for instance, receiving physics messages and passing them off to graphics as new messages. Another idea would be to have each system expose an interface that the appropriate messages should derive from, which then gives the functionality both systems desire. That way the message could be defined in the application project and would act as both a physics transform message and a graphics transform message.

This is my most ambitious project so far and it's been a little daunting to work on :D
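To make the first idea concrete, here's a minimal sketch of the "translator" approach, with hypothetical message types and names (`PhysicsMoved`, `GfxSetPos`, `TransformTranslator` are all made up for illustration). The point is that only the application layer sees both libraries' headers; the translator subscribes to physics output and re-emits the equivalent graphics message:

```cpp
#include <functional>
#include <utility>

// Hypothetical message types -- each library defines its own,
// and neither header includes the other.
struct PhysicsMoved { float x, y, z; };  // emitted by the physics lib
struct GfxSetPos    { float x, y, z; };  // accepted by the graphics lib

// The "translator" lives in the application layer, the only code
// that knows about both message types.
class TransformTranslator {
public:
    explicit TransformTranslator(std::function<void(const GfxSetPos&)> sink)
        : gfxSink_(std::move(sink)) {}

    // Hooked up as a listener on the physics side; forwards the
    // position to graphics in graphics' own vocabulary.
    void onPhysicsMoved(const PhysicsMoved& m) {
        gfxSink_(GfxSetPos{m.x, m.y, m.z});
    }

private:
    std::function<void(const GfxSetPos&)> gfxSink_;
};
```

Either library can be swapped out by replacing only the translator, which is the decoupling the question is after.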
In my system, I have a 'Transform' component that all entities have by default. Then 'Geometry' components, or (Physics) 'Body' components share that transform, which keeps position, orientation, and scale information. When you attach a 'Body' component, it will set the transform matrix in the 'Transform' and set a flag so it doesn't automatically create a new transform each frame.
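A rough sketch of that shared-Transform arrangement might look like this (the struct layout and the `drivenByBody` flag name are assumptions for illustration, not the poster's actual code):

```cpp
#include <array>

// Shared transform component every entity gets by default.
struct Transform {
    std::array<float, 16> matrix{};  // 4x4, zero-initialized here
    bool drivenByBody = false;       // set when a physics Body attaches
};

// Physics component that takes over the transform when attached.
struct Body {
    void attach(Transform& t) {
        t.drivenByBody = true;  // stop the per-frame auto-rebuild
        writeMatrix(t);
    }
    void writeMatrix(Transform& t) {
        // Copy the simulated pose into t.matrix; identity as a stand-in.
        for (int i = 0; i < 4; ++i) t.matrix[i * 4 + i] = 1.0f;
    }
};

// Per-frame update only rebuilds transforms nothing else drives.
inline void updateTransform(Transform& t) {
    if (!t.drivenByBody) {
        // recompute from the entity's own position/orientation/scale
    }
}
```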
I suppose all my libraries already depend on my common entities and components, but it just feels weird to define a transform component in my common library. Plus if physics wants to update position multiple times, I don't want to send out a message until it's completely done so I don't call redundant code in other systems. It makes for a nice separation.
One of the great things about a component based system is that, if you take the right steps early, it becomes really easy to thread them later. It is fairly simple to double buffer your data, and common to expose state changes through messages.
For instance, you could keep the graphics matrix separate, and have a single function that copies your entity matrix to the graphics matrix. This buffers the draw matrix so that it is free from dealing with your AI/physics. Immediately after syncing your draw matrix, you can start your AI/physics tasks, knowing that the rendering work can be running in another thread (i.e., view culling, batch sorting, etc).
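A minimal sketch of that double-buffering idea, with names assumed for illustration: each entity has a simulation matrix that AI/physics mutate and a draw matrix the renderer reads, and a single sync step copies one to the other.

```cpp
#include <array>
#include <vector>

using Mat4 = std::array<float, 16>;

// Double-buffered transforms: sim is written by AI/physics,
// draw is read by the renderer. Only the sync step touches both.
struct TransformBuffers {
    std::vector<Mat4> sim;
    std::vector<Mat4> draw;
};

// The single, easily-locked copy point. After this returns, the
// renderer can consume draw while simulation keeps mutating sim.
inline void syncDrawMatrices(TransformBuffers& b) {
    b.draw = b.sim;
}
```

This is the "couple of for loops where the sync takes place" mentioned below: when threading arrives later, only this copy needs a lock or barrier.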
Likewise, you would make sure to send all your physics changes from the AI code, and could later sync the new matrices from the physics thread before finally syncing them to the graphics.

AND guess what? You don't even have to worry about threads yet, but when you get there, things will be a lot simpler, since you've clearly defined the places where components share data, and there are only a couple of for loops, where the syncs take place, to lock around.




Hah, that's true. But it doesn't really help with my original predicament.

You mentioned that the rendering system and the physics system are relying on some shared data. A rendering system should be treated as a pure output device 99% of the time. This means it only needs copies of the original data, and should merely discard them once it is done using them. It also means that 'clever memory saving tricks', like reading back vertex buffer or texture data (i.e., heightmaps) so the physics system can use the same data, are a huge mistake. Translating from 'game state' to 'render state' is what needs solving, and generally this takes the form of functions fed into a work queue that take the source data and transform its layout into something the GPU wants. It's a good plan too, as the rendering core is the most volatile part of an engine: new hardware comes out, new versions of DirectX ship, and new platforms come into existence (new consoles, phones, etc.), and they all need their data translated and stored in wildly different formats for the hardware.
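One way to sketch that "functions fed into a work queue" idea (all names here are illustrative assumptions, not an API from the post): a job reads game-side source data and writes a GPU-ready copy that the renderer owns outright.

```cpp
#include <cstdint>
#include <cstring>
#include <functional>
#include <queue>
#include <vector>

// Game-side source data and the renderer's private, GPU-layout copy.
struct MeshSource     { std::vector<float> positions; };
struct GpuVertexBuffer { std::vector<std::uint8_t> bytes; };

using TranslateJob = std::function<void()>;

// Queue a job that translates game state into render state. The
// renderer ends up with its own copy and can discard it at will;
// nothing reads back from the GPU side.
inline void enqueueMeshUpload(std::queue<TranslateJob>& q,
                              const MeshSource& src,
                              GpuVertexBuffer& dst) {
    q.push([&src, &dst] {
        dst.bytes.resize(src.positions.size() * sizeof(float));
        std::memcpy(dst.bytes.data(), src.positions.data(),
                    dst.bytes.size());
    });
}

inline void drainQueue(std::queue<TranslateJob>& q) {
    while (!q.empty()) { q.front()(); q.pop(); }
}
```

Because the translation is isolated in these jobs, retargeting a new platform means rewriting the job bodies, not the game state.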
http://www.gearboxsoftware.com/




I could give the physics system ownership over skeletons and hierarchies, I suppose... I usually just think that things like particle systems and bones belong with graphics. It would give me better locality as far as IK and hierarchies go, though. But still, the solution I'm looking for needs to be general across physics, graphics, input, sound, etc. I need a scheme for communication between system-owned components through entities.

