# C++ ECS: same transform component for both physics and graphics

## Recommended Posts

I encapsulated Physics and Graphics with ECS successfully.

Here is a simplified version:

```
Physics Rigid Body   = Physic_MassAndInertia + Physic_Transform + Physic_Shape
Graphic Polygon Body = Graphic_Transform + Graphic_Mesh
```

I usually set their transforms via:

```cpp
findService<Service_PhysicTransform>()->setPosition(physicEntity, somePos);
findService<Service_GraphicTransform>()->setPosition(graphicEntity, somePos);
```

It works nicely, and there is no problem in practice, because I always know an entity's "type" (physics/graphics).

However, I notice that Physic_Transform and Graphic_Transform are very similar and largely duplicated.

For the sake of good practice and maintainability, I am considering grouping them into a single Generic_Transform:

```cpp
findService<Service_Transform>()->setPosition(anyEntity, somePos);  // cool
```

However, there is a small difficulty: the physics transformation is quite complex for a child inside a compound body.

Assume that a physics body B is a child of a compound body C. In this case, B's transformation component currently has no meaning (by design).

If I want to set the child transformation with setTransformation(B, (x, y, 45deg)), my current physics engine will not set B's transformation directly; it will set C's transformation so that B's pose matches (x, y, 45deg).
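In other words, the engine solves for C's pose from B's desired world pose and B's fixed local offset inside C. A minimal sketch of that math, with invented names (Pose, Compose, Inverse, SolveCompoundPose are illustrative, not from any engine):

```cpp
#include <cassert>
#include <cmath>

// A 2D rigid pose: translation plus rotation angle (radians).
struct Pose { float x, y, angle; };

// a ∘ b: apply b in a's local frame (rotate b's offset by a's angle, then translate).
Pose Compose(const Pose& a, const Pose& b) {
    const float c = std::cos(a.angle), s = std::sin(a.angle);
    return { a.x + c * b.x - s * b.y, a.y + s * b.x + c * b.y, a.angle + b.angle };
}

// Inverse pose, so that Compose(p, Inverse(p)) is the identity.
Pose Inverse(const Pose& p) {
    const float c = std::cos(p.angle), s = std::sin(p.angle);
    return { -(c * p.x + s * p.y), -(-s * p.x + c * p.y), -p.angle };
}

// If B sits inside C at childLocal, the compound pose that puts B at
// desiredChildWorld is C = desiredChildWorld ∘ inverse(childLocal).
Pose SolveCompoundPose(const Pose& desiredChildWorld, const Pose& childLocal) {
    return Compose(desiredChildWorld, Inverse(childLocal));
}
```

Setting the child's transform then reduces to computing C's pose and writing only that.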

Thus, it is not so practical to group them, unless I code it like this (ugly, and worse performance):

```cpp
class Service_Transform {
public:
    void setPosition(Entity B, Vec2 pos) {
        bool isPhysic = ....; // check whether B is a physics entity
        if (isPhysic) { // physic
            Entity compound = ....; // find C
            ComponentPtr<Transform> cCompound = compound.get<Transform>();
            cCompound->pos = pos * someValue; // calculate some transformation for C
        } else { // graphic
            ComponentPtr<Transform> cTransform = B.get<Transform>();
            cTransform->pos = pos;
        }
    }
};
```

Should I group them into one type of component?

I probably should not group them, because their meanings and related implementations are very different, but I feel guilty ... I may be missing something.

Edit: Hmm... I am starting to think that grouping the components is OK, but I should still use two functions (from different services/systems).
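A minimal sketch of that two-service direction, with hypothetical stand-ins (Entity, Vec2, and a toy component store) replacing the real engine's machinery:

```cpp
#include <cassert>
#include <unordered_map>

// Hypothetical stand-ins for the real engine's entity/component machinery.
using Entity = int;
struct Vec2 { float x = 0.0f, y = 0.0f; };
struct Transform { Vec2 pos; };                      // the single shared component
std::unordered_map<Entity, Transform> g_transforms;  // toy component store

// Graphics writes the shared Transform directly.
struct Service_GraphicTransform {
    void setPosition(Entity e, Vec2 p) { g_transforms[e].pos = p; }
};

// Physics keeps its own rules behind the same component type: a child of a
// compound body would reposition its parent instead (branch elided here).
struct Service_PhysicTransform {
    void setPosition(Entity e, Vec2 p) {
        // if (isCompoundChild(e)) { solve for the parent's transform instead ... }
        g_transforms[e].pos = p;
    }
};
```

The component type is unified, but each service keeps its own domain-specific write logic, so the ugly physics/graphics branch never appears in a single function.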

Edit2: fixed some code (pointed out by 0r0d, thanks)

Edited by hyyou

##### Share on other sites

I don't think you should have two different transform components, no. You shouldn't implement the physics using the ECS transform component either. Physics is a vastly complex system that requires (or at least should have) a separate implementation. So optimally, you'd just use the transform component to communicate between the different systems, while keeping a separate view of the physics world:

```cpp
struct PhysicsComponent
{
    RigidBody* pBody; // pointer to a body acquired from the physics world upon initialization => could also be queried instead
};

struct PhysicsSystem
{
    void Update(double dt) // just to showcase...
    {
        world.Tick(dt);

        for (auto entity : EntitiesWithComponents<Physics, Transform>())
        {
            auto& physics = entity.GetComponent<Physics>();
            entity.GetComponent<Transform>().SetTransform(physics.pBody->QueryTransform());
        }
    }

private:
    PhysicsWorld world;
};
```

(This is just some pseudo-code to get the idea across). So essentially, Transform becomes a point of communication between different systems. What the physics system wrote in, you can later read out, e.g. in the RenderSystem; your gameplay systems can also just set the transform. Entities just register new rigid bodies with the physics world, but the world doesn't know that an entity even exists, which keeps your physics separated & more flexible. For example, it's pretty easy in this design to swap your physics for, say, Bullet at some point, while what you originally do creates a tight coupling between those independent modules.
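For instance, entity setup could look roughly like this (RigidBody, PhysicsWorld, and MakePhysicsComponent are made-up placeholders for the idea, not any real engine's API):

```cpp
#include <cassert>
#include <memory>
#include <vector>

// Placeholder physics types; a real engine (e.g. Bullet) would supply these.
struct RigidBody { float x = 0.0f, y = 0.0f; };

struct PhysicsWorld {
    std::vector<std::unique_ptr<RigidBody>> bodies;
    RigidBody* CreateBody() {
        bodies.push_back(std::make_unique<RigidBody>());
        return bodies.back().get();
    }
};

// The component only stores the handle; the world never sees an Entity,
// so swapping the physics backend doesn't touch the ECS side.
struct PhysicsComponent { RigidBody* pBody = nullptr; };

PhysicsComponent MakePhysicsComponent(PhysicsWorld& world) {
    return PhysicsComponent{ world.CreateBody() };
}
```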

As a general consensus, you should only implement the bare minimum of required functionality inside the ECS, if possible. Do not use the ECS as a basis for physics, rendering, HUD, ... but instead implement those systems separately, and use the ECS as the high-level intercommunication layer. At that, it really excels IMHO, but anything more will just reproduce many of the reasons why people despise ECS.

Hope that gives you a rough idea.

##### Share on other sites

I'm of the opinion that you should always have distinct, domain-specific representations of information.  Physics and rendering are two different domains, thus two transforms. You can split hairs over where these transforms live, and whether or not rendering/physics is part of the ECS, but regardless of where they are you should have one transform owned by "physics" and one transform owned by "rendering" (at least). Don't get distracted by the fact that they may appear similar or duplicate -- they are completely distinct conceptually. If you allow them to both own the same transform, you'll quickly find them fighting each other over even simple changes to functionality.

Don't think of data as a "point of communication". Communication is a behavior, and in the absence of code the only place behaviors exist are in assumptions and inferences. It may be fine for the rendering code to assume the transform means one thing, and for the physics code to assume the transform means something else, but when it comes to communicating that information across domain boundaries, you shouldn't force the rendering code to make assumptions about what the transform means to the physics code and vice versa. This is exactly the type of behavioral coupling you'll come to regret.

Edited by Zipster

##### Share on other sites
11 hours ago, hyyou said:


```cpp
findService<Service_Transform>()->setPosition(anyEntity);  //cool
```

I'm not clear what the above code is supposed to do.

##### Share on other sites
15 hours ago, Juliean said:

... but instead implemented those systems separately

Thanks, Juliean.

In a more real case, I structure it like this:

```
rocket       = HP + physic_owner + graphic_owner + ....
Physic body  = physic_ownee + bulletBodyAndShape_holder
Graphic body = graphic_ownee
               + ogreBody_holder (Ogre2.1 graphic body)
               + cache_renderAttribute (pos, blend_mode, layer)
               + mesh_owner
mesh         = mesh_ownee + filename
               + ogreMeshPtr (points to vertex buffer)
```

The physic_owner / graphic_owner / physic_ownee / graphic_ownee are just attach-points.

If I want to create a rocket, I tend to create 1 rocket entity attached to 2-3 new physics bodies and 2-3 new graphic bodies. The graphic bodies attach to a shared mesh. Thus, for a single rocket, I create around 5-7 entities.

I copied the idea from https://gamedev.stackexchange.com/questions/31888/in-an-entity-component-system-engine-how-do-i-deal-with-groups-of-dependent-ent, not sure if it smells...

For me, it works pretty well, but I may have overlooked something.

P.S. Your comment in another topic of mine boosted my engine's overall performance by 5%. Thanks a lot.

7 hours ago, Zipster said:

Don't think of data as a "point of communication"

That is enlightening, thanks.

@0r0d You are right. I have edited the post.

Edited by hyyou

##### Share on other sites
4 hours ago, hyyou said:

Thanks, Juliean.

In a more real case, I structure it like this:

```
rocket       = HP + physic_owner + graphic_owner + ....
Physic body  = physic_ownee + bulletBodyAndShape_holder
Graphic body = graphic_ownee
               + ogreBody_holder (Ogre2.1 graphic body)
               + cache_renderAttribute (pos, blend_mode, layer)
               + mesh_owner
mesh         = mesh_ownee + filename
               + ogreMeshPtr (points to vertex buffer)
```

The physic_owner / graphic_owner / physic_ownee / graphic_ownee are just attach-points.

If I want to create a rocket, I tend to create 1 rocket entity attached to 2-3 new physics bodies and 2-3 new graphic bodies. The graphic bodies attach to a shared mesh. Thus, for a single rocket, I create around 5-7 entities.

Are these all individual components? physic_owner, physic_ownee, HP, ...? If so, it does sound needlessly complex; as I laid out in the other post, there's really not much point in having stuff like "filename" as a component, but you should see for yourself. On the same note, having 5-7 entities for a single rocket sounds like huge overkill. Personally, I would have a rocket mesh with a Mesh & Physics component, and that's pretty much it, unless the rocket specifically needs sub-entities like a particle emitter. (That's just what I came to agree on based on 4 years of working with an ECS across different projects; you might come to a different conclusion. Though depending on your workflow/toolchain, certain things like having many components & entities can make creating new content a nightmare.)

12 hours ago, Zipster said:

Don't think of data as a "point of communication". Communication is a behavior, and in the absence of code the only place behaviors exist are in assumptions and inferences. It may be fine for the rendering code to assume the transform means one thing, and for the physics code to assume the transform means something else, but when it comes to communicating that information across domain boundaries, you shouldn't force the rendering code to make assumptions about what the transform means to the physics code and vice versa. This is exactly the type of behavioral coupling you'll come to regret.

I honestly don't agree with that statement. What's wrong with using a Transform component to communicate information between different systems that might each have their own internal transform structure? You're using data to transfer information at some point anyway, like sending information across a network, or storing and later loading a save file. There's no code inside the save file; it's just data at that point, but because you agreed on a format, it's safe to store at one point and load at a later point. That's how I see this transform example as well: you agree on a shared "format" for the world transform in a shared Transform component, and systems can read/write that transform while still using their own data internally. You can still use business logic to make the writing/reading of this transform data safer, but I don't see the benefit of code-based communication over a data "stream" of sorts.
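To illustrate what I mean by an agreed-upon data format (all names here are invented for the example):

```cpp
#include <cassert>

// The agreed-upon "format": a plain shared world-transform component.
struct Transform { float x = 0.0f, y = 0.0f, angle = 0.0f; };

// Physics' own internal representation, invisible to other systems.
struct PhysicsInternal { double px = 0.0, py = 0.0, rot = 0.0; };

// Physics publishes its internal state into the shared component...
void PublishToTransform(const PhysicsInternal& body, Transform& t) {
    t.x = static_cast<float>(body.px);
    t.y = static_cast<float>(body.py);
    t.angle = static_cast<float>(body.rot);
}

// ...and the renderer consumes only the shared component, e.g. for y-sorting,
// without ever touching PhysicsInternal.
float RenderSortKey(const Transform& t) { return t.y; }
```

Each side converts to and from the shared format, exactly like agreeing on a save-file or network format.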

##### Share on other sites
9 hours ago, Juliean said:

I honestly don't agree with that statement. What's wrong with using a Transform component to communicate information between different systems that might each have their own internal transform structure?

Actually, it seems we do agree, at least on each system having its own internal structure. I just like to think in terms of domains as opposed to systems, since I find the conceptual delineation better for determining when different representations are actually needed. However, if your systems are 1:1 with behaviors (physics, rendering, etc.), then it's essentially the same.

I'm also in full agreement with having a separate component handle the transfer of data between systems, but in that case I don't see a need for any intermediary shared state between them. I'm not even sure what purpose it would have, or if it's feasible in the first place. How would you reconcile an OOBB or capsule used by physics, with a matrix transform used by rendering, with a sphere used by gameplay, all into a single representation that makes sense? And why would you even need it if each system already has sufficient internal state to function? The purpose of the transform component in this case would just be to copy the data from one system to another and perform any necessary conversion. Instead of each system assuming internal details about the other, you've inverted your dependencies, shifted that responsibility to a higher level, and made the data relationship explicit in code for all to see.
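A sketch of that explicit, higher-level copy-and-convert step (hypothetical types: physics stores a position + angle, rendering wants a rotation matrix plus translation, and neither includes the other's headers):

```cpp
#include <cassert>
#include <cmath>

// Physics' representation of a pose (invented for the example).
struct PhysicsPose { float x, y, angle; };

// Rendering's representation: a 2x2 rotation matrix plus translation.
struct RenderMatrix { float m[2][2]; float tx, ty; };

// The conversion lives at a higher level than either system, so the
// data relationship is explicit in code rather than assumed internally.
RenderMatrix ToRenderMatrix(const PhysicsPose& p) {
    const float c = std::cos(p.angle), s = std::sin(p.angle);
    return RenderMatrix{ { { c, -s }, { s, c } }, p.x, p.y };
}
```

The copy system that runs this per entity is the inverted dependency: physics and rendering stay ignorant of each other, and only this small function knows both formats.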
