Part 1: Unity ECS - briefly about ECS


Part 1: Unity ECS - briefly about ecs
Part 2: Unity ECS - project design
Part 3: Unity ECS - operations on Entities
Part 4: Unity ECS - ECS and Jobs

A rule-of-thumb explanation: a ComponentSystem often looks like this:

public class SomeSystem : ComponentSystem
{
    private struct Group
    {
        public readonly int Length; // number of matching entities, injected for us
        public EntityArray Entities;
        public ComponentDataArray<SomeComponent> SomeComponents;
    }
    [Inject] private Group m_group; // Injects the entities that have all of the given components

    protected override void OnUpdate()
    {
        float dt = Time.deltaTime; // Nicely cached deltaTime; we're on the main thread, so we can use Unity's API
        for (int i = 0; i < m_group.Length; i++)
        {
            // operate on the data here
        }
    }

    // Called when the system was enabled (ComponentSystemBase.Enabled = true)
    protected override void OnStartRunning()
    {
        // probably some more caching for optimization, preparation for updates
    }

    // Called when the system was disabled (ComponentSystemBase.Enabled = false)
    protected override void OnStopRunning()
    {
        // probably some cleanup
    }
}

Let's take a look at this first:

struct Group
{
    public readonly int Length;
    public EntityArray Entities;
    public ComponentDataArray<Foo> Foos;
    public ComponentDataArray<Bar> Bars;
}

What is a group?

A group is a filter over all Entities in the active World that have both a Foo and a Bar component.
It's like an array of matching entities together with references to each of the given components.
So a group is like an EntityArray plus references to components; an EntityArray itself is just an array of Entities (and an entity is just an index).
A group is constructed from a set of required components and, optionally, subtractive (excluded) components. It is also kept in sync with the World.
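Subtractive requirements can be expressed right in the injected struct. A minimal sketch under the same injection API; Foo, Bar and Baz are hypothetical components used only for illustration:

```csharp
using Unity.Entities;

public class FilteredSystem : ComponentSystem
{
    // Matches entities that have Foo and Bar but NOT Baz.
    private struct Group
    {
        public readonly int Length;             // number of matching entities
        public EntityArray Entities;
        public ComponentDataArray<Foo> Foos;
        public ComponentDataArray<Bar> Bars;
        public SubtractiveComponent<Baz> NoBaz; // exclude entities carrying Baz
    }
    [Inject] private Group m_group;

    protected override void OnUpdate()
    {
        // m_group now only contains Baz-free entities
    }
}
```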
Why do we have a Length field? Isn't Foos.Length the same?

Yes, you are right! They're the same. Length is the length of the EntityArray, and therefore also the length of Foos and Bars, because each matching entity has both a Foo and a Bar component.
It's more obvious in this case:
for (int i = 0; i < m_group.Length; i++)
{
    // Using m_group.Foos.Length or m_group.Bars.Length would be a bit confusing here,
    // because we iterate over all entities and can access ANY of their components
}
To summarize - every array in an injected group has the same Length, because it's the length of Entities and each matching entity has every required component.
The separate Length field is just for convenience.

How to manage indexing? The injection magic.

for (int i = 0; i < m_group.Length; i++) // iterate over all entities in the group
{
    // It's safe to iterate like this, because every array in the group has the same length
    // and indices are also injected (synced) so you can use them exactly like this:
    var actualEntity = m_group.Entities[i];  // the Entity we're currently iterating
    var actualFoo    = m_group.Foos[i];      // Foo component "attached" to actualEntity
    var actualBar    = m_group.Bars[i];      // Bar component "attached" to actualEntity
}

How do I manage lifetime of systems?

Well, you don't really need to!
Unity takes care of that. Systems track their injected groups: a system is disabled when there are no matching Entities, and re-enabled as soon as one appears.
But if you really want to, look back up - see OnStartRunning and OnStopRunning?
I mentioned ComponentSystemBase.Enabled = true, which is probably what you are looking for. It's a property that lets you enable/disable a system manually.
Systems don't update in the order I want

Convenience attributes to the rescue!
[UpdateAfter(typeof(OtherSystem))], [UpdateBefore(typeof(OtherSystem))], [UpdateInGroup(typeof(UpdateGroup))]
where UpdateGroup is an empty class.
You can even order updates before/after Unity's phases with typeof(UnityEngine.Experimental.PlayerLoop.FixedUpdate) or the other phases in the same namespace.
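Put together, ordering a system might look like this - a sketch where SimulationGroup, InputSystem and MovementSystem are hypothetical names:

```csharp
using Unity.Entities;
using UnityEngine.Experimental.PlayerLoop;

// Empty class used purely as an update-group tag.
public class SimulationGroup { }

[UpdateInGroup(typeof(SimulationGroup))]
[UpdateAfter(typeof(InputSystem))]  // run after our (hypothetical) input system
[UpdateBefore(typeof(FixedUpdate))] // run before Unity's FixedUpdate phase
public class MovementSystem : ComponentSystem
{
    protected override void OnUpdate()
    {
        // movement logic here
    }
}
```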

How do I get a reference to another system?

You can inject it, just like a group:
 [Inject] private YourSystem yourSystem;
easy as that. You can also use
World.Active.GetExistingManager<YourSystem>()
or, if you're not sure it exists yet but it should, use
World.Active.GetOrCreateManager<YourSystem>()

Why use EntityArray in a system? What can I do with this index?

Since systems most often operate on components, not directly on entities, the question arises: "Why do I even need this index?"
Saying "it's just an index" doesn't mean it's not usable at all. It's a very important integer.
If you haven't tried Unity's ECS implementation, you probably don't know where it's needed.
Among other things, it's needed for the functionality of the EntityManager, which holds the entity data and controls adding/removing components (and much more) on a given entity.
But actually you don't want to add/remove components via the EntityManager directly.
You'd rather do it after the update so as not to break the group (otherwise you'll get an error about accessing a deallocated NativeArray),
so you want to use PostUpdateCommands (an EntityCommandBuffer). More about that in a future part of this article.
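As a sketch of what that looks like inside OnUpdate - the Healths array and the "destroy dead entities" rule are hypothetical, but the pattern of queuing structural changes through PostUpdateCommands instead of mutating via the EntityManager mid-iteration is the point:

```csharp
protected override void OnUpdate()
{
    for (int i = 0; i < m_group.Length; i++)
    {
        // Don't call EntityManager.DestroyEntity / RemoveComponent here -
        // that would invalidate the injected arrays while we iterate.
        // Queue the change instead; the buffer is played back after OnUpdate.
        if (m_group.Healths[i].Value <= 0)
            PostUpdateCommands.DestroyEntity(m_group.Entities[i]);
    }
}
```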

World vs EntityManager

EntityManager is not some weird, magic class; it's a ScriptBehaviourManager that "merges" entities and their components.
In ECS we have a lot of managers. A ComponentSystem is also a ScriptBehaviourManager!
A World holds all the managers in one piece. We could colloquially say it manages the managers.
I know what your question is - yes, we can create multiple Worlds. Sounds interesting, doesn't it? Maybe we'll take a look at that in the future.
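For illustration, this is roughly how you reach the EntityManager through the World and create an entity with it, under the same preview API (Foo is a hypothetical component):

```csharp
using Unity.Entities;

// The EntityManager is itself a manager held by the World.
var entityManager = World.Active.GetOrCreateManager<EntityManager>();

// Create an entity with a Foo component attached, then set its data.
var entity = entityManager.CreateEntity(typeof(Foo));
entityManager.SetComponentData(entity, new Foo { Value = 1 });
```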

ComponentData and SharedComponentData. What's the difference?

The difference is trivial: ComponentData is just a component, while SharedComponentData is, as the name says, a component shared between different Entities.
A very good explanation can be found in the docs:

IComponentData is appropriate for data that varies between Entities, such as storing a world position. ISharedComponentData is useful when many Entities have something in common, for example in the boid demo we instantiate many Entities from the same Prefab and thus the MeshInstanceRenderer between many boid Entities is exactly the same.

So, with the same IComponentData (e.g. Position), a change from Entity0 won't change the Position of Entity1,
but with the same SharedComponentData (e.g. a renderer), if you change the material via Entity0, it'll also change the material of Entity1.
However, you don't really want to change SharedComponents a lot - actually very rarely. More details THERE.
Well, there is one more difference - ComponentData has to be blittable.
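To make the distinction concrete, here is a sketch of both kinds of components - Velocity and TeamColor are made-up names, not part of Unity's API:

```csharp
using System;
using Unity.Entities;
using Unity.Mathematics;

// Per-entity data: must be a blittable struct (no managed references).
public struct Velocity : IComponentData
{
    public float3 Value;
}

// Shared data: stored once per unique value rather than once per entity,
// and may hold managed objects (which is why it need not be blittable).
[Serializable]
public struct TeamColor : ISharedComponentData
{
    public UnityEngine.Color Value;
}
```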
