The way I think of it is this: it has the properties of a baddie so therefore it's implied that it is a baddie and can be treated as one.
It's basically a form of duck typing, which is an alternative to using interfaces. Both are forms of polymorphism, but since we want our objects to be easily changed (essentially making them dynamically typed), we use duck typing instead of interfaces and inheritance. This also sidesteps problems like the deadly diamond and the cost of virtual function calls.
The way I would structure it is to have one thread for logic scheduling and one thread for graphics/audio/input scheduling, and then saturate the rest of the cores with worker threads. However, I would only suggest separating the logic and "not logic" threads if you have them decoupled through some sort of buffer, as you would if you were implementing a fixed-update variable-render game loop. In my mind this type of threading only works if you have that kind of architecture. Ideally this is also utilizing a sort of MVC where the logic system is completely agnostic to the "not logic" system and its implementation, which is a big deal if you plan to do anything cross platform.
Another solution for dealing with related bone transforms is to consider them as higher-level things of their own rather than an explicit parent-child relationship. Think about how a weld joint works in a physics engine. This may not always be the best way to do it but it provides an elegant solution to situations such as the presented case of the cup on the table, where neither entity should be dependent on the other.
And when it comes to using packed, unordered component pools with systems that operate on multiple component types, you should have an array acting as a layer of indirection that maps IDs to offsets within the pool. This structure can also deal with handle invalidation through "magic numbers". Again, BitSquid has a good article on this concept. If you're worried about cache coherency when it comes to these indirected accesses, my advice would be to deal with different object archetypes in heterogeneous ways. For example, if not every entity has a velocity while every entity has a position, separate your entities into "static" and "dynamic" pools. That way, your "dynamic" entities can keep their position and velocity components packed and correlated.
In my system I've been developing I do as Zipster says. There is no reason for the native code to deal with specific resources - it's all routed through script. Since I'm implementing a sort of Scheme dialect, I have hashed keywords out of the box, which work just fine. I limit the use of strings strictly to user interfaces.
My reinterpretation of Zipster's example would be:
Plus, you have to deal with the issue of animating those suckers.
If you need procedural or destructible terrain, it's much easier to polygonize voxel data with an algorithm like dual contouring (which I advocate over Marching Cubes, since it has no ambiguous corner cases and can handle sharp edges!).
If your jobs are of the kind "update culling" or "animate N entities", definitely run them on the main thread as well, especially if your frame processing cannot proceed until they complete.
Do you think it's better to run jobs on the main thread or instead spawn an extra worker thread and let the main thread sleep while there are jobs still pending?
You probably know a lot more than me, but it may be worth pointing out that every time I see someone ask about multithreading, the overwhelming reply is "Don't!" Unless you really know all the pitfalls it can bring, it can be more trouble than it's worth.
However, maybe you do know your stuff, in which case good luck!
Oh, I know what I'm getting myself into. I know all the issues with data races and whatnot across threads, so I'm going to minimize how often threads need to synchronize, and use locks properly when I do.
I gravitate towards keeping components as pure data and having systems provide implicit, automatic logic. Among other things, it makes inter-component dependencies much easier, as a system (and thus a task) requiring more than one component type will ignore entities that are inappropriate.
To use your example of a Movement component, you propose lumping position, velocity, and collision detection together. The problem with this is that it enforces rules that don't need to exist, such as "every entity with a position is movable" and "every movable entity is collidable". You should split it up into three separate components with three separate domains of data:
Position (where the entity is)
Velocity (how it's moving)
Collision (its collision shape)
The systems that you should use are:
Movement (adds velocity to position)
Collision (checks between collision shapes at positions)
As you can see, entities don't need to be movable or collidable to exist spatially; simply give them a Position component. This is useful for purely aesthetic static objects. Entities also don't need to be collidable to move, and don't need to move to be collidable. The latter is useful for static platforms and obstacles, while the former can be used for dynamic background props.
The key thing is that dependencies are resolved automatically if you have a mechanism that detects when a system's desired component group is formed or unformed on an entity, registering or removing that entity from that system accordingly.
Components should all derive from a simple base class that holds the entity the component belongs to; I'd avoid any inheritance deeper than that. You can still use inheritance when designing your entities; for example, a robot chicken class derives from a chicken class and adds a Robotic component (a bit contrived, but you get the idea).
The change in executable size will be negligible, or even beneficial if you move entity definitions into data. In terms of runtime performance, entity-component systems are easy to parallelize because, generally, each system operates on each entity independently.