
Servant of the Lord

Member Since 24 Sep 2005
Offline Last Active Yesterday, 09:48 PM

#5298415 Blending order

Posted by Servant of the Lord on 28 June 2016 - 11:21 AM

There are "order independent transparency" methods, but they are more complicated, require more resources, and have corner-cases where they give wrong results.
So, in general, you need to render opaque first.

Okay, so the basic idea is - first render all opaque, then all transparent (no difference in order of transparent) - profit?

Render all opaque (preferably front to back, for speed reasons, but it's not required).
Render all transparent objects (must be back to front, or the visuals will be wrong).
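As a rough sketch of those two sort orders (the `Object` struct and its fields are hypothetical, standing in for whatever your engine uses):

```cpp
#include <algorithm>
#include <vector>

// Hypothetical minimal object: only what's needed to show the ordering.
struct Object
{
    float distanceToCamera;
    bool isTransparent;
};

void SortForRendering(std::vector<Object> &opaque, std::vector<Object> &transparent)
{
    // Opaque: front to back (nearest first) - optional, but helps early-z rejection.
    std::sort(opaque.begin(), opaque.end(),
              [](const Object &a, const Object &b) { return a.distanceToCamera < b.distanceToCamera; });

    // Transparent: back to front (farthest first) - required for correct blending.
    std::sort(transparent.begin(), transparent.end(),
              [](const Object &a, const Object &b) { return a.distanceToCamera > b.distanceToCamera; });
}
```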

#5298396 PowerUp stuck

Posted by Servant of the Lord on 28 June 2016 - 09:47 AM


There's many ways to go about it - so I'm not fixated on maps... but if I may inquire, why don't you like maps?

I think it's because it's kind of confusing, and I really don't use it much. I am more familiar with lists, though, and even when I have the option I always opt for a list.


Certainly. It's a different kind of tool, but useful to know.

As you know, a list holds 'values' at different 'indexes'.

myList[2] = "blue"; //Assigns the value "blue" to the element located at index '2'.

Maps also hold values, but they use 'keys' instead of indices. This means, we can map any value to any key, and a key can be something other than a number.

myMap["color"] = "blue"; //Assigns the value "blue" to the key "color".

Another thing with maps is that because the map doesn't need the keys to be in linear progression (3, 4, 5, etc...), you can do things like this:

myMap[34261] = "blue"; //Assigns the value "blue" to the key 34261, *without* the map having to be over 34,000 elements long.

It's a very useful tool in many circumstances.

Here's some examples:

textureMap[textureID] = LoadTexture(...); //Maps texture IDs to actual textures.
//Stores templates for types of enemies in a map, using the enemy type name ("goblin") as a key.

//Looks up the "goblin" template, and uses that template to create a goblin enemy and map it to an enemy ID.
enemyMap[enemyID] = enemyTemplateMap["goblin"].CreateEnemy(); 
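A tiny self-contained sketch of those two properties - string keys and sparse numeric keys (the names here are made up for illustration):

```cpp
#include <cstddef>
#include <map>
#include <string>

// Demonstrates that a map stores only the keys you actually insert,
// regardless of how large the key values themselves are.
std::size_t DemoMapKeys()
{
    std::map<std::string, std::string> named;
    named["color"] = "blue"; // String key, like myMap["color"] above.

    std::map<int, std::string> sparse;
    sparse[34261] = "blue";  // One element keyed by 34261 - not a 34,000-slot array.

    return named.size() + sparse.size(); // One element in each map.
}
```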

Can you also elaborate on what you mean by "end time" and the "amount of time left"? Quite confused.

There's at least two ways to have a timer do something:

//When activated:
    powerUp.timeRemaining = 30 seconds;
    ...make paddle larger...

//Every frame:
    powerUp.timeRemaining -= deltaTime;
    if(powerUp.timeRemaining <= 0)
       ...Our timer is done, so make the paddle normal size again...

The above is useful in some situations, but normally I prefer this second method:

//When activated:
    //We store the absolute time the powerup ends, not the relative amount of time remaining...
    powerUp.endTime = (currentTime + 30 seconds);
    ...make paddle larger...

//Every frame:
    //We don't need to subtract anything anymore.
    //powerUp.timeUntilDone -= deltaTime;

    //Instead, we just check if we've passed the end time (i.e. we check if the present time is larger than the ending time).
    if(powerUp.endTime < currentTime)
       ...Our timer is done, so make the paddle normal size again...

Note: 'deltaTime' is the amount of time between the previous update and this current update. You calculate it like this:

currentFrameTime = gameTime.ElapsedGameTime.TotalSeconds;
deltaTime = (currentFrameTime - previousFrameTime);
previousFrameTime = currentFrameTime;
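As a compilable sketch of that second method (the `PowerUp` struct and function names here are made up for illustration, not from any particular framework):

```cpp
struct PowerUp
{
    double endTime = 0.0; // Absolute time (in seconds) when the effect wears off.
};

// When the powerup is picked up: record the absolute time it should end.
void Activate(PowerUp &powerUp, double currentTime, double duration)
{
    powerUp.endTime = currentTime + duration;
}

// Every frame: no subtraction needed, just compare against the clock.
bool IsExpired(const PowerUp &powerUp, double currentTime)
{
    return powerUp.endTime < currentTime;
}
```

Because the end time is absolute, nothing needs updating every frame; expiry falls out of a single comparison.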

#5298312 Is using one the switch statement better then using multiple if statements?

Posted by Servant of the Lord on 27 June 2016 - 06:09 PM

But really that is quite pointless; I think we can all agree that when talking about switch, we are talking about switch with "break" statements. Missing breaks are, AFAIK, mostly bugs, or shorthands for reducing duplicate code when blocks are shared.

Or to trip up students with Duff's Device.  :lol: 


If you are using switch() statements for micro-optimizations, there are other tricks to be aware of also; putting your more-frequently used branches closer to the top of the switch() supposedly helps, though I've never tried it.

Does it? I've heard that this can help for regular if/else chains (on said "dumb" compilers), but for switch all I've heard is that you should put labels in order, like always go 0 on top to 10 on bottom, ideally without holes (so 0..10 or 10..0 is optimal, "10, 5, 9, 1" not so much). The reason I've read is that this makes it much easier for the compiler to calculate where to jump to, though I believe any compiler where if/else can be compiled to the same assembly as switch should be able to figure out the ordering on its own.

I've heard both, but have tested neither.  :) 

Maybe the assumption is that the ones near the top (if the compiler doesn't reorder them) are more likely to have their instructions already loaded into the instruction cache? I have no clue; in either case, I'd rather optimize at the function and system level, and if I have to do micro-optimizations (which I never have needed to do), I'd use explicit compiler intrinsics and play with PGO to see if that provides any tangible benefits.

#5298288 Spreading value x randomly between x amount of elements

Posted by Servant of the Lord on 27 June 2016 - 03:40 PM

Here's my approach. I shuffle the order the elements are visited, to alleviate bias towards the end of the vector.
(The bias is still there for whatever elements I give first shot at the values, but those elements are randomly distributed throughout the vector instead of grouped by order of first-come-first-serve)
I also applied a little (optional) fudgery to the maximum amount assigned each element, so the results aren't quite as extreme.

#include <iostream>
#include <vector>
#include <random>    //For std::mt19937, std::random_device, and std::uniform_int_distribution
#include <numeric>   //For std::iota
#include <algorithm> //For std::shuffle

void FillBuckets(std::vector<int> &buckets, const int amountToDistribute)
{
	std::mt19937 randomGenerator(std::random_device{}());

	//We generate some indices to fill the bucket elements in a random order.
	std::vector<size_t> bucketIndices(buckets.size());
	std::iota(begin(bucketIndices), end(bucketIndices), 0);
	std::shuffle(begin(bucketIndices), end(bucketIndices), randomGenerator);

	int bucketsRemaining = static_cast<int>(bucketIndices.size());
	int amountRemaining = amountToDistribute;

	for(size_t index : bucketIndices)
	{
		int amountToGive = 0;

		//If this isn't the last bucket, take a random amount of the remaining value.
		if(bucketsRemaining > 1)
		{
			//Balances out the numbers a bit more, so the first few buckets don't steal everything.
			//This means, if there are two buckets remaining, one of the buckets can take 100%.
			//If there are three buckets remaining, the most each bucket can take is 50%.
			//If there are four buckets remaining, the most each bucket can take is 33%, and so on.
			int maxToGive = (amountRemaining / (bucketsRemaining-1));
			amountToGive = std::uniform_int_distribution<int>(0, maxToGive)(randomGenerator);
		}
		else //If this IS the last bucket, just take everything that remains.
		{
			amountToGive = amountRemaining;
		}

		buckets[index] = amountToGive;
		amountRemaining -= amountToGive;
		--bucketsRemaining;
	}
}

int main()
{
	std::vector<int> buckets(10);
	FillBuckets(buckets, 100);

	std::cout << "Result: ";
	for(int amount : buckets)
		std::cout << amount << " ";
	std::cout << std::endl;

	return 0;
}

#5298264 Is using one the switch statement better then using multiple if statements?

Posted by Servant of the Lord on 27 June 2016 - 11:56 AM


I cannot spot when and how switch can have outperformed proper conditionals in any way on any platform - and by my logic I conclude it cannot.

The reason has already been covered in this very thread: switches are easy to convert to lookup tables in machine code. Conditional statements are less easy. So in the common case you will get better machine code generated for a switch than for an if/else ladder. Some compilers are better at this than others.

And the reason it is easier for compilers to optimize is that a switch gives the compiler more information to work with, so it can more easily detect common patterns of behavior that it is familiar with and knows optimization tricks for.

For the same reason, in C++, these two do the same logic:

for(size_t i = 0; i < array.size(); ++i) //Regular 'for' statement
{
   Element &element = array[i];
   //...
}

for(Element &element : array) //Range-for statement
{
   //...
}

...but besides doing the same logic, the second version is easier for the compiler to optimize, because the structure of the code itself communicates information to the compiler. The second version guarantees to the compiler that every element will be visited only once, that the elements will be traversed in a specific order, and that all the elements are known in advance - the first version guarantees none of those things.


In simple cases, the compiler will likely optimize them both the same, but in more complex cases, the compiler may not be able to figure out the optimization of the first example, and so may fall back to slightly lesser optimizations. This may result in super teeny-tiny gains or losses in performance, which 99.999% of the time don't matter.


Basically, the structure of our code becomes contracts to the compiler making guarantees that help the compiler guide the optimizations. Different code structures make different guarantees.


else-if() chains make different guarantees than a switch() does, and the extra information switch() communicates can help the compiler more easily recognize certain types of optimizations that might, in some cases, be harder for it to detect in else-if() chains.


But if the OP is asking "Which one should I use?", then the usual criteria apply: use whatever is cleaner/clearer to whoever has to read, expand, and maintain the code, and don't optimize prematurely. When it comes time to optimize, profile first to find out what is slow, then focus on the critical parts, remember that architectural macro-optimizations benefit way more than function-level micro-optimizations, and finally - finally, finally - micro-optimize only the areas that require it.


In my own code, I tend to find else-if() chains more visually appealing in some cases, and switch()'s cleaner in other cases, and so have plenty of each.

And every switch statement used in my code was chosen - 100% of the time - because it makes the code clearer, not for any optimization purposes.


If you are using switch() statements for micro-optimizations, there are other tricks to be aware of also; putting your more-frequently used branches closer to the top of the switch() supposedly helps, though I've never tried it. When you get to that level of micro-optimizing though, that's when something like profile-guided-optimizations provide the compiler with even better information about the real-world usage your code would be put through.

#5298158 PowerUp stuck

Posted by Servant of the Lord on 26 June 2016 - 05:06 PM

What about storing the "end time" (which only changes when you pick up a powerup) instead of storing the "amount of time left" (which has to be updated every frame)?


Then, store it in a map.


Something like this: (pseudocode)

PowerUpType { LargerPaddle, SmallerPaddle, FasterPaddleMovement, SlowerBalls}

Map<PowerUpType, Time> powerUpActivationMap;

void Game::PickUpPowerUp(PowerUpType type, Time powerUpDuration)
    powerUpActivationMap[type] = (currentTime() + powerUpDuration);

bool Game::PowerUpIsActive(PowerUpType type)
    return (powerUpActivationMap[type] > currentTime());

int Game::GetBallSpeed()
     if(PowerUpIsActive(SlowerBalls))
          return BallSpeed_Slower;
     return BallSpeed_Regular;

int Game::GetPaddleSize()
     bool largerPaddle = PowerUpIsActive(LargerPaddle);
     bool smallerPaddle = PowerUpIsActive(SmallerPaddle);
     if(largerPaddle AND smallerPaddle) return PaddleSize_Regular;
     if(largerPaddle)  return PaddleSize_Larger;
     if(smallerPaddle) return PaddleSize_Smaller;

     return PaddleSize_Regular;
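The pseudocode above could be fleshed out into real C++ roughly like this - a sketch, where representing time as seconds in a double is my assumption, not something from the original:

```cpp
#include <map>

enum class PowerUpType { LargerPaddle, SmallerPaddle, FasterPaddleMovement, SlowerBalls };

class PowerUps
{
public:
    // When picked up: store the absolute time the effect ends.
    void PickUp(PowerUpType type, double currentTime, double duration)
    {
        endTimes[type] = currentTime + duration;
    }

    // Active if we recorded an end time and haven't passed it yet.
    bool IsActive(PowerUpType type, double currentTime) const
    {
        auto it = endTimes.find(type); // find() avoids inserting a default entry.
        return (it != endTimes.end()) && (it->second > currentTime);
    }

private:
    std::map<PowerUpType, double> endTimes; // type -> absolute end time, in seconds.
};
```

Using find() rather than operator[] in the query keeps IsActive() const and avoids silently growing the map for powerups that were never picked up.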

#5298023 Including from the Standard C++ Library

Posted by Servant of the Lord on 25 June 2016 - 02:07 PM

When we include libraries from the standard library (like cmath or cstring), what are we actually including?

First, understand that using different brackets in #include "header" and #include <header> only affects the order that the compiler looks through different directories when searching for the files.
Second, understand that you can use different file extensions (in your own headers) and everything will still work fine (as long as the file actually exists with that extension): myHeader, myHeader.h, myHeader.hpp, myHeader.pch, myHeader.meow, whatever - they are just filenames. Occasionally I've even had to #include a .cpp file from one .cpp to another. They're just text files.

Finally, what *actually* happens varies from compiler to compiler, because C++ only enforces behavior, not implementation details. Also, the standard library is (mostly) separate from the compiler, even though it is shipped with compilers. For example, Visual Studio and GCC use *different* standard library implementations, even though they behave the same and have the same interface. (GCC defaults to using its own open-source libstdc++ library, Clang defaults to using its own open-source libc++ library, and Visual Studio defaults to using a custom-modified version of the third-party proprietary Dinkumware one. There are also other implementations of the standard library available.)

The header files actually exist (usually), and (on Windows) they are usually located where you installed your compiler, unless you manually installed the libraries somewhere else.

In my QtCreator install of MinGW, on Win10, I find <cmath> located in:

At that location, I find a text file named 'cmath' with no extension (I also find 'cstring' and 'iostream' and 'vector' and so on - text files without filename extensions).
It #includes "math.h", undefines some math.h defines (like cos(), tan(), etc...), reimplements them as real functions (instead of macros), and puts them inside of the std namespace.


Depending on the compiler and header, the compiler can also just pretend behind the scenes that the header file really exists, but actually do something else, as long as the behavior is the same. For example, why recompile the standard library headers over and over again? Maybe the compiler just holds them pre-compiled, if it can detect that you aren't doing something that'd require it to recompile the header (like #defining certain macros).

I'm assuming they aren't header files since there is no ".h" extension. Also, how can we just include files and be able to access functions from that library? Are the function definitions inside the included file? I thought it was good practice to separate declaration from definition?


First, a huge amount of the standard library classes and functions are templates. Templates in C++ have to be in headers, declaration-and-definition together. It's an unfortunate consequence of how templates work. So for those classes, whether standard library or 3rd party library or your own code, the header is plenty enough, no .cpp needed.


Second, for the functions that aren't templated, and have concrete definitions, they basically fall into several categories: If they are inline, or if the compiler wants to inline them for performance reasons, the compiler will generate the code inside your project's .exe - often those kinds of functions are only a few lines of code, so the performance gains outweigh the minor costs. This is common with compiler intrinsic functions.

For other functions, they can be handled just like regular static libraries (compiled into your code, but not inline), or dynamic libraries.


If they are linked to as dynamic libraries, then you have to make sure that .DLL actually is available for your project to find, when you run your executable. For Microsoft, the standard library DLLs are just shipped with Windows as part of the OS or as part of the Visual Studio runtimes.


For me, using GCC/MinGW, I have to include the standard library DLLs with my .exe. GCC's is named libstdc++-6.dll, which in turn calls functions from some of Microsoft's libraries, since it has to function on Windows and some code has to run through the OS (like loading files).


You can also tell your compiler to statically include the standard library inside your .exe, but that isn't without tradeoffs.

#5297921 Converting a desktop game into mobile game- Illegal?

Posted by Servant of the Lord on 24 June 2016 - 03:39 PM

Basically, you can make similar types of games, but you can't copy the artwork, name, code, audio, levels, or even the same sequence and nature of upgrades and unlocks.


Ideas aren't copyrightable, but implementations of ideas are. You can use the same ideas, but not the implementations (numbers, patterns, visuals, text, etc...). The closer you get to copying someone else's implementation, the more likely you are infringing.

#5297591 Game Object Update Order

Posted by Servant of the Lord on 22 June 2016 - 10:10 AM

Thank you all for your responses. I wanted to ask why those of you who stated that a bucketed approach would be better than a dependency graph came to that conclusion? It seems that it has a lot of negatives to it. Like, what if you have two logical components where one relies on the other, or two logical components where one has to run before a renderable component, and the second one has to run after the render component? You then have to have two logic bucket groups that will only be used by one or two components. I could understand wanting to just take the easy way and not worry about all of the dependencies in the engine, but would anyone mind telling me the negatives of using dependency graphs?


There are a thousand different ways to program "a game". But we eliminate many of those ways as unnecessary complexity or unnecessarily slow, when we start talking about a specific game. It doesn't make too much sense to argue about which way is better for "games in general".


Until you know the specifics of your game, trying to code for every "maybe" that exists adds complexity and unnecessary abstraction. Abstraction, when built up layer by layer, has reasoning costs (it becomes slightly harder to reason about the entire project, and slightly harder for others to learn the engine, and those costs can add up project-wide). It has benefits but also costs - even if those costs are small, it's wasted cost if you don't actually benefit from it (rather than theoretical future benefit).


Bah, this makes it sound like I'm saying "don't code for the future", but I'm not. You definitely do want to code for the future by coding cleanly and simply, so it is easy to rapidly modify, debug, and expand for this specific game and any future games you later re-purpose the code for. You also want to code for the future of the known needs of the specific project you are really working on.

What I'm advocating against is pre-emptively building more complex architectures before you actually know whether you even need them - more classes and functions and lines of code to write, reason about, maintain, document, and for others to learn. If it seems likely you'll actually need it for *this* project, by all means plan for it! But if it seems it might theoretically be needed for some vague future project, then you can easily tweak or rewrite part of the architecture for that project when you are actually making that project (your code is modular and cleanly written, right?).


Unless your game requires motorcycles on trains, it is better for your code to decide, "you can't have motorcycles on trains".

Engines that can "do anything" also mean that they do nothing well, and they have to have yet more code written to actually make the abstract fluff do something concrete.


Who cares if a bucket group is only used by one entity? It has minimal code complexity and minimal performance costs, while fitting into an established paradigm already used in your architecture. It is very easy (development-cost wise), very simple (comprehension-wise), very fast (execution-wise), and very clean (for code legibility and future changes), to create buckets that run before and after rendering. Or before and after physics. Or before and after... whatever.


I'm not against dependency graphs. For some projects or sub-portions of projects it makes sense. I'm not pushing buckets over dependency graphs, I'm pushing for comprehending the requirements of a concrete project, and not being too enamored by (the understandable programmer appeal of) intricate fits-every-situation solutions. I'm pushing for firmly deciding "my engine doesn't do X" as a way to make cleaner, faster, more streamlined engines that are actually usable. With the understanding that, if you eventually find out that you really do need X for a later project, you can add it in or rewrite the isolated module of your architecture that prevented it.

#5297531 Game Object Update Order

Posted by Servant of the Lord on 21 June 2016 - 10:37 PM

Well the reason I don't want to do a hard coded version like
for( auto* MovementComponent ..){ }
for( auto* SkinnedMeshComponent){ }
is that in some cases, a skinned mesh component isn't being modified by a movementcomponent. Or there are hundreds of different scenarios, like maybe update this CrowdComponent, then update this movement component.. then do a post update of componentY and X.

If SkinnedMeshComponents can sometimes be modified by a MovementComponent, and MovementComponents are never modified by a SkinnedMeshComponent, then what is the harm in processing them in the KISS way?

Well, off the top of my head I can think of two dependencies that might be in an engine.

Why are you worrying about stuff that might be needed? If you know it's going to be needed, add it. If you're not sure, then adding "support" for it means trading development time for something you might not use. That's like paying money for a tailor-made suit on the off chance you get invited to a wedding next year - you might attend zero weddings, making it a wasted investment, and even if you do need to attend one, by the time you do need a suit, the one you originally had made might not fit your actual requirements.
If you're making an engine "that'll support many different types of games", your engine likely will be used for zero finished games, unfortunately.  :(
(unless you've already made several engines used for several completed and complex games)

So I don't want a hardcoded scenario of spaghetti code and am leaning more towards a generic approach

Hardcoded isn't spaghetti code by default. You can even more easily have generic spaghetti.
There's a balance between concrete and generic, and there's a cost to making something generic. You should code cleanly, and make things generic to the extent that is actually useful, IMO.
The amount of genericity that I personally find useful depends on what layer of the architecture I'm working on. I find at lower levels (like utility functions, containers, algorithms, and so on), more genericity is useful, but at higher levels, I want to go increasingly concrete.
For me, it's easy to make something generic. It's too easy to make something over-generic. And if I make something over-generic, I basically have to throw it away and redesign from scratch. But if I write something concretely, it can be easily made more generic (if it was coded cleanly), and in a more knowledgeable way (since the concrete version helps you understand the realities of the actual requirements).

[example A] The movement components job is to update the SkinnedMeshComponents transform prior to animation evaluation. Because of this, the MovementComponent should run prior to the SkinnedMeshComponent.
[example B] Another dependency is based on children. If one component, lets say a player, is on a vehicle.. the vehicle should update first.

You have to decide what is actually needed for the actual game you are actually making. Once you know your requirements, design decisions are very straightforward. If you later on decide you want to use the same code-base for another game, it's easy to fork the codebase, and customize it or generalize it as (actually) needed, where it makes sense.
For example, I can think of at least three valid ways of solving the problem you just mentioned. The "best" way depends on the actual nature of the actual game being actually made.
- Make it order-independent until a post-update step is done:
  - Update the entities (player and vehicle) in any order.
  - After all the entities have updated their relative positions, run through the list a final time to calculate their absolute positions. Since you already ran through the list once, you could create an array of pointers to every child entity, and just update that list rather than every entity.
- Sort based on dependencies:
  - In Example A, this means just process all the MovementComponents first.
  - In Example B, this means make sure parents occur earlier in the list of elements than their children.
- Separate into buckets of generations:
  - Update all entities that have zero parents. (Probably 95% of your entities)
  - Update all entities that have only one parent. (Almost everything else)
  - Update all entities that are two parents deep. (What, like three entities?)

This is what Uncharted 2 did (Slide 74 in the powerpoint, but the slide itself is labeled '38'), and they only needed to go to a depth of four (Gun held by Player standing on Box located on Train).
Rather than worry about parent depths, they just used 'buckets' and crammed different types of entities into different buckets.
In their game, they knew (as a requirement) that a vehicle will never be riding another vehicle (while it's possible in reality for motorcycles to be driven on trains, their game didn't need that). Nor do people ride other people (while piggy-back rides are possible in real life, saying 'not in this game' simplified and solidified their engine architecture).
So they used four buckets:
Vehicle (e.g. train), Object (e.g. box), Character (e.g. player or NPC), and Prop (e.g. weapon)
Then they just updated the Vehicle bucket first, then the Object bucket, and so on. Because they knew their requirements (and weren't making a vague "any situation" engine that is useful for no concrete game), the solution became really simple, without having to manage complex dependencies.
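The bucket scheme can be sketched in a few lines (illustrative names only, not Uncharted's actual code):

```cpp
#include <functional>
#include <vector>

// Each bucket is just a list of update callbacks; buckets run in a fixed order.
struct BucketedUpdater
{
    // Order matters: vehicles first, then objects, characters, and props.
    std::vector<std::function<void()>> vehicles, objects, characters, props;

    void UpdateAll()
    {
        for (auto &update : vehicles)   update(); // e.g. train
        for (auto &update : objects)    update(); // e.g. box on the train
        for (auto &update : characters) update(); // e.g. player standing on the box
        for (auto &update : props)      update(); // e.g. gun held by the player
    }
};
```

No dependency graph needed: the fixed bucket order itself encodes the only parent/child relationships the game allows.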

#5297500 When you realize how dumb a bug is...

Posted by Servant of the Lord on 21 June 2016 - 04:34 PM

Qt uses a custom CSS-inspired DSL to theme its widgets.


For a long time (a couple years - it was very low-priority on my TODO list) Qt has been emitting warning after warning saying, Unknown property margins

But a few days ago I finally decided to hunt down the problem.


The warning seems pretty self-explanatory: Unknown property margins

So I figured out what widgets were emitting the warning, and made sure all their margins were set properly...


The warnings were still getting emitted. Finally (after debugging and glancing through Qt's sourcecode), I realized that what it was really saying was, 'margins' is an unknown property. The Qt stylesheet property is 'margin' without the 's'.  :lol:


All they had to do was add quotes around the variable part of the error messages.... Unknown property 'margins' with quotes makes it so much clearer.

#5297406 Free game analytics

Posted by Servant of the Lord on 20 June 2016 - 09:34 PM

Worse comes to worst, I can just have the game store the data in a file on the player's computer and then have them email it to me.


Or just have your game auto-email it to you.


Here's what one GameDevver did. Another, IIRC, used Google Analytics and pretended that his smartphone app was a webpage.

#5297405 Should all gameobjects be stored in the same data structure?

Posted by Servant of the Lord on 20 June 2016 - 09:24 PM

Only if the objects are similar in data and go through similar logic paths in the code. The less similar they are (in data and usage), the less likely you want them in the same structure.


So basically, it'd vary from game to game.  :wink:


Care to post the structs/classes for the three kinds of objects? Do they share a base class? Do they share some variables in common (e.g. collision rects)? Do they share some code in common (e.g. collision testing)?


How much do they have in common? Sometimes you want:

ProcessCollisions(&player, allGameObjects);

Other times this is better:

ProcessCollisions(&player, allEnemies, &Enemy::collisionRect,    [](Enemy &enemyCollidedWith) { ... });
ProcessCollisions(&player, allWalls,   &Wall::collisionRect,     [](Wall &wallCollidedWith)   { ... });
ProcessCollisions(&player, allItems,   &Item::collisionRect,     [](Item &itemCollidedWith)   { ... });
^ templated func             pointer-to-member-var ^                ^ lambda callback

How much varies, and how much is similar, affects whether you should parameterize your data and logic, or treat them as identical.
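A compilable sketch of such a templated collision helper - the Rect type, its intersection test, and all names here are assumptions for illustration:

```cpp
#include <vector>

// Hypothetical axis-aligned rectangle with a simple overlap test.
struct Rect
{
    float x = 0, y = 0, w = 0, h = 0;
    bool Intersects(const Rect &o) const
    {
        return x < o.x + o.w && o.x < x + w && y < o.y + o.h && o.y < y + h;
    }
};

struct Player { Rect collisionRect; };

// Templated over the object type: a pointer-to-member selects which rect to
// test, and the callback is invoked for each object the player overlaps.
template <typename Object, typename Callback>
void ProcessCollisions(Player *player, std::vector<Object> &objects,
                       Rect Object::*rect, Callback onCollision)
{
    for (Object &object : objects)
        if (player->collisionRect.Intersects(object.*rect))
            onCollision(object);
}
```

The pointer-to-member parameter is what lets one function body work for Enemy, Wall, and Item alike, while each call site supplies its own typed callback.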

#5297142 Generating Provinces From A Map

Posted by Servant of the Lord on 18 June 2016 - 03:33 PM

If you have heightmap data for mountains, and know where your lakes and rivers are, elevation changes and rivers help define national boundaries in real life.


Further, civilization tends to grow most around waterways because bodies of water are natural "highways" of commerce, and provide easy access to food (fish) and fertile soil, while also helping make the surrounding climate slightly less extreme (cooling off overly-heated areas, and warming up overly cold areas). This is important when placing your more-developed cities.

#5297096 Problem on referencing a vector of derived class

Posted by Servant of the Lord on 17 June 2016 - 11:31 PM

I guess... be generic when you can and it promotes code re-use, and be specific if it feels like you're trying to cram something into a pattern that doesn't fit - or rethink your design.

To me, it's easy to make things generic. So easy, that I often over-abstract. I notice this in some 3rd party libraries and engines too, like there's no concrete goal. Claiming "reusable code" can be a siren song for over-engineering - I'd rather have clean but concrete code that I can easily abstract as necessary.
So I now tend to be more concrete and abstract only as I see a pressing reason to (depending on what layer of code I'm working on - utility and helper stuff tends to be more abstract, for practical reasons), because it's always easy for me to make something more abstract, and harder to make an abstract thing concrete without rewriting it from scratch.

This smells heavily of unneeded abstraction, so in my own code, I'd need to justify it to myself before adding abstractions. I'm just not convinced of the value of it.
However, you obviously don't need to justify your code to me. :)

Since it was suggested to the OP, and since he seems eager to adopt it, I still question whether it's a good thing or if it's over-engineering. He seems to want to treat them all identically, but I don't see a need (or even a valuable enough benefit that outweighs the cons of over-abstracting) to treat them all identically, and by default, I feel they shouldn't be treated identically, because they aren't actually related.
Each component type is processed by a different system, and each system does different work; the only thing they have in common - as far as I can see - is the name of the design pattern we are applying. And a shared design pattern isn't enough motivation for me to use inheritance: functions are a design pattern, but we don't put unrelated functions into a map together. Typically, we only treat functions as interchangeable when they are genuinely interchangeable. There are exceptions, of course, but by default I question it as a big-picture engineering code smell.
Flyweights are also a design pattern, but unrelated flyweights shouldn't be put in a map together without a compelling reason - though I see people often doing this in their "asset managers", and I occasionally fall in the trap of doing it myself before I realize it doesn't make sense. 

I guess what I'm getting at, is that treating things in an abstract way clearly has benefits [...]

And detriments. Programming is always about tradeoffs. Things can be too concrete, and they can be too abstract. Since it's easier (at least for me) to make concrete things abstract than to make abstract things concrete, by starting concrete and getting more abstract as needed, I save myself work.
There's definitely a balance to it; especially with game engines, where the internet is a vast graveyard of people writing engines in such generic ways that not a single completed game ever gets created with them.
I find abstractions easy to write and easy to over-engineer, so after doing abstract-anything-that-can-be-abstracted style of coding, I realized I wasn't benefiting much from it, so now I tone it back and ask myself, "What abstractions actually are worth the added code complexity?". Because they seem to promise more benefits than they actually deliver, I'm extra wary about them - despite them being pleasantly easy to write and appealing to me as a programmer; I want my mind to set up warning bells when I see abstractions without significant reason. (Abstractions at lower levels of code make sense for me - helper functions and algorithms and containers and so on, but when I start getting to the glue that holds everything together, I want to hone it and have something solid that targets an actual game).
It might just come down to different engineering styles and the different lens we are viewing our respective projects through.
I do like your idea of separating the component data ownership away from the systems, and will have to give that some thought.


Thanks for discussing this! It's an enjoyable topic for me, and large-scale system engineering is an area I still have a lot of room for growth in; sorry for being a tad pushy and too argumentative.  :)