Community Reputation

2365 Excellent

About Zipster

  1. Cache Coherency and Object Update

    Why should such an operation be the most common? If a system needs to read data from N components in order to update one, then so be it. There's no conclusion you can draw from this. Even from a performance perspective, there will be little difference on modern caches between accessing multiple arrays in sequence versus accessing their elements interleaved. And if you ever reach the scale where it matters, you'll have much bigger issues to deal with. What you should be more worried about is handicapping the design of your components and systems to meet unnecessary requirements, and making it that much harder to actually implement non-trivial gameplay with the ECS. It defeats the purpose of having one if you can't do what you want with it, and it ultimately causes the kind of coupled spaghetti-code mess you were trying to avoid in the first place.
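To make the point concrete, here's a minimal sketch of a system reading one SoA component array to update another. All the names (`Positions`, `Velocities`, `applyDrift`) are illustrative, not from any real ECS; the point is just that the CPU sees a couple of linear streams either way:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical SoA component storage: one contiguous array per component.
struct Positions  { std::vector<float> x;  };
struct Velocities { std::vector<float> dx; };

// A system reading one component array to update another. Each array is
// walked front to back, so the hardware prefetcher sees two linear
// streams -- much like iterating a single interleaved array.
void applyDrift(Positions& pos, const Velocities& vel, float dt) {
    for (std::size_t i = 0; i < pos.x.size(); ++i)
        pos.x[i] += vel.dx[i] * dt;
}
```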
  2. Cache Coherency and Object Update

    Data access patterns in an ECS are highly unpredictable, since each entity can have different behaviors that require different data at different times. This is entirely at odds with data-oriented design. You'll go mad trying to reconcile the two.
  3. Fixed update in game loop

    Actual simulation time is ultimately unpredictable though, especially if there's the occasional spike, so you really need a way to handle this at runtime. What you'll want to do is keep track of the error between the simulation time and the wall-clock time and either "skip" the simulation ahead, or add a few extra updates here and there, depending on actual measurements of the simulation and how it relates to the timestep (maybe keeping a small history to deduce short-term trends).
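A rough sketch of the idea, with made-up names (`FixedStepper`, `maxLag`) and a deliberately simple "skip ahead" policy rather than the trend-tracking version described above:

```cpp
#include <cassert>

// Minimal fixed-timestep accumulator. Wall-clock frame time is fed in,
// whole simulation steps are consumed, and the leftover "error" carries
// into the next frame. A lag threshold snaps the simulation forward
// after a large spike instead of spiraling into ever more catch-up
// updates.
struct FixedStepper {
    double step;                 // fixed simulation timestep, e.g. 1.0 / 60.0
    double maxLag;               // beyond this, skip ahead instead of catching up
    double accumulator = 0.0;    // running error between sim time and wall time

    // Returns how many simulation updates to run this frame.
    int advance(double frameTime) {
        accumulator += frameTime;
        if (accumulator > maxLag)     // spike: collapse the backlog to one step
            accumulator = step;
        int updates = 0;
        while (accumulator >= step) {
            accumulator -= step;
            ++updates;
        }
        return updates;
    }
};
```

A fuller version would keep the short history of measured update costs mentioned above and adapt `maxLag` from it.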
  4. Code generators are typically used in the context of domain-specific languages, so anything you find would likely be tied to a specific language. It would probably be easier to write your own simple one from scratch rather than try and adapt another one to your needs. Luckily there are some excellent references on DSLs out there.
  5. Actually it seems we do agree, at least on each system having its own internal structure. I just like to think in terms of domains as opposed to systems, since I find the conceptual delineation better for determining when different representations are actually needed. However, if your systems are 1:1 with behaviors (physics, rendering, etc.), then it's essentially the same.

    I'm also in full agreement with having a separate component handle the transfer of data between systems, but in that case I don't see a need for any intermediary shared state between them. I'm not even sure what purpose it would have, or whether it's feasible in the first place. How would you reconcile an OOBB or capsule used by physics, a matrix transform used by rendering, and a sphere used by gameplay, all into a single representation that makes sense? And why would you even need it if each system already has sufficient internal state to function? The purpose of the transform component in this case would just be to copy the data from one system to another and perform any necessary conversion. Instead of each system assuming internal details about the other, you've inverted your dependencies, shifted that responsibility to a higher level, and made the data relationship explicit in code for all to see.
  6. I'm of the opinion that you should always have distinct, domain-specific representations of information. Physics and rendering are two different domains, thus two transforms. You can split hairs over where these transforms live, and whether or not rendering/physics is part of the ECS, but regardless of where they are you should have one transform owned by "physics" and one transform owned by "rendering" (at least). Don't get distracted by the fact that they may appear similar or duplicate -- they are completely distinct conceptually. If you allow them to both own the same transform, you'll quickly find them fighting each other over even simple changes to functionality.

    Don't think of data as a "point of communication". Communication is a behavior, and in the absence of code the only place behaviors exist is in assumptions and inferences. It may be fine for the rendering code to assume the transform means one thing, and for the physics code to assume the transform means something else, but when it comes to communicating that information across domain boundaries, you shouldn't force the rendering code to make assumptions about what the transform means to the physics code, and vice versa. This is exactly the type of behavioral coupling you'll come to regret.
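A tiny sketch of the two-transforms idea. Everything here is hypothetical (the struct names, the units, the conversion factor); the point is that the only coupling is one explicit copy-and-convert step, and neither domain knows the other exists:

```cpp
#include <cassert>

struct PhysicsBody {            // physics domain: 2D position in meters
    float x = 0.0f, y = 0.0f;
};

struct RenderNode {             // rendering domain: 2D position in pixels
    float px = 0.0f, py = 0.0f;
};

// The explicit "transfer" step: copy the data across the domain
// boundary and perform the unit conversion. This is the only place
// the two representations meet.
void syncRenderFromPhysics(const PhysicsBody& body, RenderNode& node,
                           float pixelsPerMeter) {
    node.px = body.x * pixelsPerMeter;
    node.py = body.y * pixelsPerMeter;
}
```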
  7. C++ Code Review? Please?

    I look at it from the point of view of someone who has to work with code written by others on a day-to-day basis (as many of us do), and who ultimately shares the burden of any problems or issues that arise from it. And not just code written by other team members, but also code written by my past self. Some of the trickiest bugs I've encountered were caused by code written with an incomplete or incorrect understanding of core concepts -- ownership being among the most common. And as you've pointed out, these can be insidious bugs that hide in code for a long time before rearing their ugly heads at the worst possible times.

    So perhaps I'm just being cynical, but I don't mind if someone is discouraged from learning C++. Do we really have any control over that anyway? I'd rather make sure that if they ever do reach the point of working with others, they're not adding to the team's burden, increasing technical debt, or otherwise causing problems that could have been avoided had they a better understanding of what they were doing.
  8. C++ Code Review? Please?

    C++ isn't designed to deal with the consequences of shared-only ownership semantics like C# and Python are. Imagine what would happen if you introduced a circular reference (either intentionally or otherwise) while taking this advice. I've seen it take days for even seasoned engineers to find the cycles, let alone someone new to the language. There are plenty of concepts I consider safe to simplify for newcomers to C++, but dynamic memory and ownership semantics just aren't among them. If you're going to play with fire, you have to know what you're doing.
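Here's a self-contained demonstration of the cycle hazard (struct and function names are made up for the example). Two objects holding `shared_ptr`s to each other keep their reference counts above zero forever, so neither destructor ever runs; breaking one edge with `weak_ptr` restores cleanup:

```cpp
#include <cassert>
#include <memory>

struct Leaky {
    std::shared_ptr<Leaky> other;   // owning edge in both directions -> cycle
};

struct Safe {
    std::shared_ptr<Safe> other;    // one owning edge...
    std::weak_ptr<Safe> back;       // ...and a non-owning back edge, no cycle
};

// Observe the use_count through a weak_ptr after the last external
// shared_ptr is dropped: nonzero means the objects leaked.
long leakedCount() {
    std::weak_ptr<Leaky> probe;
    {
        auto a = std::make_shared<Leaky>();
        auto b = std::make_shared<Leaky>();
        a->other = b;
        b->other = a;               // cycle: counts can never reach zero
        probe = a;
    }
    return probe.use_count();       // still 1: a is kept alive by b->other
}

long cleanedCount() {
    std::weak_ptr<Safe> probe;
    {
        auto a = std::make_shared<Safe>();
        auto b = std::make_shared<Safe>();
        a->other = b;
        b->back = a;                // weak back edge breaks the cycle
        probe = a;
    }
    return probe.use_count();       // 0: both objects were destroyed
}
```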
  9. I've worked on a few card games in the past and had to address a similar problem. We ultimately ended up with a solution based on a conditional probability model. We gave each card a weight and used the aforementioned weighted random selection to find a card in the pool. The weight was based on rarity or other traits, and represented how much "opportunity" a card had relative to other cards to be selected for the deck. Once a card was found, we used a percentage chance to determine if the card should actually be added to the deck, and repeated this process until the deck was full. This approach also allowed us to limit the number of occurrences of any given card in the deck by temporarily setting its weight to 0 when its limit was reached, a very important feature for most deck building games.
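The process described above can be sketched roughly as follows. This is my own reconstruction, not the actual shipped code; the names (`CardPool`, `buildDeck`) and the specific weight/acceptance values are illustrative:

```cpp
#include <cassert>
#include <random>
#include <vector>

struct CardPool {
    std::vector<double> weight;   // relative "opportunity" per card
    std::vector<double> accept;   // chance a picked card is actually kept
    std::vector<int>    limit;    // max copies of each card per deck
};

std::vector<int> buildDeck(CardPool pool, int deckSize, unsigned seed) {
    std::mt19937 rng(seed);
    std::uniform_real_distribution<double> unit(0.0, 1.0);
    std::vector<int> copies(pool.weight.size(), 0);
    std::vector<int> deck;
    while (static_cast<int>(deck.size()) < deckSize) {
        // Weighted random selection over the current weights.
        std::discrete_distribution<int> pick(pool.weight.begin(),
                                             pool.weight.end());
        int card = pick(rng);
        // Conditional acceptance: a percentage chance the card is kept.
        if (unit(rng) < pool.accept[card]) {
            deck.push_back(card);
            // Enforce the per-deck limit by zeroing the card's weight.
            if (++copies[card] >= pool.limit[card])
                pool.weight[card] = 0.0;
        }
    }
    return deck;
}
```

Note the caller is responsible for ensuring the pool can actually fill the deck (enough total copies with nonzero weight), otherwise the loop would exhaust the pool.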
  10. @Oberon_Command You're getting caught up on a specific detail of my example code that's really beside the point I was trying to make. Appropriate scoping and visibility of code and data is important, but it's an unrelated issue. I'm warning against baking so much behavior into components that they become difficult to re-use.

    This is from my own experience working on codebases with large ECS frameworks -- there's nothing more frustrating than having to add a flag to a large component that mutates parts of its behavior, because someone thought no one would ever use it for any reason or purpose beyond what they originally envisioned. Or having to choose between writing a one-off throwaway component with duplicate functionality, versus trying to refactor the needed functionality out of another component into a third, shared component. And on top of it all, dealing with the code coupling issues when it's time to write tests or split components apart into separate libraries. This is the real, in-the-trenches work you inherit when using an ECS. Ultimately it can't always be avoided, since behaviors change with requirements, but the better the choices you make up front, the fewer headaches you'll have later.

    But to be perfectly honest, I have my own radical views on what an ECS abstraction would look like (who doesn't, right?) that don't really align with any of the articles or discussions I've seen, so perhaps that taints my opinions on the subject...
  11. The ECS pattern enables you to express the behavior of entities as a combination of modular components. It doesn't mean the business logic driving this expression is actually contained within the components. If anything, such an approach immediately breaks the modularity and encapsulation you're trying to achieve and can quickly lead to unmaintainable spaghetti code.

    Take your AI component and render component as an example. There is now a specific behavioral and component dependency built into the AI component. It's impossible for higher-level code to use the AI component without inheriting this internal behavioral interaction with the render component. The fact that internal ECS behavior can't be decoupled from external game behavior should raise a red flag.

    Let's also consider unit testing for a moment. How would one unit test the AI component independently of the render component? The fact that it's impossible should immediately raise another red flag.

    There's also the issue of the code itself being coupled at the build level. Your AI component code must now be linked with the render component code. It doesn't matter if the final entity actually has both components. You've created a permanent symbol dependency that must always be resolved either at build time or load time (i.e. by the dynamic linker). It might not be a big deal now, but good luck trying to split your components into modular, reusable, domain-specific type libraries.

    Data duplication was a bad example on my part. This is more appropriate to the discussion:

        entity.component<Position>().set(10, 10);
        if (entity.has<Networking>())
            entity.component<Networking>().markDirty<Position>();

    Two completely unrelated components that share only a behavioral relationship (notify networking when the position changes).

    There's nothing in the code to suggest that any entity can be accessed from anywhere. It's precisely the opposite: the "move" method accepts only the bare minimum data it needs, and doesn't access any global state. If code shouldn't have access to a particular piece of data (such as an entity ID), then hide it. Data hiding is a separate issue entirely that can be solved using other well-known methods. The fact that you can query any component type is neither here nor there. If code doesn't have an actual entity ID or other relevant data to work with (because you appropriately hid it), then the knowledge of those types does nothing for you.

    Having entities be identifiers versus objects has nothing to do with component accessibility or dependency specifications. It's also not relevant to the discussion. We could just as easily assume that the entity object in my examples was actually a thin handle/wrapper around a functional interface that stores components in "structures of arrays" (or any layout of your choosing). It doesn't change anything, as it's functionally equivalent.

    The only way to truly "hide" a component (or any code for that matter) is to make its symbols inaccessible. This is ultimately determined by how you encapsulate and layer your software into "black boxes", not by any particular implementation. Explicit declaration of dependencies is a nice touch that helps code be more self-documenting and enables certain optimizations in the ECS implementation, sure, but it's useless as a mechanism for controlling type access. I can include the appropriate header file and update the specification at will to accommodate my new component dependency, and nothing can stop me. Trying to arbitrarily limit or control type access from code is pointless. All you can do is control data, but that's entirely sufficient for all intents and purposes. These mechanisms already exist, and you use them all the time.

    Code access and visibility are controlled by structure and layout: the use of public vs. private header files, visible vs. hidden symbols in shared libraries, etc. Splitting software components into black boxes that communicate through public interfaces is nothing new. However, that's beside the point. Why does it matter if arbitrary component types are exposed to arbitrary code? Going back to what I said previously, without real data it's a moot point.
  12. It's worth mentioning that ECS and other "ad-hoc relational" designs and patterns (thanks for the term @Hodgman!) are implementation details that shouldn't be exposed to the user (doing so is what's known as a "leaky abstraction"). User code doesn't (or shouldn't) care if entities are collections of components, God objects, or something in-between. The purpose of a proper abstraction is to hide such irrelevant details. Unfortunately, allowing direct access to components breaks that abstraction, couples implementation details together, and makes it more difficult to refactor or extend functionality.

    Perhaps today setting the position of an entity is universal and straightforward:

        entity.component<Position>().set(10, 10);

    But maybe tomorrow you have more complex entities with interdependent data:

        entity.component<Position>().set(10, 10);
        entity.component<Model>().setPosition(10, 10);

    Or maybe you want to send events when the position changes, notify other parts of the engine, emit log messages, set dirty flags for networking, wrap the change in an event and apply it next frame, etc. You certainly don't want to expose those details at every call site where higher-level code interacts with your entity model. At the same time you also want to avoid internal coupling:

        void Position::set(int x, int y) {
            x_ = x;
            y_ = y;

            // Oh great, Position component now has a permanent code dependency on Model
            // even though this condition is only sometimes true. Model component probably
            // also needs a similar, defensive call to us... yay :|
            if (has<Model>())
                component<Model>().set(x, y);

            // Ugh, it also needs to know about event handling??
            eventEmitter().send(MoveEvent(x, y));
        }

    Lastly, let's not forget unit tests. It turns out that TDD is a powerful catalyst for building good abstractions, because user code is the primary focus and implementation secondary. And since test code is highly contrived and designed to exercise specific functionality and use cases, you don't typically get a lot of reuse, so a poor abstraction will immediately and noticeably impede development by practically doubling your efforts and forcing you to maintain multiple, parallel code paths. The impedance alone forces you to spend the time designing good abstractions, just to make your code more maintainable and improve your overall quality of life.

    Now imagine something like this:

        void move(int id, int x, int y) {
            auto entity = getEntityById(id);
            if (!entity)
                throw std::runtime_error("bad entity id");

            if (entity->has<Position>())
                entity->component<Position>().set(x, y);

            // For those more complex entities
            if (entity->has<Model>())
                entity->component<Model>().set(x, y);

            // Send an event
            entity->eventEmitter().send(MoveEvent(x, y));
        }

    Position and Model components no longer depend on each other. User code doesn't depend on components. The only point of coupling is the "move" method, which is devoid of any implementation-specific details. Perfect. @Juliean mentioned focusing on the functional over the declarative, and I think that's a good analogy for how one should interact with entities.

    And just for fun, let's write the unit test mock method:

        int testX = 0;
        int testY = 0;

        void move(int /*id*/, int x, int y) {
            std::cout << "Moving test entity from " << testX << ", " << testY
                      << " to " << x << ", " << y << std::endl;
            testX = x;
            testY = y;
        }

    Which not only supports the minimal data and functionality required for the test (tracking and logging position changes), but doesn't depend on the ECS framework at all. That's great, since this specific test doesn't need it anyway. Yet from the perspective of user code, it all looks the same:

        move(id, 10, 10);
        move(id, 10, 20);
        move(id, 20, 30);

    Of course these are all contrived examples, but the idea is the same. Don't get too focused on all the minor details of ECS, since in the grand scheme of things it's just another software black box. Get it working, make it easy to use, and circle back if/when there's a quantifiable problem.
  13. Multithreaded Gameplay Logic

    Seconded. Most games I've worked on typically update game logic systems between 10-20Hz on a single thread without any impact to the gameplay experience. There simply isn't enough going on every frame for gameplay to consume that much CPU, especially considering that humans themselves are limited in how fast they can respond anyway. As long as the key player "interactive" systems are real-time responsive (rendering, sound, input, etc.), the game as a whole will feel responsive. Also consider that in any multiplayer game, network latency is going to mask a lot of it. Certain gameplay-relevant systems that are more computationally expensive (AI, pathfinding, physics) can be multithreaded independently as previously mentioned, but again those are very specific, well-defined problem domains.
  14. This is as close as I was able to get: I'm not sure if you're able to get rid of Type in the base class, since you need access to that type in the specialization to determine if T is a derived class. However, someone else with better template-foo can certainly prove me wrong.
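Without the original code I can only guess at the shape of the problem, but here's one way to select a specialization based on whether T derives from a known base, using std::is_base_of as a defaulted non-type template parameter. All the names here (Base, Handler, kind) are hypothetical:

```cpp
#include <cassert>
#include <type_traits>

struct Base {};
struct Derived : Base {};
struct Unrelated {};

// The second parameter defaults to the derived-ness test, so callers
// only ever name Handler<T>; the compiler routes to the right version.
template <typename T, bool = std::is_base_of<Base, T>::value>
struct Handler {
    static int kind() { return 0; }   // generic case
};

template <typename T>
struct Handler<T, true> {
    static int kind() { return 1; }   // selected only for Base-derived types
};
```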
  15. It's entirely acceptable (and virtually necessary) to maintain multiple models and representations of your game world for different gameplay or engine components that need to see and interact with the world in different ways. For instance, a list of tiles works well for field placement (stone, grass, etc.) and general pathfinding, while continuous 2D coordinates work best for collision, rendering, physics, etc. As long as they're kept in sync, or the data you need can be easily derived, everything is happy. Instead of trying to bend your logic to work around the limitations of any particular view, you simply have multiple views that you keep in sync. This moves the burden from your game code to your "sync" code, which is another problem that has to be solved, but typically this code can be written in an abstract, non-game-specific manner that makes it highly reusable.

    Take a graphics library as an example. It doesn't know anything about your "game"; it only sees the world as a hierarchical scene comprised of nodes, meshes, lights, etc. It's the job of your game engine to build this scene based on the specifics of your game and keep it updated as the game world changes (update transforms, visibility, etc.). This is some extra work, however the process of converting "game objects" to "graphics objects" can usually be abstracted and generalized to the point where you can reuse it for any number of games, truly limiting the amount of rework you have to do per game to actual, game-specific logic.
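A minimal sketch of the tile-vs-continuous example above. The names and the one-unit tile size are assumptions for illustration; the point is that the conversion lives in one "sync" function, so neither view needs to know about the other:

```cpp
#include <cassert>

struct TileView  { int tx = 0, ty = 0; };        // discrete grid view
struct WorldView { float x = 0.0f, y = 0.0f; };  // continuous 2D view

const float kTileSize = 1.0f;   // assumed world units per tile

// Derive the grid view from the continuous one. Note: integer
// truncation is toward zero, which is fine for non-negative
// coordinates; negative worlds would need std::floor.
TileView tileFromWorld(const WorldView& w) {
    return { static_cast<int>(w.x / kTileSize),
             static_cast<int>(w.y / kTileSize) };
}
```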