All8Up

Moderator

  1. This does indeed get quite complicated at that point, as it is fairly indirection heavy. First off, I would seriously suggest starting with a very simple array-of-arrays implementation; it may be good enough for your purposes. But if that is not the goal, it goes along the following lines:

     EntityHandle (uint32_t index) -> EntityData (uint64_t mask) -> ComponentContainer(s) -> ComponentData

     What that is trying to describe is that the entity handle is actually a fixed index into an array of EntityData structures, and also the index into each of the ComponentContainer arrays, which in turn index into the actual ComponentData. So, effectively, there is a bit of double indirection going on: first from the handle to an index. The handle cannot change after creation, so it is mapped into an array which over time ends up with some holes. The holes are part of a free list and as such eventually get reused. Anyway, the API for this ends up as follows:

```cpp
enum class EntityHandle : uint64_t { eInvalid = uint64_t(-1) };

EntityHandle Alloc();
void Free(EntityHandle);
uint32_t Get(EntityHandle);
void Set(EntityHandle, uint32_t);

// NOTE: For my purposes I assert if any of this is used incorrectly.
// It is internal and means there is a bug in the ECS itself if it fails.
```

     That's basically the entity handle API, minus some utilities for the deferred support. (NOTE: The API is internal and not exposed to the user.) A handle is a unique uint64_t which allows you to store a uint32_t value. Using that value, you can now index into the remaining arrays, which are all packed such that no holes exist. I.e., assume I free an entity: its EntityData is no longer needed, so the last entry in the EntityData array is copied into its place and the index stored in the EntityHandle array is updated to the new position. Components are updated in much the same manner, but with one additional indirection for sparse components.
Things start getting more complicated at the component array level because those can be sparse, and not all handles indirect to a valid component. But generally speaking it is just a variation of the EntityHandle style solution, where you indirect into the actual data array yet again. There are a couple of tricky bits involved, especially when removing components, but overall it's not too bad once you get your head around the packed and free-list styles of arrays involved. Hope this is useful.
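The handle-to-packed-index scheme described above (free-listed slot array, swap-with-last on removal) can be sketched roughly like this. All names here are mine for illustration, not the actual implementation:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

struct SlotMap {
    static constexpr uint32_t kInvalid = UINT32_MAX;
    std::vector<uint32_t> slots;      // handle -> packed index (or next free slot)
    std::vector<uint32_t> backRefs;   // packed index -> handle (for swap fixup)
    std::vector<uint64_t> data;       // packed payload, no holes

    uint32_t freeHead = kInvalid;

    uint32_t alloc(uint64_t value) {
        uint32_t handle;
        if (freeHead != kInvalid) {                // reuse a hole in the slot array
            handle = freeHead;
            freeHead = slots[handle];
            slots[handle] = uint32_t(data.size());
        } else {
            handle = uint32_t(slots.size());
            slots.push_back(uint32_t(data.size()));
        }
        data.push_back(value);
        backRefs.push_back(handle);
        return handle;
    }

    void free(uint32_t handle) {
        uint32_t packed = slots[handle];
        // Copy the last packed element into the hole and fix up its slot.
        data[packed] = data.back();
        backRefs[packed] = backRefs.back();
        slots[backRefs[packed]] = packed;
        data.pop_back();
        backRefs.pop_back();
        // Push the freed slot onto the free list for later reuse.
        slots[handle] = freeHead;
        freeHead = handle;
    }

    uint64_t get(uint32_t handle) const { return data[slots[handle]]; }
};
```

The handle array accumulates holes (the free list) while the data array stays contiguous, so iteration over the packed data is always linear.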
  2. This is actually close to how I started my implementation, though the blocks of memory were per component type. Basically I had arrays for each component which were not sparse; this had the advantage that for common components, such as transform, things were nice and linear in memory during iteration. The downside is, as you mention, the rare components like cameras wasting a bunch of space and being scattered about in those huge arrays. When I moved to sparse arrays I left the option of fixed arrays for certain components, such as transform, that exist on basically everything. Overall, with high create/destroy rates, the fixed arrays were about 5% faster, so not an insignificant win. On the other hand, after implementing the lazy sorting of components (a VERY good reason to never have non-POD data in components), sparse arrays were pretty much equivalent once the ordering stabilized in contiguous memory space. Basically, if you are adding/removing components rapidly, the linear blocks are better; if you are not constantly fiddling with components, the sparse arrays have an initial inefficiency but quickly converge, so usually it only takes a few frames before they are equivalent again.

     The whole point, though, is that I highly suggest implementing the pattern correctly initially, no matter how slow, so you understand how things are supposed to work. Then understand what bits and pieces you really need before getting into the performance side. Maybe the simple starting implementation is good enough; for many games it probably will be, in fact. My implementation evolved for the purpose of experimenting with how best to multithread an entire game engine, so probably 50% of the code is complete overkill for most folks. But it sure is fun to use it and bring 16+ core ThreadRippers/Epycs or i9s/Xeons to their knees without wasting most of the CPU on overhead.
  3. Actually, for my purposes that turned out rather easy, but it is definitely a behind-the-scenes type of thing. First, everything in that API is actually deferred and does not run immediately. This is a painful thing to implement and still keep the nice API, but it is doable. Now that everything is deferred, and because my engine is fully threaded, I do a lot of lazy cleanup. So, with entities and components being added/removed in bulk, performing a little sort on the data and then updating the indexes is quite efficient. I'm not sure if there is a proper name for my method of insert/delete, but it is generally a couple of memmoves. I start at the highest thing to be inserted and binary search for the location it should occupy in the index. I move everything from that location to the end up in memory by the total number of items to be inserted, copy the new item in and pop it, then repeat until done, but only moving the portion of memory up to the point right before the next insertion. Eventually the gap you made fills in and you've only moved any given element once. Overall it is a reasonable approach. I think I have some ideas to make it work faster but haven't bothered, since in the ridiculous stress test I run, the insert/delete hasn't shown up in the top 100 hot spots yet. The test adds/removes on average five thousand components and three thousand entities every second, so it is being beaten on significantly.
  4. In the solution I use, the entity manager is just the API for working with the containers. Basically you have functions such as the following:

```cpp
enum class EntityHandle : uint64_t { eInvalid = 0 };

EntityHandle EntitySystem::CreateEntity();
bool EntitySystem::DestroyEntity(EntityHandle);
void* AddComponent(EntityHandle, ComponentBit);
bool RemoveComponent(EntityHandle, ComponentBit);
void* GetComponent(EntityHandle, ComponentBit);
EntityIndex* GetIndex(ComponentMask componentMask);
```

     So, creating an entity is:

```cpp
auto entity = ecs->CreateEntity();
TransformComponent* transform = ecs->AddComponent<TransformComponent>(entity, TransformComponent::kBit);
*transform = { Matrix33f::Identity(), Vector3f::Zero() };
```

     So, some explanations. Each component in my system has an ID to specify what it is; this is just a uint64_t which I happen to build from a CRC of the name. Each component also has a 'bit' which is computed when they are installed into the ECS itself. The first component is bit 1<<0, the second 1<<1, etc. I'm simplifying things a bit here to be clear, because all of this is done dynamically, but I won't go into the details as they depend on your needs. For instance, the ComponentBit and ComponentMask types in my system are currently uint64_t's because I have never needed more than 64 components, but I can switch them out to bitsets if needed later. Anyway, to the point: the entity index is an object which contains an array of entity IDs based on the given ComponentMask. So, let's say I've installed 5 components: Transform, Physics, Renderable, Audible and Spaceship. Assume that the bits match up in that order. I want the AudioSystem to iterate over only the entities which contain the Audible component:

```cpp
auto index = ecs->GetIndex(ComponentMask{AudibleComponent::kBit});
```

     So, the AudioSystem now has an index, and when you run the main loop for the ECS it can do the following:

```cpp
int32_t count;
const EntityHandle* entities;
index->GetEntities(&count, &entities);
for (int32_t i = 0; i < count; ++i) {
    // Do whatever.
}
```

     Now, assume I have an Audio3DSystem; it wants an index which consists of Transform and Audible:

```cpp
auto index = ecs->GetIndex(ComponentMask{TransformComponent::kBit, AudibleComponent::kBit});
```

     It uses the same code as above and gets the handles to the entities which are represented. But, as mentioned in my initial reply, sometimes I want doppler effects, so I directly query if the entity has a physics component. Hence, having the entity know which components it has is a huge optimization. A naive implementation of this is probably good enough for most uses and you won't have to go much further. But, to give you an idea of how far you can push things (and maintain the simple API), with a number of optimizations behind the scenes, my ECS stress test runs just over 5 million entities doing a simple wander at ~200Hz on a 16 core ThreadRipper. Getting to that point is a major undertaking, but it is very doable.
  5. The second option is the closest to correct, but by tying components to specific systems you are breaking some of the intended utility of an ECS. Of course this may fit your intended usage patterns, but it doesn't work for some of the uses I've put my system to. Take the physics component as an example: I often want to access the physics component from many different systems. Some concrete examples: the audio system uses the velocity for doppler effects, AI systems use velocity to calculate intercept vectors, and rendering can use velocity for trails, or particle emitters attached to the entity can use it to initialize new particles. As to the idea of entities knowing about their components: from a singular point of view, they don't generally need it, but when you are doing things like querying the world for a list of entities around a point, filtering by what components exist on each entity is a good thing, and then computing something based on the components of the entities found is generally the goal of such a query. So, overall, you are losing a lot by not having a generalized iteration system for your entities.

     The way I approached this is by having a core 'system' (otherwise known as the EntityManager) which owns all of the component containers. Any system added afterwards can then request an 'index' from the core system. I don't believe I have a single system in use that has an iterator over a single component; I'm almost always interested in Transform and at least one other component. I.e., rendering generally needs the transform and renderable components, audio needs the transform and audible components, etc. At its most simplistic, managing the indexes is actually fairly easy. An index is simply a vector of entity IDs. Every time you add/remove a component on an entity, you look through all the indexes which exist and check if the index is watching for that component. Removing components is trivial: if the index contains the given entity and a component that index is watching is removed, just remove the entity from the index. Adding is a little more involved, in that only if the component being added is the last component needed for the index to be interested in the entity do you add the entity to the index. Implemented in the simple way, this yields a usable ECS without the limitations your two suggestions would introduce. I would highly suggest doing this in a correct manner the first time at least, and then simplify later. The abilities you are losing by not going with a full implementation could be critical to your ability to actually use such a system.
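The index maintenance rule above (add only when the last watched component arrives, remove when any watched component leaves) is small enough to sketch. Names are illustrative, not from the actual engine:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

using EntityId = uint32_t;
using ComponentMask = uint64_t;

struct EntityIndex {
    ComponentMask mask;              // components this index watches
    std::vector<EntityId> entities;  // entities currently matching the mask

    // Called whenever an entity's component mask changes.
    void onMaskChanged(EntityId e, ComponentMask newMask) {
        bool matches = (newMask & mask) == mask;
        auto it = std::find(entities.begin(), entities.end(), e);
        bool present = (it != entities.end());
        if (matches && !present)
            entities.push_back(e);   // the last required component just arrived
        else if (!matches && present)
            entities.erase(it);      // a watched component was removed
    }
};
```

On each add/remove the manager would call `onMaskChanged` on every registered index; systems then iterate `entities` directly.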
  6. From the way the embedded code is shown, I would say the first thing to do is reverse the logic. What I mean is that what you posted suggests the code is along the lines of "I'm trying to interact with you" and "I need to figure out what you are and what I can do to you". Conceptually, of course, that is the most sensible solution; unfortunately, in practice it is also the solution which fails most often. Rather than this, I've always used subject-oriented programming for interactions. This reverses the logic such that, rather than the 'doer' figuring things out, the subject of the interaction performs the work. Basically this means that the input system would simply set a flag on the input component saying "this entity wishes to interact". The interaction system can then pre-cull for subject entities which are in range, which ones can be used from the doer's position, etc. Finally, you end up with a door that runs the rule checks: "You are in range, you have my key, it is Monday afternoon, you are wearing a purple shirt... OK, guess I will open now."

     As to how you implement these things in the ECS itself, I suspect everyone does it differently. Personally I use a handle-based approach where the interaction component simply contains a handle which usually refers to a script that checks the interaction rules. The script gets the doer and subject IDs so it can check states and make the decision. Then, likely in another script, an action is performed. Overall this probably sounds backwards, but it is a well-proven solution for removing massive 'if/else' checks. It is also great for expansion packs and such, since all the new logic is contained in the subject and you don't have to patch the doer code to understand the new interaction. Just as an example, The Sims used (still uses, I assume) this subject-oriented approach, which allows DLC to be dropped in and mostly just work. Hope this makes sense.
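The reversed logic above can be shown in a toy form: the doer only raises a flag, and each subject owns its own rule check, so new content adds subjects without touching doer code. Everything here is hypothetical scaffolding:

```cpp
#include <cassert>
#include <functional>
#include <vector>

struct Doer {
    bool wantsToInteract = false;
    bool hasKey = false;
};

struct Subject {
    // The subject decides whether the interaction is allowed.
    std::function<bool(const Doer&)> canInteract;
    bool open = false;
};

void interactionSystem(Doer& doer, std::vector<Subject>& subjects) {
    if (!doer.wantsToInteract) return;
    for (auto& s : subjects)
        if (s.canInteract(doer))
            s.open = true;          // the subject accepted the interaction
    doer.wantsToInteract = false;   // flag consumed this update
}
```

A real version would use script handles and pre-culling by range, as described; the key point is only the direction of the decision.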
  7. This debate will pretty much never die. Even with the current environment of high-speed internet and low pings, there is still enough of a difference that folks can argue around it forever. Personally, I see it as a matter of degree anymore. "Technically" UDP is the better technology for games and it avoids a number of the downsides of TCP, but the differences at the end of the day are fairly small at this point and getting smaller. The primary issue folks bring up most often is that, because TCP is lossless, it will sometimes block IO when a packet is dropped, which can increase the effective latency considerably. Such issues can cause hiccups in the gameplay and, if they are bad enough, make a game unplayable. UDP can avoid these problems because most implementations send a steady stream of packets with updated information, and missing one packet doesn't hold up later packets which contain more recent data, so the missed packet can safely be ignored.

     There are a number of reasons this is becoming less of an issue, at least for games. The primary reason, though it is by no means gone, is that the internet in general has changed quite a bit over time. Packet loss is actually not particularly common anymore, and it is actually less likely with TCP due to optimizations in routers, which try their best to prevent dropped packets on TCP specifically. Historically I used to measure between 1 and 5% packet loss from most ISPs depending on the time of day; anymore it is at worst around 2% unless they are having actual issues. Also, many routers have a reliability system where, rather than waiting on the client or server to resend a packet, the router which dropped the packet will automatically re-request it from the closest link. Often such corrections are sub-millisecond and barely noticeable. Finally, even if all the routers are ancient and there is a high loss rate, the normal pings between client and server are considerably lower, such that the corrections happen in 10-20 ms versus the 100+ ms we used to deal with. Now, as Kylotan says, super fast games can always benefit from going UDP. I might argue just how much, but that is still a fairly reasonable point. But, for the reasons I point out above (and lots of other reasons), it is becoming less of an issue and may not matter for most games moving forward.
  8. Looking at the code in that presentation, it seems that minSeparation is a measurement of how close two entities will be at their closest point. Using this value, it then says that if the separation is more than some value (2*radius in the code), don't bother building an avoidance force, since the entities never come close enough to need adjustment. That would be my guess from briefly looking at the code. Hope this helps.
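Under constant-velocity assumptions, that closest-point distance has a closed form: minimize |dp + dv*t| over t >= 0. A sketch of what the slide's minSeparation appears to compute (my own code, not the presentation's):

```cpp
#include <cassert>
#include <cmath>

struct Vec2 { float x, y; };

// Distance between two entities at their time of closest approach,
// assuming both keep their current velocities.
float minSeparation(Vec2 pA, Vec2 vA, Vec2 pB, Vec2 vB) {
    Vec2 dp{pB.x - pA.x, pB.y - pA.y};   // relative position
    Vec2 dv{vB.x - vA.x, vB.y - vA.y};   // relative velocity
    float dv2 = dv.x * dv.x + dv.y * dv.y;
    // Time of closest approach; clamp to "now" if they are separating.
    float t = dv2 > 0.0f ? -(dp.x * dv.x + dp.y * dv.y) / dv2 : 0.0f;
    if (t < 0.0f) t = 0.0f;
    float cx = dp.x + dv.x * t;
    float cy = dp.y + dv.y * t;
    return std::sqrt(cx * cx + cy * cy);
}
```

The avoidance check is then simply `minSeparation(...) > 2 * radius ? skip : build force`.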
  9. There are two usual suspects for the differences you are noticing. The negation usually comes from setting up the variables involved. A very common item is that the equations will show, for example, "target - source", while some parts of the surrounding code will want the opposite, "source - target"; the programmer just uses the most common case and corrects the given equations by dropping or adding the negation. This is not usually a bug, it is just refactoring the equation to fit the surrounding available data. The second problem, and why the math does not always match up with the proper equations, is that the equations return an instantaneous force which needs to be applied. When you start factoring in other things, such as a given body's mass so movement doesn't look mechanical, or standard delta-time application of forces rather than instantaneous application, you are effectively breaking the equations. It is actually quite difficult to factor things like mass all the way back into the initial equations of motion; as such, many implementations tweak the post-motion equation to simply 'look good' rather than be correct. Unfortunately these tweaks are usually use-case specific, black-magic adjustments that are not 'correct' in math terms but get the job done for the game in question. Additionally, such hackery works great with some of the equations, or is not actually needed in others, but variations of 'arrive', such as intercept, are extremely sensitive to such things. Tweaking the 'arrive' behavior is problematic since it almost always ends up underestimating the amount of deceleration required to stop at the target; the math gets pretty hairy to integrate this correctly, so lots of folks just hack around until it is close enough.
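On the deceleration point: from the kinematic identity v^2 = 2*a*d, the fastest speed from which you can still stop within distance d at deceleration a is sqrt(2*a*d). One hedged way to make 'arrive' respect that, rather than the usual linear slow-down ramp:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Desired speed for an 'arrive' behavior that never exceeds the speed
// from which maxDecel can still bring the agent to rest at the target.
float arriveSpeed(float distance, float maxSpeed, float maxDecel) {
    float stoppable = std::sqrt(2.0f * maxDecel * distance);
    return std::min(maxSpeed, stoppable);
}
```

This is a sketch of the physics, not the canonical Reynolds formulation; mass and per-frame force clamping still need to be folded in on top.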
  10. Quote: "I can think of one reason, which the OP doesn't cite: the service locator pattern is just a 'better' way to have singletons. Being able to switch out objects from the service locator at runtime is nifty, but it still has the problem that globals and singletons have, where it hides dependencies within the methods that use it rather than making them explicit. The service locator still looks like global state to client code, and most implementations of it still only allow one of each type of object to be located from it; both of these are problems. I see it as a crutch for dealing with globals-heavy legacy code; in new code, I'd call it an anti-pattern."

      Well... I don't promote the idea of using the locator to hide such details; I only mention it as something you "could" do, not should. And I agree that such things should be avoided as a better practice. I tend to get into that a bit more when I mention minimizing the referential chains; those are just evil. I'm unsure of the OP's desires, though, so "when you have a hammer, everything looks like a nail" kind of applies.
  11. Ah, you are breaking SRP at multiple levels then. But even without that, the suggested referential chain is another case for writing a helper function rather than typing it over and over. So, you could rewrite that chain as:

```cpp
BindPointSampler<PipelinePS>* GetBindPointSampler(engine, slot) {
    return ... chain of references ...;
}
```

      This goes back to the cohesion problem. By writing the chain of dereferences you are exposing the ownership into your code. If you encapsulate the chain of dereferences as above, then if and when you decide to change the referential chain, you don't have to go hunt down every use case; you just change this helper function. Additionally, you can write a couple of variations such that if you already have the state manager, it will automatically start at that point rather than going all the way up to the engine. Obviously, though, you should be keeping your reference chains as short as possible. Any time I see a reference chain longer than two hops, I tend to think there is something wrong. GetSwapChain(engine) is probably a bad chain; GetSwapChain(window) or GetSwapChain(renderer) are both shorter and imply more context around what you are doing, and as such expose less of a cohesion problem. Or, even better, hide the swap chain completely and only have GetCurrentImageView(renderer). Utilizing helpers in this way means you don't expose the ownership outwards from the things that actually need to know such things. When done well, your high-level code is extremely simple, linear, and independent of how you might want to restructure things later.
  12. Actually, the question here is: why are you not using the service locator you mention in the OP? I.e., you should rewrite that function as:

```cpp
CreateWhiteTexture(ServiceLocator*, GUID, string);
```

      There are quite a few benefits to this approach. Primary among them is that you can change everything about how to create a texture, update this function to match, and not have to touch any of the uses of the function afterwards. This is the pattern I tend to use all the way from the main function on up. If you centralize everything around a combined Factory/Locator abstraction, there is a little bit of extra setup boilerplate 'within' wrapper functions such as CreateWhiteTexture, but very little when using the function. I.e., in my codebase it would be something like the following:

```cpp
TextureBase* CreateWhiteTexture(ServiceLocator* locator, GUID guid, string name) {
    auto driver = locator->Get<RenderDriver>();
    auto resMan = locator->Get<ResourceManager>();
    ... make a texture etc ...
}
```

      That's not *too* much boilerplate to deal with once in a while when writing these helpers, is it? I admit, I tend to take this to a bit of an extreme level given the choice, simply because I've been bitten by singletons so many times over the years that I never want to see such things in the future if I can help it. Going back to hodg's reply, I don't even believe in singletons for executable resources for the most part. For instance, the command line passed into main: my "Application::Main" never sees that; it can get the 'environment' object from the locator passed to it after the per-platform main/WinMain/DllMain/whatever has set up the process. It's a bit extreme, but it can be really useful, for instance, if you have ever worked on an MMO and needed to start up 100 headless clients to stress test servers. Starting 100 processes is typically bad, but if you have abstracted away the entire environment from main on up, you can just call "Application::Main" 100 times, even passing in custom command lines, different cin/cout abstractions (i.e. feed to logging instead of the console), etc.
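A minimal type-keyed locator in the spirit of the `locator->Get<RenderDriver>()` calls above might look like this. Purely illustrative; a real one would also handle lifetime, threading, and missing-service policy:

```cpp
#include <cassert>
#include <typeindex>
#include <typeinfo>
#include <unordered_map>

class ServiceLocator {
    std::unordered_map<std::type_index, void*> services;

public:
    // Register a service instance under its static type.
    template <typename T>
    void Install(T* service) {
        services[std::type_index(typeid(T))] = service;
    }

    // Look up a service by type; nullptr if it was never installed.
    template <typename T>
    T* Get() const {
        auto it = services.find(std::type_index(typeid(T)));
        return it == services.end() ? nullptr : static_cast<T*>(it->second);
    }
};
```

Because every helper takes the locator explicitly, swapping the environment (real console vs. logging, 100 headless clients, etc.) is just installing different services before calling Application::Main.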
  13. City building game inner workings
      There are quite a few methods to approach this, but the most simplistic one is usually based on the idea of breaking everything in the world into actions which produce state changes. Banished is probably the most simplistic of these games until you get into the market and home management portions, so I'll use that as an example. If you ignore everything except one building, things become simple, though you have to think of the solution in a kind of odd manner. Let's take the fishery as an example. The fishery is placed near water and contains one action from then on: "create a fish". Since in Banished you assign workers to buildings (well, you set the number and the game selects the specific entities), the first step is pretty easy: on every game update the workers check their assigned building for an action if they don't have one already. Since the fishery always contains "create a fish" as an action, the worker will always have one copy of that action in their queue. We'll ignore the other items such as get warm, go eat, sleep, etc., but they are also in the queue, and that's where the priority portion comes in later. Every action has a required state, its 'does' code, and a final state change to make on completion. The required state for "create a fish" is 'in the building', its 'does' code is simply to wait 'x' seconds, and the resulting state change is '+1 fish to inventory'. So, the entity tries to run the 'does' code but finds that the requirement 'in the building' is not satisfied. The entity looks in a table of actions for one which has a result of 'in a building'; it should find a 'path to building' action with the desired result. So, the entity code moves "create a fish" to be a child of a new copy of "path to building" and starts running that action.

      "Path to building" has a required state, "has path to building"; it searches the actions, finds "create path", pushes "path to building" to be a child of a new copy of that, and starts running it. "Create path" has no requirements and results in either success, which sets the path on the entity workspace, or failure, which takes all its children with it, potentially telling the user or the building about the failure. Assuming there is a path, though, it assigns it to the entity workspace as the state-change portion of the action and then replaces itself in the queue with its child, "path to building", which can now start running; as such, the entity starts following the path. Each update the "in building" check is re-evaluated, and when it changes, "path to building" is popped off and replaced with "create a fish", which now starts running. "Create a fish" simply puts the entity to sleep for 'x' seconds and, when complete, performs the "+1 fish" state change. Bingo: a chain of events broken into fairly simple concepts which can be explained to the entity with a fairly trivial algorithm.

      That's the very basics of how you can solve the "do things" portion of the problem. It is actually fairly simple, though some of the code is not trivial. Where things start getting more complicated, and as Orymus3 said, making it work throughout the game's evolution, is when you start adding multiple competing needs to the entities. In Banished again, each entity has competing needs such as: stock my house's inventory, eat, sleep, procreate, get warm, get healthy, etc. At this point you start assigning priorities to the actions in each entity, and as such your balancing nightmare commences. You need to start considering things like action velocity (i.e. I've started this action, so it should get a boost to priority for a while before other things interrupt it), absolute interrupts such as "hey, that's a fricken tornado, I should run away", etc.
      In more complicated games (Kingdoms and Castles I'm not sure of), you have multi-step production chains and things get *really* damned complicated. For instance, in an older game whose name I can't remember, you had fields of wheat which you harvested; the wheat was then used in a mill to produce flour, which was then taken to a bakery, which, when combined with entities and other ingredients, could produce different types of food. That game left a lot of the priority setting to the player, and it was really difficult to get things working smoothly because the AI didn't consider time spent pathing, waiting for the mill to be free, etc. very well when deciding which actions were most efficient. Lately I believe a Monte Carlo Tree Search could be modified to handle such things, but that's getting pretty advanced, and you should probably stick to the Banished-like simple model until you get a handle on some of the trickier bits. Hope I didn't trivialize this too much to make sense. If I did, though, and you need further expansion on bits and pieces, let me know.
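The backward chaining in the fishery walkthrough ("requirement not met, so find an action whose result satisfies it and run that first") can be sketched in a few lines. This is a toy model with made-up names, not engine code:

```cpp
#include <cassert>
#include <string>
#include <vector>

struct Action {
    std::string requires_;   // state that must hold before running ('requires' is a C++20 keyword)
    std::string produces;    // state change made on completion
};

// Build the chain needed to run 'goal': while the current requirement is
// unmet, push the action that produces it. Run the result from back() to front().
std::vector<Action> plan(const Action& goal,
                         const std::vector<Action>& known,
                         const std::vector<std::string>& worldState) {
    std::vector<Action> chain{goal};
    std::string need = goal.requires_;
    while (!need.empty()) {
        bool satisfied = false;
        for (const auto& s : worldState) satisfied = satisfied || (s == need);
        if (satisfied) break;                 // requirement already holds
        const Action* next = nullptr;
        for (const auto& a : known)
            if (a.produces == need) { next = &a; break; }
        if (!next) break;                     // unsatisfiable: caller handles failure
        chain.push_back(*next);
        need = next->requires_;
    }
    return chain;
}
```

For the fishery, "create a fish" chains to "path to building" chains to "create path", exactly the sequence described above; priorities and interrupts then layer on top of this core.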
  14. COM Interface best practices
      There is actually a rule of thumb here which is fairly easy. If you intend to supply fallback functionality for when you cannot get, say, IDXGIAdapter4, then following point three is the best way to go. If, on the other hand, you absolutely require a certain level of the interface to exist, say you absolutely must have IDXGIAdapter3 or higher, then keep only IDXGIAdapter3 pointers. Basically, there is no reason to have multiple pointers or continually query things unless you can actually handle failures and have supplied alternative paths through the code for those cases. A case in point: I'm doing work with DX12, and as such there are guaranteed minimum versions of DXGI which must exist for DX12 devices to even be created. I have no fallbacks, since I can't get DX12 functional without the minimums, so any failure is a terminal error. On the other hand, I do query for 12.1 features in a couple of places just because they make things a bit easier, but I have fallbacks to 12.0 implementations; in those specific cases where it matters, yes, I query as needed. Hope this makes sense and is helpful.
  15. There is a reason that suspend and resume do not exist in POSIX and many other APIs: they are inherently unsafe. Because you don't know exactly where the thread will be during a suspend, there are a lot of different things which could go wrong. For instance, if you suspend while the thread holds a lock, this can cause a very difficult to find/understand deadlock. For this reason it was deemed best to avoid these calls and make programmers use explicit synchronization. In general, this is the best solution even if the APIs are available, because you know the exact state at the point of suspension; i.e. I suggest not supporting this even on Windows. See the caution here: https://msdn.microsoft.com/en-us/library/system.threading.thread.suspend(v=vs.110).aspx, and the fact that it is deprecated in .NET moving forward for these and other reasons.
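The explicit synchronization suggested above usually means cooperative suspension: the worker checks a flag at points where pausing is known to be safe, instead of being frozen at an arbitrary instruction. A minimal sketch with standard primitives:

```cpp
#include <atomic>
#include <cassert>
#include <chrono>
#include <condition_variable>
#include <mutex>
#include <thread>

class Pausable {
    std::mutex m;
    std::condition_variable cv;
    bool paused = false;

public:
    void pause() {
        std::lock_guard<std::mutex> lock(m);
        paused = true;
    }

    void resume() {
        {
            std::lock_guard<std::mutex> lock(m);
            paused = false;
        }
        cv.notify_all();
    }

    // Called by the worker at safe points; blocks while paused.
    void checkpoint() {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [&] { return !paused; });
    }
};
```

Because the worker chooses where `checkpoint()` is called, it can never be stopped while holding a lock, which is exactly the hazard that made Thread.Suspend deprecated.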