agleed

Members
  • Content count

    126
  • Joined

  • Last visited

Community Reputation

1013 Excellent

About agleed

  • Rank
    Member

Personal Information

  • Interests
    Programming
  1. I have only implemented two variants of these; each was initially a couple of hours of work, and fairly representative in performance terms of a fully featured implementation. I'd approach the problem by designing an agnostic API with a stub implementation, and then brute-force choose the best-performing one once I have a non-trivial project going. This only becomes impossible if the implementation has strong implications for how programmers must design the systems that use it, and I'm not sure what would even fit the "strong" qualification there. Even the variant where the components are stored with the systems (and the programmer who writes those systems has to put them there) can probably be circumvented with some macros or code introspection / code generation (hello clang libs, until C++ finally gets a proper metaprogramming system) that says "I need physics- and renderer-related components here" and then figures out where to put that data based on the dependency graph of all systems that need it. If you really want to, you can probably circumvent all of those problems with some kind of DSL. What am I missing?
  2. The projectileFX action internally calls a function in our graphical effects system. That function is hardcoded to take raw game-world positions; what the projectileFX action does is take the casting entity and the target, get their positions, and hand them over to that FX function. If we wanted to shoot the projectile not at the target but somewhere else, we'd add a different kind of action (or just an option to projectileFX) that doesn't take the target entity's position, but whatever else you need. I should add that all over the code, we have cases where 'variables' are really treated as 'this is either a variable or a function' (inspired by functional programming languages), so we can do things like changing the target position given to that FX function at runtime (for example, if we want a projectile to track a target instead of flying to the same place when the target moves), or changing the 'arrival time' variable which the spells use to know when to trigger damage. The lightning bolt thing wouldn't require a tree or anything like that. In our linear action list system, I would just give the thing n iterations of a [lightningFX, lightningSFX, damage] action triple, where n is the number of times you want it to jump. Every time lightningFX is used, it writes the location of the target to some internal variable of the spell that later executions of lightningFX reference. We have a bunch of tracking stuff like that in our spells, too. We have a spell that spawns rotating orbs around a caster, which shoot particles regularly. Since their positions always change and they are spawned at runtime, we just add those particle effect objects into a 'spawnedParticleEffects' array that every spell has, and every time that spell shoots a projectile, it picks a random entry out of that array as a source location.
  3. Not sure how much use this is if you want a hardcore 'component'-ized spell system, but in Idle Raiders (and its successor Second Run; you can look it up on Kongregate if you want to play it) we don't solve this generally. Instead there's a whole bunch of (re-usable) hard-coding, which is working fairly well. I think we have close to a hundred different spells in the actual game now (playable by people, with more added on a regular basis) and we haven't really encountered major issues. Our spells are lists of actions (basically, functions that get called one after the other, with delays between them). We have a fireball spell in our game, and it has the actions "projectileFX" (for the graphical effect of launching the projectile, which also computes how long the projectile will take), "projectileSound" (for the sound effect), followed by the action "fireDamage" (for dealing damage). When spells are constructed they are given generic options (string-value pairs), in this case things like the file name for the projectile, the damage modifier, the speed of the projectile, etc. We also have an "Ice Shard" spell that could just be implemented as the Fireball spell with different options (for various gameplay reasons, it's an entirely separate spell in our system, though). If we wanted to add an AOE component at the end of your spell, there are two ways we could do it. We could again hard-code it (maybe leave it out by disabling it with an option when the spell is constructed), and just add an "AOEFireDamage" action in addition to or instead of the single-target fireDamage action. The second way would be via our 'passive ability' system. All abilities can trigger other abilities using various gameplay rules. We could just create a generic "aoedamage" spell that is triggered by the fireball spell. As a practical example, there's an "Ignite" passive skill that triggers a (burning) damage-over-time effect on the target every time a Fireball crits.
Oh yeah, there's also an actual AOEDamage ability that warriors can use to get a chance to "cleave" their melee attacks, which works the same way. This works in a data-driven approach, too. All these 'actions' are created at runtime anyway (it's Javascript, so it's easier to do there, but in C++ they would just be function pointers that always have the same type, or if you want to get fancy, instances of classes derived from a SpellAction base class), so it's not a problem to cobble together an editor that assembles new spells from these basic actions. There's one major upgrade we could and would like to make to this system, which is to have skills be an action tree instead of an action list. With action trees (so a single action can branch out into multiple follow-up actions, or multiple branches can join back into one), it would be easier to have effects where you need to track multiple instances of something that was previously started by a different action. For example, a spell could spawn three different projectiles that travel around for a bit, which would mean the action tree branches out into three paths, and at the end there would be a connecting action node that waits for all three projectiles to arrive at their target before doing something. Or the spell launches a random number of projectiles, and each of them does something different (random?) when it reaches the target. That kind of thing would be a lot easier to handle in code if the "random things" that happen at the end could refer to a parent chain of action nodes, instead of having to find whatever they need within the linear list of actions that is there now. We haven't done that yet, mostly because we haven't encountered any serious use cases where we couldn't just (again) hard-code around the problem. It sounds dirty, but complicating the system needs to be a productivity win (less time spent creating the same things for the game), which we don't see at the moment.
  4. Is there previous work for something like that? Does it even make sense? I'm a complete beginner when it comes to network programming. From what I've read, it sounds like people mostly try out different networking implementations (regarding protocols used, prediction and interpolation approaches, etc.) by hand and end up using what feels best. I'm wondering whether there exist metrics that measure various aspects of an implementation, specifically suited or tailored to games. I figure they would be helpful for automated testing, and maybe speed up the development process when you're trying out a bunch of different approaches. And if there aren't, I also wonder whether it would be worth putting some work into coming up with useful metrics, or if the consensus is "nah, just try until you find something that works best; how networked gameplay feels has too many subjective/complex elements attached to be quantified by metrics" or something like that. edit: I know this is a very generalized question. What a metric would look like probably depends a lot on what kind of quantities you're looking at. Am I trying to synchronize player positions as well as I can across multiple players? Server-client or P2P? Etc. I'm basically having a hard time googling for this stuff and wonder if people more experienced in the field have come across useful material. Open to anything.
  5. Just to get on the same page... what are we talking about when we say explicit connections? I'm thinking about

struct ABCEnt {
    A *a; // points to an A component in the big linear array of A components
    B *b; // same
    C *c; // same
};

Of course, you can do that at runtime with something like

struct Entity {
    Component **components; // can be filled in a data-driven manner
    int numComponents;
};

Entity e;
A *a = GetComponent<A>(e); // linear search in components

But that kind of stuff, in my opinion, is the opposite of explicit on a code-implementation level. Are we talking about the same thing?
  6. Well, ECS is just an extended form of composition with data-oriented and data-driven design measures added in. These counter the typical problems you get from deep inheritance trees, so I think it's natural to assume that stance in the discussion, and those comparisons are entirely fair. Of course, ECS is just one possible solution to the problem, which you probably don't need if you don't have all of those particular problems, but everyone should already be aware of that and we're just discussing specifics of ECS. Regarding explicit vs implicit: the problem with explicit is that it's not data-driven at all. You can't treat different component combinations in a polymorphic manner when needed. If you have an entity 1 with components A, B, C and an entity 2 with components B, C, plus a system that operates on components B and C together, you can't have a system that iterates through all entities just like that, because they're different types. Instead you would have to manually handle all of those different combinations when updating a system: first loop through all ABC entities, then through all BC entities; if you later decide to add an ABCD entity, you have to introduce that as well, and if they interact, manage that somehow too. This is probably solvable with dynamic code generation and re-compilation while working in your editor or whatever, but even then you'd have to find a way to let the user define system logic and then have the code generation cobble those different entity types together... So I think explicit only really works if the entities in your game are not "different, but some subset of components is the same" like that. Or to put it differently: if you have no need for data-drivenness in your core entity types. I wouldn't go that route with a generalist framework. It probably works pretty well when you're coding your engine in parallel with every game and adapt it to that game's needs.

So overall, it's completely true that this is basically a static vs dynamic typing discussion, with all the same trade-offs and the same reasons why you're sometimes forced into dynamic typing in the end.
  7. I don't know how complex your game is, but my personal experience has been that any mobile device that doesn't run WebGL by now probably isn't going to be nearly fast enough to run a Javascript game of non-trivial runtime complexity. For desktop this is less true; we tested devices more than a decade old which have proper WebGL browser support (even with all extensions and whatnot) but which are just too slow for the CPU-side code. On practically all platforms, WebGL support is also just a software problem of users needing a browser that supports WebGL, not a hardware problem (since it's based on ES 2.0). So if you plan to roll out your HTML5 app as a packaged application, as opposed to an actual web page your players have to navigate to in the browser, you can just make sure to use a Javascript runtime that supports WebGL in the background. This is much easier than convincing your users to upgrade their browsers (or even worse, switch to a different one) for your game. Also, canvas performance CPU-wise is pretty bad. I have no idea why this is, but even on high-end desktop PCs we struggled hard to draw more than ~600 small images per frame at 60 FPS, even with everything packed into one or two atlases. This is one of the major reasons we decided to go WebGL-only for our next web game. Another thing to add: mobile & desktop browser WebGL support is at 92.6% globally as of February this year (probably 95%+ by now) according to http://webglstats.com/ ; even mobile devices alone have 90% coverage, but with the "roll out as a standalone application" approach you can probably safely put this in the 100% category. And last but not least, consider that supporting separate rendering paths is more complex than just going pure WebGL!
  8. How do you attempt to read the data? With image load/store you need manual synchronization (using glMemoryBarrier) to make sure the data is available in subsequent drawing operations or compute calls. For example, if you want to fetch from the texture in a shader directly after writing to it, you need a glMemoryBarrier(GL_TEXTURE_FETCH_BARRIER_BIT) between the dispatch call and the draw call that uses the texture. edit: Whoops, saw you already have a memory barrier in there. Still, you should make sure the correct flag is set. With GL_SHADER_IMAGE_ACCESS_BARRIER_BIT, correct synchronization is only guaranteed if you also read from the image in subsequent calls via image load/store (so no normal texture fetching, for example). yet another edit: Also, the glMemoryBarrier call is before the dispatch. If you access the texture afterwards, you still need another barrier after the dispatch call. The barrier before it only ensures synchronization with everything that happens before it.
  9. Binding different components together using an entity is almost the entire point of the system. It's about having different data and functionality, and bundling that together with minimal memory, runtime, and abstraction overhead (the opposite of which would be huge inheritance trees). In practical terms, it's because too much code will need it. Have an animation system and a combat system? Well, if my character is hit (which inevitably ends up modifying that combat component somehow, since it probably contains all the stats like HP, armor, damage, etc.), I'd like to use the animation system to set up a response for that particular character, which will in the end modify that particular animation component. Get the combat component and the animation component using the entity... ez pz. Stuff like that. If you're viewing this from a low-level perspective, it's easy to get the impression that you're mostly dealing with independent systems: a physics engine that takes care of all its own stuff, a renderer that doesn't care about what happens outside, why would an audio system ever require outside components? etc. But that's not what ECS is really for (big IMO). It's about the part of the code above all that, which uses these systems and bundles them together to make a game. edit: Remember that a big part of the complexity of ECS comes from requirements like being able to attach and detach components at runtime, being able to do data-driven stuff, etc. So in the end, users of your component system will end up with similarly complex abstraction layers which tie this stuff together at runtime.
  10. Our game (Idle Raiders and its upcoming successor) deals a metric ton with stats, and it works like this: we have "modifiers". They are triplets (attribute, operator, value): the attribute to be changed (e.g. damage), the operator (how the value is applied to the attribute, e.g. a simple ADD which does base_value + value, an ADDITIVE_MUL which does base_value += base_value * value, other stuff, etc.), and the value itself. The core modifier system does no 'runtime tracking' (recomputing every frame or something) of stats, because it's not necessary. Not really sure why you would need it... we only compute the values when they are changed by gameplay (e.g. player equips a new item, player applies a buff, a buff runs out, etc.) and keep them like that until they are changed again. We also don't use any dirty flags and just recompute an attribute immediately when a modifier is applied/unapplied, because even though we sometimes have dozens of entities fighting, modifier changes are so rare relative to the frame update frequency that changing the same modifier on the same entity twice in a frame almost never happens, and even then the computations involved are nearly trivial. The modifier system also doesn't actually store the 'active' modifiers in use; it only stores the combined values per operator type. So when you want to compute the final result of an attribute from all its applied modifiers, it looks up the combined values for the ADD, MUL, etc. operators and applies those to the base stat. Combine means: when you do some MULs like

base_value += base_value * value1 + base_value * value2 + ...

it's of course the same as doing (and likewise for all other operators)

base_value += base_value * (value1 + value2 + ...);

so you can take the (value1 + value2 + ...) and store it somewhere.
That combined value only changes when modifiers are applied/unapplied, and when the value of an attribute is computed, you only need that one step using the cached 'combined' value per operator type. The tracking of the actual modifier objects that need to be applied/unapplied is left to the gameplay systems that use them, because their lifetimes are usually handled in different ways. For example, the equipment system just stores all modifiers associated with an item in the item itself, and applies/unapplies them when you equip/unequip it. These modifiers are alive all game and applied/unapplied at the behest of the player's actions. The buff system, on the contrary, actually does checks once a second to see which modifiers now have to run out, etc., and actually discards some of them. We need ordering of those modifiers, so we use a layer system. Any layer can contain any number of modifiers with all their different operations. We apply the lowest layer first; from that it calculates essentially a new "base value" (a copy, of course; the original base value never changes) for the layer above. That layer then applies its modifiers, and so it moves up the ladder through all layers. For example, an entity starts with 100 HP and has an item that gives +20 HP and another that gives +50% HP; that results in 170 HP, since equipment modifiers are all in the same layer and operate on the base value. But when the player now triggers a temporary buff that increases the HP of an entity by 20%, that is in a layer above (since that's just how we want it), and the entity will have a total of 204 HP. And so on. The math operators described above are applied independently of each other within the same layer, so layer 0 MUL does not take the computation result of layer 0 ADD into account. Instead, layer 1 takes the final result of layer 0 into account, etc.
All gameplay systems target a pre-defined, hardcoded layer, since it's important in gameplay terms how the different sources of modifiers are supposed to "stack". There are many different ones... we have equipment like armor and weapons that changes stats, all characters can have skills that modify some stats, then we have temporary buffs from usable items that modify stats, permanent unlockables that modify stats, a basebuilding system that modifies stats, etc. As a final bonus step, before storing the computation result in the actual attribute variable (myEntity.damage, for example), you can put it through a custom curve if you so desire. And actually we don't use this layering system or custom curves yet, so that was a lie :D That's just how I would do it if I had the time to code it again. For now we basically make do with the same system but with only one layer where everything is thrown in, and we've been doing fine so far, except that our usable scrolls that increase raider damage by 10% for 30 seconds only do it based on the base value. But that's the only thing we currently don't like about it, and we had more important stuff to work on, so...
  11. Another thing I just noticed: this doesn't work like that if entityID is just a globally increasing number. Instead those entity IDs must be "per alive entity in the relevant component collection". I.e., if you add/remove a component to/from an entity, you have to change its ID, and keep reusing the IDs of dead entities. Shouldn't be a problem if you hand around your entity as an actual structure.
  12. "Nononono. Don't use global arrays, for anything, especially in an ECS."

Indeed, making this actually global is silly; I didn't mean global in the 'singleton' way. I'm currently working on a game where we made one or two systems 'global' (instead of local per game level) like that, and we deeply regret it.
  13. I had an idea without being able to implement it yet. I hope this isn't completely useless; it's possible that I have just overlooked something simple and this just doesn't work. Anyway... I think the following way of storing components is familiar to most people who have implemented an ECS system, so just a short recap: have global arrays of the same fixed size for each component type, where the entity ID is a literal index into the arrays. Creating a new entity anywhere just increases the array size for all component types by 1. If your game is entirely coded in C++ and you don't access or create component types from outside scripts, all necessary information is available statically and getting a component of any type is just a fixed array access, i.e.

auto position = g_components.positionArray[entityID];
auto mesh = g_components.meshArray[entityID];

This sounds pretty good the first time you see ECS implemented like this, because component access can't get much faster. Problems with this approach (and why people use others):

  • Entities that only have component 1, but not 2, 3, 4 and 5, still use memory for 2, 3, 4 and 5.
  • Systems that iterate through all components have to skip NULL/nullptr components that are unused.
  • Most engines do need some kind of data-driven way to define and use components. With the added complication of tracking component type IDs, this also means component access becomes at least two array accesses, something like auto position = g_components[componentID][entityID];

So people store components in an associative manner instead, which is theoretically much slower in terms of how you access these components. My proposal for a different solution: generate these global array structures for each used permutation of entities and their component types.
So there's not just a single global component collection but multiple, one per permutation of components used (not actual C++ syntax):

g_positionMeshComponents = {
    vector<vec3> positions;
    vector<mesh> meshes;
}

g_positionWeaponInventoryComponents = {
    vector<vec3> positions;
    vector<weapon> weapons;
    vector<inventory> inventories;
}

etc. So when you create an entity that has a position and mesh component (and nothing else), you know to access the data from that particular global variable. Of course, in practice these global collections wouldn't be stored in static variables. You can't create them statically, because the number of possible permutations is too large if you have any realistic number of component types, so you only keep around those permutations that are actually used. Instead, you could create a key from the component types an entity uses (for example, a bitmask where each component type is one bit, or a hash with a low chance of collision...) and then access components like

auto componentCollection = g_components[key];
auto position = componentCollection[typeID][entityID];

or something like that. The key only needs to be updated when you add or remove components to/from an entity. That operation is exceedingly rare even in complex games, at least in my experience. As such, you can either store a pointer to the relevant component container in your entity struct, or work the key into the first 32 bits of the entity ID, or whatever... Now you have a system where everything is stored nicely in a linear fashion and only uses the memory it actually requires, plus component access is really fast. It's still triple indirection, but it should easily come out ahead of even highly optimized associative solutions. But now there's one bigger problem: what about systems that would really just like to iterate through all components of a particular type and do stuff? The components are now spread over several arrays.
My proposal is to write these systems separately, such that they have their own component storage, and the component types used by entities are just wrappers used for access to those components, kept in sync at some point in the code. You need to do this anyway when you use third-party systems like physics engines and whatnot that have their own ideas about what is optimal for data storage and management... Another problem: in this lookup

auto componentCollection = g_components[key];
auto position = componentCollection[typeID][entityID];

the componentCollection might only contain two component types, but typeID can be pretty big (e.g. if every component type just gets an incremented ID). Since you want to avoid an associative lookup here, you would need to take the overhead and make each array in those componentCollections large enough to index all component types... I'm not sure how bad this would be in terms of cache misses if a component lies in the middle of the array and the pointers to the other type arrays right next to it are unused. Is there a better strategy? What do you think?
  14. The shader code looks good. Are you sure nothing's wrong with the CPU-side setup of the texture? The location of the baby-blue spot looks correct, but why would it sample blue from the texture in other locations? It almost looks like the sampler is wrong (but it says point sampler, so I'm assuming it's set up correctly) and it's filtering from a lower mip level or something. Just in case, have you tried using Load instead? (https://msdn.microsoft.com/en-us/library/windows/desktop/bb509694(v=vs.85).aspx)
  15. After some investigation I realized that vertex shader texture fetch isn't reliably available in WebGL (implementations may report zero vertex texture units, probably because it's slow on mobile), so emulating instancing would be pretty painful, since I'd have to use the limited number of uniform slots for all the instanced data that I have... I can't wait to get back to desktop graphics and proper APIs. BTW, just to be clear, I'm already simulating instancing by flattening/duplicating a lot of data and rendering everything in one draw call. This way I can skip ahead to the 'first instance' just by using the offset parameters in drawArrays and drawElements. It's the CPU-side copying of data in typed arrays that I'm trying to optimize (the actual transfer via bufferSubData is fast; I don't need to worry about that for now). Doing it this way is a ton faster than trying to use VAOs and state sorting or whatnot to make draw calls faster (I profiled this a lot). I think I might try keeping all of the vertex data for meshes in fixed array locations (so I don't need to update the CPU-side buffers at all unless an object is 'dirty') and instead generate only the index buffer every frame... but I remember someone heavily discouraging that for some reason.