Everything posted by Hodgman

  1. There was a discussion recently on making TCP-based WebSockets behave a little more UDP-like. If you're using a networking model that works well on unreliable streams (e.g. shooters), then this might be worth a look. People have developed fast-paced shooters on WebSockets, despite TCP not being ideal. RTS games often use lock-step networking models that work well on TCP.
  2. Space Hockey - Pirate Dawn Universe

    I'm not attacking you. I'm honestly trying to help. I don't believe that there should be a stigma around mental health discussions and it definitely shouldn't ever be used as a slur. To be clear, I'm not attacking you with: "You're mentally unstable, therefore your claims are false". I'm stating the opposite of that: "You repeatedly make false claims, have baseless delusions of grandeur, express paranoid conspiracies against yourself which are false... which are all warning signs of disordered thinking". I'm honestly very worried about you and want to help. I also think that your inability to succeed with your game projects is down to self-sabotaging acts that stem from this disordered thinking... and that if you want to actually succeed as a game designer, you need to address those issues first. We've all watched you talk about these wild kook posts for years, and the only sensible conclusion is that you're experiencing disordered thinking. You need to get someone who's qualified in this stuff to diagnose it. I am not a doctor, just a concerned colleague. Going along with your stories of you-vs-the-games-industry, or you the genius-inventor-being-sabotaged is not helping you, it's just enabling harm. So I'm not going to indulge you any more. Enough is enough. We've tried playing along with you and it hasn't helped. So, here's the plain truth: it's not true. It's simply not true. This is delusional. I'm literally quoting you... No, you didn't. This is a fantasy. There's no real work in those areas, by you, that's accessible to the public. Just ideas in your head. This is not true. This is a paranoid conspiracy delusion.
  3. Space Hockey - Pirate Dawn Universe

    No you're not. You're neither famous nor infamous. You are suffering from delusional thinking and need to seek medical advice immediately. There's no conspiracy against you. It's psychosis. This is not an insult. As someone with my own mental health concerns, this is honestly the most helpful thing I can think to tell you. The sad truth is you're in an episode right now. Talk to your doctor about this. We've all watched you post incoherently for YEARS about conspiracies to suppress you and your achievements, about your achievements and association with projects (many of which have been shown to be wild exaggerations or fabrications), and your game ideas that don't gain traction due to your own personal shortcomings, or by their own incomplete nature (no conspiracy required). When's it going to stop? It's really not healthy. The only way you can help yourself is to get these symptoms diagnosed professionally and find a treatment plan that can stop you from sabotaging yourself with this disordered thinking. Good luck. We're here for you if there's anything we can actually do to help you. If you want to keep insisting that you're a suppressed genius and that the world is out to get you though, I'll keep reminding you that none of us know who you are and there is no worldwide conspiracy. The world simply doesn't even notice Marc Michalik.
  4. Space Hockey - Pirate Dawn Universe

    Sorry, who are you?
  5. CreateDDSTextureFromMemory is a helper / utility function (not part of D3D itself) that loads images that have been saved in the DDS image format. Are your images in that format? If not, then you don't want to be using that function. You can, however, look at the source code of CreateDDSTextureFromMemory to see which actual D3D functions it uses to create a texture resource, copy compressed image data into that resource, and then make an SRV for the resource so it can be bound to shaders.
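     For reference, here's a minimal sketch of doing the equivalent by hand (assuming D3D11, and assuming you've already decoded your image into raw RGBA8 pixels with stb_image or similar):

     #include <d3d11.h>

     HRESULT CreateTextureFromPixels(ID3D11Device* device,
                                     const void* pixels, UINT width, UINT height,
                                     ID3D11ShaderResourceView** outSrv)
     {
         D3D11_TEXTURE2D_DESC desc = {};
         desc.Width = width;
         desc.Height = height;
         desc.MipLevels = 1;
         desc.ArraySize = 1;
         desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
         desc.SampleDesc.Count = 1;
         desc.Usage = D3D11_USAGE_IMMUTABLE;          // filled once, never CPU-modified
         desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;

         D3D11_SUBRESOURCE_DATA initial = {};
         initial.pSysMem = pixels;
         initial.SysMemPitch = width * 4;             // bytes per row of RGBA8 data

         ID3D11Texture2D* texture = nullptr;
         HRESULT hr = device->CreateTexture2D(&desc, &initial, &texture);
         if (FAILED(hr)) return hr;

         hr = device->CreateShaderResourceView(texture, nullptr, outSrv);
         texture->Release();                          // the SRV holds its own reference
         return hr;
     }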
  6. Assuming 60fps, that's around 20GB/s of PCIe traffic, which is near the limit for a lot of PCs! I'd probably aim for below 100MB per frame... The minimum cbuffer size is 256B, so yeah, one per object, times a million objects, times 3 frames = 0.72GB... But if you're drawing a million objects, you probably really want to be using instancing, indirect drawing, or some kind of batching system, so that you don't need to make a million draw calls each using its own cbuffer. Possibly also a system where most of the per-object data is stored and calculated on the GPU, so that you don't need to triple-buffer it.
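     As a rough illustration of the batching idea -- one structured buffer shared by every instance, instead of a million tiny cbuffers (a sketch only; InstanceData is a hypothetical stand-in for your own per-object data):

     #include <d3d11.h>

     // Hypothetical per-instance data -- replace with whatever your objects need.
     struct InstanceData { float worldMatrix[16]; };

     ID3D11Buffer* CreateInstanceBuffer(ID3D11Device* device, UINT maxInstances)
     {
         D3D11_BUFFER_DESC desc = {};
         desc.ByteWidth = sizeof(InstanceData) * maxInstances;
         desc.Usage = D3D11_USAGE_DYNAMIC;            // rewritten by the CPU each frame
         desc.BindFlags = D3D11_BIND_SHADER_RESOURCE; // read via an SRV in the vertex shader
         desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
         desc.MiscFlags = D3D11_RESOURCE_MISC_BUFFER_STRUCTURED;
         desc.StructureByteStride = sizeof(InstanceData);

         ID3D11Buffer* buffer = nullptr;
         device->CreateBuffer(&desc, nullptr, &buffer);
         return buffer;
     }

     // Then one call draws all N copies; the vertex shader indexes the buffer
     // with SV_InstanceID instead of reading a per-object cbuffer:
     //   context->DrawIndexedInstanced(indexCount, instanceCount, 0, 0, 0);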
  7. Mathematically ideal field of view

    That's the mathematically ideal answer. Everything else is a lie IMHO. Setting the vertical FOV and calculating the horizontal from it produces better results in these days of wide-screen monitors. Most recent games I've looked into let you choose between 60 and 80 degrees vertical, which ends up as around 90 to 115 degrees horizontal on a standard wide-screen (or 145 to 155 degrees horizontal on a triple-wide).
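    The conversion itself is just trigonometry (a minimal sketch):

    #include <cmath>

    // Horizontal FOV from vertical FOV and aspect ratio (width / height).
    float HorizontalFovDegrees(float verticalFovDegrees, float aspect)
    {
        const float kDegToRad = 3.14159265f / 180.0f;
        float halfV = verticalFovDegrees * 0.5f * kDegToRad;
        float halfH = std::atan(std::tan(halfV) * aspect);
        return 2.0f * halfH / kDegToRad;
    }
    // e.g. HorizontalFovDegrees(60.0f, 16.0f/9.0f) ~= 91 degrees,
    //      HorizontalFovDegrees(80.0f, 48.0f/9.0f) ~= 155 degrees (triple-wide).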
  8. If memory serves... The GeForce FX series introduced half-precision processing instructions, and was about twice as fast if you used them (or alternatively, it was twice as slow if you needed 32-bit precision...). Also, a lot of SM3.0-level GPUs didn't run 32-bit computation by default a lot of the time -- they'd do things in 24-bit precision where possible, and most weren't IEEE 754 compliant. Not sure on the timing, but the idea of half-precision computation was phased out pretty soon afterwards, and is only just being reintroduced now in Dx12-level GPUs!
  9. I think https://www.khronos.org/registry/OpenGL/extensions/EXT/EXT_texture_sRGB.txt adds the compressed sRGB formats
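      For example, a minimal upload sketch using one of those formats (assuming your data is already DXT5-compressed, e.g. loaded out of a DDS file):

      #include <GL/glew.h> // or whichever GL loader you use

      GLuint UploadCompressedSrgbTexture(GLsizei w, GLsizei h,
                                         const void* dxt5Data, GLsizei dataSize)
      {
          GLuint tex = 0;
          glGenTextures(1, &tex);
          glBindTexture(GL_TEXTURE_2D, tex);
          // The sRGB variant of the format -- the hardware linearizes on sample.
          glCompressedTexImage2D(GL_TEXTURE_2D, 0,
                                 GL_COMPRESSED_SRGB_ALPHA_S3TC_DXT5_EXT,
                                 w, h, 0, dataSize, dxt5Data);
          return tex;
      }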
  10. They already don't match up before this (we use PhysX, which is non-deterministic, and also sometimes user inputs from clients will arrive slightly too late to be applied at the exact frame that they wanted to apply them on), so the visual position of each player's vehicle is smoothed out to hide the corrections, while the physics state is constantly snapping with corrections from the server. If I end up using a larger tick size for high-ping players, they'll end up with more physics snapping = more weird visual smoothing... but the severity of these issues will depend on how bad their ping is. I think gamers kind of expect to have a worse network experience in fast-paced games when their ping is high, so it might be acceptable..? Using the client-side reprediction model, the current behavior is that high-ping players end up with higher and higher CPU loads / their framerate drops lower as their ping increases! That's pretty awful, so I think having horrible physics quality at high ping is a better outcome than a horrible framerate.
  11. If you want to add a line once (not once per frame) then you only call AddLine once (not once per frame)?
  12. I don't know if it's the "correct" way to do things, I'm just making up game/engine architecture as I go along. I'll let you know after I successfully launch a game using this model. For what it's worth though, Doom 3 used this model of client-side rewind and reprediction. There's a very in-depth write up here: http://mrelusive.com/publications/papers/The-DOOM-III-Network-Architecture.pdf As for the punishing-ping thing... yeah, that's my main objection to this model at the moment. I'd love to support fairly high pings, as my racing game doesn't have collisions, so high ping shouldn't be as much of an issue. I haven't implemented these yet, but my plans are to automatically tweak some settings on the client when their ping is too high:
      * Increase the physics tick size during reprediction -- e.g. instead of doing 10x 60Hz ticks, run 5x 30Hz ticks to repredict.
      * Spread reprediction cost over multiple frames. e.g. When the client gets a state update that's 10 frames old, they currently save their game state, load the server's state snapshot, run 10 ticks, then compare the new state against their saved one and add differences into the visual smoothing buffer. Instead, when getting that 10-frame-old server snapshot, they could save their game state, load the server's state snapshot, run 5 ticks, save this as a new fake "server snapshot", load their own saved game state from before this prediction and advance to the next frame as if nothing happened. Next frame they take that half-repredicted game state from the previous frame (which is now only 6 ticks out of date instead of 10) and pretend that it just arrived from the server.
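      To sketch that "spread the cost" idea in code (GameState, SaveState, LoadState and Tick are hypothetical stand-ins for your own simulation interface):

      // Hypothetical simulation interface -- names are illustrative only.
      struct GameState { /* physics + gameplay state */ };
      static GameState g_state;
      GameState SaveState() { return g_state; }
      void LoadState(const GameState& s) { g_state = s; }
      void Tick(float dt) { /* advance the loaded state by one fixed step */ }

      // 'snapshot' arrived from the server and is 'age' ticks behind the present.
      // Instead of running all 'age' catch-up ticks now, run half of them and
      // hand the result back as if it were a fresher server snapshot.
      GameState RepredictHalf(const GameState& snapshot, int age, float dt)
      {
          GameState present = SaveState();  // remember where we were

          LoadState(snapshot);
          for (int i = 0; i < age / 2; ++i)
              Tick(dt);
          GameState halfway = SaveState();  // fake "server snapshot", age/2 ticks old

          LoadState(present);               // carry on as if nothing happened
          return halfway;                   // feed back in on a later frame
      }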
  13. If you're drawing a new line every frame (e.g. a line that shows which enemy your missile launcher is locked on to) then set the duration to 0. If you're drawing a one-off line that you want to linger for a second so that you can see it (e.g. a line that shows the path of a bullet that only exists for a single frame) then you use a large lifetime value, like 1000ms.
  14. The Jitter Buffer section implies that state updates from the server arrive at the clients slightly before the client is due to present that frame. This means that he's running the server in the future / clients in the past (like Counter-Strike), and not how I've described it in my last post. The downside here is that, in order for server update packets for frame F to arrive just before frame F occurs on a client, the client clock has to be synced to at least ping/2 ms in the past relative to the server clock. This means that when the client sends user input to the server for frame G, the server will receive it at time G+ping. This will be fine for completely predictable actions, like moving a single FPS character -- there is latency, but the client can hide it by applying the inputs on their end at frame G -- this is incorrect / divergent from the server's version of events (because the server will apply the change at G+ping), but if no other interactions occur, the end result / final position of the character will be the same on both ends. You only run into issues when your predictions are wrong because there are other things going on in the world. E.g. when two different FPS characters walk into each other, or you walk into a door/moving object being controlled on the server, or two people throw physics objects at each other, or there's some kind of timing/rhythm puzzle, etc, you'll notice ping ms of input latency. AFAIK, Counter-Strike doesn't address that at all / it's just a tolerated downside. However, they do address the special case of weapon ray-tracing, by having the server rewind all player skeletons when receiving a shooting input, so that the input lag of shooting is corrected for. The mostly correct predictions plus rewinding of weapon hit boxes means that it seems rock solid.
  15. Triggering 3D mode on TV's

    Same way they negotiate about resolution, refresh rate, etc -- they use the HDMI protocol to talk to each other over the HDMI cable (or replace HDMI with any of the other protocols). You just start sending stereoscopic images to your display system: In D3D: https://docs.microsoft.com/en-us/windows/desktop/direct3ddxgi/stereo-rendering-stereo-status-notifying In GL: https://www.nvidia.com/content/GTC-2010/pdfs/2010_GTC2010.pdf Or you can talk to your GPU drivers directly, e.g. https://docs.nvidia.com/gameworks/content/gameworkslibrary/coresdk/nvapi/group__stereoapi.html
  16. Can you link to which of Gaffer's work you're following? I'm not sure if he does this (I'll have to re-read his work) but that's exactly what we do when using synchronized physics simulation in our racing game. It does add a lot of extra CPU load on the clients, especially at high ping... In the "Counter-Strike" model, the reverse happens -- the server pays the cost of constantly rewinding / fast-forwarding. Also, just for clarity, the clients actually have to simulate / extrapolate to a point in time that's at least ping*0.5 ms ahead of the server clock -- so that their input commands will arrive at the server just before the point in time when they should be applied to the simulation. We do a clock-synchronization process during connection to get a common wall clock for each client and the server to refer to, and then the clients can adjust their in-game clock appropriately to keep themselves slightly in the future. To support massive player counts and/or high-ping servers, we also support the snapshot interpolation method as a server configuration setting.
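      A minimal sketch of one common way to do that kind of clock sync (illustrative only -- real code would repeat this over several round trips and reject high-RTT samples):

      struct ClockSync { double offsetSeconds; double rttSeconds; };

      // t0 = client time when the request was sent
      // serverTime = server clock value carried in the reply
      // t1 = client time when the reply arrived
      ClockSync OnSyncReply(double t0, double serverTime, double t1)
      {
          ClockSync r;
          r.rttSeconds = t1 - t0;
          // Assume the reply took half the round trip, i.e. the server stamped
          // its clock in the middle of the trip: serverNow ~= clientNow + offset.
          r.offsetSeconds = (serverTime + r.rttSeconds * 0.5) - t1;
          return r;
      }
      // To stay ahead of the server, a client then simulates at roughly
      // clientNow + offset + rtt*0.5, plus a small safety margin.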
  17. Microsoft Going Cross-platform with Xbox Live

    Technically that might already be possible -- dxvk runs D3D11 on Vulkan, vkd3d runs D3D12 on Vulkan, MoltenVK runs VK on Metal (iOS), and Nintendo runs VK on NVN (Switch). How many layers is too many? Seriously though, Valve used a similar library (ToGL) to ship D3D9 games on Mac, and are now using dxvk/vkd3d in Steam's Linux client.
  18. One of the reasons that everyone's started moving towards PBR is that it's easier to make it work under loads of different lighting situations than our old ad-hoc models. What kind of tonemapper are you using? Do you do any other post-processing? Can you post some images? It sounds correct that a white surface surrounded by a black/orange dome would look orange/brown... In a real camera, photos of that surface could be shifted away from orange to neutral white by changing the white-balance settings. i.e. The physical surface is orange, but we edit our photographs so that it looks white, because we subjectively prefer that over an objective measurement of reality. You could add a white-balance step to your tone-mapping, to shift towards blue at sunset/sunrise, and shift towards orange when under the midday sky, like a real camera.
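      E.g. a minimal sketch of such a white-balance step (illustrative -- the white point would come from your sky/sun model):

      #include <algorithm>

      struct Color { float r, g, b; };

      // Divide by the color that should read as "white" (e.g. the current
      // sun/sky color), so that color is mapped to neutral white.
      Color WhiteBalance(Color c, Color whitePoint)
      {
          const float eps = 1e-5f;
          c.r /= std::max(whitePoint.r, eps);
          c.g /= std::max(whitePoint.g, eps);
          c.b /= std::max(whitePoint.b, eps);
          return c;
      }
      // Apply before (or as part of) tone-mapping; lerp whitePoint towards
      // (1,1,1) to control the strength of the correction.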
  19. Interactive book (CYOA)

    I know a bunch of visual novel people who write their stories in Twine but then import the data into their own engines. It's an open source project, so the barrier to hijacking their data formats is smaller e.g. https://assetstore.unity.com/packages/tools/cradle-93606
  20. What? To get depth pass/fail info in a pixel shader, you'll have to implement z-buffering yourself in the pixel shader, by making your own "depth buffer" that's bound as a read-write texture (D3D: UAV / GL: Image). This sounds horribly slow though, so you should probably use the hardware depth/stencil as usual...?
  21. Free compilers high performance ?

    Instead of building my entire game/engine as debuggable, I just drop this "MAKE_DEBUGGABLE" macro at the top of any individual CPP files that I want to be able to debug:

    #if defined _MSC_VER
    #define MAKE_DEBUGGABLE __pragma(optimize("", off))
    #elif defined __clang__
    #define MAKE_DEBUGGABLE _Pragma("clang optimize off")
    #else
    #define MAKE_DEBUGGABLE
    #endif

    This lets you build 90% of your game as fast/optimized code, and the 10% that you're actively working on as slow/debuggable code. If you place it after your #includes, then all your included template crap will be fast / non-debuggable. If you place it before your #includes, then you can step into and debug your template crap too.
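    Usage then looks something like this (the file and function names here are made up):

    // gameplay_system.cpp
    #include "engine.h"  // placed above the macro: headers stay optimized
    MAKE_DEBUGGABLE      // everything below compiles with optimizations off

    void UpdateGameplay(float dt)
    {
        // breakpoints and single-stepping behave reliably in here
    }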
  22. OOP is dead, long live OOP

    edit: Seeing this has been linked outside of game-development circles: "ECS" (the Wikipedia page on it is garbage, btw -- it conflates EC-frameworks and ECS-frameworks, which aren't the same...) is a faux-pattern circulated within game-dev communities, which is basically a version of the relational model, where "entities" are just IDs that represent a formless object, "components" are rows in specific tables that reference an ID, and "systems" are procedural code that can modify the components. This "pattern" is always posed as a solution to an over-use of inheritance, without mentioning that over-use of inheritance is actually bad under OOP guidelines. Hence the rant. This isn't the "one true way" to write software. It's getting people to actually look at existing design guidelines.

    Inspiration

    This blog post is inspired by Aras Pranckevičius' recent publication of a talk aimed at junior programmers, designed to get them to come to terms with new "ECS" architectures. Aras follows the typical pattern (explained below), where he shows some terrible OOP code and then shows that the relational model is a great alternative solution (but calls it "ECS" instead of relational). This is not a swipe at Aras at all - I'm a fan of his work and commend him on the great presentation! The reason I'm picking on his presentation in particular instead of the hundred other ECS posts that have been made on the interwebs is that he's gone through the effort of actually publishing a git repository to go along with his presentation, which contains a simple little "game" as a playground for demonstrating different architecture choices. This tiny project makes it easy for me to actually, concretely demonstrate my points, so, thanks Aras! You can find Aras' slides at http://aras-p.info/texts/files/2018Academy - ECS-DoD.pdf and the code at https://github.com/aras-p/dod-playground. I'm not going to analyse the final ECS architecture from that talk (yet?); instead I'm going to focus on the straw-man "bad OOP" code from the start. I'll show what it would look like if we actually fix all of the OOD rule violations. Spoiler: fixing the OOD violations actually results in a similar performance improvement to Aras' ECS conversion, plus it actually uses less RAM and requires fewer lines of code than the ECS version!

    TL;DR: Before you decide that OOP is shit and ECS is great, stop and learn OOD (to know how to use OOP properly) and learn relational (to know how to use ECS properly too).

    I've been a long-time ranter in many "ECS" threads on the forum, partly because I don't think it deserves to exist as a term (spoiler: it's just an ad-hoc version of the relational model), but mostly because almost every single blog, presentation, or article that promotes the "ECS" pattern follows the same structure:
    * Show some terrible OOP code, which has a terribly flawed design based on an over-use of inheritance (and incidentally, a design that breaks many OOD rules).
    * Show that composition is a better solution than inheritance (and don't mention that OOD actually teaches this same lesson).
    * Show that the relational model is a great fit for games (but call it "ECS").
    This structure grinds my gears because: (A) it's a straw-man argument... it's apples to oranges (bad code vs good code),
which just feels dishonest, even if it's unintentional and not actually required to show that your new architecture is good; but more importantly: (B) it has the side effect of suppressing knowledge and unintentionally discouraging readers from interacting with half a century of existing research.

    The relational model was first written about in the 1960's. Through the 70's and 80's this model was refined extensively. There are common beginners' questions like "which class should I put this data in?", which are often answered in vague terms like "you just need to gain experience and you'll know by feel"... but in the 70's this question was extensively pondered and solved in the general case in formal terms; it's called database normalization. By ignoring existing research and presenting ECS as a completely new and novel solution, you're hiding this knowledge from new programmers.

    Object-oriented programming dates back just as far, if not further (work in the 1950's began to explore the style)! However, it was in the 1990's that OO became a fad - hyped, viral and, very quickly, the dominant programming paradigm. A slew of new OO languages exploded in popularity, including Java and (the standardized version of) C++. However, because it was a hype-train, everyone needed to know this new buzzword to put on their resume, yet no one really grokked it. These new languages had added a lot of OO features as keywords -- class, virtual, extends, implements -- and I would argue that it's at this point that OO split into two distinct entities with a life of their own. I will refer to the use of these OO-inspired language features as "OOP", and the use of OO-inspired design/architecture techniques as "OOD". Everyone picked up OOP very quickly. Schools taught OO classes that were efficient at churning out new OOP programmers... yet knowledge of OOD lagged behind. I argue that code that uses OOP language features but does not follow OOD design rules is not OO code. Most anti-OOP rants are eviscerating code that is not actually OO code. OOP code has a very bad reputation, I assert, in part due to the fact that most OOP code does not follow OOD rules, thus isn't actually "true" OO code.

    Background

    As mentioned above, the 1990's was the peak of the "OO fad", and it's during this time that "bad OOP" was probably at its worst. If you studied OOP during this time, you probably learned "The 4 pillars of OOP": abstraction, encapsulation, polymorphism and inheritance. I'd prefer to call these the "4 tools of OOP" rather than 4 pillars. These are tools that you can use to solve problems. Simply learning how a tool works is not enough though; you need to know when you should be using them... It's irresponsible for educators to teach people a new tool without also teaching them when it's appropriate to use each of them. In the early 2000's, there was a push-back against the rampant misuse of these tools, a kind of second wave of OOD thought. Out of this came the SOLID mnemonic to use as a quick way to evaluate a design's strength. Note that most of these bits of advice were actually widely circulated in the 90's, but didn't yet have the cool acronym to cement them as the five core rules...
    * Single responsibility principle. Every class should have one reason to change. If class "A" has two responsibilities, create a new class "B" and "C" to handle each of them in isolation, and then compose "A" out of "B" and "C".
    * Open/closed principle. Software changes over time (i.e. maintenance is important).
Try to put the parts that are likely to change into implementations (i.e. concrete classes) and build interfaces around the parts that are unlikely to change (e.g. abstract base classes).
    * Liskov substitution principle. Every implementation of an interface needs to 100% comply with the requirements of that interface, i.e. any algorithm that works on the interface should continue to work for every implementation.
    * Interface segregation principle. Keep interfaces as small as possible, in order to ensure that each part of the code "knows about" the least amount of the code-base as possible, i.e. avoid unnecessary dependencies. This is also just good advice in C++, where compile times suck if you don't follow it.
    * Dependency inversion principle. Instead of having two concrete implementations communicate directly (and depend on each other), they can usually be decoupled by formalizing their communication interface as a third class that acts as an interface between them. This could be an abstract base class that defines the method calls used between them, or even just a POD struct that defines the data passed between them.
    Not included in the SOLID acronym, but I would argue just as important, is the:
    * Composite reuse principle. Composition is the right default™. Inheritance should be reserved for use when it's absolutely required. This gives us SOLID-C(++).
    From now on, I'll refer to these by their three-letter acronyms -- SRP, OCP, LSP, ISP, DIP, CRP.

    A few other notes: In OOD, interfaces and implementations are ideas that don't map to any specific OOP keywords. In C++, we often create interfaces with abstract base classes and virtual functions, and then implementations inherit from those base classes... but that is just one specific way to achieve the idea of an interface. In C++, we can also use PIMPL, opaque pointers, duck typing, typedefs, etc... You can create an OOD design and then implement it in C, where there aren't any OOP language keywords! So when I'm talking about interfaces here, I'm not necessarily talking about virtual functions -- I'm talking about the idea of implementation hiding. Interfaces can be polymorphic, but most often they are not! A good use for polymorphism is rare, but interfaces are fundamental to all software. As hinted above, if you create a POD structure that simply stores some data to be passed from one class to another, then that struct is acting as an interface - it is a formal data definition. Even if you just make a single class in isolation with a public and a private section, everything in the public section is the interface and everything in the private section is the implementation.

    Inheritance actually has (at least) two types -- interface inheritance and implementation inheritance. In C++, interface inheritance includes abstract base classes with pure-virtual functions, PIMPL, and conditional typedefs. In Java, interface inheritance is expressed with the implements keyword. In C++, implementation inheritance occurs any time a base class contains anything besides pure-virtual functions. In Java, implementation inheritance is expressed with the extends keyword. OOD has a lot to say about interface-inheritance, but implementation-inheritance should usually be treated as a bit of a code smell!

    And lastly, I should probably give a few examples of terrible OOP education and how it results in bad code in the wild (and OOP's bad reputation).
    When you were learning about hierarchies / inheritance, you probably had a task something like: "Let's say you have a university app that contains a directory of Students and Staff. We can make a Person base class, and then a Student class and a Staff class that inherit from Person!" Nope, nope, nope. Let me stop you there. The unspoken sub-text beneath the LSP is that class hierarchies and the algorithms that operate on them are symbiotic. They're two halves of a whole program. OOP is an extension of procedural programming, and it's still mainly about those procedures. If we don't know what kinds of algorithms are going to be operating on Students and Staff (and which algorithms would be simplified by polymorphism) then it's downright irresponsible to dive in and start designing class hierarchies. You have to know the algorithms and the data first.

    You probably also had a task something like: "Let's say you have a shape class. We could also have squares and rectangles as sub-classes. Should we have square is-a rectangle, or rectangle is-a square?" This is actually a good one to demonstrate the difference between implementation-inheritance and interface-inheritance. If you're using the implementation-inheritance mindset, then the LSP isn't on your mind at all and you're only thinking practically about trying to reuse code using inheritance as a tool. From this perspective, the following makes perfect sense:

    struct Square
    {
        int width;
    };
    struct Rectangle : Square
    {
        int height;
    };

    A square just has a width, while a rectangle has a width + height, so extending the square with a height member gives us a rectangle! As you might have guessed, OOD says that doing this is (probably) wrong. I say probably because you can argue over the implied specifications of the interface here... but whatever. A square always has the same height as its width, so from the square's interface, it's completely valid to assume that its area is "width * width". By inheriting from square, the rectangle class (according to the LSP) must obey the rules of square's interface. Any algorithm that works correctly with a square must also work correctly with a rectangle. Take the following algorithm:

    std::vector<Square*> shapes;
    int area = 0;
    for (auto s : shapes)
        area += s->width * s->width;

    This will work correctly for squares (producing the sum of their areas), but will not work for rectangles. Therefore, Rectangle violates the LSP rule. If you're using the interface-inheritance mindset, then neither Square nor Rectangle will inherit from the other. The interfaces for a square and a rectangle are actually different, and one is not a super-set of the other. So OOD actually discourages the use of implementation-inheritance. As mentioned before, if you want to re-use code, OOD says that composition is the right way to go! For what it's worth though, the correct version of the above (bad) implementation-inheritance hierarchy code in C++ is:

    struct Shape
    {
        virtual int area() const = 0;
    };
    struct Square : public virtual Shape
    {
        virtual int area() const { return width * width; }
        int width;
    };
    struct Rectangle : private Square, public virtual Shape
    {
        virtual int area() const { return width * height; }
        int height;
    };

    "public virtual" means "implements" in Java, for use when implementing an interface. "private" allows you to extend a base class without also inheriting its interface -- in this case, Rectangle is-not-a Square, even though it's inherited from it.
    I don't recommend writing this kind of code, but if you do like to use implementation-inheritance, this is the way that you're supposed to be doing it! TL;DR - your OOP class told you what inheritance was. Your missing OOD class should have told you not to use it 99% of the time!

    Entity / Component frameworks

    With all that background out of the way, let's jump into Aras' starting point -- the so-called "typical OOP" starting point. Actually, one last gripe -- Aras calls this code "traditional OOP", which I object to. This code may be typical of OOP in the wild, but as above, it breaks all sorts of core OO rules, so it should not at all be considered traditional. I'm going to start from the earliest commit before he starts fixing the design towards "ECS": "Make it work on Windows again" 3529f232510c95f53112bbfff87df6bbc6aa1fae

    // -------------------------------------------------------------------------------------------------
    // super simple "component system"

    class GameObject;
    class Component;

    typedef std::vector<Component*> ComponentVector;
    typedef std::vector<GameObject*> GameObjectVector;

    // Component base class. Knows about the parent game object, and has some virtual methods.
    class Component
    {
    public:
        Component() : m_GameObject(nullptr) {}
        virtual ~Component() {}

        virtual void Start() {}
        virtual void Update(double time, float deltaTime) {}

        const GameObject& GetGameObject() const { return *m_GameObject; }
        GameObject& GetGameObject() { return *m_GameObject; }
        void SetGameObject(GameObject& go) { m_GameObject = &go; }
        bool HasGameObject() const { return m_GameObject != nullptr; }

    private:
        GameObject* m_GameObject;
    };

    // Game object class. Has an array of components.
    class GameObject
    {
    public:
        GameObject(const std::string&& name) : m_Name(name) { }
        ~GameObject()
        {
            // game object owns the components; destroy them when deleting the game object
            for (auto c : m_Components) delete c;
        }

        // get a component of type T, or null if it does not exist on this game object
        template<typename T>
        T* GetComponent()
        {
            for (auto i : m_Components)
            {
                T* c = dynamic_cast<T*>(i);
                if (c != nullptr)
                    return c;
            }
            return nullptr;
        }

        // add a new component to this game object
        void AddComponent(Component* c)
        {
            assert(!c->HasGameObject());
            c->SetGameObject(*this);
            m_Components.emplace_back(c);
        }

        void Start() { for (auto c : m_Components) c->Start(); }
        void Update(double time, float deltaTime) { for (auto c : m_Components) c->Update(time, deltaTime); }

    private:
        std::string m_Name;
        ComponentVector m_Components;
    };

    // The "scene": array of game objects.
    static GameObjectVector s_Objects;

    // Finds all components of given type in the whole scene
    template<typename T>
    static ComponentVector FindAllComponentsOfType()
    {
        ComponentVector res;
        for (auto go : s_Objects)
        {
            T* c = go->GetComponent<T>();
            if (c != nullptr)
                res.emplace_back(c);
        }
        return res;
    }

    // Find one component of given type in the scene (returns first found one)
    template<typename T>
    static T* FindOfType()
    {
        for (auto go : s_Objects)
        {
            T* c = go->GetComponent<T>();
            if (c != nullptr)
                return c;
        }
        return nullptr;
    }

    Ok, 100 lines of code is a lot to dump at once, so let's work through what this is... Another bit of background is required -- it was popular for games in the 90's to use inheritance to solve all their code re-use problems. You'd have an Entity, extended by Character, extended by Player and Monster, etc...
    This is implementation-inheritance, as described earlier (a code smell), and it seems like a good idea to begin with, but eventually results in a very inflexible code-base. Hence OOD has the "composition over inheritance" rule, above. So, in the 2000's, the "composition over inheritance" rule became popular, and gamedevs started writing this kind of code instead.

    What does this code do? Well, nothing good. To put it in simple terms, this code is re-implementing the existing language feature of composition as a runtime library instead of a language feature. You can think of it as if this code is actually constructing a new meta-language on top of C++, and a VM to run that meta-language on. In Aras' demo game, this code is not required (we'll soon delete all of it!) and only serves to reduce the game's performance by about 10x.

    What does it actually do, though? This is an "Entity/Component" framework (sometimes confusingly called an "Entity/Component system") -- but completely different to an "Entity Component System" framework (which are never called "Entity Component System systems" for obvious reasons). It formalizes several "EC" rules:
    * The game will be built out of featureless "Entities" (called GameObjects in this example), which themselves are composed out of "Components".
    * GameObjects fulfill the service locator pattern - they can be queried for a child component by type.
    * Components know which GameObject they belong to - they can locate sibling components by querying their parent GameObject.
    * Composition may only be one level deep (Components may not own child components, and GameObjects may not own child GameObjects).
    * A GameObject may only have one component of each type (some frameworks enforced this, others did not).
    * Every component (probably) changes over time in some unspecified way - so the interface includes "virtual void Update".
    * GameObjects belong to a scene, which can perform queries over all GameObjects (and thus also over all Components).

    This kind of framework was very popular in the 2000's, and though restrictive, proved flexible enough to power countless games from that time and still today. However, it's not required. Your programming language already contains support for composition as a language feature - you don't need a bloated framework to access it... Why do these frameworks exist then? Well, to be fair, they enable dynamic, runtime composition. Instead of GameObject types being hard-coded, they can be loaded from data files. This is great for allowing game/level designers to create their own kinds of objects... However, in most game projects, you have a very small number of designers and a literal army of programmers, so I would argue it's not a key feature. Worse than that though, it's not even the only way that you could implement runtime composition! For example, Unity is based on C# as a "scripting language", and many other games use alternatives such as Lua -- your designer-friendly tool can generate C#/Lua code to define new game-objects, without the need for this kind of bloated framework! We'll re-add this "feature" in a later follow-up post, in a way that doesn't cost us a 10x performance overhead...

    Let's evaluate this code according to OOD:
    * GameObject::GetComponent uses dynamic_cast. Most people will tell you that dynamic_cast is a code smell - a strong hint that something is wrong.
I would say that it indicates that you have an LSP violation on your hands -- you have some algorithm that's operating on the base interface, but it demands to know about different implementation details. That's the specific reason that it smells.
    * GameObject is kind of ok if you imagine that it's fulfilling the service locator pattern... but going beyond OOD critique for a moment, this pattern creates implicit links between parts of the project, and I feel (without a wikipedia link to back me up with comp-sci knowledge) that implicit communication channels are an anti-pattern and explicit communication channels should be preferred. This same argument applies to bloated "event frameworks" that sometimes appear in games...
    * I would argue that Component is an SRP violation because its interface (virtual void Update(time)) is too broad. The use of "virtual void Update" is pervasive within game development, but I'd also say that it is an anti-pattern. Good software should allow you to easily reason about the flow of control and the flow of data. Putting every single bit of gameplay code behind a "virtual void Update" call completely and utterly obfuscates both the flow of control and the flow of data. IMHO, invisible side effects, a.k.a. action at a distance, are the most common source of bugs, and "virtual void Update" ensures that almost everything is an invisible side effect.
    * Even though the goal of the Component class is to enable composition, it's doing so via inheritance, which is a CRP violation.
    * The one good part is that the example game code is bending over backwards to fulfill the SRP and ISP rules -- it's split into a large number of simple components with very small responsibilities, which is great for code re-use. However, it's not great at DIP -- many of the components do have direct knowledge of each other.

    So, all of the code that I've posted above can actually just be deleted. That whole framework. Delete GameObject (aka Entity in other frameworks), delete Component, delete FindOfType. It's all part of a useless VM that's breaking OOD rules and making our game terribly slow.

    Frameworkless composition (AKA using the features of the #*@!ing programming language)

    If we delete our composition framework and don't have a Component base class, how will our GameObjects manage to use composition and be built out of Components? As hinted in the heading, instead of writing that bloated VM and then writing our GameObjects on top of it in our weird meta-language, let's just write them in C++, because we're #*@!ing game programmers and that's literally our job.

    Here's the commit where the Entity/Component framework is deleted: https://github.com/hodgman/dod-playground/commit/f42290d0217d700dea2ed002f2f3b1dc45e8c27c
    Here's the original version of the source code: https://github.com/hodgman/dod-playground/blob/3529f232510c95f53112bbfff87df6bbc6aa1fae/source/game.cpp
    Here's the modified version of the source code: https://github.com/hodgman/dod-playground/blob/f42290d0217d700dea2ed002f2f3b1dc45e8c27c/source/game.cpp

    The gist of the changes is:
    * Removing ": public Component" from each component type.
    * I add a constructor to each component type. OOD is about encapsulating the state of a class, but since these classes are so small/simple, there's not much to hide -- the interface is a data description. However, one of the main reasons that encapsulation is a core pillar is that it allows us to ensure that class invariants are always true...
or, in the event that an invariant is violated, you hopefully only need to inspect the encapsulated implementation code in order to find your bug. In this example code, it's worth us adding the constructors to enforce a simple invariant -- all values must be initialized.
    * I rename the overly generic "Update" methods to reflect what they actually do -- UpdatePosition for MoveComponent and ResolveCollisions for AvoidComponent.
    * I remove the three hard-coded blocks of code that resemble a template/prefab -- code that creates a GameObject containing specific Component types -- and replace them with three C++ classes.
    * Fix the "virtual void Update" anti-pattern.
    * Instead of components finding each other via the service locator pattern, the game objects explicitly link them together during construction.

    The objects

    So, instead of this "VM" code:

    // create regular objects that move
    for (auto i = 0; i < kObjectCount; ++i)
    {
        GameObject* go = new GameObject("object");

        // position it within world bounds
        PositionComponent* pos = new PositionComponent();
        pos->x = RandomFloat(bounds->xMin, bounds->xMax);
        pos->y = RandomFloat(bounds->yMin, bounds->yMax);
        go->AddComponent(pos);

        // setup a sprite for it (random sprite index from first 5), and initial white color
        SpriteComponent* sprite = new SpriteComponent();
        sprite->colorR = 1.0f;
        sprite->colorG = 1.0f;
        sprite->colorB = 1.0f;
        sprite->spriteIndex = rand() % 5;
        sprite->scale = 1.0f;
        go->AddComponent(sprite);

        // make it move
        MoveComponent* move = new MoveComponent(0.5f, 0.7f);
        go->AddComponent(move);

        // make it avoid the bubble things
        AvoidComponent* avoid = new AvoidComponent();
        go->AddComponent(avoid);

        s_Objects.emplace_back(go);
    }

    We now have this normal C++ code:

    struct RegularObject
    {
        PositionComponent pos;
        SpriteComponent sprite;
        MoveComponent move;
        AvoidComponent avoid;

        RegularObject(const WorldBoundsComponent& bounds)
            : move(0.5f, 0.7f)
            // position it within world bounds
            , pos(RandomFloat(bounds.xMin, bounds.xMax),
                  RandomFloat(bounds.yMin, bounds.yMax))
            // setup a sprite for it (random sprite index from first 5), and initial white color
            , sprite(1.0f, 1.0f, 1.0f, rand() % 5, 1.0f)
        {
        }
    };

    ...

    // create regular objects that move
    regularObject.reserve(kObjectCount);
    for (auto i = 0; i < kObjectCount; ++i)
        regularObject.emplace_back(bounds);

    The algorithms

    Now the other big change is in the algorithms. Remember at the start when I said that interfaces and algorithms were symbiotic, and both should impact the design of the other? Well, the "virtual void Update" anti-pattern is also an enemy here. The original code has a main loop algorithm that consists of just:

    // go through all objects
    for (auto go : s_Objects)
    {
        // Update all their components
        go->Update(time, deltaTime);
    }

    You might argue that this is nice and simple, but IMHO it's so, so bad. It's completely obfuscating both the flow of control and the flow of data within the game. If we want to be able to understand our software, if we want to be able to maintain it, if we want to be able to bring on new staff, if we want to be able to optimise it, or if we want to be able to make it run efficiently on multiple CPU cores, we need to be able to understand both the flow of control and the flow of data. So "virtual void Update" can die in a fire.
    Instead, we end up with a more explicit main loop that makes the flow of control much easier to reason about (the flow of data is still obfuscated here; we'll get around to fixing that in later commits):

    // Update all positions
    for (auto& go : s_game->regularObject)
    {
        UpdatePosition(deltaTime, go, s_game->bounds.wb);
    }
    for (auto& go : s_game->avoidThis)
    {
        UpdatePosition(deltaTime, go, s_game->bounds.wb);
    }

    // Resolve all collisions
    for (auto& go : s_game->regularObject)
    {
        ResolveCollisions(deltaTime, go, s_game->avoidThis);
    }

    The downside of this style is that for every single new object type that we add to the game, we have to add a few lines to our main loop. I'll address / solve this in a future blog in this series.

    Performance

    There's still a lot of outstanding OOD violations, some bad design choices, and lots of optimization opportunities remaining, but I'll get to them in the next blog in this series. As it stands at this point though, the "fixed OOD" version either almost matches or beats the final "ECS" code from the end of the presentation... And all we did was take the bad faux-OOP code and make it actually obey the rules of OOP (and delete 100 lines of code)!

    Next steps

    There's much more ground that I'd like to cover here, including solving the remaining OOD issues, immutable objects (functional-style programming) and the benefits they can bring to reasoning about data flows, message passing, applying some DOD reasoning to our OOD code, applying some relational wisdom to our OOD code, deleting those "entity" classes that we ended up with and having purely components-only, different styles of linking components together (pointers vs handles), real-world component containers, catching up to the ECS version with more optimization, and then further optimization that wasn't present in Aras' talk (such as threading / SIMD). No promises on the order that I'll get to these, or if, or when...
  23. In the AAA space, a lot of post-processing / lighting etc has a ton of man-hours poured into optimisation, including finding which bits of work can be done in the same shaders instead of in multiple passes. This pass-reduction kind of work can reduce the bandwidth impact of "fat" HDR formats. So, if your renderer isn't as hyper-optimised, you'll likely see a larger difference between thin/fat HDR formats! Fp16 is also nice in that it's filterable, whereas manually packed formats (logL, RGBM, etc) need to be point-sampled, manually decoded, and manually filtered (if required). This can make effects such as blurring quite expensive (perhaps a worse cost than the bandwidth cost of going "fat"). Fp11 does have quite a limited range - technically it has the same range as Fp16 (around 65k), but because it doesn't have many mantissa bits, it suffers from a lot more color banding. To get similar color-banding quality to sRGB8, you need to add a manual gamma curve on top of it with an exponent of between 2 and 3, which gives you a maximum value of between around 255 (gamma 2) and 40 (gamma 3). That's still enough range for HDR if you apply camera exposure in your lighting shader, but not enough range for general-purpose use if you apply camera exposure in your tonemapping post-process pass.
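      To illustrate that manual gamma curve (a sketch -- the exponent and range are tuning choices, not fixed rules):

      #include <cmath>

      const float kGamma = 2.0f; // ~255 max usable value; 3.0 bands less but caps at ~40

      // Per channel, around writes to / reads from an R11G11B10_FLOAT target:
      float EncodeForFp11(float linear)  { return std::pow(linear, 1.0f / kGamma); }
      float DecodeFromFp11(float stored) { return std::pow(stored, kGamma); }

      // As noted above, values encoded this way are non-linear, so they must be
      // decoded before any filtering or blending.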
  24. Yep, 2x, 4x, etc are the number of samples. There are two performance impacts. Firstly, at some point in time, you need to "resolve" the MSAA image into a regular image for display. This will have a similar cost to copying a full-screen image... Secondly, memory usage IS a massive performance concern -- not just total memory usage, but bytes per second. Memory is one of the slowest parts of any processor (computations are fast, moving data is slow!), so one of the most common ways to optimise a game isn't to reduce the number of computations done, but to rearrange the memory access patterns... If MSAA makes your buffers 2x, 4x, 8x larger, that means it's going to take 2x, 4x, 8x longer for all the pixels to be written out to memory. The latest GPUs actually have some fancy optimizations here though, where they'll perform lossless compression on pixels or groups of pixels before writing them to memory, so in areas of the screen where the colors are all the same, less data has to be written to memory.
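      To put rough numbers on that (assuming a 1920x1080 RGBA8 color target; the same multiplier applies to the depth buffer):

      constexpr unsigned BytesPerTarget(unsigned w, unsigned h,
                                        unsigned bytesPerPixel, unsigned samples)
      {
          return w * h * bytesPerPixel * samples;
      }
      static_assert(BytesPerTarget(1920, 1080, 4, 1) ==  8294400, "~8.3 MB, no MSAA");
      static_assert(BytesPerTarget(1920, 1080, 4, 4) == 33177600, "~33 MB at 4x");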