
Zipster

Member
  • Content count

    7197
  • Joined

  • Last visited

Community Reputation

2396 Excellent

4 Followers

About Zipster

  • Rank
    GDNet+

Personal Information

  • Role
    Programmer
    Technical Director
  • Interests
    DevOps
    Programming


  1. I would use a light-weight wrapper object that implements operator bool() and operator->() at the very least, so you get pointer-like semantics and usage with the option to add behavior as necessary. For instance, you could make the copy constructor explicit so users can still copy the handle, while also allowing one to easily locate such copies through static analysis. Or if you're feeling clever, you could even add tracing/logging to copies, access, etc., so if/when something goes wrong, there's a "chain of custody" so to speak that lets you identify errant usage. Anything like this can also be compiled out in production builds so there's almost no overhead.
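A minimal sketch of the kind of wrapper described above (the names here are hypothetical, not from the original thread):

```cpp
#include <cassert>

// Handle<T> provides pointer-like access while making copies explicit,
// so every copy site can be found with a text search or static analysis.
template <typename T>
class Handle {
public:
    explicit Handle(T* ptr) : ptr_(ptr) {}

    // Copies must be spelled out, e.g. Handle<T> copy(original);
    // tracing/logging of copies could be added here and compiled
    // out in production builds.
    explicit Handle(const Handle& other) : ptr_(other.ptr_) {}

    explicit operator bool() const { return ptr_ != nullptr; }
    T* operator->() const { return ptr_; }
    T& operator*() const { return *ptr_; }

private:
    T* ptr_;
};
```

With `explicit operator bool()`, the handle still works naturally in `if (handle)` checks but won't silently convert to bool elsewhere.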
  2. Zipster

    Allocator design issues

    This wouldn't change the interface. The purpose of "Deallocate" is to inform the allocator that the memory is no longer in use, at which point it can do with it what it pleases (including nothing at all). If the user is expected to know more about the underlying implementation details, you have a leaky abstraction. Take a look at the standard allocator traits. The essential elements of the interface are allocate/deallocate, and construct/destroy (notice how they're considered separate responsibilities). You really don't need much beyond that, aside perhaps from some templated helper functions that combine allocate+construct and destroy+deallocate. Then, different implementations of this interface can be combined using the adapter pattern to achieve the layered functionality you're looking for.
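As a rough sketch of that shape (the names below are illustrative, not the standard allocator traits themselves): allocate/deallocate manage raw memory, construct/destroy manage object lifetimes, and the adapter pattern layers extra behavior over any inner allocator:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdlib>
#include <new>
#include <utility>

// Base allocator: owns the raw-memory responsibility only.
struct MallocAllocator {
    void* allocate(std::size_t bytes) { return std::malloc(bytes); }
    void deallocate(void* p) { std::free(p); }  // the allocator decides what this means
};

// Adapter pattern: wrap any allocator to layer on behavior (here, a live count).
template <typename Inner>
struct CountingAllocator {
    Inner inner;
    std::size_t live = 0;

    void* allocate(std::size_t bytes) { ++live; return inner.allocate(bytes); }
    void deallocate(void* p) { --live; inner.deallocate(p); }
};

// Templated helpers combining allocate+construct and destroy+deallocate.
template <typename T, typename Alloc, typename... Args>
T* create(Alloc& a, Args&&... args) {
    void* mem = a.allocate(sizeof(T));
    return ::new (mem) T(std::forward<Args>(args)...);
}

template <typename T, typename Alloc>
void destroy(Alloc& a, T* obj) {
    obj->~T();
    a.deallocate(obj);
}
```

Note how the calling code never needs to know whether deallocation actually frees, pools, or does nothing at all.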
  3. You mentioned several times that you're familiar with writing multi-threaded software, so you already know what there is to gain by taking advantage of additional processing power that would otherwise remain idle. That's the answer to your question, but to be perfectly honest, it seems to me you already knew that. I feel the real issue is that you're having trouble accepting the amount of time and effort involved in implementing proper thread synchronization. At the end of the OP you ask, "What is the best overall architecture that is versatile, safe, performant, and future-proof?". But considering how much emphasis you place on wanting to minimize any additional time or effort (even going so far as to quantify the manhours), the question reads in my mind as "What is the best overall architecture that is versatile, safe, performant, future-proof... and least amount of work to implement?". Other engines/studios handle this by acknowledging the complexity and scope of the problem and designing for it from the beginning. They don't treat it as just another feature that can be implemented at some late stage in the project, and then look for ways to shoehorn it into their existing architecture with minimal impact on everything else. That's not to say it can't be done, but there isn't going to be a quick fix or magic bullet to get you to where you want to be. You either have to accept that there's no way around the work that has to be done to achieve the versatile, safe, performant, and future-proof implementation you're looking for, or throw it all out and accept the issues and limitations that come with a single-threaded runtime environment. Once you commit to doing the work (or not) the way forward will become clear.
  4. Intrinsic to any "identifier" are two pieces of information, what you're identifying (animal, plant) and how you identify it (id, name). Any solution that prevents you from mixing the "whats" but not the "hows" (or vice versa) is certainly better than nothing, but only focusing on one and not the other doesn't give you a "strongly typed" ID, and it's an obvious code smell for that reason. And perhaps this doesn't affect the OP, but in my experience it's extremely common to have objects that belong to multiple ID spaces (client, server, world, etc.) that are also similarly typed, and can just as easily be mixed accidentally, so for posterity why wouldn't we want to discuss the actual nature of the problem so real solutions can be explored?
  5. I'm not a huge fan of this "identifier" pattern. Consider:

```cpp
using AnimalID    = IDType<Animal, unsigned int>; // Animal identifier using id
using AnimalName  = IDType<Animal, std::string>;  // Animal identifier using name
using AnimalLabel = IDType<Animal, std::string>;  // Animal identifier using label
```

     It's suggestive of semantics that don't exist, which to me is a big code smell.
  6. Not only that, but I find this strong identifier pattern to be infinitely more useful when expressed using trait types:

```cpp
struct TreeIdTraits {
    using underlying_type = std::string;
    using reference_type = Tree*;

    static underlying_type from_instance(const reference_type instance) {
        return (instance != nullptr) ? instance->get_id() : Tree::InvalidId;
    }
    static reference_type to_instance(underlying_type id) {
        return get_tree_by_id(id);
    }
};

struct AnimalIdTraits {
    using underlying_type = std::uint32_t;
    using reference_type = std::shared_ptr<Animal>;

    static underlying_type from_instance(const reference_type instance) {
        return instance ? instance->get_id() : Animal::InvalidId;
    }
    static reference_type to_instance(underlying_type id) {
        return get_animal_by_id(id);
    }
};

template <typename IdTraits>
struct IdType {
    using underlying_type = typename IdTraits::underlying_type;
    using reference_type = typename IdTraits::reference_type;

    explicit IdType(underlying_type value) : id(value) {}
    explicit IdType(const reference_type instance) : id(IdTraits::from_instance(instance)) {}

    reference_type get() const { return IdTraits::to_instance(id); }
    operator bool() const { return static_cast<bool>(get()); }

private:
    underlying_type id;
};

using TreeId = IdType<TreeIdTraits>;
using AnimalId = IdType<AnimalIdTraits>;
```

     I chose to add some "weak reference" behavior to demonstrate just how easily you can incorporate practical functionality into these identifiers beyond just a compile error if you accidentally mix them up.
  7. Zipster

    C++ IDE for Linux

    I've been using VisualGDB for a little over a year now, and while it isn't perfect (or free), nothing beats being able to use Visual Studio seamlessly across multiple environments. Just having access to the debugger is worth the price alone, let alone the entire VS featureset and then some.
  8. Zipster

    __declspec(selectany)

    Take a look at this for a quick refresher on linkage in C++. It's a good reference that covers all the relevant details, including how the compiler determines default linkage in the absence of a storage class specifier and the difference between a declaration versus a definition for both functions and variables. Perhaps you wouldn't, but there are a handful of constructs in C++ that produce multiple definitions as a matter of course, and they need special mechanisms (such as COMDAT) to support them properly. See vague linking for some additional insight and information (I also found Where's The Template? to be a rather interesting read on the related template instantiation problem). One thing it does is allow read-only global data items to participate in COMDAT folding and elimination. So if you had two (or more) read-only global variables with the same value:

```cpp
__declspec(selectany) extern const int x = 5;
__declspec(selectany) extern const int y = 5;
__declspec(selectany) extern const int z = 5;
```

    And you used the /OPT:ICF option, those variables would be collapsed into one in the final EXE/DLL, saving space. The /OPT:REF option would also allow the linker to eliminate unreferenced global variables completely, although be careful when doing this on objects since the initialization code will be removed as well! I suppose a secondary "benefit" is being able to define variables in header files, since the linker will just pick one instead of complaining, but I wouldn't use it for that reason alone.
  9. I think a big part of the issue is that 'renderer' and 'renderable' are loaded terms that don't really say much about the actual responsibilities involved in going from a high-level representation of some complex 2D/3D entity, to a final rendered product. What do you mean, for instance, when you say "type" of renderable? At the end of the day, everything boils down to vertices, indices, shader programs, and other resources/states that the low-level graphics API knows how to work with. There are no "types of renderables" as far as it's concerned, just raw data. What makes a mesh different from a sprite at this level? At the same time though, there is certainly some code at a high enough level that sees meshes, sprites, particles, etc. as conceptually distinct and separate... so something has to be responsible for eventually bridging that gap so they can be physically rendered. SRP tells us that whichever system interacts directly with the graphics API (one responsibility), should be separate from whichever system manages the higher-level representations of these 2D/3D entities (another responsibility or set thereof), but the terminology seems to be getting in the way. If you approach the problem from the perspective that there must be one thing that is a 'renderer', and another thing that is a 'renderable', you'll just pigeonhole yourself into a design that's awkward to use and difficult to extend. My suggestion is to ditch the lingo for a bit and approach the problem from the point of view of what functionality is actually required. What are the responsibilities, and where are their delineations? What data model(s) are involved, and how do you translate from one to another? Work out a roadmap for how you'd go from a system that interacts with "3D meshes" and/or "2D sprites", to one that works with raw graphics data, without worrying too much about nomenclature.
  10. Off the top of my head (untested):

```cpp
std::tuple<TypedMap<Ts>...> Storage;
```
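Expanding that one-liner into a compilable sketch (the `TypedMap` definition below is a guess at roughly what the thread's type looks like; the real one may differ):

```cpp
#include <cassert>
#include <string>
#include <tuple>
#include <unordered_map>

// Stand-in for the thread's TypedMap<T>: some key type mapped to T.
template <typename T>
using TypedMap = std::unordered_map<int, T>;

// One tuple member per element type; std::get<TypedMap<T>> selects
// the map for T at compile time (requires the Ts to be distinct).
template <typename... Ts>
struct MultiStorage {
    std::tuple<TypedMap<Ts>...> Storage;

    template <typename T>
    TypedMap<T>& map() { return std::get<TypedMap<T>>(Storage); }
};
```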
  11. A mesh exists in 3D while a sprite exists in 2D, so it's highly unlikely you'd ever want or need them in the same renderer to begin with. Also keep in mind that the "appropriate" renderer isn't an intrinsic property of what you're rendering. In other words, meshes and sprites shouldn't be choosing their own renderers, because there could be multiple renderers serving different purposes. For example, let's say you have a 2D game world and 2D UI. That would mean there are two sprite renderers, and the correct renderer would depend on how the sprite is being used. This isn't information the sprite would/should have, so the calling code would determine whether the sprite belongs in the "world renderer" or the "UI renderer" based on usage and pass that single, correct one in.
  12. I generally try to avoid advertising private implementation details in headers as much as possible. If it's functionality that can be reused elsewhere, it goes into a shared utilities library. Otherwise, if it can be easily refactored to not depend on private data, it lives in an anonymous namespace in the type's source file. This has the added benefit of allowing you to change the functionality without modifying a header and forcing a lot of other code to recompile. Look into pImpl for more flavors and examples of the pattern.
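For reference, a minimal pImpl sketch (the names are illustrative): the header exposes only a forward-declared Impl, so the private details can change without forcing clients to recompile:

```cpp
// --- widget.h (public interface; no private details visible) ---
#include <memory>

class Widget {
public:
    Widget();
    ~Widget();            // defined in the .cpp, where Impl is complete
    int compute() const;

private:
    struct Impl;          // forward declaration only
    std::unique_ptr<Impl> impl_;
};

// --- widget.cpp (everything below would live in the source file) ---
namespace {
    // Helpers that don't need private data go in an anonymous namespace.
    int helper() { return 21; }
}

struct Widget::Impl {
    int cached = helper();
};

Widget::Widget() : impl_(std::make_unique<Impl>()) {}
Widget::~Widget() = default;
int Widget::compute() const { return impl_->cached * 2; }
```

Defining the destructor in the source file matters: std::unique_ptr needs Impl to be a complete type at the point of destruction.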
  13. I personally wouldn't allow mixed-type operations like this to be performed, at least not easily (or accidentally). An explicit copy constructor that can convert between types provides a good balance between functionality and usability. Plus, there's only a single place in the code where you have to worry about type conversion issues:

```cpp
template<typename U>
explicit Matrix(const Matrix<U>& m) { /* convert each element from U to T */ }
```
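Filled in for a hypothetical 2x2 matrix (the size and storage layout are assumptions for illustration), that converting constructor might look like:

```cpp
#include <array>
#include <cassert>
#include <cstddef>

template <typename T>
struct Matrix {
    std::array<T, 4> m{};  // 2x2, row-major (assumed layout)

    Matrix() = default;

    // The single place where cross-type conversion happens; explicit,
    // so a Matrix<int> is never built from a Matrix<double> by accident.
    template <typename U>
    explicit Matrix(const Matrix<U>& other) {
        for (std::size_t i = 0; i < other.m.size(); ++i)
            m[i] = static_cast<T>(other.m[i]);
    }
};
```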
  14. std::map is sorted by key; however, you would only use it if you need a sorted associative container (which you don't). It's not that it's a hit to performance per se (especially if the items in the vector have move semantics), but rather that in certain cases you can avoid the shift completely, so why bother doing it? Agreed. And if you just clear the vector, you'll be able to re-use the memory too.
  15.

```cpp
auto it = std::find_if(v.begin(), v.end(), finder());
if (it != v.end()) {
    std::swap(*it, v.back());
    v.pop_back();
}
```

    Yes, you still need to find the item. The difference is that removing an element from the middle of a vector will force all the items after it to move up by 1, while removing the last element won't cause any shifting at all (since there's nothing after it). Even though the find is still O(n), there's no reason to perform the extra work if you know it can be avoided. This trick doesn't work for sorted data, but again there might be a better container option in that case. More information is always helpful.