Zipster

  1. __declspec(selectany)

     Take a look at this for a quick refresher on linkage in C++. It's a good reference that covers all the relevant details, including how the compiler determines default linkage in the absence of a storage class specifier, and the difference between a declaration and a definition for both functions and variables.

     Perhaps you wouldn't, but there are a handful of constructs in C++ that produce multiple definitions as a matter of course, and they need special mechanisms (such as COMDAT) to support them properly. See vague linking for some additional insight and information (I also found Where's The Template? to be a rather interesting read on the related template instantiation problem).

     One thing __declspec(selectany) does is allow read-only global data items to participate in COMDAT folding and elimination. So if you had two (or more) read-only global variables with the same value:

     ```cpp
     __declspec(selectany) extern const int x = 5;
     __declspec(selectany) extern const int y = 5;
     __declspec(selectany) extern const int z = 5;
     ```

     And you used the /OPT:ICF linker option, those variables would be collapsed into one in the final EXE/DLL, saving space. The /OPT:REF option would also allow the linker to eliminate unreferenced global variables completely, although be careful when doing this with objects, since their initialization code will be removed as well! I suppose a secondary "benefit" is being able to define variables in header files, since the linker will just pick one instead of complaining, but I wouldn't use it for that reason alone.
  2. I think a big part of the issue is that 'renderer' and 'renderable' are loaded terms that don't really say much about the actual responsibilities involved in going from a high-level representation of some complex 2D/3D entity to a final rendered product. What do you mean, for instance, when you say "type" of renderable? At the end of the day, everything boils down to vertices, indices, shader programs, and other resources/states that the low-level graphics API knows how to work with. There are no "types of renderables" as far as it's concerned, just raw data. What makes a mesh different from a sprite at this level?

     At the same time though, there is certainly some code at a high enough level that sees meshes, sprites, particles, etc. as conceptually distinct and separate... so something has to be responsible for eventually bridging that gap so they can be physically rendered. SRP tells us that whichever system interacts directly with the graphics API (one responsibility) should be separate from whichever system manages the higher-level representations of these 2D/3D entities (another responsibility, or set thereof), but the terminology seems to be getting in the way. If you approach the problem from the perspective that there must be one thing that is a 'renderer' and another thing that is a 'renderable', you'll just pigeonhole yourself into a design that's awkward to use and difficult to extend.

     My suggestion is to ditch the lingo for a bit and approach the problem from the point of view of what functionality is actually required. What are the responsibilities, and where are their delineations? What data model(s) are involved, and how do you translate from one to another? Work out a roadmap for how you'd go from a system that interacts with "3D meshes" and/or "2D sprites" to one that works with raw graphics data, without worrying too much about nomenclature.
  3. Off the top of my head (untested):

     ```cpp
     std::tuple<TypedMap<Ts>...> Storage;
     ```
  4. A mesh exists in 3D while a sprite exists in 2D, so it's highly unlikely you'd ever want or need them in the same renderer to begin with. Also keep in mind that the "appropriate" renderer isn't an intrinsic property of what you're rendering. In other words, meshes and sprites shouldn't be choosing their own renderers, because there could be multiple renderers serving different purposes. For example, let's say you have a 2D game world and 2D UI. That would mean there are two sprite renderers, and the correct renderer would depend on how the sprite is being used. This isn't information the sprite would/should have, so the calling code would determine whether the sprite belongs in the "world renderer" or the "UI renderer" based on usage and pass that single, correct one in.
  5. I generally try to avoid advertising private implementation details in headers as much as possible. If it's functionality that can be reused elsewhere, it goes into a shared utilities library. Otherwise, if it can be easily refactored to not depend on private data, it lives in an anonymous namespace in the type's source file. This has the added benefit of allowing you to change the functionality without modifying a header and forcing a lot of other code to recompile. Look into pImpl for more flavors and examples of the pattern.
  6. I personally wouldn't allow mixed-type operations like this to be performed, at least not easily (or accidentally). An explicit constructor that can convert between types provides a good balance between functionality and usability. Plus, there's only a single place in the code where you have to worry about type conversion issues:

     ```cpp
     template<typename U>
     explicit Matrix(const Matrix<U>& m) { }
     ```
  7. std::map is sorted by key; however, you would only use it if you need a sorted associative container (which you don't).

     It's not that it's a hit to performance per se (especially if the items in the vector have move semantics), but rather that in certain cases you can avoid the shift completely, so why bother doing it?

     Agreed. And if you just clear the vector, you'll be able to re-use the memory too.
  8. ```cpp
     auto it = std::find_if(v.begin(), v.end(), finder());
     if (it != v.end())
     {
         std::swap(*it, v.back());
         v.pop_back();
     }
     ```

     Yes, you still need to find the item. The difference is that removing an element from the middle of a vector will force all the items after it to move up by 1, while removing the last element won't cause any shifting at all (since there's nothing after it). Even though the operation is still O(n), there's no reason to perform the extra work if you know it can be avoided. This trick doesn't work for sorted data, but again there might be a better container option in that case. More information is always helpful.
  9. Ah yes, I suppose that if the capture is unnecessary you can decay the lambdas down to function pointers.
  10. I think this is as close as it's going to get:

      ```cpp
      using SortVector = std::vector<int>;
      using Sorter = std::function<bool(int lhs, int rhs)>;
      using SortFunction = std::function<void(SortVector& data)>;

      SortFunction generateSortFunction(Sorter sorter)
      {
          return [=](SortVector& data) {
              std::stable_sort(data.begin(), data.end(), sorter);
          };
      }

      std::array<SortFunction, 2> sorters = {
          generateSortFunction([](int lhs, int rhs) { return lhs < rhs; }),
          generateSortFunction([](int lhs, int rhs) { return lhs > rhs; })
      };
      ```
  11. As mentioned by @SeraphLance, the type-based implementation allows code to easily opt out of depending on global state, ignore the "baked-in" multiplicity of the type's implementation, and accept precisely the number of instances it wants as dependencies. You can also do this without classes, but in the absence of an aggregating agent such an interface would be awkward and cumbersome (AOS vs. SOA). A class will also allow you to utilize other type-driven language functionality like templates and overload resolution, although I doubt this is actually useful or advantageous when it comes to singletons. Perhaps a singleton class is also easier to refactor in the future, but once again that still begs the question of why you'd start with one in the first place.

      Otherwise, there's nothing functional that you can do with one approach that you can't do with the other. Likewise, there aren't any pitfalls that you would avoid by using one approach over the other. It's also worth mentioning that namespaces aren't really relevant to the subject, as they're just tools for labeling and organizing code.
  12. I've had to do something like this recently, and you'll probably need two types -- one that you can use for traits:

      ```cpp
      using traits = std::tuple<Args...>;
      ```

      And one that you can use for instantiating arguments:

      ```cpp
      using instances = std::tuple<typename std::decay<Args>::type...>;
      ```

      You can then use the traits to determine which arguments are non-const L-value references, so they can be copied from the instance tuple back into the appropriate reference parameters (depending on how the callable is invoked*).

      *std::apply would be ideal, but of course it's not available in C++11. Check out this for an alternative approach.
  13. That's the thing though, the fact that the strings are copied is due to the implementation details of the type. It could just as easily have printed the strings and not made its own copy at all. There's no way to tell from the declaration of the type exactly what it's going to do with those strings (assuming the constructor wasn't defined inline), so what can you really do about it? It's not as though you can remove the copy if the type really needs it. Either way, the implementation should be considered private and separate from the traits of the type itself.
  14. If a type is literally non-copyable due to its semantics or other constraints, then you should certainly delete the copy constructor and assignment operators (or otherwise hide them if you're not using C++11). Otherwise, just implement the proper behavior and use a profiler to determine if you're performing extra copies and whether they're causing performance issues. If so, think about how you can optimize the copy, or prevent the copies from occurring at the specific places in code where they're a problem. Don't assume ahead of time there will be a problem, or change the semantics of the type to fix a usage issue. Just fix the usage if, when, and where it becomes a problem.

      To address the original question, I personally like to think in terms of a value vs. reference type dichotomy. One key difference between them is that value types are copy/move assignable, while reference types are not. Note that copy/move construction is still fine; you're just not allowed to assign to an already constructed instance. I adopted this mindset over the years because I found that the issue with copy/move semantics isn't that their implementations are too expensive or prohibitive, but rather that I wasn't considering the semantics of the type as a whole. Once that was fixed, the issues either disappeared completely, or I was better able to focus my efforts in the right place.
  15. Certainly you must see the irony among these statements? Passing by value may often elide the copy for R-values, but not L-values. Passing by reference will always copy R-values, but L-values don't necessarily have to be copied if there are (exception-safe) optimizations that can be done during assignment. While I concede that a self-check isn't a strong case for preferring the latter approach, and I'll definitely be preferring pass-by-value in the general case going forward as it's more idiomatic, it should be obvious enough that both approaches are valid depending on the usage and implementation details of specific types.

      It's a fool's errand to try and make an argument for either based on contrived usage and performance conditions, especially at the micro-optimization level. I wouldn't even entertain such an argument coming from a coworker. If this code ever does become an issue, I guarantee it's because your software has much bigger issues than assignment operators performing extra copies. But again, without real data there's no way to know.