About Juliean
  1. There are actually many more suppliers, which I want to be able to declare/add in different modules w/o affecting the code using them. I don't think that would work; enable_if is used for SFINAE and is probably required to only compile the specialization when the condition is actually met. Deriving the classes used for specialization is not a (good) option: first of all, I try to minimize inheritance anyway and normally prefer external solutions via such "traits"; also, there's a specific non-templated version for a large range of base-objects, which really should not have to be derived, if that makes any sense. I'm going to post my actual solution at the end of this post, where you should see that I don't even need to add "Type" to the base at all, even in Zipster's solution. The ObjectSupplier is there to handle object acquisition for the script-calls. Long story short, I'm using a visual scripting system that is rather high-level, and I want to be able to call certain functions w/o first having to acquire the actual C++ object (the type system won't even know it exists). Enter ObjectSupplier, which in the case of my example will tell the script-system: "In case you bind a function of an ecs::Data<>-object, the function in script should take an Entity-reference instead and acquire the data-object off that before calling the function-pointer." A function using this might look like that:

    void Call(double dt)
    {
        CallState callState(*this);
        Class& object = Supplier::GetClassObject(callState);
        ClassCallHelper<false, Supplier, Class, Function, Return, Args...>::Call(object, m_pFunction, callState);
    }

Now depending on the specialization of Supplier, it will either just get the actual object from the stack, or do something completely different, as the specialization dictates.
(if that's what you asked) I haven't dealt with type-erasure much, I think, and I'm not sure if that would actually be superior to the current solution Zipster offered, but I'll have a proper read of it later, thanks. As for the solution I ended up using: it's pretty much as Zipster wrote, just a tad simpler:

    // base supplier
    template<typename Object, typename Enable = void>
    class ObjectSupplier
    {
    public:
        static Object& GetClassObject(CallState& state)
        {
            return *state.GetAttribute<Object>(0);
        }
    };

    template<bool Test>
    using EnableIf = typename std::enable_if<Test>::type;

    template<typename T, typename T2>
    using CheckIsBase = EnableIf<std::is_base_of<T, T2>::value>;

    template<typename Derived>
    class ObjectSupplier<Derived, sys::CheckIsBase<ecs::Data<Derived>, Derived>>
    {
    public:
        static Derived& GetClassObject(CallState& state)
        {
            return state.GetEntityAttribute(0)->GetData<Derived>();
        }
    };

sys::CheckIsBase is a typedef based on a typedef of std::enable_if using is_base_of, which I've actually been using for some time now. Also, as you can see, I don't have to use Derived::Type, but can just use Derived directly. Which makes sense: if I added Type to ecs::Data<Derived>, Type would just be "Derived" in the end, which I don't have to look up, since it's already part of the template. One problem I had is that you can only use enable_if with no custom type supplied as the second template parameter (my typedef used "int" as a default) for some reason, otherwise it will fail to correctly specialize. Other than that, this solution is pretty much perfect, especially since I don't have to add "Type" to the base classes; it allows me to add the ObjectSuppliers away from the code that uses them, and it also lets me add specializations based on different traits (i.e. I have an "isBaseObject<>" trait for my type-system which doesn't involve inheritance). So thanks to Zipster for providing me this neat solution, and thanks to everyone for their input.
  2. Ah, seems promising! I actually tried SFINAE/enable_if, but didn't know how to put it in the base-struct. Putting the "Type" as part of the base class is not a problem. I'll have to try it out later, but thanks so far.
  3. So as the title (hopefully somewhat) says, I'm trying to achieve a specialization of a template class, based on whether or not the template-parameter is derived from another (templated) class:

    // base class, specializations must match this signature
    template<typename Object>
    class ObjectSupplier
    {
    public:
        static constexpr size_t objectOffset = 1;
        static Object& GetClassObject(CallState& state)
        {
            return *state.GetAttribute<Object>(0);
        }
    };

    // possible specialization for all "Data"-classes,
    // which are actually derived classes using CRTP
    template<typename Derived>
    class ObjectSupplier<ecs::Data<Derived>>
    {
    public:
        static constexpr size_t objectOffset = 1;
        static Derived& GetClassObject(CallState& state)
        {
            return state.GetEntityAttribute(0)->GetData<Derived>();
        }
    };

    // ... now here's the problem:
    // this would work ...
    ObjectSupplier<ecs::Data<Transform>>::GetClassObject(state);
    // unfortunately the actual object is "Transform", which is merely derived
    // from "ecs::Data<Transform>", and thus this resolves to the primary
    // template instead of the specialization
    ObjectSupplier<Transform>::GetClassObject(state);

The last two lines show the actual problem. I'm using this ObjectSupplier-class as part of my script-binding system, and thus it's not possible/feasible to input the required base-class manually:

    template<typename Return, typename Class, typename... Args>
    void registerBindFunction(ClassFunction<Return, Class, Args...> pFunction, sys::StringView strFilter, std::wstring&& strDefinition)
    {
        using Supplier = ObjectSupplier<Class>;
        using RegistryEntry = BindFunctionRegistryEntry<ClassFunctionBindCommand<Supplier, Class, Return, Args...>>;
        registerClassBindEntry<RegistryEntry, Supplier, Return, Args...>(pFunction, false, strFilter, std::move(strDefinition), DEFAULT_INPUT, DEFAULT_OUTPUT);
    }

    registerBindFunction(&GenericObject::Func, ...);
    // needs to access the ObjectSupplier<ecs::Data<Derived>> specialization
    registerBindFunction(&Transform::GetAbsoluteTransform, ...);

    // that's how it used to be before:
    registerObjectBindFunction(&GenericObject::Func, ...);
    // a few overloads that internally used an "EntityDataSupplier"-class
    registerEntityDataBindFunction(&GenericObject::GetAbsoluteTransform, ...);

(Well, it would be possible, but I want this binding-code to be as simple as humanly possible, which is why I'm trying to not have to manually specify anything other than "I want to bind a function here".) So, that's the gist of it. Any ideas on how to get this to work? I don't want to (again) have to create manual overloads of "registerBindFunction", of which there would have to be at least 5 (and all have a complex signature); but I'm certainly open to different approaches, if what I'm trying to achieve via template specialization isn't possible. Thanks!
  4. Uhh... you gave part of the answer in your question, though: I didn't know that the compiler was even able to do that in the first place. I just checked whether my MSVC compiler does that too, even in more complex environments, and yeah, it seems like this is something pretty basic optimization-wise. Good to know; it reduces code-size quite a bit & saves me from further trouble with that kind of stuff. Thanks!
  5. This could potentially happen, yes. Unless I'm mistaken, this realistically shouldn't happen, though: the key lies in "const char[X]". How do you generate such a type? You could manually declare it, sure, and if someone would go:

    const char test[32] = "Test";
    const char testing[] = "Test\0ing"; // ... well

Okay, now the StringView says it points to a 32-character-long string which should only have 4 characters; and the other one has a \0 manually put in the middle... But outside of that, the only way I can see to acquire such a type is by declaring an actual string:

    const char test[] = "Test"; // now its size is fixed
    constexpr char constTest[] = "Test"; // all fine
    StringView(test);
    StringView(constTest);
    StringView("Test");

Since you cannot modify the content of a const char[X], I fail to see any other case where you'd end up with what you described. Even then, it would be trivial to check if (strlen == Size) in debug builds, to rule out the one case I mentioned. Now what you are thinking about is probably something like this:

    char buffer[MAX_PATH];
    GetCurrentDirectory(MAX_PATH, buffer);
    StringView(buffer); // uh-oh

Though in this case, as I've said, I've simply added a second overload that will be called if you pass in a "char[X]" as opposed to a "const char[X]", and that one will actually call strlen. I mean, it seems pretty obvious to me - but am I missing something? I really cannot think of how else one would realistically create a "const char[X]" type that has the null-terminator not at the end. Maybe through multiple layers of functions that all take "const char[X]", where someone passes in such a "char buffer[256]", but that's beyond what I consider a realistic use-case, in regard to how the StringView-class is being used.
  6. Ah, yeah, that's what I've been looking for! Well, I should have been a bit more specific about my use-case: as I've mentioned, I'm using my own StringView class, akin to std::experimental::basic_string_view. Now that means that functions may have a signature such as:

    bool Node::HasNode(sys::StringView strName) const
    {
        return m_mNodes.count(strName) != 0;
    }

where it would have been either "const std::string&" (for me), or possibly "const char*" / "const char*, size_t" before. This has many benefits - which is why std::string_view has been proposed - but that's not the point of this post. Now in my code, I might use those functions like this:

    const auto strName = node.Attribute("name")->GetValue();
    widget.SetName(strName.ToString());
    const auto isVariable = node.HasNode("IsVariable");
    widget.SetIsVariable(isVariable);
    const auto visibility = core::VariableLoader::FromAttribute<Visibility>(node, "Visibility");
    widget.SetVisibility(visibility);
    const auto isEnabled = !node.HasNode("Disabled");
    widget.SetEnabled(isEnabled);

Note that every function above takes a sys::StringView. And that's pretty much where I applied my optimization. std::string_view would take a const char* and call strlen. My StringView-constructor can take a static char-array and directly deduce the size from it - that's the reason why I don't wanna do it by hand even though I technically "know" the string's size; it's simple convenience, so that I can call all those functions with string literals, but without having to take a copy or determine the size. As you should see in my explanation, the function I proposed isn't really going to be part of an interface; it's just an additional constructor of my StringView-class that internally calls it. I don't know if that makes it any better in your book, but I do see a compelling case for handling string-literals the way I do.
Also, the purpose of StringView is to offer a unified interface from many types (std::string, const char*, const char* + size) to a single const char*, size_t pair. So I'd say my general notion is not totally wrong - the only difference I make is that instead of treating every "const char*" as a null-terminated string, I'm making a differentiation for static string-literals as part of a small optimization. Sure, adding 3-4 overloads for the same functions is surely overkill, I agree on that (in my case I should have mentioned how it's intended to be used), but since we are talking about C-API functions - as you can read in my other thread: there are actually a lot of issues going forward with modern C++ now that most C-style API functions only take null-terminated C-strings; that wasn't a problem before, but now with string_view it is actually limiting its usefulness. So I'd personally rather have atoi(const char*) and atoi(const char*, size_t) than being forced to make sure my strings are null-terminated... but I thankfully don't have to support a large userbase with my API, so my expertise in that regard is rather limited.

EDIT: Anyway, the suggested "tricks" seem to work, even though for some reason I have to add a template type to my template-class ctor for it to work:

    template<typename Type>
    class StringView
    {
        template<typename Char, CheckIsCharPointer<Char> = 0>
        BaseStringView(const Char* const& pString) : // still ambiguous if I just use "Type" directly
            BaseStringView(pString, StringLength(pString))
        {
        }

        template<size_t Length>
        constexpr BaseStringView(DynamicString<Length> pString) :
            BaseStringView(pString, StringLength(pString))
        {
        }
    };

But the problem seems solved, so thanks to all for helping me solve it. I'm still rather happy to discuss the issues revolving around this; I just recently started to work with string_view, so it's certainly good to get more input on it.
  7. Yes, this is true, yet from how I see it this only happens via conversion (char[X] => char*), so under normal overload resolution rules I still don't see how it would be any different from:

    void func(int x) { }
    void func(float x) { }
    func(0); // calls "func(int x)"

I mean, you're obviously right about what happens, it just feels wrong to me :> It's actually being used as an optimization for string-length generation as part of my custom StringView-class:

    template<size_t Length>
    constexpr BaseStringView(StaticString<Length> pString) : // const Type(&)[Length]
        BaseStringView(pString, StringLength<Length>(pString))
    {
    }

    template<size_t Length>
    constexpr BaseStringView(DynamicString<Length> pString) : // Type(&)[Length] => prevents issues with user-handled char-buffers
        BaseStringView(pString, StringLength(pString))
    {
    }

I know it's technically not 100% safe, but I made sure that it doesn't break anything for me; and since I'm using a string-view, I'm already in not-safe territory. As you can see, I've got a second overload that gets called when I'm passing in an actual "char array[X];" that is filled from e.g. a Windows API method. The actual reason why I'd need the "const char*" overload is that right now this would instead call the "const std::string&" overload, thus creating an unnecessary copy & a dangling pointer (if the view is actually stored locally). Not that it happens that often; most of my codebase has now been ported to use StringView & size-qualified strings, but there are always some places where this could still happen.
  8. Well, from what I understand, a size-qualified "char[X]" array isn't exactly the same type as a char*. For example, you can convert the char[X] to a char*, but not the other way around:

    char array[4] = {};
    char* pointer;
    pointer = array; // works
    array = pointer; // doesn't

Also, the first function can't be called with a char*, and will have the correct array-size if called with a char[X]. So all of this at least made me believe that they are different types; though obviously the compiler considers the overloads ambiguous, maybe for the reason you wrote. I might have another idea that I'm going to try out, though; I just remembered that there are std-traits to find out if a type is an array & to get the array's extent... though that's going to result in more messy template code, so if someone finds an easier solution, I'd still appreciate it.
  9. So I'm trying to design a function that acts differently based on whether it is passed a const char/wchar_t array, or a const char/wchar_t*:

    template<typename Char, size_t Length>
    size_t stringLength(const Char(&pString)[Length])
    {
        return Length - 1;
    }

    template<typename Char>
    size_t stringLength(const Char* pType)
    {
        return strlen(pType);
    }

    const char* pTest = "Test";
    stringLength(pTest); // => should return 4
    stringLength("Test"); // => should return 4 as well

The problem is that the last line doesn't compile, saying that the function-call is ambiguous between both overloads, even though it correctly identifies the argument as "const char [8]", which works as intended if I remove the "const Char* pType" overload. Now, why is this ambiguous? As far as I understand it, the upper function should be a closer match to the argument list and thus be selected. Is there anything I have to/can do to make that work? (I'm on MSVC 2017)
  10. Did you try to delete the intellisense-cache file(s)? Should be .sdf located in the root folder of your project with the same name as the .sln, and/or .suo (which are hidden). This should normally resolve the issue.
  11. Hm, yeah, makes sense to me, I suppose. Speed really shouldn't be an issue for all the code paths where I need a C-string, as those functions are usually expensive by themselves. I'm just a bit bummed that I ran into issues like that. There are a lot of other little inconveniences too, like heterogeneous lookup in unordered containers and so on. I just hope that at least the STL will be updated to make better use of string-views/strings with known length; I mean, after all, std::string_view is being implemented, so why not fully make use of it? Nah, that shouldn't be the case. It's mostly in my file-IO & string-conversion wrappers, where the issue is rather that I just cannot reason about when and how a function like "copyFile" is being called. Other than that, using string-view has paid off so far - except for the fact that it has taken 4 days so far integrating it into the base systems w/o being able to compile - good thing I'm not on a time-constraint or anything.
  12. Hi, I've recently started porting my codebase towards using std::string_view, which is part of the new C++17 standard. I've actually implemented my own version of it, which might be important later when talking about the problem I now face. So at first it seemed really straightforward: just replace pretty much every occurrence of "const std::string&" with "sys::StringView", except for places where I can take advantage of move-semantics. But then I ran into issues at places where functions from the standard C library/WinAPI were being called, like this:

    void copy(sys::StringView stInName, sys::StringView stOutName)
    {
        std::ifstream ifs(stInName.Data(), std::ios::binary);
        std::ofstream ofs(stOutName.Data(), std::ios::binary);
        if(!ifs.is_open())
            throw fileException();
        else if(!ofs.is_open())
            throw fileException();
        ofs << ifs.rdbuf();
    }

Oops, std::ifstream expects a null-terminated C-string, and sys::StringView can (and often will) point to a portion of a char-array that is not null-terminated, e.g. if a Substr() is generated from another string-view. This obviously means subtle failures when trying to open a file, convert a string to an int, etc., as the function will ignore the size stored in the string-view and will continue to read characters until the end of the actual string. Now my question to those who have already used std::string_view, or have just had a thorough thought about it: how would you solve this situation? The premise is to use std::string_view as much as possible, due to the inherent benefits not only for speed but also for clarity. The ideas I've had so far:

- Just pass "const std::string&" to the functions expecting a C-string. Pretty messy, as most other code uses or generates StringViews, which means I have to manually convert to std::string whenever I call such a function.
- Before a call that expects a C-string, convert the StringView to a std::string and use that. While it technically shouldn't matter, as most of the calls requiring a C-string are expensive by themselves, it kind of defeats the purpose of using the StringView in the first place, and actually makes some functions slower than if I were just passing a "const std::string&".
- Make a "CStringView" class that always has to point to a null-terminated string. It would be the same as StringView, except it can only be created from a std::string or const char* directly. This would then be used for functions that require a C-string. I've tried it a bit, but ran into problems with functions like the one above that are called from many different places with different data, especially when being used in my data-driven content pipeline, since it's not trivial anymore to make sure that, when I call the function, I actually have a null-terminated string.
- Make a "ToAPI"-function, which looks like this:

    template<size_t MaxSize>
    std::array<Type, MaxSize> _ToAPI(void) const
    {
        std::array<Type, MaxSize> vArray;
        CopyInto(vArray.data(), MaxSize);
        vArray[Size()] = '\0';
        return vArray;
    }

This actually saves the dynamic allocations made by converting to a string (and hopefully the array benefits from copy-elision/RVO), though it cannot be used with every type of string, since you have to specify the max-size beforehand. This is actually the solution I went with for now, as it allows me to hide the requirement for a real C-string on the callee's side; it should have a negligible performance overhead and be somewhat safe (I can add checks that the string doesn't surpass the specified size). In addition, I'm rewriting most simpler functions that usually require a C-string (atoi, _strtoi64, ...) so they can work directly with my StringView-class. Though I'm still not 100% happy, and wondering if there are any other options. So what did/would you do? Any one or combination of the above, or something entirely different?
Seeing how string_view is actually designed without a guarantee of 100% safety, I'm still "shocked" at how difficult things can get when interacting with "outdated" APIs...
  13. Yeah, forget that part. I was thinking about a special list implementation we used at work; std::list probably doesn't support this. Which only furthers my point, because deleting from a list by "removing whatever unit needs to be removed" is way slower than in a vector for most cases, since you first have to find which position to erase at. Yeah, I was talking about that. If you intend to store pointers to the objects, then you also have to store them as pointers in the main list, otherwise the pointers will become invalid when the vector has to internally grow in size, or when you delete a certain element. So your main list would become:

    std::vector<std::unique_ptr<Object>> vObjects;

If you're not familiar with std::unique_ptr, get familiar with it. Otherwise, you could also use smart-pointers as discussed. As for the need to move objects in memory on removal: if you store pointers, it doesn't matter. In C++11 with move-semantics, moving objects probably also doesn't matter that much (previously, you'd have to deep-copy objects around the vector). Otherwise, there is still the erase & swap idiom that I mentioned:

    auto itr = std::find(vObjects.begin(), vObjects.end(), object);
    *itr = std::move(vObjects.back()); // instead of erasing, copy/move the last object into this slot
    vObjects.pop_back(); // then, remove the last object

If you do this, then there literally is no advantage to a list anymore whatsoever. The only downside is that you cannot do this if order is important (which it shouldn't be in your case).
  14. Well, it depends. Do you have the list-node ready when you remove from the middle? Otherwise, you have to search for it first, which is way slower in a list. Also, while removal given the node is O(1) and removal from a vector is O(N) in the worst case, there exists a pattern for arrays called erase & swap, where you put the last element into the position of the one you erased, making it O(1) as well. So I'd say, especially if you need to loop through a lot like you said, there's really no reason for you to use a list. Nope, it doesn't make a difference in your case - since you are storing the pointer, which is always equally large regardless of what you actually store. If you were to store large classes by value which can't take advantage of move-semantics/can't use C++11, and you can't use erase & swap, then yeah, theoretically lists would gain some advantage. Though unless you erase/insert more than you actually iterate (which I find doubtful for most cases), it's still not worth thinking about.
  15. First off, you'd probably not want an std::list, but rather an std::vector. While lists technically have some benefits over (dynamic) arrays, in practice their non-linear memory layout makes them perform really poorly for anything but removing an object by its list-node (and when do you have that at hand anyway?). I'd go for a vector by default any time, unless you have very specific needs that justify a list. How are pointers unreliable? The only point where they should become invalid is when either a unit or building is destroyed, but you can simply notify the other connected entities to let them know when that happens. Otherwise, if you still feel they are unsafe, you could use a smart-pointer like std::shared_ptr/std::weak_ptr. They impose some overhead over raw pointers, but it is negligible for what you need. I personally use a combination of both: pointers for storing stuff at runtime; and for serialization, I write the entity's unique ID, and later, on load-up, just make a one-time lookup by that ID to reacquire the pointer. Other people just save the pointer and modulate it at runtime to point to the correct entity, but that probably requires some advanced memory management to work (I believe; I haven't tried it myself). Also, it's totally possible, and can even have significant benefits, to store a handle/ID instead of a pointer, as it allows you e.g. to switch out the entire object, makes deleting entities easier, etc. So I couldn't give you a clear "do this, do that" answer - since there isn't really a clear best way. I hope I still gave you some ideas that help you choose what to do.