# C++ Template specialisation via base class

## Recommended Posts

So, as the title (hopefully somewhat) says, I'm trying to achieve a specialization of a template class based on whether or not the template parameter is derived from another (templated) class:

```cpp
// base class, specializations must match this signature
template<typename Object>
class ObjectSupplier
{
public:

    static constexpr size_t objectOffset = 1;

    static Object& GetClassObject(CallState& state)
    {
        return *state.GetAttribute<Object>(0);
    }
};
```

```cpp
// possible specialization for all "Data" classes,
// which are actually derived classes using CRTP
template<typename Derived>
class ObjectSupplier<ecs::Data<Derived>>
{
public:

    static constexpr size_t objectOffset = 1;

    static Derived& GetClassObject(CallState& state)
    {
        return state.GetEntityAttribute(0)->GetData<Derived>();
    }
};
```

```cpp
// ... now here's the problem:

// this would work ...
ObjectSupplier<ecs::Data<Transform>>::GetClassObject(state);
// unfortunately the actual object is "Transform", which is merely derived
// from "ecs::Data<Transform>", and thus this picks the incorrect specialization
ObjectSupplier<Transform>::GetClassObject(state);
```

The last two lines show the actual problem. I'm using this ObjectSupplier class as part of my script-binding system, and thus it's not possible/feasible to input the required base class manually:

```cpp
template<typename Return, typename Class, typename... Args>
void registerBindFunction(ClassFunction<Return, Class, Args...> pFunction, sys::StringView strFilter, std::wstring&& strDefinition)
{
    using Supplier = ObjectSupplier<Class>;
    using RegistryEntry = BindFunctionRegistryEntry<ClassFunctionBindCommand<Supplier, Class, Return, Args...>>;

    registerClassBindEntry<RegistryEntry, Supplier, Return, Args...>(pFunction, false, strFilter, std::move(strDefinition), DEFAULT_INPUT, DEFAULT_OUTPUT);
}
```

```cpp
registerBindFunction(&GenericObject::Func, ...);
// needs to access the ObjectSupplier<ecs::Data<Derived>> specialization
registerBindFunction(&Transform::GetAbsoluteTransform, ...);
```

That's how it used to be before:

```cpp
registerObjectBindFunction(&GenericObject::Func, ...);
// a few overloads that internally used an "EntityDataSupplier" class
registerEntityDataBindFunction(&GenericObject::GetAbsoluteTransform, ...);
```

(Well, it would be possible, but I want this binding code to be as simple as humanly possible, which is why I'm trying not to have to specify anything manually other than "I want to bind a function here".)

So, that's the gist of it. Any ideas on how to get this to work? I don't want to (again) have to create manual overloads of "registerBindFunction", of which there would have to be at least 5 (all with complex signatures); but I'm certainly open to different approaches if what I'm trying to achieve via template specialization isn't possible.

Thanks!

##### Share on other sites

This is as close as I was able to get:

I'm not sure if you're able to get rid of Type in the base class, since you need access to that type in the specialization to determine if T is a derived class. However, someone else with better template-foo can certainly prove me wrong.

##### Share on other sites

Ah, that seems promising! I actually tried SFINAE/enable_if, but didn't know how to put it in the base struct. Putting the "Type" into the base class is not a problem. I'll have to try it out later, but thanks so far.

##### Share on other sites

I'm not sure how many specializations you want to have for ObjectSupplier, but if you only want to support ecs::Data types as the specialization, and everything else to use the standard template, then you can just use std::conditional when declaring the Supplier type in registerBindFunction to select the ecs::Data type:

```cpp
using Supplier = ObjectSupplier<typename std::conditional<std::is_base_of<ecs::Data<Class>, Class>::value, ecs::Data<Class>, Class>::type>;
```

If you don't mind creating a single new overload for registerBindFunction then you can template the above to get rid of the explicit ecs::Data type if you want to support any other wrapped/CRTP base classes.

Otherwise, you can use Zipster's approach. You can also attempt to use the type directly in registerBindFunction instead of using enable_if. Not sure how much sense that makes since I'm making this up as I go, but:

```cpp
using Supplier = ObjectSupplier<typename Class::UnderlyingType>;
```

Of course, now you need to make sure any class used with ObjectSupplier has that type, which I'm not particularly happy about. You can use some macro to simplify it, or you can make it more explicit by requiring a new derived class that handles it for you:

```cpp
template <typename Object>
struct ObjectReceiver
{
    using UnderlyingType = Object;
};

namespace ecs
{
    template <typename T>
    class Data : public ObjectReceiver<Data<T>>
    {
    };
}
```

Or something like that, which makes the relationship to ObjectSupplier more clear. But I'm still not too fond of it because it involves even more boilerplate and adds complexity to the usage of ObjectSupplier (now you need the Receiver class base...). However you can use ObjectReceiver directly in registerBindFunction and statically assert that the type input is derived from it, which gives you a way to guarantee its use with a clear error message.

That said, if you have some common function in your classes, then you can also completely bypass the need to provide that base type with a little bit of metaprogramming:

```cpp
template<typename T, typename R>
T BaseOf(R T::*);

// baseFunc is a common function in your base class, or some such
using Supplier = ObjectSupplier<decltype(BaseOf(&Class::baseFunc))>;
```

If this is enough, then you won't even need to add Type to the base, and all is well (you still need some consistent function or variable to point at to deduce the type of the base class though).

There are probably more complex ways of getting the base type without requiring such a thing as "baseFunc", but I haven't looked into it much.

This is all just off the top of my head stuff. std::conditional would be the simplest, but Zipster's approach is probably more scalable.

Unfortunately I don't have time to exercise "real template-foo" to get around the type declaration, but frankly it's not that important. The above is just food for thought (i.e. I wrote it up while I was eating a midnight snack, code probably doesn't even compile :P).

Edited by Styves

##### Share on other sites

What do you expect the function to look like when passing an "ObjectSupplier" class? It seems to me as if you want some kind of anonymous object passed into your script, is that right?

Either you create a fixed base class that any of your bind-function parameters has to inherit from, or, if that's not your intention, try to restructure your "ObjectSupplier" class. Maybe the solution would be some kind of anonymous type-wrapping structure that also delivers the necessary information to your binder function. Some time ago I added a C#-like object struct to my engine code (but never used it since) to achieve exactly this: you pass any type into it via an inner template abstraction, while the outer struct stays unaware of the concrete type:

```cpp
struct object
{
private:
    struct AbstractType
    {
        virtual void Free(void** ptr, Allocator& allocator) = 0;

        virtual void CopyByVal(void const* src, void** dest, Allocator& allocator) = 0;
        virtual void CopyByRef(void* const* src, void** dest, Allocator& allocator) = 0;

        virtual void* Value(void** ptr) = 0;

        virtual const std::type_info& GetType() = 0;
        virtual uint32 Size() = 0;
    };
    template<typename T> struct val_type : AbstractType
    {
        //overrides for value types that fit into sizeof(void*)
    };
    template<typename T> struct ref_type : AbstractType
    {
        //overrides for ref types that do not fit into sizeof(void*), or const char*
    };

    //pick the handler for T
    template<typename T> static AbstractType* DiffType()
    {
        if(sizeof(T) <= sizeof(void*))
        {
            /**
            The passed value fits into a pointer-sized memory block, so use it
            directly instead of creating a pointer-based accessing manager.
            This saves a lot of memory.
            */
            static val_type<T> handler;
            return &handler;
        }
        else
        {
            static ref_type<T> handler;
            return &handler;
        }
    }

    AbstractType* handler; //an interface to the underlying value
    void* value;           //the memory address the held value is stored at

public:
    inline object& Assign(object const& obj)
    {
        Clear();

        handler = obj.handler;
        if(handler) handler->CopyByRef(&obj.value, &value, GetAllocator());

        return *this;
    }
    template<typename T> inline object& Assign(T const& ptr)
    {
        if(handler) handler->Free(&value, GetAllocator());

        handler = DiffType<T>();
        handler->CopyByVal(&ptr, &value, GetAllocator());

        return *this;
    }

    template<typename T> inline T Cast()
    {
        if(!handler) return typename const_trait<T>::type();

        T* r = reinterpret_cast<T*>(handler->Value(&value));
        return *r;
    }
    template<typename T> inline bool TryCast(T& ptr)
    {
        if(!handler || handler->GetType() != typeid(T))
            return false;

        ptr = *reinterpret_cast<T*>(handler->Value(&value));
        return true;
    }

    template<typename T> inline bool Is() { return (GetType() == typeid(T)); }
    inline const std::type_info& GetType() { return ((handler) ? handler->GetType() : typeid(void)); }

    inline uint32 Size() { return ((handler) ? handler->Size() : 0); }

    inline void Clear()
    {
        if(handler) handler->Free(&value, GetAllocator());
        handler = 0;
    }
};
```

I left out some (I lied, a lot) of the construction and operator code, but kept the basic mechanisms to illustrate how it works. You add a value of any kind into this container as an anonymous plain pointer for the type erasure, plus a small bookkeeping class alongside it. Any template parameter is abstracted away by inheriting from an untemplated base class offering the interface, while the deriving class overrides it and so has access to the original type T again.

I added two kinds of those bookkeeping classes: the value-type one treats the passed pointer "as is" (useful for saving an allocation for small types and primitives such as bool, char, int, ...), while the reference-type one allocates a copy of the passed memory (so you may create an object from a value on the stack without ending up with invalid pointers once the stack is unwound), all for the cost of only two pointers.

##### Share on other sites
7 hours ago, Styves said:

I'm not sure how many specializations you want to have for Object Supplier, but if you only want to support ecs::Data types as the specialization, and everything else to use the standard template, then you can just use std::conditional when declaring the Supplier type in registerBindFunction to select the ecs::Data type:

There are actually many more suppliers, which I want to be able to declare/add in different modules without affecting the code using them.

7 hours ago, Styves said:

Otherwise, you can use Zipster's approach. You can also attempt to use the type directly in registerBindFunction instead of using enable_if. Not sure how much sense that makes since I'm making this up as I go, but:

I don't think that would work; enable_if is used for SFINAE, and it's probably required so that the specialization only compiles when the condition is actually met.

7 hours ago, Styves said:

Or something like that, which makes the relationship to ObjectSupplier more clear. But I'm still not too fond of it because it involves even more boilerplate and adds complexity to the usage of ObjectSupplier (now you need the Receiver class base...). However you can use ObjectReceiver directly in registerBindFunction and statically assert that the type input is derived from it, which gives you a way to guarantee its use with a clear error message.

Deriving the classes used for specialization is not a (good) option; first of all, I try to minimize inheritance anyway and normally prefer external solutions via such "traits"; also, there's a specific non-templated version for a large range of base objects, which really shouldn't have to be derived, if that makes any sense.

8 hours ago, Styves said:

If this is enough, then you won't even need to add Type to the base, and all is well (you still need some consistent function or variable to point at to deduce the type of the base class though).

I'm going to post my actual solution at the end of this post, where you should see that I don't even need to add "Type" to the base at all, even in Zipster's solution.

2 hours ago, Shaarigan said:

What do you expect the function to look like when passing an "ObjectSupplier" class? It seems to me as if you want some kind of anonymous object passed into your script, is that right?

The ObjectSupplier is there to handle object acquisition for the script calls. Long story short, I'm using a visual scripting system that is rather high-level, and I want to be able to call certain functions without first having to acquire the actual C++ object (the type system won't even know it exists). Enter ObjectSupplier, which in the case of my example will tell the script system: "In case you bind a function of an ecs::Data<>-object, the function in script should take an Entity reference instead, and acquire the data object off that before calling the function pointer."
A function using this might look like that:

```cpp
void Call(double dt)
{
    CallState callState(*this);
    Class& object = Supplier::GetClassObject(callState);

    ClassCallHelper<false, Supplier, Class, Function, Return, Args...>::Call(object, m_pFunction, callState);
}
```

Now, depending on the specialization of Supplier, it will either just get the actual object from the stack, or do something completely different, as the specialization dictates. (If that's what you asked.)

2 hours ago, Shaarigan said:

Either you create a fixed base class that any of your bind-function parameters has to inherit from, or, if that's not your intention, try to restructure your "ObjectSupplier" class. Maybe the solution would be some kind of anonymous type-wrapping structure that also delivers the necessary information to your binder function. Some time ago I added a C#-like object struct to my engine code (but never used it since) to achieve exactly this: you pass any type into it via an inner template abstraction, while the outer struct stays unaware of the concrete type:

I haven't dealt with type erasure much, I think, and I'm not sure if it would actually be superior to the current solution Zipster offered, but I'll have a proper read of it later, thanks.

As for the solution I ended up using: it's pretty much as Zipster wrote, just a tad simpler:

```cpp
// base supplier
template<typename Object, typename Enable = void>
class ObjectSupplier
{
public:

    static Object& GetClassObject(CallState& state)
    {
        return *state.GetAttribute<Object>(0);
    }
};

// helpers (these live in namespace sys in my codebase)
template<bool Test>
using EnableIf = typename std::enable_if<Test>::type;

template<typename T, typename T2>
using CheckIsBase = EnableIf<std::is_base_of<T, T2>::value>;

// specialization for everything derived from ecs::Data<>
template<typename Derived>
class ObjectSupplier<Derived, sys::CheckIsBase<ecs::Data<Derived>, Derived>>
{
public:

    static Derived& GetClassObject(CallState& state)
    {
        return state.GetEntityAttribute(0)->GetData<Derived>();
    }
};
```

sys::CheckIsBase is a typedef built on std::enable_if and std::is_base_of that I've actually been using for some time now. Also, as you can see, I don't have to use Derived::Type but can just use Derived directly. Which makes sense: if I added Type to ecs::Data<Derived>, Type would just be "Derived" in the end, which I don't have to look up, since it's already part of the template.

One problem I had is that you can only use enable_if with its default second template parameter, void, rather than a custom type (my typedef used "int" as the default); otherwise it will fail to correctly specialize, since ObjectSupplier<Transform> really means ObjectSupplier<Transform, void>, and a specialization whose enable_if yields "int" can never match that. Other than that, this solution is pretty much perfect, especially since I don't have to add "Type" to the base classes; it allows me to add the ObjectSuppliers away from the code that uses them, and it also lets me add specializations based on different traits (i.e. I have an "isBaseObject<>" trait for my type system which doesn't involve inheritance).
So thanks to Zipster for providing me this neat solution, and thanks to everyone for their input!
