C++ Char pointer/array type ambiguity

Juliean
So I'm trying to design a function that acts differently based on whether it is passed a const char/wchar_t array or a const char/wchar_t*:

#include <cstring> // for strlen

// Array overload: the length is part of the type; subtract the terminating '\0'.
template<typename Char, size_t Length>
size_t stringLength(const Char(&pString)[Length])
{
  return Length - 1;
}

// Pointer overload (note: strlen only handles Char = char).
template<typename Char>
size_t stringLength(const Char* pType)
{
  return strlen(pType);
}

const char* pTest = "Test";
stringLength(pTest); // => should return 4
stringLength("Test"); // => should return 4 as well

The problem is that the last line doesn't compile: the compiler says the function call is ambiguous between both overloads, even though it correctly identifies the argument as "const char [5]", and the call works as intended if I remove the "const Char* pType" overload.

Now, why is this ambiguous? As far as I understand it, the first function should be a closer match to the argument list and thus be selected. Is there anything I have to/can do to make that work? (I'm on MSVC 2017)

trjh2k2

Correct me if I'm wrong, but it's ambiguous because char* and char[] are the same type.  Both of them are just pointers to the first character in an array of characters.

Juliean
5 minutes ago, trjh2k2 said:

Correct me if I'm wrong, but it's ambiguous because char* and char[] are the same type.  Both of them are just pointers to the first character in an array of characters.

Well, from what I understand, a size-qualified "char[X]"-array isn't exactly the same type as a char*.

For example, you can convert the char[X] to a char*, but not the other way around:

char array[4] = {};
char* pointer;

pointer = array; // works
array = pointer; // doesn't

Also, the first function can't be called with a char*, and it sees the correct array size when called with a char[X]. All of this at least made me believe that they are different types; though obviously the compiler considers the call ambiguous, maybe for the reason you wrote.

I might have another idea that I'm going to try out though; I just remembered that there are standard type traits to find out whether a type is an array and to get the array's extent... though that's going to result in more messy template code, so if someone finds an easier solution I'd still appreciate it :)
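A minimal sketch of what I mean (not tested in my codebase), assuming C++17's if constexpr, which recent MSVC 2017 updates support:

#include <cstring>
#include <type_traits>

template<typename T>
size_t stringLength(const T& pString)
{
  if constexpr (std::is_array_v<T>)
    return std::extent_v<T> - 1; // size is in the type; drop the '\0'
  else
    return strlen(pString);      // plain const char*
}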

 

trjh2k2

The problem, I think, is not that you can't pass a char[x] to a char*, but that you CAN pass "text" to both.

 

What are you trying to accomplish?  This doesn't look like a good way to check the length of a string.

Juliean
24 minutes ago, trjh2k2 said:

The problem, I think, is not that you can't pass a char[x] to a char*, but that you CAN pass "text" to both.

Yes, this is true, yet from how I see it this only happens via a conversion (char[x] => char*), so under normal overload-resolution rules I still don't see how it would be any different from:
 

void func(int x)
{
}

void func(float x)
{
}

func(0); // calls "func(int x)"

I mean you're obviously right about what happens, it just feels wrong to me :>

24 minutes ago, trjh2k2 said:

What are you trying to accomplish?  This doesn't look like a good way to check the length of a string.

It's actually being used as an optimized string-length computation as part of my custom StringView class:

template<size_t Length>
constexpr BaseStringView(StaticString<Length> pString) : // const Type(&)[Length]
	BaseStringView(pString, StringLength<Length>(pString))
{
}

template<size_t Length>
constexpr BaseStringView(DynamicString<Length> pString) : // Type(&)[Length] => prevents issues with user-handled char-buffers
	BaseStringView(pString, StringLength(pString))
{
}

I know it's technically not 100% safe, but I made sure that it doesn't break anything for me; and since I'm using a string view I'm already in not-safe territory. As you can see, I've got a second overload that gets called when I'm passing in an actual "char array[X];" that is filled from e.g. a Windows API method. The actual reason why I'd need the "const char*" overload is that right now such a call would instead pick the "const std::string&" overload, thus creating an unnecessary copy & a dangling pointer (if the view is actually stored locally).
Not that it happens that often; most of my codebase has now been ported to use StringView & size-qualified strings, but there are always some places where this could still happen.
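To illustrate the dangling-pointer case, here's a stripped-down sketch (not my actual class):

#include <cstddef>
#include <string>

struct View
{
  const char* pData;
  size_t size;
  View(const std::string& str) : pData(str.data()), size(str.size()) {}
};

View makeView()
{
  const char* pName = "IsVariable";
  return View(pName); // a temporary std::string is created here and dies
                      // at the end of the statement => pData dangles
}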

NajeNDa

As was said before, a char[] decays to a char* where needed, but depending on what you want to do/need you have to convert. You can get a char* from a char[] this way:

char array[4] = {'a','b','c','d'};
char* charPointer = nullptr;

charPointer = &array[0]; // equivalent to: charPointer = array;

The pointer then points at the first element of the array; assignment in the other direction (array = pointer) isn't possible. You might overload operators if you really do that much char-pointer assignment.


frob

The reference trick from StackOverflow is one I've seen several times over the years:

template<typename T> void f(T* const& c){ std::cout << "pointer\n"; }
template<typename T, size_t N> void f(T(&)[N]){ std::cout << "array\n"; }

Even though you as a programmer don't know, the compiler hitting the code can potentially know. It works if the parameter being passed in directly is known to be a fixed-length array. If it goes through an indirection to a pointer and the indirection isn't optimized away, then the information is lost and the compiler will deduce it as a pointer. If it goes through an indirection that gets optimized away, it can still deduce the array type correctly.

So even though that can work in some cases, it won't work in all cases after indirections.
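For example, a quick self-contained sketch of both the direct case and the decayed case (overloads repeated from above):

#include <cstddef>
#include <iostream>

template<typename T> void f(T* const& c) { std::cout << "pointer\n"; }
template<typename T, std::size_t N> void f(T(&)[N]) { std::cout << "array\n"; }

int main()
{
    f("Test");        // prints "array": deducing T* from const char[5] fails,
                      // so only the array overload is viable
    const char* p = "Test";
    f(p);             // prints "pointer": the decay into p already
                      // discarded the length information
}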

 

The useful cases are almost non-existent.

 

As for the original problem here on the thread, where you're trying to avoid taking the string length: that's not much of a benefit. You're trying to simplify the interface, but instead you are adding complexity by having an additional entry point. Instead of having only one interface, (buffer, size), you've now got two interfaces: (buffer, size) and (fixed-length-array). If the writer knows they've got a fixed array they can use (buffer, sizeof(buffer)). If the writer knows they've got a more traditional buffer they can use (buffer, buflen). They know the single interface is there and they need to use it.
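That is, a single interface covers both callers (illustrative names only, not a real API):

#include <cstddef>
#include <cstring>

// Hypothetical single interface: the caller always supplies the size.
void fillBuffer(char* buffer, std::size_t size) { std::memset(buffer, 0, size); }

int main()
{
    char fixed[64];
    fillBuffer(fixed, sizeof(fixed));   // fixed array: size via sizeof

    std::size_t len = 128;              // more traditional buffer
    char* dynamic = new char[len];
    fillBuffer(dynamic, len);           // size carried alongside the pointer
    delete[] dynamic;
}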

Imagine if the C language used that in its interfaces. You'd have the current set of twenty-ish memory functions like memmove, memcpy, memcmp, plus a duplicate version of all of them for fixed-length arrays.

 

trjh2k2

I guess what I meant to say is that rolling your own ways to check string length like this reads to me like a code smell / design smell kind of scenario.  If you created the array, you already know the size, so you can pass it around if you need it.

Juliean

41 minutes ago, frob said:

The reference trick from StackOverflow is one I've seen several times over the years:


template<typename T> void f(T* const& c){ std::cout << "pointer\n"; }
template<typename T, size_t N> void f(T(&)[N]){ std::cout << "array\n"; }

Ah, yeah, that's what I've been looking for!

41 minutes ago, frob said:

The useful cases are almost non-existent.

 

24 minutes ago, trjh2k2 said:

I guess what I meant to say is that rolling your own ways to check string length like this reads to me like a code smell / design smell kind of scenario.  If you created the array, you already know the size, so you can pass it around if you need it.

Well, I should have been a bit more specific about my use case: as I've mentioned, I'm using my own StringView class, akin to std::experimental::basic_string_view.

Now that means that functions may have a signature such as:

bool Node::HasNode(sys::StringView strName) const
{
  return m_mNodes.count(strName) != 0;
}

where the parameter would have been either "const std::string&" (for me) or possibly "const char*" / "const char*, size_t" before. This has many benefits, which is why std::string_view has been proposed, but that's not the point of this post. Now in my code, I might use those functions like this:
 

const auto strName = node.Attribute("name")->GetValue();
widget.SetName(strName.ToString());

const auto isVariable = node.HasNode("IsVariable");
widget.SetIsVariable(isVariable);
	
const auto visibility = core::VariableLoader::FromAttribute<Visibility>(node, "Visibility");
widget.SetVisibility(visibility);

const auto isEnabled = !node.HasNode("Disabled");
widget.SetEnabled(isEnabled);

Note that every function above takes a sys::StringView. And that's pretty much where I applied my optimization: std::string_view would take the const char* and call strlen, whereas my StringView constructor can take a static char array and deduce the size directly from it. That's the reason why I don't want to do it by hand even though I technically "know" the string's size; it's simple convenience, so that I can call all those functions with string literals without having to take a copy or determine the size.

41 minutes ago, frob said:

As for the original problem here on the thread, where you're trying to avoid taking the string length: that's not much of a benefit. You're trying to simplify the interface, but instead you are adding complexity by having an additional entry point. Instead of having only one interface, (buffer, size), you've now got two interfaces: (buffer, size) and (fixed-length-array). If the writer knows they've got a fixed array they can use (buffer, sizeof(buffer)). If the writer knows they've got a more traditional buffer they can use (buffer, buflen). They know the single interface is there and they need to use it.

As you can see from my explanation, the function I proposed isn't really going to be part of an interface; it's just an additional constructor for my StringView class that internally calls it. I don't know if that makes it any better in your book, but I do see a compelling case for handling string literals the way I do. Also, the purpose of StringView is to offer a unified interface from many types (std::string, const char*, const char* + size) to a single const char*, size_t pair. So I'd say my general notion is not totally wrong; the only difference is that instead of treating every "const char*" as a null-terminated string, I differentiate static string literals as part of a small optimization.
 

41 minutes ago, frob said:

Imagine if the C language used that in its interfaces. You'd have the current set of twenty-ish memory functions like memmove, memcpy, memcmp, plus a duplicate version of all of them for fixed-length arrays.

Sure, adding 3-4 overloads for the same functions is surely overkill, I agree on that (in my case I should have mentioned how it's intended to be used). But since we are talking about C-API functions, as you can read in my other thread:

there are actually a lot of issues going forward with modern C++ now that most C-style API functions only take null-terminated C-strings; that wasn't a problem before, but now with string_view it actually limits its usefulness. So I'd personally rather have atoi(const char*) and atoi(const char*, size_t) than be forced to make sure my strings are null-terminated... but I thankfully don't have to support a large userbase with my API, so my expertise in that regard is rather limited.
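A sketch of the kind of length-aware overload I mean (hypothetical helper, not a real CRT function; ignores sign and whitespace handling):

#include <cstddef>

// Parses digits from a length-delimited buffer; no null terminator required.
int atoiN(const char* str, std::size_t size)
{
    int value = 0;
    for (std::size_t i = 0; i < size && str[i] >= '0' && str[i] <= '9'; ++i)
        value = value * 10 + (str[i] - '0');
    return value;
}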

EDIT: Anyways, the suggested "tricks" seem to work, even though for some reason I have to add a template type to my template-class ctor for it to work:

template<typename Type>
class BaseStringView
{
	template<typename Char, CheckIsCharPointer<Char> = 0>
	BaseStringView(const Char* const& pString) : // still ambiguous if I just use "Type" directly
		BaseStringView(pString, StringLength(pString))
	{
	}

	template<size_t Length>
	constexpr BaseStringView(DynamicString<Length> pString) :
		BaseStringView(pString, StringLength(pString))
	{
	}
};
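For reference, CheckIsCharPointer is just a SFINAE helper; a minimal sketch of how such a trait could look (my actual definition may differ slightly):

#include <type_traits>

// Enables an overload only when Char is an actual character type,
// so the const Char* constructor doesn't swallow arbitrary pointers.
template<typename Char>
using CheckIsCharPointer = std::enable_if_t<
	std::is_same<Char, char>::value || std::is_same<Char, wchar_t>::value,
	int>;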

But the problem seems solved, so thanks to all for helping me :) I'm still rather happy to discuss the issues revolving around this; I just recently started to work with string_view, so it's certainly good to get more input on it.


