# Storing one or more lvalue references inside a tuple of a variadic argument list

## Recommended Posts

I'm writing some code for automatic function argument type deduction using a tuple, which I then iterate over to narrow down and check each argument separately. I spent a cozy evening tinkering with getting by-value, const-value and const-reference arguments to work, until I discovered that some functions need to return more than one result. This is where I need to identify non-const lvalue references, which a tuple has difficulty handling.

As far as I can tell, most problems out on the web focus on creating fairly simple stand-alone tuples that contain only lvalue references, using std::tie. In particular, this stackexchange thread outlines how that can be accomplished.

The problem is that I have a parameter pack of types, which may or may not contain one or more lvalue references interleaved with other types. std::forward_as_tuple is suggested here and there, but I'm unsure how to use it here.
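To illustrate, here's a minimal sketch of the kind of failure I believe I'm hitting (it's the reference element that makes the tuple non-default-constructible):

#include <tuple>

float a = 0.f, b = 0.f, out = 0.f;

// fine: std::tie builds a tuple of lvalue references from existing objects
auto refs = std::tie(a, b, out);                     // std::tuple<float&, float&, float&>

// not fine: a tuple type that contains a reference cannot be default-constructed,
// which is effectively what the line marked with 2 below tries to do
//std::tuple<const float&, float, float&> arglist;   // error: reference needs an initializer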

Here's the relevant code:

// primary template; specialized below for function types
template<typename S>
struct signature;

// split the return type from the argument list
template<typename R, typename... Args>
struct signature<R(Args...)>
{
    using return_type   = R;
    using argument_type = std::tuple<Args...>;                          // 1
};

template<typename FUNCSIG>
editor::nodetype_t& CreateNodeType(
    IN const char* category)
{
    // instantiate the argument list tuple. No need to post any further code as this
    // is where things fail to compile
    typename signature<FUNCSIG>::argument_type arglist;                 // 2
}

// the below snippet outlines how CreateNodeType() is called:

#define DEFINE_MTL_NODE(function, category, ...)    \
    auto& nodeType = CreateNodeType<decltype(function)>(category);

// a sample function for completeness. I'm intentionally not using the return value here.
void Lerp(
    IN const math::vec3& ColorIn1,
    IN const math::vec3& ColorIn2,
    IN float Alpha,
    OUT math::vec3& ColorOut) { /* ... */ }

int main()
{
    DEFINE_MTL_NODE(Lerp, "Color");
}

Either the line marked with 1 or 2 needs to be something else, but apparently my C++ level is not high enough to figure out what. PS - to further complicate things, I'm stuck on C++11 for now.

Ideas?

##### Share on other sites

I've had to do something like this recently, and you'll probably need two types -- one that you can use for traits:

using traits = std::tuple<Args...>;

And one that you can use for instantiating arguments:

using instances = std::tuple<typename std::decay<Args>::type...>;

You can then use the traits to determine which arguments are non-const lvalue references, so they can be copied from the instance tuple back into the appropriate reference parameters (depending on how the callable is invoked*).

*std::apply would be ideal, but of course it's not available in C++11. Check out this for an alternative approach.
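Roughly what I have in mind, as an illustrative sketch only (reusing your signature<> naming; the copy-back step is just outlined in comments):

template<typename S>
struct signature;

template<typename R, typename... Args>
struct signature<R(Args...)>
{
    using traits    = std::tuple<Args...>;                              // original parameter types
    using instances = std::tuple<typename std::decay<Args>::type...>;  // default-constructible copies
};

// per-element check, e.g. for element I:
//   using arg_t = typename std::tuple_element<I, traits>::type;
//   bool is_out_param = std::is_lvalue_reference<arg_t>::value &&
//                       !std::is_const<typename std::remove_reference<arg_t>::type>::value;
// after invoking the callable with the instances, copy std::get<I>(instance_tuple)
// back into the caller's reference for every such element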

Edited by Zipster

##### Share on other sites

I have to say this is a bit over my head. I understand what the code is supposed to do conceptually, but even with the link you provided I'm not sure what exactly is happening.

E.g. given this (from the link):

template<typename T, typename... Args>
struct foo
{
    tuple<Args...> args;

    // Allows deducing an index list argument pack
    template<size_t... Is>
    T gen(index_list<Is...> const&)
    {
        return T(get<Is>(args)...); // This is the core of the mechanism
    }

    T gen()
    {
        return gen(
            index_range<0, sizeof...(Args)>() // Builds an index list
        );
    }
};

How do I even invoke foo? And what is T in

return T(get<Is>(args)...);

As I understand it, this gets the Is-th (e.g. the last) element from args, then expands the rest and returns it as a separate type.

__________________________________

std::apply makes a bit more sense to me (though still not enough) - I'm not sure how it can be called without instantiating the argument list first, which already generates the compiler error.

My own take on it fails for a different reason. The following is a mix of the signature class from my original post and std::apply with the help of the index classes from Zipster's link. The main problem here is that I'm not sure where or how the lookup between the traits list and the decayed argument list is supposed to happen. I've also replaced std::invoke with a recursive call to mycall() - this effectively bypasses per-element type lookup anyway.
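For reference, the index helpers look roughly like this (a minimal C++11 sketch; the actual code behind the link may differ in detail). Deduction against the index_list base class is what lets invoke_impl pick up the Is... pack:

#include <cstddef>

template<std::size_t... Is>
struct index_list { };

// builds index_list<BEGIN, BEGIN + 1, ..., END - 1> by recursion
template<std::size_t BEGIN, std::size_t END, std::size_t... Is>
struct index_range : index_range<BEGIN, END - 1, END - 1, Is...> { };

template<std::size_t BEGIN, std::size_t... Is>
struct index_range<BEGIN, BEGIN, Is...> : index_list<Is...> { };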

PS - the reinterpret_cast below is a joke. It does work with my basic test case though, which raises the question - ignoring potential portability issues for the moment, if the tuple element sizes are guaranteed to be constant (or even if different core types have different sizes, but qualifiers do not), why would this be illegal?

void mycall() { }

template<typename T, typename ...Args>
void mycall(T&& t, Args&& ...args)
{
    lout << "CALLING" << endl;
    DOUTEX(std::is_reference<T>::value);
    DOUTEX(std::is_const<typename std::remove_reference<T>::type>::value);

    mycall(args...);
}

namespace detail {
    template <class Tuple, std::size_t... Is>
    void invoke_impl(Tuple&& t, index_list<Is...> const&)
    {
        mycall(std::get<Is>(std::forward<Tuple>(t))...);
    }
}

template<typename S>
struct sig2;

template <typename R, typename... Args>
struct sig2<R(Args...)>
{
    using argument_list = std::tuple<typename std::decay<Args>::type...>;
    using Tuple         = std::tuple<Args...>;

    void invoke() {
        const auto size = std::tuple_size<Tuple>::value;

        argument_list t;

        detail::invoke_impl(
            // ! type mismatch for cast via std::forward/std::forward_as_tuple:
            // forward_as_tuple/*std::forward*/<Tuple>(t),
            // but using dumb force actually works with my test case
            reinterpret_cast<Tuple&>(t),
            index_range<0, size>());
    }
};

Edited by irreversible

##### Share on other sites
16 minutes ago, irreversible said:

How do I even invoke foo? What is T


return T(get<Is>(args)...);

As I understand it, this gets the Is-th (eg last) element from args, then expands the rest and returns it as a separate type.

"T" in this case is the constructor of template-type "T" which is declared for foo.

"get<Is>(args)..." gets every element from 0...last. Similar to "std::forward<Args>(args)" for variadic args, this expands to:

T(get<0>(args), get<1>(args), get<2>(args)....);

Think the general term for this is folding.
"Is" is just a sequence of non-type template arguments that go from 0... last-index, based on the index_sequence variable you pass to the function.

##### Share on other sites

After a bit more tinkering, this is what I got. It seems to work and is C++11 compatible. I guess it would be possible to pass in a callback name and have the correct template overload be called, but frankly I don't need that level of control. Besides, this would be so much easier in C++14+.

// recursion terminator
template<size_t INDEX, typename Tuple>
void mycall() { }

template<size_t INDEX, typename Tuple, typename T, typename ...Args>
void mycall(T&& t, Args&& ...args)
{
    UNREFERENCED_PARAMETERS(t);

    // get the real type
    using TT = typename std::tuple_element<INDEX, Tuple>::type;

    // some debug output
    lout << "ARGUMENT" << endl;
    DOUTEX(std::is_reference<TT>::value);
    DOUTEX(std::is_const<typename std::remove_reference<TT>::type>::value);

    // unpack the next argument
    mycall<INDEX + 1, Tuple>(args...);
}

namespace detail {
    // this can be collapsed into ListArgs()
    template <class Tuple, class TupleInstances, std::size_t... Is>
    void invoke_impl(TupleInstances&& t, index_list<Is...> const&)
    {
        // start at index 0, passing in the decayed argument list and type traits
        mycall<0, Tuple>(std::get<Is>(std::forward<TupleInstances>(t))...);
    }
}

template<typename S>
struct signature;

template <typename R, typename... Args>
struct signature<R(Args...)>
{
    using argument_list = std::tuple<typename std::decay<Args>::type...>;
    using Tuple         = std::tuple<Args...>;

    void ListArgs() {
        const auto size = std::tuple_size<Tuple>::value;

        detail::invoke_impl<Tuple>(
            argument_list(), index_range<0, size>());
    }
};

// USAGE:

signature<decltype(SomeFunction)> sig;
sig.ListArgs();
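As a quick compile-time sanity check against the Lerp example from my first post (illustrative only; assumes the same signature):

using LerpSig  = signature<void(const math::vec3&, const math::vec3&, float, math::vec3&)>;
using OutParam = std::tuple_element<3, LerpSig::Tuple>::type;   // math::vec3&

static_assert(std::is_lvalue_reference<OutParam>::value &&
              !std::is_const<std::remove_reference<OutParam>::type>::value,
              "ColorOut should be detected as a non-const lvalue reference");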

##### Share on other sites
1 hour ago, irreversible said:

Besides, this would be so much easier in C++14+.

Could you elaborate on that? (just interested)

##### Share on other sites

Could you elaborate on that? (just interested)

C++14 allows auto parameters in lambdas (generic lambdas). I'm assuming you could collapse your redirection into something like this:

ListArguments([](auto arg) { mycallback(arg); });

As opposed to having to work around your templated callback using some struct hack. As implied above, I can't test this, though.
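If I had to sketch it out, it might look something like this (untested C++14, building on the index_list helpers and type aliases from the earlier posts; names are illustrative):

namespace detail {
    // expand the pack and invoke a generic lambda once per element, passing the decayed
    // instance plus a compile-time "is this an OUT parameter" flag
    template <class Tuple, class TupleInstances, class F, std::size_t... Is>
    void for_each_argument(TupleInstances&& t, F&& f, index_list<Is...> const&)
    {
        int expand[] = { 0, (f(
            std::get<Is>(t),
            std::integral_constant<bool,
                std::is_lvalue_reference<std::tuple_element_t<Is, Tuple>>::value &&
                !std::is_const<std::remove_reference_t<std::tuple_element_t<Is, Tuple>>>::value>{}),
            0)... };
        (void)expand;
    }
}

// usage, roughly (from inside signature<R(Args...)>):
//   detail::for_each_argument<Tuple>(
//       argument_list(),
//       [](auto& instance, auto is_out_param) { /* ... */ },
//       index_range<0, std::tuple_size<Tuple>::value>());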
