# Storing one or more lvalue references inside a tuple of a variadic argument list

## Recommended Posts

I'm writing some code for automatic function argument type deduction using a tuple, which I then iterate over to narrow down and check each argument separately. I spent a cozy evening tinkering with getting by-value, const-value and const-reference arguments to work, until I discovered that some functions need to return more than one result. This is where I need to identify non-const lvalue references, which a tuple has difficulty handling.

As far as I can tell most problems out and about on the web focus on creating fairly simple stand-alone tuples that only contain lvalue references using std::tie. In particular, this stackexchange thread outlines how that can be accomplished.

Problem is, I have a packed array of types, which may or may not contain one or more lvalue references interleaved with other types. forward_as_tuple is suggested here and there, but I'm unsure how to use it.
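For reference, the underlying constraint can be shown in a few lines: a tuple that contains a non-const lvalue reference cannot be default-constructed, because references must be bound at initialization. std::tie and std::forward_as_tuple sidestep this by binding to existing objects. A minimal sketch (the names below are illustrative, not from my code):

```cpp
#include <tuple>

// std::tuple<float, int&> arglist;   // error: a reference member
//                                    // cannot be default-initialized
int tie_example()
{
    float alpha = 0.5f;
    int   out   = 0;

    // std::tie yields a tuple of lvalue references bound to alpha and out
    std::tuple<float&, int&> refs = std::tie(alpha, out);

    // std::forward_as_tuple preserves the exact value category of each
    // argument (lvalue reference here, rvalue reference for the temporary)
    std::tuple<float&, int&&> mixed = std::forward_as_tuple(alpha, 42);
    (void)mixed;

    std::get<1>(refs) = 7;   // writes through the reference into 'out'
    return out;
}
```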

Here's the relevant code:

// split the return type from the argument list
template<typename R, typename... Args>
struct signature<R(Args...)>
{
    using return_type   = R;
    using argument_type = std::tuple<Args...>;              // 1
};

template<typename FUNCSIG>
editor::nodetype_t& CreateNodeType(
    IN const char* category)
{
    // instantiate the argument list tuple. No need to post any further code,
    // as this is where things fail to compile
    typename signature<FUNCSIG>::argument_type arglist;     // 2
}

// the below snippet outlines how CreateNodeType() is called:

#define DEFINE_MTL_NODE(function, category, ...)                        \
    auto& nodeType = CreateNodeType<decltype(function)>(category);

// a sample function for completeness. I'm intentionally not using the return value here.
void Lerp(
    IN const math::vec3& ColorIn1,
    IN const math::vec3& ColorIn2,
    IN float Alpha,
    OUT math::vec3& ColorOut) { /* ... */ }

int main()
{
    DEFINE_MTL_NODE(Lerp, "Color");
}

Either the line marked with 1 or 2 needs to be something else, but apparently my C++ level is not high enough to figure out what. PS - to further complicate things, I'm stuck on C++11 for now.

Ideas?

##### Share on other sites

I've had to do something like this recently, and you'll probably need two types -- one that you can use for traits:

using traits = std::tuple<Args...>;

And one that you can use for instantiating arguments:

using instances = std::tuple<typename std::decay<Args>::type...>;

You can then use the traits to determine which arguments are non-const lvalue references, so they can be copied from the instance tuple back into the appropriate reference parameters (depending on how the callable is invoked*).

*std::apply would be ideal, but of course it's not available in C++11. Check out this for an alternative approach.
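A compressed sketch of that two-tuple idea, reduced to built-in types (the names and the stand-in "invocation" are illustrative; the trait check and the copy-back step are the parts that matter):

```cpp
#include <tuple>
#include <type_traits>

// Signature being modelled: void f(float in, int& out) -- 'out' is a result.
using traits    = std::tuple<float, int&>;
using instances = std::tuple<float, int>;   // decayed: default-constructible

// Trait query: is argument I a non-const lvalue reference (an out-parameter)?
template<std::size_t I>
struct is_out_param
{
    using arg = typename std::tuple_element<I, traits>::type;
    static constexpr bool value =
        std::is_lvalue_reference<arg>::value &&
        !std::is_const<typename std::remove_reference<arg>::type>::value;
};

int demo()
{
    instances storage;            // safe to default-construct
    std::get<0>(storage) = 0.5f;

    // ... invoke the callable with the storage tuple here ...
    std::get<1>(storage) = 42;    // pretend the callee wrote its result

    int result = 0;
    if (is_out_param<1>::value)   // true: int& is a non-const lvalue ref
        result = std::get<1>(storage);  // copy back into the real argument
    return result;
}
```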

Edited by Zipster

##### Share on other sites

I have to say this is a bit over my head. I understand what the code is supposed to do conceptually, but even with the link you provided I'm not sure what exactly is happening.

Eg given this (from the link):

template<typename T, typename... Args>
struct foo
{
tuple<Args...> args;

// Allows deducing an index list argument pack
template<size_t... Is>
T gen(index_list<Is...> const&)
{
return T(get<Is>(args)...); // This is the core of the mechanism
}

T gen()
{
return gen(
index_range<0, sizeof...(Args)>() // Builds an index list
);
}
};

How do I even invoke foo? And what is T in

return T(get<Is>(args)...);

As I understand it, this gets the Is-th (e.g. the last) element from args, then expands the rest and returns it as a separate type.

__________________________________

std::apply makes a bit more sense to me (though still not enough) - I'm not sure how it can be called without instantiating the argument list first, which already generates the compiler error.

My own take on it fails for a different reason. The following is a mix of the signature class from my original post and std::apply with the help of the index classes from Zipster's link. The main problem here is that I'm not sure where or how the lookup between the traits list and the decayed argument list is supposed to happen. I've also replaced std::invoke with a recursive call to mycall() - this effectively bypasses per-element type lookup anyway.

PS - the reinterpret_cast below is a joke. It does work with my basic test case though, which raises the question - ignoring potential portability issues for the moment, if the tuple element size is guaranteed to be constant (or even if different core types have different sizes, but qualifiers do not), why would this be illegal?

void mycall() { }

template<typename T, typename ...Args>
void mycall(T&& t, Args&& ...args)
{
    lout << "CALLING" << endl;
    DOUTEX(std::is_reference<T>::value);
    DOUTEX(std::is_const<typename std::remove_reference<T>::type>::value);

    // forward so the remaining arguments keep their value categories
    mycall(std::forward<Args>(args)...);
}

namespace detail {
    template <class Tuple, std::size_t... Is>
    void invoke_impl(Tuple&& t, index_list<Is...> const&)
    {
        mycall(std::get<Is>(std::forward<Tuple>(t))...);
    }
}

template<typename S>
struct sig2;

template <typename R, typename... Args>
struct sig2<R(Args...)>
{
    using argument_list = std::tuple<typename std::decay<Args>::type...>;
    using Tuple         = std::tuple<Args...>;

    void invoke() {
        const auto size = std::tuple_size<Tuple>::value;

        argument_list t;

        detail::invoke_impl(
            // ! type mismatch for cast via std::forward/std::forward_as_tuple:
            // forward_as_tuple/*std::forward*/<Tuple>(t),
            // but using dumb force actually works with my test case
            reinterpret_cast<Tuple&>(t),
            index_range<0, size>());
    }
};

Edited by irreversible

##### Share on other sites
16 minutes ago, irreversible said:

How do I even invoke foo? What is T


return T(get<Is>(args)...);

As I understand it, this gets the Is-th (eg last) element from args, then expands the rest and returns it as a separate type.

"T" in this case is the constructor of template-type "T" which is declared for foo.

"get<Is>(args)..." gets every element from 0...last. Similar to "std::forward<Args>(args)" for variadic args, this expands to:

T(get<0>(args), get<1>(args), get<2>(args)....);

Think the general term for this is folding.
"Is" is just a sequence of non-type template arguments that go from 0... last-index, based on the index_sequence variable you pass to the function.

##### Share on other sites

After a bit more tinkering, this is what I got. It seems to work and is C++11 compatible. I guess it would be possible to pass in a callback name and have the correct template overload be called, but frankly I don't need that level of control. Besides, this would be so much easier in C++14+.

// recursion terminator for the argument walk below
template<size_t INDEX, typename Tuple>
void mycall()
{
}

template<size_t INDEX, typename Tuple, typename T, typename ...Args>
void mycall(T&& t, Args&& ...args)
{
    UNREFERENCED_PARAMETERS(t);

    // get the real type
    using TT = typename std::tuple_element<INDEX, Tuple>::type;

    // some debug output
    lout << "ARGUMENT" << endl;
    DOUTEX(std::is_reference<TT>::value);
    DOUTEX(std::is_const<typename std::remove_reference<TT>::type>::value);

    // unpack next argument
    mycall<INDEX + 1, Tuple>(args...);
}

namespace detail {
    // this can be collapsed into ListArgs()
    template <class Tuple, class TupleInstances, std::size_t... Is>
    void invoke_impl(TupleInstances&& t, index_list<Is...> const&)
    {
        // start at index 0, passing in the decayed argument list and type traits
        mycall<0, Tuple>(std::get<Is>(std::forward<TupleInstances>(t))...);
    }
}

template<typename S>
struct signature;

template <typename R, typename... Args>
struct signature<R(Args...)>
{
    using argument_list = std::tuple<typename std::decay<Args>::type...>;
    using Tuple         = std::tuple<Args...>;

    void ListArgs() {
        const auto size = std::tuple_size<Tuple>::value;

        detail::invoke_impl<Tuple>(
            argument_list(), index_range<0, size>());
    }
};

// USAGE:

signature<decltype(SomeFunction)> sig;
sig.ListArgs();

##### Share on other sites
1 hour ago, irreversible said:

Besides, this would be so much easier in C++14+.

Could you elaborate on that? (just interested)

##### Share on other sites
1 minute ago, ninnghazad said:

Could you elaborate on that? (just interested)

C++14 allows generic lambdas, i.e. auto parameters. I'm assuming you could collapse your redirection to something like this:

ListArguments([](auto arg) { mycallback(arg); });

As opposed to having to work around your templated callback with some struct hack. As implied above, I can't test this, though.
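To make that concrete, here is a sketch of what the C++14 version could look like: a tuple walk driven by a generic lambda (the helper name for_each is illustrative; this requires C++14 and won't build under the thread's C++11 constraint):

```cpp
#include <cstddef>
#include <tuple>
#include <utility>

// C++14: walk a tuple and hand each element to a generic lambda.
template<typename Tuple, typename F, std::size_t... Is>
void for_each_impl(Tuple&& t, F&& f, std::index_sequence<Is...>)
{
    // Expander trick: evaluate f(element) for each index, in order.
    int dummy[] = { (f(std::get<Is>(t)), 0)... };
    (void)dummy;
}

template<typename... Ts, typename F>
void for_each(std::tuple<Ts...>& t, F&& f)
{
    for_each_impl(t, std::forward<F>(f),
                  std::make_index_sequence<sizeof...(Ts)>());
}

int sum_demo()
{
    std::tuple<int, int, int> t(1, 2, 3);
    int sum = 0;
    for_each(t, [&](auto v) { sum += v; });  // 'auto' parameter needs C++14
    return sum;
}
```

In C++11 the lambda would instead have to be a hand-written functor with a templated operator(), which is the "struct hack" mentioned above.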
