irreversible

Storing one or more lvalue references inside a tuple of a variadic argument list


I'm writing some code that deduces function argument types automatically using a tuple, then iterates over the tuple to inspect each argument separately. I spent a cozy evening tinkering with getting by-value, const-value and const-reference arguments to work, until I discovered that some functions need to return more than one result. That means I need to identify non-const lvalue references, which a tuple has difficulty handling.

As far as I can tell, most discussions of this problem on the web focus on creating fairly simple stand-alone tuples that contain only lvalue references, using std::tie. In particular, this StackExchange thread outlines how that can be accomplished.

The problem is that I have a parameter pack of types, which may or may not contain one or more lvalue references interleaved with other types. std::forward_as_tuple is suggested here and there, but I'm unsure how to use it.
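As far as I can tell it just builds a tuple of references to its arguments - here's my rough understanding as a sketch, so take it with a grain of salt:

#include <tuple>

void sketch()
{
    int x       = 0;
    const int y = 1;

    // t is std::tuple<int&, const int&> - references only, no copies
    auto t = std::forward_as_tuple(x, y);

    std::get<0>(t) = 42;    // writes through to x
}

That doesn't seem to get me anywhere though, since I still need to instantiate actual storage for the arguments somewhere.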

Here's the relevant code:

// split the return type from the argument list
template<typename S>
struct signature;

template<typename R, typename... Args>
struct signature<R(Args...)>
{
    using return_type   = R;
    using argument_type = std::tuple<Args...>;              // 1
};


template<typename FUNCSIG>
editor::nodetype_t& CreateNodeType(
    IN const char* category)
{
    // instantiate the argument list tuple. No need to post any further code as this
    // is where things fail to compile
    typename signature<FUNCSIG>::argument_type arglist;     // 2
}
  
  
  
  
  
// the below snippet outlines how CreateNodeType() is called:

#define DEFINE_MTL_NODE(function, category, ...)                        \
        auto& nodeType = CreateNodeType<decltype(function)>(category);

// a sample function for completeness. I'm intentionally not using the return value here.
void Lerp(
    IN const math::vec3& ColorIn1,
    IN const math::vec3& ColorIn2,
    IN float Alpha,
    OUT math::vec3& ColorOut) { /* ... */ }

int main()
{
    DEFINE_MTL_NODE(Lerp, "Color");
}

Either the line marked 1 or the line marked 2 needs to be something else, but apparently my C++ level isn't high enough to figure out what. PS: to further complicate things, I'm stuck on C++11 for now.

Ideas? :)


I've had to do something like this recently, and you'll probably need two types -- one that you can use for traits:

using traits = std::tuple<Args...>;

And one that you can use for instantiating arguments:

using instances = std::tuple<typename std::decay<Args>::type...>;

You can then use the traits to determine which arguments are non-const lvalue references, so they can be copied from the instance tuple back into the appropriate reference parameters (depending on how the callable is invoked*).

*std::apply would be ideal, but of course it's not available in C++11. Check out this for an alternative approach.
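As a rough sketch of the idea (assuming <tuple> and <type_traits> are included; signature/is_out_param are just illustrative names, not a fixed API):

template<typename S>
struct signature;

template<typename R, typename... Args>
struct signature<R(Args...)>
{
    using traits    = std::tuple<Args...>;                              // keeps refs/const intact
    using instances = std::tuple<typename std::decay<Args>::type...>;  // storable values
};

// true if the I-th parameter is a non-const lvalue reference (i.e. an "out" parameter)
template<std::size_t I, typename Traits>
struct is_out_param
{
    using T = typename std::tuple_element<I, Traits>::type;
    static const bool value =
        std::is_lvalue_reference<T>::value &&
        !std::is_const<typename std::remove_reference<T>::type>::value;
};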

Edited by Zipster


I have to say this is a bit over my head. I understand what the code is supposed to do conceptually, but even with the link you provided I'm not sure what exactly is happening.

E.g. given this (from the link):

template<typename T, typename... Args>
struct foo
{
    tuple<Args...> args;

    // Allows deducing an index list argument pack
    template<size_t... Is>
    T gen(index_list<Is...> const&)
    {
        return T(get<Is>(args)...); // This is the core of the mechanism
    }

    T gen()
    {
        return gen(
            index_range<0, sizeof...(Args)>() // Builds an index list
            );
    }
};

How do I even invoke foo? And what is T in

return T(get<Is>(args)...);

As I understand it, this gets the Is-th (e.g. last) element from args, then expands the rest and returns it as a separate type.

__________________________________

std::apply makes a bit more sense to me (though still not enough) - I'm not sure how it can be called without instantiating the argument list first, which is what generates the compiler error in the first place.
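If I understand it correctly, the C++17 version would look roughly like this (shown purely for reference since I can't use it; f is just a stand-in function):

#include <tuple>

void f(int a, float& out) { out = static_cast<float>(a); }

int main()
{
    std::tuple<int, float> storage { 42, 0.0f };    // decayed copies - default-constructible
    std::apply(f, storage);                         // C++17: expands the tuple into a call to f
    // std::get<1>(storage) is now 42.0f - get() on an lvalue tuple returns an lvalue
    // reference into the tuple, which binds to f's float& parameter
}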

My own take on it fails for a different reason. The following is a mix of the signature class from my original post and a std::apply-style invoke, with the help of the index classes from Zipster's link. The main problem is that I'm not sure where or how the lookup between the traits list and the decayed argument list is supposed to happen. I've also replaced std::invoke with a recursive call to mycall() - this effectively bypasses per-element type lookup anyway.

PS - the reinterpret_cast below is a joke. It does work with my basic test case though, which raises the question - ignoring potential issues with portability for the moment, if the tuple element size is guaranteed to be constant (or even if different core types have different sizes, but qualifiers do not), why would this be illegal?

void mycall() { }

template<typename T, typename ...Args>
void mycall(T&& t, Args&& ...args)
{
    lout << "CALLING" << endl;
    DOUTEX(std::is_reference<T>::value);
    DOUTEX(std::is_const<typename std::remove_reference<T>::type>::value);

    mycall(args...);
}


namespace detail {
    template <class Tuple, std::size_t... Is>
    void invoke_impl(Tuple&& t, index_list<Is...> const&)
    {
        mycall(std::get<Is>(std::forward<Tuple>(t))...);
    }
}

template<typename S>
struct sig2;

template <typename R, typename... Args>
struct sig2<R(Args...)>
{
    using argument_list = std::tuple<typename std::decay<Args>::type...>;
    using Tuple         = std::tuple<Args...>;

    void invoke() {
        const auto size = std::tuple_size<Tuple>::value;

        argument_list t;

        detail::invoke_impl(
            // ! type mismatch for cast via std::forward/std::forward_as_tuple:
            // forward_as_tuple/*std::forward*/<Tuple>(t),
            // but using dumb force actually works with my test case
            reinterpret_cast<Tuple&>(t),
            index_range<0, size>());
    }
};

 

 

Edited by irreversible

16 minutes ago, irreversible said:

How do I even invoke foo? And what is T in


return T(get<Is>(args)...);

As I understand it, this gets the Is-th (e.g. last) element from args, then expands the rest and returns it as a separate type.

"T" in this case is the constructor of template-type "T" which is declared for foo.

"get<Is>(args)..." gets every element from 0...last. Similar to "std::forward<Args>(args)" for variadic args, this expands to:

T(get<0>(args), get<1>(args), get<2>(args)....); 

I think the general term for this is pack expansion.
"Is" is just a sequence of non-type template arguments that goes from 0...last-index, based on the index_list object you pass to the function.

 

 


After a bit more tinkering, this is what I've got. It seems to work and is C++11-compatible. I guess it would be possible to pass in a callback name and have the correct template overload be called, but frankly I don't need that level of control. Besides, this would be so much easier in C++14 and later.

 

// recursion terminator (needed because the recursive call below passes explicit
// template arguments, which the plain non-template overload can't accept)
template<size_t INDEX, typename Tuple>
void mycall() { }

template<size_t INDEX, typename Tuple, typename T, typename ...Args>
void mycall(T&& t, Args&& ...args)
{
    UNREFERENCED_PARAMETERS(t);

    // get the real type
    using TT = typename std::tuple_element<INDEX, Tuple>::type;

    // some debug output
    lout << "ARGUMENT" << endl;
    DOUTEX(std::is_reference<TT>::value);
    DOUTEX(std::is_const<typename std::remove_reference<TT>::type>::value);

    // unpack next argument
    mycall<INDEX + 1, Tuple>(args...);
}

namespace detail {
    // this can be collapsed into ListArgs()
    template <class Tuple, class TupleInstances, std::size_t... Is>
    void invoke_impl(TupleInstances&& t, index_list<Is...> const&)
    {
        // start at index 0, passing in the decayed argument list and type traits
        mycall<0, Tuple>(std::get<Is>(std::forward<TupleInstances>(t))...);
    }
}

template<typename S>
struct signature;

template <typename R, typename... Args>
struct signature<R(Args...)>
{
    using argument_list = std::tuple<typename std::decay<Args>::type...>;
    using Tuple         = std::tuple<Args...>;

    void ListArgs() {
        const auto size = std::tuple_size<Tuple>::value;

        detail::invoke_impl<Tuple>(
            argument_list(), index_range<0, size>());
    }
};


// USAGE:

signature<decltype(SomeFunction)> sig;
sig.ListArgs();

 

1 minute ago, ninnghazad said:

Could you elaborate on that? (just interested)

C++14 allows auto parameters in lambdas (generic lambdas). I'm assuming you could collapse your redirection into something like this:

ListArguments([](auto arg) { mycallback(arg); });

As opposed to having to work around the templated callback with some struct hack. As implied above, I can't test this, though.
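The struct hack I'm referring to is something like this (illustrative only - a functor with a templated call operator standing in for the generic lambda):

// C++11 stand-in for a generic lambda
struct my_callback
{
    template<typename T>
    void operator()(T&& arg) const
    {
        // handle a single argument of deduced type T
    }
};

// hypothetical usage, assuming ListArguments() accepts any callable:
// ListArguments(my_callback());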


