Finalspace

C++ Problem with enum and binary OR operator


Why won't this code compile? How do I get it to compile (VC++ 2015)?

typedef enum fpl_InitFlag {
	fpl_InitFlag_None = 0,
	fpl_InitFlag_Window = 1 << 0,
	fpl_InitFlag_VideoOpenGL = 1 << 1,
} fpl_InitFlag;

static void InitSomething(fpl_InitFlag initFlags) {
	if (initFlags & fpl_InitFlag_VideoOpenGL) {
		initFlags |= fpl_InitFlag_Window;
	}
}
int main(int argc, char **args) {
	InitSomething(fpl_InitFlag_VideoOpenGL);
	return 0;
}

Error C2676    binary '|=': 'fpl_InitFlag' does not define this operator or a conversion to a type acceptable to the predefined operator

Error (active)        this operation on an enumerated type requires an applicable user-defined operator function
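For what it's worth, the failing line is initFlags |= fpl_InitFlag_Window: the enum operands are promoted to int, and C++ will not implicitly convert the resulting int back to fpl_InitFlag (C does), hence C2676. A minimal workaround that compiles as both C and C++, without any operator overloads, is an explicit cast - a rough sketch:

static void InitSomething(fpl_InitFlag initFlags) {
	if (initFlags & fpl_InitFlag_VideoOpenGL) {
		// Spell out the int -> enum conversion that C performs implicitly but C++ forbids.
		initFlags = (fpl_InitFlag)(initFlags | fpl_InitFlag_Window);
	}
}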


Found a solution for C++ - not great, but it will work:

inline fpl_InitFlag operator |(fpl_InitFlag a, fpl_InitFlag b) {
	return static_cast<fpl_InitFlag>(static_cast<int>(a) | static_cast<int>(b));
}
inline fpl_InitFlag operator &(fpl_InitFlag a, fpl_InitFlag b) {
	return static_cast<fpl_InitFlag>(static_cast<int>(a) & static_cast<int>(b));
}
inline fpl_InitFlag& operator |=(fpl_InitFlag& a, fpl_InitFlag b) {
	return a = a | b;
}

 


You can read this.

 

I personally don't like to do that, simply because doing arithmetic (including bitwise operations) on enums inevitably leads to values that are not defined in the enum. So you start with a finite set of elements, and by allowing such operations you can end up with a vastly larger set of values (if you limit yourself to bitwise 'or' operations you are still stuck with a finite number of elements, but their number is large).

So typical C constructs like a switch will not easily be able to handle all the values. Also, when debugging, the debugger will not be able to print a matching name for a combined enum value.
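To illustrate the point (a small sketch, reusing fpl_InitFlag and the operator | overload from the post above): a combined value names no single enumerator, so a switch over it falls through to default, and a debugger will typically show only the raw number:

fpl_InitFlag f = fpl_InitFlag_Window | fpl_InitFlag_VideoOpenGL; // numeric value 3
switch (f) {
	case fpl_InitFlag_None:        break; // not taken
	case fpl_InitFlag_Window:      break; // not taken
	case fpl_InitFlag_VideoOpenGL: break; // not taken
	default:                       break; // taken: 3 matches no enumerator
}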

And if you are in C++, this tends to subvert the nature of your enumeration type.

This is just what I think about that :)


The compiler is just trying to save you from writing buggy code.  You need to go out of your way to force your bugs into your software.  Your solution does a fine job of doing that.

You're implementing an operator that is not closed on the enumeration set.  You're breaking the contract provided by enum.  When you start getting weird errors, save yourself some debugging time and check where you use the enum first.

51 minutes ago, Bregma said:

The compiler is just trying to save you from writing buggy code.  You need to go out of your way to force your bugs into your software.  Your solution does a fine job of doing that.

You're implementing an operator that is not closed on the enumeration set.  You're breaking the contract provided by enum.  When you start getting weird errors, save yourself some debugging time and check where you use the enum first.

A normal enum is just a single 32 bit integer value, with its fields accessible by named constants.

You can even define which type your enum is. So using it as flags is totally valid, because it's just a stupid integer.

And in case the compiler changes it to maybe a 64 bit integer, I don't care - because the binary operators work there as well.

 

There is no buggy code at all - it's totally fine. I see no side effects whatsoever - unless I overload the operators to do some weird shit.

 

Sure, I could change it to enum class, but this would break C compatibility entirely, and for a C library that is a no-go!


I've found the following pattern to be effective:

#include <stdio.h>
  
constexpr const unsigned flag[]
= {
	0, 1, 2, 4, 8, 16, 32, 64, 128, 256
};

struct fpl
{
	enum init_flag
	{
		init_none,
		init_window,
		init_opengl
	};

	fpl( const unsigned &iflags )
	: init_flags{ iflags }
	{}

	unsigned init_flags;
};


int main( int argc, const char *args[] )
{
	fpl my_fpl{ flag[ fpl::init_window ] | flag[ fpl::init_opengl ] };

	printf( "my_fpl flags: 0x%X\n", my_fpl.init_flags );

	return 0;
}

 


I think using enumerations as flags is fine.  The whole argument that 'it's bad because they're not supposed to be used that way' I think is kinda silly.  Sure you don't want to have a situation where you accidentally create an undefined bit pattern, but whether that bit pattern is an 'enum' or just a uint32_t, you still have the same error.  It'll be the same problem in the same piece of code.  The nice thing about enums is you can have nicer names and avoid stuff like VK_STRUCTURE_TYPE_IMPORT_MEMORY_WIN32_HANDLE_INFO_NV.

Here is what I use (formatting is a bit off but you get the idea):

// --------------------------------------------------------------------------------------------------------------------------
//	enumeration expansion
//		- ENUM_CLASS_OPERATORS defines standard bit operators for enum class types
//		- ENUM_CLASS_AND_OR defines only 'and' and 'or'
// --------------------------------------------------------------------------------------------------------------------------

# define ENUM_CLASS_OPERATORS(T)																																					\
inline constexpr T operator~(T a) noexcept { return static_cast<T>(~static_cast<uint64_t>(a)); }														\
inline constexpr T operator&(T a, T b) noexcept { return static_cast<T>(static_cast<uint64_t>(a) & static_cast<uint64_t>(b)); }			\
inline constexpr T operator|(T a, T b) noexcept { return static_cast<T>(static_cast<uint64_t>(a) | static_cast<uint64_t>(b)); }			\
inline constexpr T operator^(T a, T b) noexcept { return static_cast<T>(static_cast<uint64_t>(a) ^ static_cast<uint64_t>(b)); }			\
inline T& operator&=(T& a, T b) noexcept { return a = static_cast<T>(static_cast<uint64_t>(a) & static_cast<uint64_t>(b)); }			\
inline T& operator|=(T& a, T b) noexcept { return a = static_cast<T>(static_cast<uint64_t>(a) | static_cast<uint64_t>(b)); }				\
inline T& operator^=(T& a, T b) noexcept { return a = static_cast<T>(static_cast<uint64_t>(a) ^ static_cast<uint64_t>(b)); }

# define ENUM_CLASS_AND_OR(T)																																						\
inline constexpr T operator&(T a, T b) noexcept { return static_cast<T>(static_cast<uint64_t>(a) & static_cast<uint64_t>(b)); }			\
inline constexpr T operator|(T a, T b) noexcept { return static_cast<T>(static_cast<uint64_t>(a) | static_cast<uint64_t>(b)); }			\
inline T& operator&=(T& a, T b) noexcept { return a = static_cast<T>(static_cast<uint64_t>(a) & static_cast<uint64_t>(b)); }			\
inline T& operator|=(T& a, T b) noexcept { return a = static_cast<T>(static_cast<uint64_t>(a) | static_cast<uint64_t>(b)); }	

The thing to consider is, even with bit operators, it's actually quite hard to come up with a bit pattern that's undefined.  Most code with flags looks something like:

enum class EEnumOptions {
  none,
  option_1,
  option_2,
  option_3,
  };

enum class EEnumFlags {
  none = 0,
  flag_a = 1,
  flag_b = 2,
  flag_c = 4,
  };
ENUM_CLASS_AND_OR(EEnumFlags)	// create 'and' and 'or' bit operators for EEnumFlags
  
// ....

void Func(EEnumOptions e, EEnumFlags f) {
  
  // handle options
  switch (e) {
    case EEnumOptions::option_1:
    case EEnumOptions::option_2:
    case EEnumOptions::option_3:
    }
  
  // handle flags
  if ((f & EEnumFlags::flag_a) == EEnumFlags::flag_a) {}		// flag_a is set
  if ((f & EEnumFlags::flag_b) == EEnumFlags::flag_b) {}		// flag_b is set
  if ((f & EEnumFlags::flag_c) != EEnumFlags::flag_c) {}		// flag_c is not set
  }

Even if you were to make a silly bit pattern, things won't 'blow up'.  Any bit pattern is still well defined.  Also, if you only restrict yourself to 'and' and 'or' (i.e. don't overload 'not' and 'xor'), then it's near impossible to create undefined bit patterns (short of intentionally static_cast'ing them in).  It's still safer than simple integer constants or #defines, and for the most part self-documenting.  I don't think anyone would have any difficulty using that function, or understanding what is expected, and passing an undefined bit pattern would have to be intentional.

Maybe it's my own personal preference, but this seems clean, easy to understand, and hard to break; and isn't that what we want?


This really sucks. I added overloads for another enum:

 

typedef enum fpl_KeyboardModifierType {
	fpl_KeyboardModifierType_None = 0,
	fpl_KeyboardModifierType_Alt = 1 << 0,
	fpl_KeyboardModifierType_Ctrl = 1 << 1,
	fpl_KeyboardModifierType_Shift = 1 << 2,
	fpl_KeyboardModifierType_Super = 1 << 3,
} fpl_KeyboardModifierType;

#ifdef __cplusplus
	inline fpl_KeyboardModifierType operator |(fpl_KeyboardModifierType a, fpl_KeyboardModifierType b) {
		return static_cast<fpl_KeyboardModifierType>(static_cast<int>(a) | static_cast<int>(b));
	}
	inline fpl_KeyboardModifierType operator &(fpl_KeyboardModifierType a, fpl_KeyboardModifierType b) {
		return static_cast<fpl_KeyboardModifierType>(static_cast<int>(a) & static_cast<int>(b));
	}
	inline fpl_KeyboardModifierType& operator |=(fpl_KeyboardModifierType& a, fpl_KeyboardModifierType b) {
		return a = a | b;
	}
#endif

and now I get this:

 

Error    C2733    'operator |': second C linkage of overloaded function not allowed

Error    C2733    'operator &': second C linkage of overloaded function not allowed

Error    C2733    'operator |=': second C linkage of overloaded function not allowed

 

I am nearly at the point where I just want to throw out all enums and use a struct with a uint32 value plus simple defines - so at least I get type checking for the argument type...

 

People who say "Almost all legacy C code can be compiled with a C++ compiler" are just lying, because even this simple thing won't compile -.- I see why other C libraries don't use enums at all...
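For what it's worth, C2733 is what MSVC reports when an overloaded function ends up with C linkage, which happens if the operator definitions are included from inside an extern "C" block. Assuming that is the cause here (an assumption, not something confirmed in the thread), one sketch of a fix is to force C++ linkage for the operators:

#ifdef __cplusplus
extern "C++" {
	// Assumption: this header is being pulled in from an extern "C" block, which gives
	// these overloads C linkage and triggers C2733; extern "C++" restores C++ linkage.
	inline fpl_KeyboardModifierType operator |(fpl_KeyboardModifierType a, fpl_KeyboardModifierType b) {
		return static_cast<fpl_KeyboardModifierType>(static_cast<int>(a) | static_cast<int>(b));
	}
	inline fpl_KeyboardModifierType operator &(fpl_KeyboardModifierType a, fpl_KeyboardModifierType b) {
		return static_cast<fpl_KeyboardModifierType>(static_cast<int>(a) & static_cast<int>(b));
	}
	inline fpl_KeyboardModifierType& operator |=(fpl_KeyboardModifierType& a, fpl_KeyboardModifierType b) {
		return a = a | b;
	}
}
#endif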

17 hours ago, Ryan_001 said:

The whole argument that 'it's bad because they're not supposed to be used that way' I think is kinda silly.

I think breaking contracts for no good reason while better alternatives exist is kinda malicious. Most, if not all, bugs originate from false assumptions. Assuming an enumeration type can only hold a single value from the enumeration has been turned into a falsehood by shoehorning it into something it's not.

18 hours ago, Ryan_001 said:

Sure you don't want to have a situation where you accidentally create an undefined bit pattern

What would that be? I'd say that a randomly generated number is still well defined, a 0-bit means a flag is unset, a 1-bit means it's set. If you only have two flags, it doesn't really matter what the 3rd or 4th bits are set to. If you're talking about certain bits being mutually exclusive, your proposed solution does nothing to prevent that. It's also very common to use the bitwise not operator to disable flags, it's easy to understand and recognize.
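For example, assuming the full operator set from the ENUM_CLASS_OPERATORS macro in the earlier post (which, unlike ENUM_CLASS_AND_OR, also defines ~ and &=), clearing a flag looks like this:

EEnumFlags f = EEnumFlags::flag_a | EEnumFlags::flag_b;
f &= ~EEnumFlags::flag_a;	// disable flag_a; flag_b stays set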

What's this safety you are talking about? As far as I can tell you have gained nothing in preventing a programmer from making mistakes. Not only that, you've decreased legibility and increased the mental load to deal with silly mistakes. Mistakes that should barely take any time at all to fix in the first place, for some misguided sense of safety.

18 hours ago, Ryan_001 said:

Sure you don't want to have a situation where you accidentally create an undefined bit pattern, but whether that bit pattern is an 'enum' or just a uint32_t, you still have the same error.

The difference is that an enum is explicitly designed to stop you getting into the situation where you're using unexpected values, by asking you to specify which value you want by name. Deliberately thwarting that system means you lose some of the benefits. (You keep some benefits, such as a degree of type safety, so I can see why it's frustrating to have an all-or-nothing decision here.)

Going back to the original post and the most recent problem:

6 minutes ago, Finalspace said:

Error    C2733    'operator |': second C linkage of overloaded function not allowed

Looks like the function is not inlined. I've no idea why that might be the case, but maybe there are some settings or macros that switch it off.

44 minutes ago, Kylotan said:

The difference is that an enum is explicitly designed to stop you getting into the situation where you're using unexpected values, by asking you to specify which value you want by name. Deliberately thwarting that system means you lose some of the benefits. (You keep some benefits, such as a degree of type safety, so I can see why it's frustrating to have an all-or-nothing decision here.)

Going back to the original post and the most recent problem:

Looks like the function is not inlined. I've no idea why that might be the case, but maybe there are some settings or macros that switch it off.

For me enums are just a group of named integers, so I can access them in C++ as Enum::A and in C as Enum_A.

How I use them is up to me - maybe I just want flags, maybe I want single states? I don't care.

 

If I want extra hard type safety, I just use enum class - but most of the time I just want grouped flags, so I don't accidentally set a keyboard_flag on an init_flag... That's all I want for safety.

Is that so hard for C++? Other languages like C# and even old Pascal can do this right.

 

Hmm, I know that inline is just a "hint" to the compiler, but this is totally wrong... maybe I need some template magic to get it to compile and behave.

5 minutes ago, Finalspace said:

For me enums are just a group of named integers [...]

How I use them is up to me - maybe I just want flags

But the key point there is that your combination of two enum values bitwise-OR'd together is no longer in your "group of named integers". It's related to them, sure, but it's outside the group. It was never named, after all.

Imagine this:

const int ONE = 1;
const int TWO = 2;

And now say you want to be able to perform division on 2 of these values, which returns 0.5. But 0.5 isn't in the 'int' set, so you probably shouldn't expect to be able to pretend that it is the same type, just like the value corresponding to 3 isn't in fpl_InitFlag. C and C++ let you get away with it for enums because of an implementation detail that they're stuck supporting forever.

Arguably there should be a different concept entirely for boolean flags like this, but we don't have that in C++.

 

11 minutes ago, Finalspace said:

Is that so hard for C++?

It's not, and it works in C++, and many people have been exploiting this equivalence to integers for a long time. You're almost certainly just doing something wrong at your end.

21 hours ago, Ryan_001 said:

The whole argument that 'it's bad because they're not supposed to be used that way' I think is kinda silly.

It doesn't matter that it's silly. Those are the rules of the language as laid down by the spec. These are the assumptions that your optimizing compiler holds up as golden.

You break the rules of the language and the compiler is allowed to fuck you.
e.g. you can play with pointers willy-nilly, but the optimizer is allowed to assume that you haven't broken the aliasing rules.

float f = 1.0f;
(*(int*)&f) = 0; // note that int(0) and float(0) have the same bit pattern, so this should work just fine! Yay, clever!
printf( "%f", f ); // does this print 1 or 0?

In practice, that will often print 0.000... but the rules of the language say that this program is invalid, so you could expect it to print 1.000 as well. The aliasing rule says that on the 3rd line the object is accessed as a float, so only the most recent write to a float type could possibly affect the value. The 2nd line can be optimized out or otherwise ignored. Those are the specified rules of the language.
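As a side note, the well-defined way to reinterpret those bits - a minimal sketch, not part of the original argument - is memcpy, which the optimizer understands and typically compiles down to the same store:

#include <string.h>
#include <stdio.h>

int main(void) {
	float f = 1.0f;
	int zero = 0;
	// Copy the int's bit pattern into f without violating aliasing rules
	// (assuming sizeof(float) == sizeof(int), as on typical platforms).
	memcpy(&f, &zero, sizeof(f));
	printf("%f\n", f);	// reliably prints 0.000000
	return 0;
}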

Likewise:

enum Flags {
 F_1 = 1,
 F_2 = 2,
 F_4 = 4,
};

Flags f = (Flags)((int)F_1 | (int)F_2 | (int)F_4);
printf( "%d", (int)f );//does this print 7?

In practice, this will probably print 7... but the rules of the language say that the variable f is only allowed to hold values from 1 to 4 (inclusive), so you could expect this program to print out something entirely different, such as 4 as well.

That might sound silly, but optimizers are built to follow the rules of the language! So, say you've written a switch statement that's designed to handle the 8 possible values that you expect the above 'f' variable to hold, the compiler is well within its rights to assume that half of these cases are impossible as the spec says that it can only hold values 1-4, and therefore the compiler is safe to go ahead and simply optimize away those function calls completely out of existence.

switch( (int)f )
{
  default: return Case_Other();//impossible
  case 0: return Case0();//impossible
  case 1: return Case1();
  case 2: return Case2();
  case 3: return Case3();
  case 4: return Case4();
  case 5: return Case5();//impossible
  case 6: return Case6();//impossible
  case 7: return Case7();//impossible
}

It doesn't matter that this is silly.

These are the rules, and when you choose to break them, you are choosing to write code that could stop working at any time, and only happens to work right now because the optimizer has not done a good enough job to break your invalid code, yet.

Choosing to write time-bombed code is silly. If you continue to use enums as flags after this, you're choosing to play chicken with your compiler. Good luck, and pray that it doesn't optimize as much as the spec says that it's allowed to.


This is very strange. Can someone point me to the part of the standard that says that a variable with an enum type can only hold values that are named constants? I have looked, and I haven't found it. Also, using powers of 2 as constants so you can do bit arithmetic is so common in C that I very much doubt C++ disallows it.

 

EDIT: I found this paragraph in section 7.2:

"For an enumeration whose underlying type is fixed, the values of the enumeration are the values of the underlying type. Otherwise, for an enumeration where e min is the smallest enumerator and e max is the largest, the values of the enumeration are the values in the range b min to b max , defined as follows: Let K be 1 for a two’s complement representation and 0 for a one’s complement or sign-magnitude representation. b max is the smallest value greater than or equal to max(|e min | − K, |e max |) and equal to 2 M − 1, where M is a non-negative integer. b min is zero if e min is non-negative and −(b max + K) otherwise. The size of the smallest bit-field large enough to hold all the values of the enumeration type is max(M, 1) if b min is zero and M + 1 otherwise. It is possible to define an enumeration that has values not defined by any of its enumerators. If the enumerator-list is empty, the values of the enumeration are as if the enumeration had a single enumerator with value 0."

 

In Hodgman's example, b_min is 0 and b_max is 7. So cases 0, 5, 6 and 7 are kosher.
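To spell that calculation out, following the quoted wording: the largest enumerator in Hodgman's Flags is F_4 = 4, so the smallest M with 2^M − 1 ≥ 4 is M = 3, giving b_max = 7; the smallest enumerator is non-negative, so b_min = 0. The same reasoning applied to the fpl_InitFlag enum from the original post (enumerators 0, 1, 2) gives M = 2 and b_max = 3, so the combined value fpl_InitFlag_Window | fpl_InitFlag_VideoOpenGL == 3 is also a value of that enumeration.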

 

 


I'm not trying to be a jerk here, but I don't see the standard supporting many of your claims on enumerations.  The relevant parts are in n4659 section 10.2 (also refer to section 8.2.9 (9) and (10)).

There is no contractual obligation to only store an enumerated value in an enumeration (10.2 (8) "For an enumeration whose underlying type is fixed, the values of the enumeration are the values of the underlying type").  You can store any value in an enum, even if it is not explicitly enumerated, provided the underlying type supports the value.  It even says in 10.2 (1) "An enumeration is a distinct type (6.9.2) with named constants.".  It's no different from a bunch of static const ints except that it obeys the type system.  The size of the underlying type is either explicitly specified, or determined according to 10.2 (5), (7) and (8).  As long as you stay within the range of the underlying type, your program will not be undefined.  The underlying bit pattern does not need a corresponding enumerated constant.

The compiler does not 'optimize' an enumeration any differently than any other type.  A switch on an enumeration is the same as a switch on the underlying type.  It can't treat enums as special constructs because static_cast is allowed (see 8.2.9 (9) and (10)).

You can static_cast a value back to an enumeration (8.2.9 (10)) provided that "the original value is within the range of the enumeration values"; and as per 10.2 (8) "the values of the enumeration are the values of the underlying type".

It is clear from the standard that enumerations are allowed to be treated as flags, and that the underlying type must be (and is) well defined; that storing bit patterns/values that do not have corresponding enumerators is well defined; and that operating on values using bit operations is well defined.

If you personally (or within your company) wish to use an enum as a list of mutually exclusive options, then so be it.  But enums are used for all sorts of integer constants (flags, options, counters, and a half dozen other things), and are well defined in the spec to be capable of doing so.


Well, at this point I have given up. Now I will use enums for single states only and use this for my flags - it compiles in C and C++ (for C I require prefixing anyway...):

	typedef struct fpl_KeyboardModifierType {
		uint32_t value;
	} fpl_KeyboardModifierType;

	const uint32_t fpl_KeyboardModifierType_Alt = 1 << 0;
	const uint32_t fpl_KeyboardModifierType_Ctrl = 1 << 1;
	const uint32_t fpl_KeyboardModifierType_Shift = 1 << 2;
	const uint32_t fpl_KeyboardModifierType_Super = 1 << 3;

	inline fpl_KeyboardModifierType CreateKeyboardModifierType(uint32_t value) {
		fpl_KeyboardModifierType result;
		result.value = value;
		return(result);
	}

	static void doSomethingWithKeyboardModifiers(fpl_KeyboardModifierType modifiers) {
		if (modifiers.value & fpl_KeyboardModifierType_Ctrl) {
			// ...
		}
	}

 

Sure, I lost type safety that way, but it's better than nothing...
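For reference, a call site under this scheme might look like the following (a small usage sketch reusing the names above):

	int main(void) {
		fpl_KeyboardModifierType mods = CreateKeyboardModifierType(
			fpl_KeyboardModifierType_Ctrl | fpl_KeyboardModifierType_Shift);
		doSomethingWithKeyboardModifiers(mods);
		return 0;
	}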


Are you trying to find a way to write new enums that will compile as both C and C++ or are you modifying existing C code to compile with a C++ compiler?

The following method works for both and has type-checking in C++. Perhaps it will require fewer modifications to existing C code than adding operators to C++ code?

#include <stdio.h>


typedef enum
{
	flag0,
	flag1,
	flag2,
	flag3 = 4,
	flag4 = 8
} test_flags;

void config( test_flags iflags[4] )
{
	unsigned uflags[] = { iflags[0], iflags[1], iflags[2], iflags[3] };

	printf( "0x%X\n", uflags[0] | uflags[1] | uflags[2] | uflags[3] );
}

int main( int argc, const char *args[] )
{
	test_flags my_flags[] = { flag1, flag2, flag3, flag4 };
	unsigned badflags[] = { 30, 40, 50, 32 };

	config( badflags ); // not accepted by C++
	config( my_flags );

	return 0;
}

 


Finalstate, look at my first post, you can use that.  I showed you how to do it.

Also, your attempt where you claimed you had compilation errors works fine on my end; here's the full code:

# include <thread>
# include <mutex>
# include <condition_variable>
# include <vector>
# include <deque>
# include <atomic>
# include <iostream>
# include <functional>
# include <algorithm>

using namespace std;




typedef enum fpl_KeyboardModifierType {
	fpl_KeyboardModifierType_None = 0,
	fpl_KeyboardModifierType_Alt = 1 << 0,
	fpl_KeyboardModifierType_Ctrl = 1 << 1,
	fpl_KeyboardModifierType_Shift = 1 << 2,
	fpl_KeyboardModifierType_Super = 1 << 3,
} fpl_KeyboardModifierType;

#ifdef __cplusplus
	inline fpl_KeyboardModifierType operator |(fpl_KeyboardModifierType a, fpl_KeyboardModifierType b) {
		return static_cast<fpl_KeyboardModifierType>(static_cast<int>(a) | static_cast<int>(b));
	}
	inline fpl_KeyboardModifierType operator &(fpl_KeyboardModifierType a, fpl_KeyboardModifierType b) {
		return static_cast<fpl_KeyboardModifierType>(static_cast<int>(a) & static_cast<int>(b));
	}
	inline fpl_KeyboardModifierType& operator |=(fpl_KeyboardModifierType& a, fpl_KeyboardModifierType b) {
		return a = a | b;
	}
#endif


void Func(fpl_KeyboardModifierType f) {
	if (f & fpl_KeyboardModifierType_Alt) cout << "fpl_KeyboardModifierType_Alt" << endl;
	if (f & fpl_KeyboardModifierType_Ctrl) cout << "fpl_KeyboardModifierType_Ctrl" << endl;
	if (f & fpl_KeyboardModifierType_Shift) cout << "fpl_KeyboardModifierType_Shift" << endl;
	if (f & fpl_KeyboardModifierType_Super) cout << "fpl_KeyboardModifierType_Super" << endl;
	}


// ----- main -----
int main() {

	Func(fpl_KeyboardModifierType_Shift | fpl_KeyboardModifierType_Super);


	// done
	std::cout << "done" << std::endl;
	getchar();
	}

Just copied/pasted and threw it together, no changes needed.  What you were doing worked fine.  It's best to test these things in a smaller 'test unit' alone, and then bring them into the larger project.

1 hour ago, Ryan_001 said:

Finalstate, look at my first post, you can use that.  I showed you how to do it.

Also, your attempt where you claimed you had compilation errors works fine on my end.



 

finalstate *sigh*

 

A single enum will compile, but a second one will break it!

 

I tested it on VS 2015 and 2017:

Adding a second enum compiles... will test it out in my library...


I have no idea why it won't compile. I made a branch and checked in the enum operators - last try before I give up:

https://github.com/f1nalspace/final_game_tech/blob/enum_nonsense/final_platform_layer.h

https://github.com/f1nalspace/final_game_tech/tree/enum_nonsense/demos

 

Can someone open it up in Visual Studio and at least reproduce the bug?

On 7/14/2017 at 8:34 AM, Finalspace said:

A normal enum is just a single 32 bit integer value, with its fields accessible by named constants.

Nope.  The size is implementation dependent if none is specified, and on most major compilers it usually defaults to a 32 bit signed integer, but it sometimes defaults to a 32 bit unsigned value or a 64 bit signed or unsigned value, and may potentially be something else entirely.
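A brief illustration of pinning that down (a sketch, not from the thread): since C++11 you can fix the underlying type of an enum yourself, which removes that ambiguity on the C++ side; plain C only gained this much later (C23), so it does not directly help a shared C/C++ header:

#include <cstdint>

// Fixing the underlying type makes the size and signedness explicit.
enum fpl_InitFlag : std::uint32_t {
	fpl_InitFlag_None        = 0,
	fpl_InitFlag_Window      = 1u << 0,
	fpl_InitFlag_VideoOpenGL = 1u << 1,
};

static_assert(sizeof(fpl_InitFlag) == sizeof(std::uint32_t), "fixed underlying type");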

On 7/14/2017 at 8:34 AM, Finalspace said:

So using it as flags is totally valid, because it's just a stupid integer.

No. An enum is not "just a stupid integer".  At no point in the language history has it ever been "just a stupid integer". Even going back to C, an enum was something more specific than the two things it replaced, a stupid integer constant and a macro-defined value. An enum is more than either of those things.  By calling it an enum you are assigning it specific meaning which the compiler can use.  You are -- as a convenience -- treating it as an integer.  

You are also treating it as a plain int, which, by the way, is also mostly going away.  If you have a plain old int in modern code then you're doing something wrong.  Specify the width and signed-ness.

10 minutes ago, Finalspace said:

I have no idea why it won't compile. I made a branch and checked in the enum operators - last try before I give up:

You have been told why, and provided with THREE alternate solutions to do what you are trying to do.

The language is trying to protect you.  If you are dead-set on removing those protections you are free to do so, but it is not a wise decision.

Telling the compiler you have one intention, then moving on with a different set of uses that violate those intentions, that is a sure-fire way to introduce bugs in your program.

