Why are enums broken?

28 comments, last by frob 5 years, 10 months ago
9 hours ago, Finalspace said:

No, I don't want to use bitset or anything else from the STL - just to get grouped flags. I think it's just stupid that it's not part of the language itself.

If it's part of the standard library, it may as well be part of the language.


I agree with Oberon_Command on this one. When a feature can be implemented in the standard library without touching the core language, the committee has usually preferred to do it that way. Something being in the standard library is not a good reason to avoid using it.

 

On 6/15/2018 at 4:08 AM, Oberon_Command said:

If it's part of the standard library, it may as well be part of the language.

I'd go further. If it's part of the standard library, it is part of the language.

A language is not just its grammar; it's also its vocabulary.

 


D3D12 uses some macro magic to let you use bitwise operators on its "flag enums", e.g.:


typedef enum D3D12_HEAP_FLAGS
{
    D3D12_HEAP_FLAG_NONE                            = 0,
    D3D12_HEAP_FLAG_SHARED                          = 0x1,
    D3D12_HEAP_FLAG_DENY_BUFFERS                    = 0x4,
    D3D12_HEAP_FLAG_ALLOW_DISPLAY                   = 0x8,
    D3D12_HEAP_FLAG_SHARED_CROSS_ADAPTER            = 0x20,
    D3D12_HEAP_FLAG_DENY_RT_DS_TEXTURES             = 0x40,
    D3D12_HEAP_FLAG_DENY_NON_RT_DS_TEXTURES         = 0x80,
    D3D12_HEAP_FLAG_ALLOW_ALL_BUFFERS_AND_TEXTURES  = 0,
    D3D12_HEAP_FLAG_ALLOW_ONLY_BUFFERS              = 0xc0,
    D3D12_HEAP_FLAG_ALLOW_ONLY_NON_RT_DS_TEXTURES   = 0x44,
    D3D12_HEAP_FLAG_ALLOW_ONLY_RT_DS_TEXTURES       = 0x84
} D3D12_HEAP_FLAGS;

DEFINE_ENUM_FLAG_OPERATORS( D3D12_HEAP_FLAGS );

and the magic:



// Define operator overloads to enable bit operations on enum values that are
// used to define flags. Use DEFINE_ENUM_FLAG_OPERATORS(YOUR_TYPE) to enable these
// operators on YOUR_TYPE.

// Moved here from objbase.w.

// Templates are defined here in order to avoid a dependency on the C++ <type_traits>
// header file, or on compiler-specific constructs.
#ifdef __cplusplus
extern "C++" {

    template <size_t S>
    struct _ENUM_FLAG_INTEGER_FOR_SIZE;

    template <>
    struct _ENUM_FLAG_INTEGER_FOR_SIZE<1>
    {
        typedef INT8 type;
    };

    template <>
    struct _ENUM_FLAG_INTEGER_FOR_SIZE<2>
    {
        typedef INT16 type;
    };

    template <>
    struct _ENUM_FLAG_INTEGER_FOR_SIZE<4>
    {
        typedef INT32 type;
    };

    template <>
    struct _ENUM_FLAG_INTEGER_FOR_SIZE<8>
    {
        typedef INT64 type;
    };

    // used as an approximation of std::underlying_type<T>
    template <class T>
    struct _ENUM_FLAG_SIZED_INTEGER
    {
        typedef typename _ENUM_FLAG_INTEGER_FOR_SIZE<sizeof(T)>::type type;
    };

}

#define DEFINE_ENUM_FLAG_OPERATORS(ENUMTYPE) \
extern "C++" { \
inline ENUMTYPE operator | (ENUMTYPE a, ENUMTYPE b) throw() { return ENUMTYPE(((_ENUM_FLAG_SIZED_INTEGER<ENUMTYPE>::type)a) | ((_ENUM_FLAG_SIZED_INTEGER<ENUMTYPE>::type)b)); } \
inline ENUMTYPE &operator |= (ENUMTYPE &a, ENUMTYPE b) throw() { return (ENUMTYPE &)(((_ENUM_FLAG_SIZED_INTEGER<ENUMTYPE>::type &)a) |= ((_ENUM_FLAG_SIZED_INTEGER<ENUMTYPE>::type)b)); } \
inline ENUMTYPE operator & (ENUMTYPE a, ENUMTYPE b) throw() { return ENUMTYPE(((_ENUM_FLAG_SIZED_INTEGER<ENUMTYPE>::type)a) & ((_ENUM_FLAG_SIZED_INTEGER<ENUMTYPE>::type)b)); } \
inline ENUMTYPE &operator &= (ENUMTYPE &a, ENUMTYPE b) throw() { return (ENUMTYPE &)(((_ENUM_FLAG_SIZED_INTEGER<ENUMTYPE>::type &)a) &= ((_ENUM_FLAG_SIZED_INTEGER<ENUMTYPE>::type)b)); } \
inline ENUMTYPE operator ~ (ENUMTYPE a) throw() { return ENUMTYPE(~((_ENUM_FLAG_SIZED_INTEGER<ENUMTYPE>::type)a)); } \
inline ENUMTYPE operator ^ (ENUMTYPE a, ENUMTYPE b) throw() { return ENUMTYPE(((_ENUM_FLAG_SIZED_INTEGER<ENUMTYPE>::type)a) ^ ((_ENUM_FLAG_SIZED_INTEGER<ENUMTYPE>::type)b)); } \
inline ENUMTYPE &operator ^= (ENUMTYPE &a, ENUMTYPE b) throw() { return (ENUMTYPE &)(((_ENUM_FLAG_SIZED_INTEGER<ENUMTYPE>::type &)a) ^= ((_ENUM_FLAG_SIZED_INTEGER<ENUMTYPE>::type)b)); } \
}
#else
#define DEFINE_ENUM_FLAG_OPERATORS(ENUMTYPE) // NOP, C allows these operators.
#endif

P.S. don't go copy & pasting this into your own projects.
Copyright (c) Microsoft Corporation. All rights reserved.

But this does go to show that it's pretty easy to support this in C++.
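
For illustration, here's roughly what that macro invocation buys you (a sketch assuming the enum and the DEFINE_ENUM_FLAG_OPERATORS call above are in scope):


D3D12_HEAP_FLAGS flags = D3D12_HEAP_FLAG_SHARED | D3D12_HEAP_FLAG_ALLOW_DISPLAY;
flags |= D3D12_HEAP_FLAG_DENY_BUFFERS;

if (flags & D3D12_HEAP_FLAG_SHARED)
{
    // operator& returns D3D12_HEAP_FLAGS, which still converts to bool
    // in a condition because it's an unscoped enum
}

// flags |= 0x4; // error: the overloads only accept D3D12_HEAP_FLAGS, not raw ints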

1 hour ago, ChaosEngine said:

I'd go further. If it's part of the standard library, it is part of the language.

But also, this is C++. No sane C++ user allows the usage of the entire language within their projects.

C++ is a smorgasbord of conflicting features, from which people choose appropriate subsets. Every feature that you choose to allow comes with some costs as well as benefits.

4 hours ago, Hodgman said:

But also, this is C++. No sane C++ user allows the usage of the entire language within their projects.

C++ is a smorgasbord of conflicting features, from which people choose appropriate subsets. Every feature that you choose to allow comes with some costs as well as benefits.

That's it - great that somebody actually brought that up!

And guess what, I'm one of those people who only uses the features they want, and who doesn't accept new ones until I've evaluated and weighed them against a lot of criteria.

11 hours ago, Hodgman said:

But also, this is C++. No sane C++ user allows the usage of the entire language within their projects.

C++ is a smorgasbord of conflicting features, from which people choose appropriate subsets. Every feature that you choose to allow comes with some costs as well as benefits.

Of course, and that's exactly why the standards committee prefers libraries over language features.

But if what you need is a bitset or a list or a vector, they’re there. Complaining that such features aren’t built into the language is pointless, which was what @Oberon_Command was responding to in the first place. 

2 minutes ago, ChaosEngine said:

But if what you need is a bitset or a list or a vector, they’re there. Complaining that such features aren’t built into the language is pointless, which was what @Oberon_Command was responding to in the first place. 

I wouldn't really say that bitset is a clean/perfect solution to this problem, because it's basically the same as using an int to represent an enumeration or collection of flags. You lose the benefits that come with the actual enum language feature.
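
To illustrate (a hypothetical sketch - the enum and bit positions here are made up): std::bitset happily takes any index, so nothing ties the bits back to your enum:


#include <bitset>

enum HeapFlagBit { SharedBit = 0, DenyBuffersBit = 1, AllowDisplayBit = 2 }; // bit positions

int main()
{
    std::bitset<8> flags;
    flags.set(SharedBit);                   // fine
    flags.set(5);                           // also compiles - any index is accepted
    flags.set(SharedBit + AllowDisplayBit); // enums silently decay to int, so this compiles too
}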

2 hours ago, Hodgman said:

I wouldn't really say that bitset is a clean/perfect solution to this problem, because it's basically the same as using an int to represent an enumeration or collection of flags. You lose the benefits that come with the actual enum language feature.

That seems simple to fix - write a wrapper template that only accepts flags of the specified type. The compiler will optimize the wrapper away and you'll get the type safety of the enum. I've seen exactly that solution used before, as a matter of fact. It works pretty well - you get the semantic clarity of bitset with the benefits of the enum class.
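
Something like this, as a minimal sketch (all names here are hypothetical, assuming C++11 and <type_traits>; this is the general idea, not the exact implementation I saw):


#include <type_traits>

// Flags<E> stores its bits in E's underlying integer type, but only
// accepts values of E, so mixing unrelated flag types is a compile error.
template <class E>
class Flags
{
    static_assert(std::is_enum<E>::value, "Flags<E> requires an enum type");
    typedef typename std::underlying_type<E>::type U;

    U bits;
    constexpr explicit Flags(U raw) : bits(raw) {}

public:
    constexpr Flags() : bits(0) {}
    constexpr Flags(E e) : bits(static_cast<U>(e)) {}

    constexpr Flags operator|(Flags o) const { return Flags(U(bits | o.bits)); }
    constexpr Flags operator&(Flags o) const { return Flags(U(bits & o.bits)); }
    Flags& operator|=(Flags o) { bits = U(bits | o.bits); return *this; }
    Flags& operator&=(Flags o) { bits = U(bits & o.bits); return *this; }

    constexpr bool test(E e) const { return (bits & static_cast<U>(e)) != 0; }
    constexpr bool any()    const { return bits != 0; }
};

enum class HeapFlag : unsigned { Shared = 0x1, DenyBuffers = 0x4, AllowDisplay = 0x8 };

int main()
{
    Flags<HeapFlag> f = Flags<HeapFlag>(HeapFlag::Shared) | HeapFlag::AllowDisplay;
    f |= HeapFlag::DenyBuffers;

    // f |= 0x4; // error: no conversion from int - exactly the safety we wanted
    return f.test(HeapFlag::Shared) ? 0 : 1; // returns 0: the bit is set
}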

All this reminds me that naming things is hard.

Enumeration means something. For historical reasons programmers use enums to name the bit patterns, but a set of flags is generally not an enumeration. An enumeration lists all the things one by one; every name must be present in the enumeration. We don't enumerate FlagsAB, FlagsBC, FlagsABC, FlagsAD, FlagsABD, FlagsACD, FlagsABCD, and so on, but that is what a true enumeration of flag combinations would require.

The grouping was done with enumerated types because they were silently converted to integers. You can implement flags with type safety using wrappers, using template magic, using a class like a bitset, or similar. But when we do that, we should stop calling them enumerations. At that point they are bit flags, not an enumeration. If that's what you want, implement a flag class instead.

