About Ryan_001

  • Rank
    Advanced Member

Personal Information

  • Role
    Technical Director

  1. Font rendering: resolution independent, free, fast, doesn't require a pre-built texture, and works even for large Unicode/multi-language fonts.
  2. That's a cool proposal. I knew they were talking about that sort of thing, but the implementation is certainly non-trivial; I'll be excited to see it added. That said, it still does not provide proof of your assertions. In fact, quite the opposite: even he acknowledges that there is more than one use for an enum. He is showing examples of how metaclasses can provide specialized versions of generalized constructs that already exist in C++. Read even further and he links to page 22, which, as luck would have it, has an entire article called "Using enum classes as bitfields". Now I would normally leave this here and assume you and the others will read the note and the article without bias. But since you've linked yet another article that does not actually prove what you claim it does (i.e. it feels like you've been negligent, if not intentionally misleading), I feel I must explain what the 'note', the SFINAE bit in the paper you linked, and the Overload article are referring to. Neither the note nor the article raises any issue with the safety, usability, or performance of using enums as flags; rather, both the proposal and the article admit that writing the appropriate operator overloads is easy, but that it does amount to some boilerplate repetition. The proposal suggests metaclasses as a solution (which makes sense in a proposal for metaclasses), the Overload article suggests SFINAE and a policy class, and I suggested a macro. All 3 solutions work; pick the one you like best. It doesn't matter to the user of the enum, only the author. I personally can't wait for metaclasses and would prefer that method over macros. Currently, enums already do all that you ask for: "The generated values are constexpr. It respects type safety, prevents mixing with other types including integers, and all the rest.".
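For reference, the SFINAE-and-policy-class approach can be sketched roughly like this; the trait name `enable_bitmask_operators` and the `Permissions` enum are illustrative, not copied from the Overload article:

```cpp
#include <cstdint>
#include <type_traits>
#include <cassert>

// Opt-in trait: specialize to true_type for each enum that should behave
// as a bitfield. (A sketch of the policy-class idea, not the article's code.)
template <typename T>
struct enable_bitmask_operators : std::false_type {};

// SFINAE: these operators participate in overload resolution only for
// enums that opted in, so unrelated enums stay operator-free.
template <typename T>
typename std::enable_if<enable_bitmask_operators<T>::value, T>::type
operator|(T a, T b) {
    using U = typename std::underlying_type<T>::type;
    return static_cast<T>(static_cast<U>(a) | static_cast<U>(b));
}

template <typename T>
typename std::enable_if<enable_bitmask_operators<T>::value, T>::type
operator&(T a, T b) {
    using U = typename std::underlying_type<T>::type;
    return static_cast<T>(static_cast<U>(a) & static_cast<U>(b));
}

// Example opt-in (illustrative enum):
enum class Permissions : std::uint8_t { none = 0, read = 1, write = 2, exec = 4 };
template <> struct enable_bitmask_operators<Permissions> : std::true_type {};
```

The user of `Permissions` just writes `Permissions::read | Permissions::write`; enums that never specialize the trait get no operators at all, which is the whole point of the policy class.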
The new metaclasses will be a great addition because they will remove the need for some boilerplate code, but they won't add any features that don't already exist. I don't understand why there is such a reluctance to at least try this out. I have shown repeatedly where the standard says it is safe and well defined. I have shown examples, and I (and you as well, inadvertently) have linked to examples in prominent C++ articles. It is common, often used, and accepted in practice (you'll find it in many large public code bases, including the C++ standard library and Boost). It satisfies ALL the constraints that have been listed here as desirable: safe, easy to use, high performance/low overhead. It's trivial to implement.

I am flabbergasted by the level of negativity and bullying in this thread. It seems that every time someone disagrees with the 'clique', regardless of the accuracy or legitimacy of the post, you guys circle the wagons and try to bully them. I backed every assertion I made with evidence from the standard, while repeatedly people made assertions that were completely untrue (according to the standard). If the situation had been reversed, I'd have been ridiculed, and eventually banned from the forums if I kept pressing the point, for what you guys wrote. I certainly wouldn't get 'upvoted' for posting things that are factually wrong (and before it's said: I don't care about the reputation, but I do care about the attitudes in these forums, or at least I did before this thread). Rather than a real and honest discussion between professionals, what we have here is nothing more than high-school-level pettiness. Posting articles that you know are inaccurate in an attempt to 'prove' yourself right? Quotes taken out of context. Misleading assertions. This was not an 'error' or an 'accident', and nothing you wrote was in 'good faith'.
C++ is a massive language, so there's no doubt people will make errors from time to time; heck, I even saw SiCrane make an error once (and that guy was a genius when it came to knowing the standard), and I certainly wouldn't hold an honest error against anyone. But these were not honest errors. They were not mistakes; they were intentional misuses. You knew enough to know it was wrong, and still posted it in an attempt to deceive... And to what end? To avoid saying 'hey, I never knew that, maybe I'll try that out'? As if admitting you don't know every little thing about C++ (hint: neither does Bjarne, and he'll be the first to say so) were somehow bad? Delete this post, lock it, censor it, delete my account; I don't care anymore. I will not be a bully. I will not engage in your silly clique. I will not stop attempting to learn and better myself, and I will not stop questioning or scrutinizing. You've made it clear time and time again that discussions are not wanted in these forums, that disagreeing with any of the moderators like Hodgman or Frob is tantamount to blasphemy, and that these forums are nothing more than an industry-insider circle-jerk; and I refuse to be part of this toxic environment.
  3. Your wording seems intentionally misleading... Not a single claim you make is supported in those documents. All 4 of those proposals state essentially the same problems: implicit conversion to an integer, the inability to specify (and hence unpredictable) underlying type, scope issues, and (which also leads us to present-day enum proposals) reflection. Not one of them even mentions or suggests removing flags or other capabilities from enums. "C enumerations constitute a curiously half-baked concept": this does not refer to flags in enums. I have sitting in front of me, right from my bookshelf, "The C++ Programming Language" by Bjarne Stroustrup, and on page 77 is enums. Not only does he show an example, "flag f3 = flag(z | e) // ok: flag(12) is of type flag and within the range of the flag", which uses bit operators on unscoped enumerations (at the time of that book there was no such thing as scoped enumerations); he also states: "bit-manipulation examples that require values outside the set of enumerations to be well-defined have a long history in C and C++.". That does not sound at all like he feels that flags are bad, and certainly not UB. There's not a single mention of avoiding flags, no talk of workarounds, nothing of what you are claiming. Rather, he is showing clear examples of it being used, showing how to use it properly, and stating the uses are well defined and valid within a C++ program. I also can't find a single use of that quote, "C enumerations constitute a curiously half-baked concept", in reference to flags online; every single one I find refers to implicit integer conversions and unscoped enums (which were known to be a problem, which is why scoped ones were added). If you want to post an actual link to where Bjarne (or any of the names you've dropped) criticizes the use of flags in scoped enums in C++, I would love to read it (and that's not sarcasm; I love reading their thoughts on language development).
Scoped enums were never designed to provide backwards compatibility; unscoped enums were. Scoped enums were a completely new construct, with new syntax, designed to move enums closer to what the committee felt they should be, and farther away from C. In fact, the proposals you posted as proof directly disprove your assertion. Emphasis is mine. Scoped enums were brand new, and they included static_cast and known/fixed sizes by design. None of that was accidental, unintentional, or there to satisfy backwards-compatibility concerns or existing codebases. The language has never been 'moving away from flags', as you claim. Enums didn't even exist in the original C; they weren't added until ANSI ratified it in 1989 (hence why it's called C89), so there's no way people could consider enum flags a problem back in the early 80's or the 70's, since prior to then they didn't exist in the language and were just a bunch of #defines. They knew that using #define or plain integer constants was problematic, and that unscoped enums were problematic (for the reasons listed above in the 4 proposals, none of which include flags). Your claim that Stroustrup, Miller, or Sutter "all consider that pattern as a defect" is completely unsubstantiated by the documents you have linked. But perhaps you have more knowledge on this than I do: post some links that actually back up your claims, where any of the 3 (or Stepanov, or Alexandrescu; I'd love to know their thoughts as well) state that scoped enumerations are not designed to be used in that manner.
  4. It's not type safe? You can't use combinations (where a single enumerator maps to multiple flags)? I don't have a problem with people using other methods; I just don't like how many have claimed things about enums that are not true (according to the standard). bitset is an interesting solution; how would you use it in a normal function? Almost every usage of flags looks like:

    Func(SomeEnum::flag_1 | SomeEnum::flag_2 | SomeEnum::flag_3);

but with bitset it would seem tedious to have to write:

    SomeBitSetTypedef bs;
    bs.add(SomeEnum::flag_1);
    bs.add(SomeEnum::flag_2);
    bs.add(SomeEnum::flag_3);
    Func(bs);

Or is there a clever way around this? TBH I've never used bitset...
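One possible answer, as a rough sketch: wrap std::bitset in a small type with a variadic constructor so call sites stay one-liners. Everything here (`FlagSet`, the `count` enumerator convention) is illustrative, and the fold expression requires C++17:

```cpp
#include <bitset>
#include <cstddef>
#include <cassert>

// Illustrative enum; 'count' is a common convention for sizing the bitset.
enum class SomeEnum : std::size_t { flag_1, flag_2, flag_3, count };

// Hypothetical thin wrapper (FlagSet is my name, not std::); the variadic
// constructor sets one bit per argument via a C++17 fold expression.
class FlagSet {
public:
    FlagSet() = default;
    template <typename... Es>
    FlagSet(Es... es) { (bits_.set(static_cast<std::size_t>(es)), ...); }
    bool test(SomeEnum e) const { return bits_.test(static_cast<std::size_t>(e)); }
private:
    std::bitset<static_cast<std::size_t>(SomeEnum::count)> bits_;
};

// A function taking FlagSet can then be called in one line:
//   Func(FlagSet{SomeEnum::flag_1, SomeEnum::flag_3});
```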
  5. If the type matters (you're comparing different enums) then you specify it; if it does not, then don't. This isn't difficult. We're not talking SFINAE (something that is used all the time for things it wasn't meant for) or dependent template name lookup. By comparison, 10.2 is a very easy read, with only a small amount of room for ambiguity in 10.2 (8) around the range of unscoped enums. But for scoped enums, they tied up every loose end and gave you every tool to safely implement every sort of named constant you want, whether that be plain options, flags, counters, or anything else you can imagine. They can be as simple or as complex as you author them to be. Your claim that 'it's not what it's meant for' does not match what we see in the standard. static_cast is not a 'bad' or dangerous cast; it is designed to be a cast that is safe and does not cause UB. It's not a reinterpret_cast, and static_cast and its provisions for enums were not thrown in 'by accident'. If we had to use reinterpret_cast to cast an enum, then I think you would have a valid point... but static_cast? Here's a link to (as far as I can tell) the original proposal. Page 12, point 10: "An expression of arithmetic or enumeration type can be converted to an enumeration type explicitly. The value is unchanged if it is in the range of enumeration values of the enumeration type; otherwise the resulting enumeration value is unspecified.". Right from the beginning, it was intended that scoped enums support all values the underlying type supports. They wouldn't accidentally put that in. If you read through the whole proposal you'll see they spend a fair amount of time ensuring enums have a clear and defined binary representation, allowing them to be serialized, stored, moved between systems, and modified. I honestly don't understand this trepidation about using enums for flags.

C++ has so many constructs we use on a regular basis that can cause UB if used incorrectly, with no help from the spec or the compiler; you just have to learn to 'not do that'. Scoped enums are one of the safest constructs in C++, and near impossible to break unintentionally. But enough about theoretical problems: I posted two examples of scoped enums. Show me how you would break them. Without using static_cast, how could you cause UB, or how would you create a value that would cause those functions to break?
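To make the challenge concrete, here is a minimal flag enum of the kind being discussed (the names are mine, not from the earlier posts). With a fixed underlying type and only `|` and `&` overloaded, every reachable value is a value of uint32_t, and therefore of the enum, so no sequence of calls produces UB:

```cpp
#include <cstdint>
#include <cassert>

// Illustrative flag enum with a fixed underlying type.
enum class Flags : std::uint32_t { none = 0, a = 1, b = 2, c = 4 };

// Only | and & are defined; both map values of uint32_t to values of
// uint32_t, and per 10.2 (8) every such value is a value of Flags.
constexpr Flags operator|(Flags x, Flags y) {
    return static_cast<Flags>(static_cast<std::uint32_t>(x) | static_cast<std::uint32_t>(y));
}
constexpr Flags operator&(Flags x, Flags y) {
    return static_cast<Flags>(static_cast<std::uint32_t>(x) & static_cast<std::uint32_t>(y));
}

// No arithmetic, no shifts, no complement: there is no expression a caller
// can form from these operators that leaves the underlying type's range.
constexpr bool has(Flags f, Flags bit) { return (f & bit) == bit; }
```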
  6. I agree that unscoped enums can have a limited range that (as I read it, and you as well) does not necessarily match their underlying type, so bitwise complement can produce UB (along with addition, subtraction, negation, etc.). But 'and' and 'or' cannot, and scoped enums do not have those little holes. Also, the examples I presented used scoped enumerations. Personally I like to manually specify the underlying type if I plan to do any bit manipulation, but it's not UB either way. Scoped enums (or more precisely, enums with a fixed underlying type) can hold any value their underlying type supports. They are as safe to use as any integral type in the language in terms of UB. They can safely be used for flags, counters, or anything else their underlying integral type can be used for. Scoped enumerations are safer than #defines, constants, bitsets, bitfields, etc. by design. You are ensured that scoped enums are separate types, so you can't accidentally pass values from one enumeration into another. By default no operators are defined, so only those you manually add are available. This is all handled by the author of the enum/library/component/whatever, and not the user. It's near impossible to break without intentionally casting incorrect values in, and even if you were to violate the contract of a function call, you still don't trigger UB. I'm a bit confused: this is the second time, Hodgman, that you've presented an example of UB that had nothing to do with enumerations. If you wanted to debate aliasing or pointer-casting rules and UB in C++, would it not make sense to start a new thread about it? What do these two examples have to do with enumerations?
  7. One thing to consider: the choice of type for unfixed enums is restricted to integral types, which are defined in 6.9.1 (7). While it may be possible to define an integral type (on some bizarro-world system) that isn't a multiple of 8 bits, the standard does require it to be binary: "The representations of integral types shall define values by use of a pure binary numeration system". That means if you define a flag, then that flag or'd with all the other flags will yield a defined value. In a pure binary numeration system, all bit operations (and, or, not, xor, etc.) are valid, well defined, and yield predictable results (no possibility of overflow or other undefined behavior). So even with an unfixed enumeration, you can still use flags without worry; you just have no knowledge of the underlying type. You do know it will be an integral type large enough for use (and if one doesn't exist on the system, you get a compilation error, not undefined behavior). And if you need to know the size, that's why you can specify it.
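As a small illustration of that point (the `Mode` enum is mine): for an unfixed enum, the enumeration's range covers every combination of the listed bits, so OR-ing flags and converting back stays well defined:

```cpp
#include <cassert>

// Illustrative unscoped enum with no fixed underlying type. Per 10.2 (8)
// its range still covers every combination of the listed bits (0..7 here),
// so OR-ing flags and converting back is well defined.
enum Mode { mode_none = 0, mode_read = 1, mode_write = 2, mode_append = 4 };

// a and b promote to int, the OR happens on int, and the result is
// within Mode's range, so the static_cast back is well defined.
Mode combine(Mode a, Mode b) { return static_cast<Mode>(a | b); }
```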
  8. I don't understand what you mean by 'a single enum will compile, but a second one will break it!'. I don't have time to download and debug your code for you (no offense; it's just a lot to ask). We've all been there; the best thing is to take a break, grab a beer, take a walk, and let it clear your head. Then try picking it apart piece by piece. My gut feeling (after just taking a quick look) is that you may have a preprocessor definition going awry. But again, cursory glance. I threw together this:

    #define CPP_ENUM_OPERATORS(T) \
        inline T operator |(T a, T b) { return static_cast<T>(static_cast<int>(a) | static_cast<int>(b)); } \
        inline T operator &(T a, T b) { return static_cast<T>(static_cast<int>(a) & static_cast<int>(b)); } \
        inline T& operator |=(T& a, T b) { return a = a | b; }

    typedef enum fpl_KeyboardModifierType {
        fpl_KeyboardModifierType_None = 0,
        fpl_KeyboardModifierType_Alt = 1 << 0,
        fpl_KeyboardModifierType_Ctrl = 1 << 1,
        fpl_KeyboardModifierType_Shift = 1 << 2,
        fpl_KeyboardModifierType_Super = 1 << 3,
    } fpl_KeyboardModifierType;
    CPP_ENUM_OPERATORS(fpl_KeyboardModifierType)

    typedef enum fpl_InitFlag {
        fpl_InitFlag_None = 0,
        fpl_InitFlag_Window = 1 << 0,
        fpl_InitFlag_VideoOpenGL = 1 << 1,
    } fpl_InitFlag;
    CPP_ENUM_OPERATORS(fpl_InitFlag)

    enum class SomeMoreFlags {
        flag_0,
        flag_1,
        flag_2,
    };
    CPP_ENUM_OPERATORS(SomeMoreFlags)

And it compiles and works fine as far as I can tell (VS 2017). With all due respect, what he is attempting to do is valid and well defined according to the latest spec, n4659. I explained this in detail in an earlier post. And while I do agree that int32_t is by far superior 99% of the time, in this particular circumstance int is perfectly fine.
  9. Finalstate, look at my first post; you can use that. I showed you how to do it. Also, your attempt where you claimed you had compilation errors works fine on my end. Here's the full code:

    #include <iostream>
    #include <cstdio>

    typedef enum fpl_KeyboardModifierType {
        fpl_KeyboardModifierType_None = 0,
        fpl_KeyboardModifierType_Alt = 1 << 0,
        fpl_KeyboardModifierType_Ctrl = 1 << 1,
        fpl_KeyboardModifierType_Shift = 1 << 2,
        fpl_KeyboardModifierType_Super = 1 << 3,
    } fpl_KeyboardModifierType;

    #ifdef __cplusplus
    inline fpl_KeyboardModifierType operator |(fpl_KeyboardModifierType a, fpl_KeyboardModifierType b) {
        return static_cast<fpl_KeyboardModifierType>(static_cast<int>(a) | static_cast<int>(b));
    }
    inline fpl_KeyboardModifierType operator &(fpl_KeyboardModifierType a, fpl_KeyboardModifierType b) {
        return static_cast<fpl_KeyboardModifierType>(static_cast<int>(a) & static_cast<int>(b));
    }
    inline fpl_KeyboardModifierType& operator |=(fpl_KeyboardModifierType& a, fpl_KeyboardModifierType b) {
        return a = a | b;
    }
    #endif

    void Func(fpl_KeyboardModifierType f) {
        if (f & fpl_KeyboardModifierType_Alt)   std::cout << "fpl_KeyboardModifierType_Alt" << std::endl;
        if (f & fpl_KeyboardModifierType_Ctrl)  std::cout << "fpl_KeyboardModifierType_Ctrl" << std::endl;
        if (f & fpl_KeyboardModifierType_Shift) std::cout << "fpl_KeyboardModifierType_Shift" << std::endl;
        if (f & fpl_KeyboardModifierType_Super) std::cout << "fpl_KeyboardModifierType_Super" << std::endl;
    }

    int main() {
        Func(fpl_KeyboardModifierType_Shift | fpl_KeyboardModifierType_Super);

        std::cout << "done" << std::endl;
        getchar();
        return 0;
    }

Just copied/pasted and threw it together; the only fix needed was changing the non-standard void main to int main. What you were doing worked fine. It's best to test these things alone in a smaller test unit first, and then bring them into the larger project.
  10. I'm not trying to be a jerk here, but I don't see the standard supporting many of your claims about enumerations. The relevant parts are in n4659 section 10.2 (also refer to sections 8.2.9 (9) and (10)). There is no contractual obligation to only store an enumerated value in an enumeration (10.2 (8): "For an enumeration whose underlying type is fixed, the values of the enumeration are the values of the underlying type"). You can store any value in an enum, even one that is not explicitly enumerated, provided the underlying type supports it. It even says in 10.2 (1): "An enumeration is a distinct type (6.9.2) with named constants.". It's no different from a bunch of static const ints, except that it obeys the type system. The size of the underlying type is either explicitly specified, or determined according to 10.2 (5), (7), and (8). As long as you stay within the range of the underlying type, your program's behavior is not undefined. The underlying bit pattern does not need a corresponding enumerated constant. The compiler does not 'optimize' an enumeration any differently than any other type: a switch on an enumeration is the same as a switch on the underlying type. It can't treat enums as special constructs, because static_cast is allowed (see 8.2.9 (9) and (10)). You can static_cast a value back to an enumeration (8.2.9 (10)) provided that "the original value is within the range of the enumeration values"; and as per 10.2 (8), "the values of the enumeration are the values of the underlying type". It is clear from the standard that enumerations are allowed to be treated as flags, and that the underlying type must be (and is) well defined; that storing bit patterns/values that do not have corresponding enumerators is well defined; and that operating on values using bit operations is well defined. If you personally (or within your company) wish to use enums only as lists of mutually exclusive options, then so be it.

But enums are used for all sorts of integer constants (flags, options, counters, and a half dozen other things), and are well defined in the spec to be capable of doing so.
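A small sketch of the 8.2.9 (10) / 10.2 (8) point above (the `Opcode` enum is illustrative): with a fixed underlying type, any uint8_t value, enumerated or not, round-trips through the enum with no UB:

```cpp
#include <cstdint>
#include <cassert>

// Illustrative enum; only three values are enumerated, but with a fixed
// uint8_t underlying type, all 256 values are values of the enumeration
// (10.2 (8)), so both casts below are well defined (8.2.9 (9) and (10)).
enum class Opcode : std::uint8_t { nop = 0, load = 1, store = 2 };

std::uint8_t serialize(Opcode op) { return static_cast<std::uint8_t>(op); }
Opcode deserialize(std::uint8_t raw) { return static_cast<Opcode>(raw); }
```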
  11. I think using enumerations as flags is fine. The whole argument that 'it's bad because they're not supposed to be used that way' I think is kind of silly. Sure, you don't want a situation where you accidentally create an undefined bit pattern, but whether that bit pattern is in an enum or just a uint32_t, you still have the same error, in the same piece of code. The nice thing about enums is you can have nicer names and avoid stuff like VK_STRUCTURE_TYPE_IMPORT_MEMORY_WIN32_HANDLE_INFO_NV. Here is what I use:

    // ----------------------------------------------------------------------
    // enumeration expansion
    // - ENUM_CLASS_OPERATORS defines standard bit operators for enum class types
    // - ENUM_CLASS_AND_OR defines only 'and' and 'or'
    // ----------------------------------------------------------------------
    #define ENUM_CLASS_OPERATORS(T) \
        inline constexpr T operator~(T a) noexcept { return static_cast<T>(~static_cast<uint64_t>(a)); } \
        inline constexpr T operator&(T a, T b) noexcept { return static_cast<T>(static_cast<uint64_t>(a) & static_cast<uint64_t>(b)); } \
        inline constexpr T operator|(T a, T b) noexcept { return static_cast<T>(static_cast<uint64_t>(a) | static_cast<uint64_t>(b)); } \
        inline constexpr T operator^(T a, T b) noexcept { return static_cast<T>(static_cast<uint64_t>(a) ^ static_cast<uint64_t>(b)); } \
        inline T& operator&=(T& a, T b) noexcept { return a = static_cast<T>(static_cast<uint64_t>(a) & static_cast<uint64_t>(b)); } \
        inline T& operator|=(T& a, T b) noexcept { return a = static_cast<T>(static_cast<uint64_t>(a) | static_cast<uint64_t>(b)); } \
        inline T& operator^=(T& a, T b) noexcept { return a = static_cast<T>(static_cast<uint64_t>(a) ^ static_cast<uint64_t>(b)); }

    #define ENUM_CLASS_AND_OR(T) \
        inline constexpr T operator&(T a, T b) noexcept { return static_cast<T>(static_cast<uint64_t>(a) & static_cast<uint64_t>(b)); } \
        inline constexpr T operator|(T a, T b) noexcept { return static_cast<T>(static_cast<uint64_t>(a) | static_cast<uint64_t>(b)); } \
        inline T& operator&=(T& a, T b) noexcept { return a = static_cast<T>(static_cast<uint64_t>(a) & static_cast<uint64_t>(b)); } \
        inline T& operator|=(T& a, T b) noexcept { return a = static_cast<T>(static_cast<uint64_t>(a) | static_cast<uint64_t>(b)); }

The thing to consider is, even with bit operators, it's actually quite hard to come up with a bit pattern that's undefined. Most code with flags looks something like:

    enum class EEnumOptions { none, option_1, option_2, option_3, };

    enum class EEnumFlags {
        none   = 0,
        flag_a = 1,
        flag_b = 2,
        flag_c = 4,
    };
    ENUM_CLASS_AND_OR(EEnumFlags)   // create 'and' and 'or' bit operators for EEnumFlags

    // ....

    void Func(EEnumOptions e, EEnumFlags f) {
        // handle options
        switch (e) {
        case EEnumOptions::option_1: break;
        case EEnumOptions::option_2: break;
        case EEnumOptions::option_3: break;
        default: break;
        }

        // handle flags
        if ((f & EEnumFlags::flag_a) == EEnumFlags::flag_a) {}   // flag_a is set
        if ((f & EEnumFlags::flag_b) == EEnumFlags::flag_b) {}   // flag_b is set
        if ((f & EEnumFlags::flag_c) != EEnumFlags::flag_c) {}   // flag_c is not set
    }

Even if you were to make a silly bit pattern, things won't 'blow up'; any bit pattern is still well defined. Also, if you restrict yourself to 'and' and 'or' (i.e. don't overload 'not' and 'xor'), then it's near impossible to create undefined bit patterns (short of intentionally static_cast'ing them in). It's still safer than plain integer constants or #defines, and for the most part self-documenting. I don't think anyone would have any difficulty using that function or understanding what is expected, and passing an undefined bit pattern would have to be intentional. Maybe it's my own personal preference, but this seems clean, easy to understand, and hard to break; and isn't that what we want?
  12. I'm not sure exactly what you mean. The flags are pretty much taken verbatim from Table 4 of the spec (scroll down a screen or two). I haven't played around with indirect stuff yet. I'm assuming you write the commands to a buffer (either through memory mapping/a staging buffer/a copy, or through a compute shader or similar), then use that buffer as the source for the indirect command, correct? If I was transferring from the host, I'd use host_write or transfer_write as my source flags (depending on whether or not I used a staging buffer), and then indirect_read as my dest flags. If I were computing the buffer on the fly, would you not use shader_compute_write as src and indirect_read as dest?
  13. Interesting. I don't know if you need VK_ACCESS_MEMORY_READ_BIT and VK_ACCESS_MEMORY_WRITE_BIT there. I read those as meaning memory reads/writes by things outside the normal Vulkan scope, like the presentation/windowing system. The demos/examples I looked at also never included those bits. I agree with you completely that the spec leaves a lot of things ambiguously defined. What surprised me a bit was that image layout transitions are considered both a read and a write, so you have to include access/stage masks for the hidden read/write that occurs during transitions. This thread has helped clarify a lot of these things. I wrote my own pipeline barrier wrapper, which I found made a lot more sense (apart from not really understanding what VK_ACCESS_MEMORY_READ_BIT and VK_ACCESS_MEMORY_WRITE_BIT mean). The whole thing isn't important, but you might find the flag enumeration interesting:

    enum class MemoryDependencyFlags : uint64_t {
        none = 0,
        indirect_read                = (1ull << 0),  // VK_ACCESS_INDIRECT_COMMAND_READ_BIT + VK_PIPELINE_STAGE_DRAW_INDIRECT_BIT
        index_read                   = (1ull << 1),  // VK_ACCESS_INDEX_READ_BIT + VK_PIPELINE_STAGE_VERTEX_INPUT_BIT
        attribute_vertex_read        = (1ull << 2),  // VK_ACCESS_VERTEX_ATTRIBUTE_READ_BIT + VK_PIPELINE_STAGE_VERTEX_INPUT_BIT
        uniform_vertex_read          = (1ull << 3),  // VK_ACCESS_UNIFORM_READ_BIT + VK_PIPELINE_STAGE_VERTEX_SHADER_BIT
        uniform_tess_control_read    = (1ull << 4),  // VK_ACCESS_UNIFORM_READ_BIT + VK_PIPELINE_STAGE_TESSELLATION_CONTROL_SHADER_BIT
        uniform_tess_eval_read       = (1ull << 5),  // VK_ACCESS_UNIFORM_READ_BIT + VK_PIPELINE_STAGE_TESSELLATION_EVALUATION_SHADER_BIT
        uniform_geometry_read        = (1ull << 6),  // VK_ACCESS_UNIFORM_READ_BIT + VK_PIPELINE_STAGE_GEOMETRY_SHADER_BIT
        uniform_fragment_read        = (1ull << 7),  // VK_ACCESS_UNIFORM_READ_BIT + VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT
        uniform_compute_read         = (1ull << 8),  // VK_ACCESS_UNIFORM_READ_BIT + VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT
        shader_vertex_read           = (1ull << 9),  // VK_ACCESS_SHADER_READ_BIT + VK_PIPELINE_STAGE_VERTEX_SHADER_BIT
        shader_vertex_write          = (1ull << 10), // VK_ACCESS_SHADER_WRITE_BIT + VK_PIPELINE_STAGE_VERTEX_SHADER_BIT
        shader_tess_control_read     = (1ull << 11), // VK_ACCESS_SHADER_READ_BIT + VK_PIPELINE_STAGE_TESSELLATION_CONTROL_SHADER_BIT
        shader_tess_control_write    = (1ull << 12), // VK_ACCESS_SHADER_WRITE_BIT + VK_PIPELINE_STAGE_TESSELLATION_CONTROL_SHADER_BIT
        shader_tess_eval_read        = (1ull << 13), // VK_ACCESS_SHADER_READ_BIT + VK_PIPELINE_STAGE_TESSELLATION_EVALUATION_SHADER_BIT
        shader_tess_eval_write       = (1ull << 14), // VK_ACCESS_SHADER_WRITE_BIT + VK_PIPELINE_STAGE_TESSELLATION_EVALUATION_SHADER_BIT
        shader_geometry_read         = (1ull << 15), // VK_ACCESS_SHADER_READ_BIT + VK_PIPELINE_STAGE_GEOMETRY_SHADER_BIT
        shader_geometry_write        = (1ull << 16), // VK_ACCESS_SHADER_WRITE_BIT + VK_PIPELINE_STAGE_GEOMETRY_SHADER_BIT
        shader_fragment_read         = (1ull << 17), // VK_ACCESS_SHADER_READ_BIT + VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT
        shader_fragment_write        = (1ull << 18), // VK_ACCESS_SHADER_WRITE_BIT + VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT
        shader_compute_read          = (1ull << 19), // VK_ACCESS_SHADER_READ_BIT + VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT
        shader_compute_write         = (1ull << 20), // VK_ACCESS_SHADER_WRITE_BIT + VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT
        attachment_fragment_read     = (1ull << 21), // VK_ACCESS_INPUT_ATTACHMENT_READ_BIT + VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT
        attachment_color_read        = (1ull << 22), // VK_ACCESS_COLOR_ATTACHMENT_READ_BIT + VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT
        attachment_color_write       = (1ull << 23), // VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT + VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT
        attachment_depth_read_early  = (1ull << 24), // VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_READ_BIT + VK_PIPELINE_STAGE_EARLY_FRAGMENT_TESTS_BIT
        attachment_depth_read_late   = (1ull << 25), // VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_READ_BIT + VK_PIPELINE_STAGE_LATE_FRAGMENT_TESTS_BIT
        attachment_depth_write_early = (1ull << 26), // VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_WRITE_BIT + VK_PIPELINE_STAGE_EARLY_FRAGMENT_TESTS_BIT
        attachment_depth_write_late  = (1ull << 27), // VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_WRITE_BIT + VK_PIPELINE_STAGE_LATE_FRAGMENT_TESTS_BIT
        transfer_read                = (1ull << 28), // VK_ACCESS_TRANSFER_READ_BIT + VK_PIPELINE_STAGE_TRANSFER_BIT
        transfer_write               = (1ull << 29), // VK_ACCESS_TRANSFER_WRITE_BIT + VK_PIPELINE_STAGE_TRANSFER_BIT
        host_read                    = (1ull << 30), // VK_ACCESS_HOST_READ_BIT + VK_PIPELINE_STAGE_HOST_BIT
        host_write                   = (1ull << 31), // VK_ACCESS_HOST_WRITE_BIT + VK_PIPELINE_STAGE_HOST_BIT
        memory_read                  = (1ull << 32), // VK_ACCESS_MEMORY_READ_BIT
        memory_write                 = (1ull << 33), // VK_ACCESS_MEMORY_WRITE_BIT
    };

Only certain combinations of stage + access are allowed by the spec; by enumerating them it became far more clear which to pick. I can then directly convert these to the associated stage + access masks without any loss in expressiveness/performance (or at least there shouldn't be, if I understand things correctly).
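Since the point of the enumeration is that each flag expands to exactly one stage+access pair, the conversion can be sketched as a table scan over the set bits. This is a hedged illustration, trimmed to two entries: `DepFlags`, `ToMasks`, and the numeric mask values are stand-ins, not the real Vulkan constants (the real code would use VkPipelineStageFlagBits / VkAccessFlagBits):

```cpp
#include <cstdint>
#include <cassert>

// Trimmed stand-in for the post's MemoryDependencyFlags.
enum class DepFlags : std::uint64_t {
    none          = 0,
    indirect_read = 1ull << 0,  // DRAW_INDIRECT stage + INDIRECT_COMMAND_READ access
    index_read    = 1ull << 1,  // VERTEX_INPUT stage + INDEX_READ access
};

struct Masks { std::uint32_t stage = 0; std::uint32_t access = 0; };

// Scan the set bits and accumulate both masks from one table, so each
// dependency flag expands to exactly the stage+access pair it encodes.
Masks ToMasks(DepFlags f) {
    struct Entry { std::uint64_t bit; std::uint32_t stage; std::uint32_t access; };
    static const Entry table[] = {
        { 1ull << 0, 0x2, 0x1 },  // stand-ins for the DRAW_INDIRECT / INDIRECT_COMMAND_READ bits
        { 1ull << 1, 0x4, 0x2 },  // stand-ins for the VERTEX_INPUT / INDEX_READ bits
    };
    Masks m;
    const std::uint64_t v = static_cast<std::uint64_t>(f);
    for (const Entry& e : table) {
        if (v & e.bit) { m.stage |= e.stage; m.access |= e.access; }
    }
    return m;
}
```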
  14. It's good to know that theory and practice align, at least for this :) Nice work. I'm curious, what sort of barrier parameters are you using?
  15. Well, barriers in the spec are a little more fine-grained; you can pick the actual pipeline stages to halt on. For example, if you wrote to a buffer from the fragment shader and then read it from the vertex shader, you would insert a pipeline barrier that halts all subsequent vertex-shader (and later) stages from executing prior to the fragment shader completing. But I have the feeling you were talking about what hardware actually does? In which case you are probably right; I have no idea how fine-grained the hardware really is. The spec does support queue priority, sort of: as I read it, it doesn't allow one app to queue itself higher than another, and only affects queues created on a single VkDevice. Whether any hardware actually does this... you would know better than I, I imagine. As for secondary command buffers, I've seen that suggested. I don't disagree; it's just that I don't see it being faster than just recording a bunch of primary command buffers in most circumstances. The only 2 situations I could come up with were: 1) the small command buffers are all within the same render pass, in which case you would need secondary command buffers; and 2) you have way too many (thousands? millions?) small primary command buffers, which might cause performance issues on submit, so recording them as secondary and using another thread to bundle them into a single primary might make the submit faster.