... Duff's device ...
Duff's Device was an interesting optimization about 40 years ago, when very different conditions existed.
On today's hardware it introduces close to a worst-case pipeline stall. All major compilers detect the pattern and replace it with alternatives better suited to modern hardware.
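For readers who have never seen it, Duff's Device interleaves a switch with a do/while loop so the case labels jump into the middle of an unrolled copy. A minimal sketch, adapted here for a memory-to-memory copy (the original copied to a fixed output register):

```cpp
#include <cassert>
#include <cstddef>

// Duff's Device: an 8-way unrolled copy. The switch jumps into the
// middle of the do/while body to handle the count % 8 leftovers on
// the first pass, then the loop runs full 8-element iterations.
void duff_copy(int* to, const int* from, std::size_t count) {
    if (count == 0) return;
    std::size_t n = (count + 7) / 8;  // number of passes through the loop
    switch (count % 8) {
    case 0: do { *to++ = *from++;
    case 7:      *to++ = *from++;
    case 6:      *to++ = *from++;
    case 5:      *to++ = *from++;
    case 4:      *to++ = *from++;
    case 3:      *to++ = *from++;
    case 2:      *to++ = *from++;
    case 1:      *to++ = *from++;
            } while (--n > 0);
    }
}
```

It is still legal C and C++, and it still works; the point of this answer is that on modern hardware it no longer helps.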
All optimizations depend on details of the system. Last year's details are different from this year's details, and next year's details will be different again.
Many optimizations that were once fast now introduce problems:

- Duff's Device and the XOR swap are terrible on deeply pipelined processors.
- Fancy memory tricks hurt on new processors. Although memory is faster than it used to be, each new chip is relatively faster still, so memory access grows more and more costly as time goes on.
- Calculating trig functions like sin() and cos() directly is now faster than lookup tables thanks to that same relative speed shift, though the reverse was true on most processors about 15 years ago.
- Decrementing loops to continue when equal to zero hasn't been an optimization on x86 for two decades.
- Low memory addresses are no longer faster than higher ones, and haven't been for nearly three decades.
- Writing to shared objects in contiguous pools is absolutely terrible for cache coherency on multi-core processors, and hasn't been a viable speedup since the 1970s.
- Manually marking every little function for inline compilation is usually a terrible decision in modern C and C++. It is typically better to give the compiler information and let it decide what works best with the caches and memory performance of the target architecture. (That last one is so bad that most compilers now completely ignore the inline keyword as an optimization hint unless you force them.)

Etc., etc., etc.
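The XOR swap is a good concrete example of the pattern. Each statement depends on the result of the previous one, so a deeply pipelined processor cannot overlap them, and the trick has a correctness trap as well:

```cpp
#include <cassert>
#include <utility>

// The "clever" XOR swap from older texts. The three statements form a
// serial dependency chain that stalls deep pipelines, and the trick
// silently zeroes the value when a and b refer to the same object.
// A plain std::swap has neither problem and typically compiles to
// independent register moves.
void xor_swap(unsigned& a, unsigned& b) {
    a ^= b;  // a now holds a ^ b
    b ^= a;  // b = b ^ (a ^ b), i.e. the original a
    a ^= b;  // a = (a ^ b) ^ a, i.e. the original b
}
```

Compare `std::swap(a, b)`, which says what you mean and lets the optimizer pick the fastest instruction sequence for the target.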
Yet all these little tips and tricks are still found in books and occasionally taught online as though they were gospel truth for modern software performance, never mentioning that the conditions that made them true have changed, and now they slow processing down.
... what prevents you from (options)? Too much more conforming or not? If not, why? I am not bashing switch as-is; I am just saying that an impotent C++ feature recommendation based on performance blasphemy is ridiculous?
The performance difference in the general case is so small it doesn't really matter in the real world.
If a programmer is in a situation where the performance of this condition is critical --- and I honestly cannot fathom such a case --- then they would probably not be using C++ switch statements for the code.
The famous quote applies here: Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%. -- Donald Knuth
This is part of those enormous amounts of time.
Much has been written about this quote, but it withstands the test of time. Don't do stupid things that introduce inefficiencies, don't introduce pessimizations, but almost always this is something you shouldn't bother with. If you have additional knowledge that the compiler doesn't, such as knowing that a specific path is highly likely, then in that rare case do something that uses that knowledge. If you know you can avoid nearly all the work with a simple test, then do the simple test. Otherwise, don't worry about it.
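As one sketch of "using knowledge the compiler doesn't have": since C++20 the [[likely]] and [[unlikely]] attributes let you state that knowledge declaratively instead of contorting your code. The function below is invented for the example, and the compiler is free to ignore the hint:

```cpp
#include <cassert>

// Hypothetical hot-path parser. We happen to know the input is almost
// always a digit, so we say so; the compiler may use that to lay out
// the branches, or may ignore it entirely.
int parse_digit(char c) {
    if (c >= '0' && c <= '9') [[likely]]
        return c - '0';
    return -1;  // rare error path
}
```

This is exactly the "rare case" shape: a one-line annotation recording real knowledge, not a rewrite chasing imagined cycles.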
The best advice I've heard for optimization is:
Beginner: Don't optimize your code.
Advanced: Don't optimize your code yet.
Expert: Don't optimize your code until you have proven with instrumentation that it needs improvement, then verify and document the changes for those who come after.
The vast majority of the time it just doesn't matter. Your compiler is smart and can find good ways to optimize this. The compiler may use a jump table, a branching if/else tree, a series of conditionals, or even a binary search. Next year's compiler may have additional techniques available and may do something different. Profile-guided optimization may let your compiler recognize even more advanced patterns. ... But none of that typically matters to the programmer.
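As a sketch of why the choice belongs to the optimizer: a dense switch over consecutive values is the classic jump-table candidate, while a sparse one often becomes a comparison tree. The source code is identical either way:

```cpp
#include <cassert>

// A dense switch over 0..6. Most optimizers will consider lowering
// this to a jump table (one indexed branch); change the cases to
// sparse values like 10, 500, 90000 and an if/else tree or binary
// search becomes more attractive. None of that appears in the source.
const char* day_name(int d) {
    switch (d) {
    case 0: return "Sun";
    case 1: return "Mon";
    case 2: return "Tue";
    case 3: return "Wed";
    case 4: return "Thu";
    case 5: return "Fri";
    case 6: return "Sat";
    default: return "?";
    }
}
```

If you are curious, compare the generated assembly for the dense and sparse versions on a compiler explorer; the lowering changes while your code does not.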
I'm not saying write intentionally bad code; don't write bad code if you can help it. If you know something specific is a concern, then write the simple version with a comment: //TODO: This is probably slow, measure it with a profiler someday.
If a person has chosen to use C++ then they need to let the C++ compiler do its job. That job includes optimizing the code. Let your compiler do its job.
If (1) you are using C++ for your code and (2) you are concerned about the performance of a single switch statement, one of those two things is a flaw. Most likely you can find far bigger performance gains in other areas of the code by swapping out algorithms or fixing up data structures and access patterns. Alternatively, if performance is absolutely critical in this section and the implementation of a switch statement is paramount, then that section should be written in assembly language, where you can control it, rather than C++.
And since we're off track enough at this point, let's get back to the original with all these debates in place:
(1) I am currently learning C++ and I was just wondering if using the switch statement is a better and/or more efficient way than using multiple if statements?
(2) Are there advantages to using multiple if statements?
(3) If there is what are they?
(4) Are switch statements good in game development?
(1) Switch statements are a tool in the language. There are many tools available for selecting between behaviors; see the discussion above. The efficiency of a switch statement is not typically a performance concern in the real world.
(2) If a switch would have worked but the programmer chose a series of if statements, that decision would be about specific control. If you have specific knowledge of the situation that argues against it, a switch statement may not be the best solution. When the problem naturally fits a switch statement, use a switch statement. Note that a compiler might rearrange your if statements according to the optimization rules as allowed by C++, and might even implement them exactly the same as it would have implemented a switch statement. Again, this is not typically a performance concern in the real world.
(3) There are many other options: jump tables, function pointer tables, branching if/else trees, binary searches, conditional operations, virtual functions, hand-coded assembly tuned for specific processors, to name a few. Various options fit different problems more naturally than others. The compiler is programmed with all kinds of details about the target processor, and the optimizer can pick and choose between many different options when it encounters a switch statement. Exactly how it is implemented internally doesn't matter; that's an abstraction in C++. However, this situation is not typically a performance concern in the real world.
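To make one of those options concrete, here is a minimal function-pointer table indexed by a small integer opcode. All names are invented for the example; dispatch is an indexed load plus an indirect call, with no switch in sight:

```cpp
#include <array>
#include <cassert>
#include <cstddef>

// Hypothetical opcodes 0..2 mapped to handlers through a table.
int add(int a, int b) { return a + b; }
int sub(int a, int b) { return a - b; }
int mul(int a, int b) { return a * b; }

using BinOp = int (*)(int, int);
constexpr std::array<BinOp, 3> ops{{add, sub, mul}};

int dispatch(std::size_t opcode, int a, int b) {
    return ops.at(opcode)(a, b);  // bounds-checked table lookup, then call
}
```

An optimizer lowering a dense switch may produce something very similar on its own, which is the point: the table is one possible implementation of the same abstraction.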
(4) Yes, switch statements are used all the time in game development. As are branching if/else trees, jump tables, virtual functions, and the rest. You should generally use the tool that your problem suggests as a solution. In C++, if you've got a bunch of potential code paths and the decision of which branch to take is based on an integer value, then a switch statement is a natural fit. If instead you want to select behavior based on the type of an object, a virtual function is probably a better fit. If you are branching based on the value in a string, a series of if/else branches may be better. And if you have some other situation, a different solution may fit it more naturally.
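To sketch the type-based case from the last paragraph, here is the virtual-function fit. The Shape hierarchy is invented for the example; the selection a switch on a hand-maintained type tag would have done is performed by the virtual call:

```cpp
#include <cassert>
#include <memory>

// Behavior selected by the dynamic type of the object, not by an
// integer tag: each subclass carries its own area() implementation.
struct Shape {
    virtual ~Shape() = default;
    virtual double area() const = 0;
};

struct Square : Shape {
    double side;
    explicit Square(double s) : side(s) {}
    double area() const override { return side * side; }
};

struct Circle : Shape {
    double radius;
    explicit Circle(double r) : radius(r) {}
    double area() const override { return 3.141592653589793 * radius * radius; }
};
```

Adding a new shape means adding a class, not editing a switch in every function that branches on the tag; that maintainability difference, not raw speed, is usually the reason to prefer it.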