I.e., compare this:
for (int i = 10; i--;)
To this:
for (int i = 10; i; i--)
I have no idea where you're getting that, as the second version never appeared in the thread. Furthermore, that's not even the same loop! Think about it: in the second version, i is 10 at the start, passes the non-zero check, and is still 10 as it enters the loop body, whereas in the first version the condition itself decrements i, so the body sees 9 on its first iteration.
I have no idea what you're on about.
Counting up wouldn't have fetched each iteration if you'd cached the volatile into a local. If you want to simulate register pressure, then use a lot of variables in the loop body rather than disabling the optimizer via volatile.
Clearly, your definition of "local" and mine don't align. To me, a local variable is one of automatic storage duration defined within a function or deeper scope. Volatile doesn't change that.
Moreover, it seems like you're arguing against my method rather than my result. I don't know why it isn't obvious that using two temporary variables (that the compiler can't optimize away to a constant) instead of one increases register pressure, or why I need to prove it with numerous examples.
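To put that in code, here's a hedged sketch (function and variable names are mine, not from the thread): the count-up form keeps both the counter and the limit live across every iteration, while the count-down form keeps only the counter live, since the compare is against an implicit zero.

```c
/* Count-up: two values (i and limit) must stay live across the loop. */
unsigned sum_up(const unsigned *a, unsigned limit) {
    unsigned s = 0;
    for (unsigned i = 0; i != limit; i++)
        s += a[i];
    return s;
}

/* Count-down: only n stays live; most ISAs set flags on the decrement,
   so no separate compare operand is needed. */
unsigned sum_down(const unsigned *a, unsigned n) {
    unsigned s = 0;
    while (n--)
        s += a[n];
    return s;
}
```

Both compute the same sum; the difference only matters once the surrounding code is competing for registers.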
Basically if you pull that 'clever' shit you can consider yourself unemployed pretty quick.
Someday, people will stop assuming that I'm not currently employed as a software engineer and quit reciting the dictionary definitions of obvious terms at me.
You can't use a forced example to demonstrate the optimizer's actual behavior!
Then that nullifies all of your suggested modifications to my exercise, and we're at a stalemate.
rather than forcing its hand with code that gives it zero leeway.
Because the point was that if it isn't put into a register, it can result in larger code. All of your suggestions would either optimize it to a constant or cache it in a register, which would still be no faster than counting down; at best, it'd break even. My argument from the beginning has been about reducing register pressure and comparing against zero, while your suggestions let the compiler assume there's no register pressure and use all the registers at once.
Neither uses negative values, so signed vs. unsigned is neutral, but it would be interesting to see if it affects the codegen (many code bases use uint over int by default).
Post results, I'm curious.
IMO it's also more common to write !=limit and !=0 rather than 0...
In my and my co-worker's opinions, it isn't.
If you're testing whether the compiler can transform between up/down
I'm not.
I thought the original point was the compiler's optimizer would essentially create the same exact code for:
for (int i = 10; i--;) {}
and
for (int i = 0; i < 10; i++) {}
I posted three examples that disagreed with the consensus, and none of them was specifically refuted, so I chose the one most likely to make the biggest difference.
but that's all it is: edge cases where, yes, you are correct that a count-down version would theoretically generate smaller code.
I hardly consider any of my original three examples an edge case. I have a lot of loops that do a lot of permutations on every iteration, and the counter spills to RAM.
but I think if you get to the point where your profiling indicates that this one extra instruction is your bottleneck
No, I never said that. People said they would be the same, and I posted an example where they weren't, with proof. This, like many other arguments in this thread, is a complete and utter strawman.
Really, I should have known better than to come into a coding horror thread, and post a use case for the idiom that people are dumping on.