TheOddMan

Weirdest bug ever


Okay, I have this line in my code, surrounded by breakpoints: mSystem = inSystem; Before the line is executed, mSystem is 0 and inSystem is a valid pointer. After the line has executed, nothing has changed: mSystem is still 0. What on earth can cause this to happen? How can a line of code just be completely ignored like this?

Thanks, that's sorted it right out! The odd thing is it was doing all kinds of other crazy stuff, like assigning wrong values to pointers and literally skipping lines of code. Now it runs perfectly! Thanks again!

In general I don't often find the debugger to be inaccurate during release builds, although I have seen it happen on occasion for no discernible reason. I've found that [i]usually[/i] this kind of behaviour is caused by out-of-sync debug information or misconfigured debugging options.

Glad to see you've already got it fixed, but I just wanted to point that out in case you or anyone else runs into something similar.

In release mode, due to optimizations, there is not necessarily any kind of ordered correspondence between assembler opcodes and lines of source: you can't meaningfully say "this line is represented by machine code bytes 0..X, then the next line is represented by bytes X+1..Y, etc.". So it should be no surprise that what the debugger tells you is effectively garbage.

That's why the other option besides release mode is called *debug mode*. [smile]
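To make that concrete, here is a minimal sketch (the class and function names are invented for illustration; only mSystem and inSystem come from the original post) of the kind of assignment that can look "skipped" when you step through an optimized build:

// Hypothetical example, not the OP's actual code. With optimizations on, the
// compiler may keep mSystem in a register, delay the store to memory, or fold
// it into the inlined call below, so a debugger watching the member right
// after this line can still show the old value.
struct System;   // some engine subsystem; details irrelevant here

class Widget
{
public:
    Widget() : mSystem(0) {}

    void Attach(System* inSystem)
    {
        mSystem = inSystem;   // the line from the original post
        Initialise();         // the store may appear to happen "inside" this call
    }

private:
    void Initialise() {}
    System* mSystem;
};

The program still behaves correctly; only the debugger's line-by-line view is misleading.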

More often the problem is that you compiled in debug mode yesterday. Then made changes and compiled in release. Then made more changes. Then again. Then you tried to debug, using that debug info generated yesterday. It's no surprise it's out of sync.

When I do systems programming with a lot of SMC (self-modifying code), I build in release mode with all optimizations off. This keeps the executables small, without all that debugging stuff written into your program, and prevents the compiler from removing or displacing your code. Granted, your code will run a bit slower, and at that point it does pay to write in an optimized style that doesn't rely on the compiler to fix up crappy coding.

Optimizations are not 'fixing up crappy code'. You cannot beat the compiler. Period. It can do things you can't without writing it in assembly, and then it will still write better assembly than you.

Guest Anonymous Poster
Quote:
Original post by Deyja
Optimizations are not 'fixing up crappy code'. You cannot beat the compiler. Period. It can do things you can't without writing it in assembly, and then it will still write better assembly than you.


A good example of something it can do that is very hard to do manually is platform-specific optimization: you can make multiple optimized builds for different hardware architectures, such as IA32 and AMD64, without changing a single line of code. Even if you are an asm expert and know everything there is to know about both platforms, I'm pretty sure leaving it to the compiler would be the best option (at least it saves time).

Quote:
Original post by Conner McCloud
Never speak in absolutes. Ever.

This advice is always good. (Augh my brain!)


Anyway, it is true that it's not good to think of an optimizing compiler as "fixing crappy code". Rather, think of it as "making good code fast". If you tried to do directly in your code even a small portion of the things a good optimizing compiler can do, your code would wind up difficult to read and generally a pain to maintain. An expert could write faster machine code, but this is very seldom necessary, and as human time is more valuable than machine time, readable, maintainable code is usually preferable.
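As a small invented example of the kind of thing meant here (neither function is from this thread): a compiler will happily strength-reduce and partially unroll the clean version below, and hand-writing the transformed version buys you nothing while being far harder to read and maintain.

// Clean version: what you write and maintain.
unsigned SumTimesThree(const unsigned* data, int count)
{
    unsigned total = 0;
    for (int i = 0; i < count; ++i)
        total += data[i] * 3;
    return total;
}

// Roughly what an optimizer might produce, written out by hand only to show
// why you would not want this in your source: the multiply is replaced by a
// shift-and-add and the loop is partially unrolled.
unsigned SumTimesThreeByHand(const unsigned* data, int count)
{
    unsigned total = 0;
    int i = 0;
    for (; i + 4 <= count; i += 4)
    {
        total += (data[i]     << 1) + data[i];
        total += (data[i + 1] << 1) + data[i + 1];
        total += (data[i + 2] << 1) + data[i + 2];
        total += (data[i + 3] << 1) + data[i + 3];
    }
    for (; i < count; ++i)
        total += (data[i] << 1) + data[i];
    return total;
}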

Quote:
Original post by Conner McCloud
Quote:
Original post by Deyja
You cannot beat the compiler. Period.

Never speak in absolutes. Ever.

It is possible to beat the compiler, even without resorting to ASM. One just shouldn't seek to do so from the start.


Sure, if you're good. ordered_disorder's Luddite stance towards optimization technology does not lend itself to that theory (that he's good), however.

And I am nearly willing to state that you cannot beat compiler assisted optimization within the following constraints:

1) For a meaningfully sized program
2) Without a shitload of time on your hands
3) Without wasting a shitload of time in repetitive micro-optimizations that basically just match what the compiler did with no noticeable performance gains over the compiler's version.
4) While remaining within budget, in lieu of #3
5) In a manner that will let you keep your job, in lieu of #4
6) In a manner that will let you accomplish much that is worthwhile, in lieu of #5, in which you are fired and are forced to go lone wolf.
7) While using a modern C++ compiler. Note: I don't consider VS6 a C++ compiler at all, much less a "modern" one.

If you can pull such a thing off in the above scenario, well, congratulations, you have job security - because it's likely nobody will be able to follow your code, and thus maintain or debug it, but (maybe) you.

Quote:
Original post by Conner McCloud
Never speak in absolutes. Ever.

It is possible to beat the compiler, even without resorting to ASM. One just shouldn't seek to do so from the start.

CM


If you can beat the compiler on the AMD64 platform you must be a freaking genius. The compiler will always outsmart you by using all the neat registers and CPU candies out there.

Guest Anonymous Poster
Compilers have been getting steadily better over the years; some could even be described as "good" now.

You can beat them; I have, many times. It just depends on the compiler.


I did an experiment when I was working for Panasonic: I took a block of code that updated the LCD and compiled it to asm. This was on an ARM processor.

The basic structure of the code was:

loop over horizontal lines
    loop over vertical lines
        get byte from memory
        write byte to hardware port
    endloop
endloop

The compiler unrolled the inner loop in the code very nicely but didn't spot that one 24-bit address was used a lot within the loop, so this address was re-formed every time it was needed.

By simply pre-forming this 24-bit address outside the loop and storing it in a register, I was able to speed up the code and shrink it. Strangely, when I tried forming the address outside the loop in the original code and specifying a register type, the compiler didn't unroll the inner loop.
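For illustration only, the before/after of that transformation looks roughly like this (the names, the types, and the exact shape of the address computation are guesses, not the original firmware):

#include <cstdint>

// Before: the 24-bit source address is re-formed on every iteration.
void BlitBefore(volatile std::uint8_t* port, const std::uint8_t* frame,
                int width, int height, std::uint32_t base)
{
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
        {
            const std::uint8_t* src = frame + (base & 0x00FFFFFF) + y * width + x;
            *port = *src;
        }
}

// After: the invariant part is formed once and kept in a register.
void BlitAfter(volatile std::uint8_t* port, const std::uint8_t* frame,
               int width, int height, std::uint32_t base)
{
    const std::uint8_t* row = frame + (base & 0x00FFFFFF);
    for (int y = 0; y < height; ++y)
    {
        for (int x = 0; x < width; ++x)
            *port = row[x];
        row += width;
    }
}

In this simplified form a compiler would be free to hoist (base & 0x00FFFFFF) itself, since it has no side effects; the real code presumably gave the compiler less to work with.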

In this case it was a valid thing to do: the code was called often, we had limited code space available, and we were pushing the processor very hard.

Most versions of Visual C++ pad out subroutines to 16-byte boundaries, so you have a lot of gaps in your code. Handy for hackers.

The question is not whether a good coder can beat the compiler; the answer is yes.

The question should be: when do you need to?

If you are writing an app for Windows, where you have gigs of storage available, a few bytes at the end of a subroutine are completely meaningless. So why worry?

However, if you are writing a routine that interfaces directly with hardware, you may need to swap to asm to get the performance you require.

Horses for courses.


Of course it's possible to beat compilers. Saying otherwise means that you assume all compilers are well-written. ;)

The main thing is, though, that it's fairly pointless to bother about 99% of the time. Many modern compilers (certainly not all of them) produce damn-near optimal code, and hammering away at it to squeeze out that extra 0.01% speed boost is a waste of time. You'd be far better off making sure the function was called less often rather than making it faster. Optimisation should start from the top down, not the other way around.

Quote:
The compiler unrolled the inner loop in the code very nicely but didn't spot that one 24 bit address was used a lot within the loop. So this address was reformed every time it was needed.


Changing the behavior of the code makes this an invalid comparison. The compiler isn't allowed to change the behavior of the code, and thus without sophisticated side effect analysis, hoisting variables out of a loop isn't something the compiler can do.

If you had written the original version in ASM, you would have at best matched the compiler. If you had written the second version in ASM, yes it would have beaten the compiler's first version. But it's using a different algorithm, so the comparison is totally invalid.

Compilers don't do macro optimizations. You can beat them there. But they are much better at micro optimizations than you are. You aren't even going to match them in that domain without some considerable time to waste.

ordered_disorder's advice remains foolish. You can't debug as effectively in release mode anyway; he should use debug mode for what it's for - debugging - and let release mode optimize all it wants.

Guest Anonymous Poster
Quote:
Original post by Deyja
Compilers don't do macro optimizations. You can beat them there. But they are much better at micro optimizations than you are. You aren't even going to match them in that domain without some considerable time to waste.
Taking a function call out of a loop, knowing it doesn't change any state, is an easy micro optimization that no C++ compiler could ever do in general. It requires knowing whether a function may change the value of some variable, and the problem can easily be transformed into the halting problem. Current compilers are unlikely to do this optimization even in simple cases, as it may require inspecting some lib file's compiled code to know what it does. Global optimizations were introduced in VS2003, maybe, but they can't do much yet (I'm not sure if they even go down to the asm-inspection level at all).
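A tiny illustration of the situation being described (the function and variable names here are invented, not from the thread):

// GetScale() lives in another translation unit or a .lib; the compiler sees
// only this declaration, so it must assume the call could read or write other
// state and therefore re-evaluates it on every iteration.
float GetScale();

void ScaleAll(float* values, int count)
{
    for (int i = 0; i < count; ++i)
        values[i] *= GetScale();        // called count times

    // The programmer, who knows GetScale() has no side effects, can hoist it
    // by hand:
    //
    //     const float s = GetScale();
    //     for (int i = 0; i < count; ++i)
    //         values[i] *= s;
    //
    // The compiler cannot make that change on its own without proof that the
    // two versions behave identically.
}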

Quote:
Original post by Deyja
ordered_disorder's advice remains foolish. You can't debug as effectively in release mode anyway; he should use debug mode for what it's for - debugging - and let release mode optimize all it wants.


What are you talking about, Deyja? I was giving some good trivia on what to do if you DON'T want your compiler to butcher your variables, modify and/or remove your code in its optimization zeal.

As somebody already mentioned, it's very dangerous and wrong to tell people, especially beginners:

Quote:

Deyja
You cannot beat the compiler. Period.


This is utterly false and a completely untenable argument. You yourself disprove your own argument with:
Quote:

Deyja
Compilers don't do macro optimizations.


It should be noted that the compiler can fail in more cases than just macro optimizations. Another example would be systems programming, where the compiler can disrupt and cause failure and undefined behavior at all levels, from the driver to low-level userland coding.

If a programmer is forced to abandon compiler optimizations, it is a fact that it would be a disservice to their clients to code in an unoptimized, or as I colloquially said, "crappy" style.

The compiler is a great tool, but if you believe it's all you need to be a good programmer, please don't program in any mission-critical sectors, or program my favorite games. I am tired of waiting for patches to fix bugs that should never have been, or having to buy $800 video card hardware because you think the latest in 3D technology is best programmed in C#.

***
On my last comment, what I mean is programmers these days are getting Lazy. They blindly rely on their tools and choose to do no investigations into how their tools or hardware function. Evidence being statements like "You cannot beat the compiler. Period." I don't think programmers should take a year off and study optimization theory, but I do think they should become knowledgeable in all aspects related to their field, and especially become knowledgeable in whatever their specialization is.

For example, I would expect all 3D engine programmers who work on cutting-edge technology to have mastered, or be on the path to mastering, processor and memory optimization, and to be extremely knowledgeable about the way video cards, processors, memory, and the motherboard all work together. I wouldn't expect this of a network programmer, but I would expect them to have low-level knowledge of network protocols.

[Edited by - ordered_disorder on September 10, 2006 9:02:29 AM]

Thanks for all the down-ratings. (sarcasm) I too think discussion is lame, and we should all just push big buttons with a special shade of red that gives us a feeling of great power when someone comes in and tells us our amazing, all-knowing compilers can be wrong. Burn the witch! Let ignorance reign!

Quote:
Taking a function call out of a loop, knowing it doesn't change any state, is an easy micro optimization that no C++ compiler could ever do in general.
I would call that a macro optimization. I would call any optimization that changes the structure of the code a macro optimization.

Quote:
What are you talking about, Deyja? I was giving some good trivia on what to do if you DON'T want your compiler to butcher your variables, modify and/or remove your code in its optimization zeal.
Please give an example of when you wouldn't want your code to execute faster.

Quote:
Another example would be systems programming, where the compiler can disrupt and cause failure and undefined behavior at all levels, from the driver to low-level userland coding.
If it does so, it is a bug in the compiler and should be reported to the vendor. As I said before, compiler optimizations should not change the behavior of the code.

Quote:
If a programmer is forced to abandon compiler optimizations, it is a fact that it would be a disservice to their clients to code in an unoptimized, or as I colloquially said, "crappy" style.
First, do not assume 'unoptimized' code is 'crappy' code. Execution speed is not the only measure of code quality. It's not even the most important.
If you have to turn off compiler optimizations to get correct behavior, there is either a bug in the compiler (in which case you should get it fixed or replace it) or a bug in your code.
You will get much better results by working around that one compiler bug that bit you instead of turning off all optimizations and trying to implement them yourself.

Quote:
The compiler is a great tool, but if you believe it's all you need to be a good programmer, please don't program in any mission-critical sectors, or program my favorite games. I am tired of waiting for patches to fix bugs that should never have been, or having to buy $800 video card hardware because you think the latest in 3D technology is best programmed in C#.
That's not at all what I said. Are you calling me a bad programmer in a roundabout way? Please do not resort to personal insults; it destroys the debate.

It seems some people have misinterpreted this discussion as being about 'good code' or 'bad code', when in fact it's about 'any code'. If you change the code being compiled, you can get better results. But that's not the compiler's job. If you take the exact same piece of code and write it in assembly, you will find it nigh on impossible to beat the compiler's assembly. I'm not advocating writing slow code and relying on the compiler to make it fast. I'm advocating letting the compiler do its job. If your code is bad, the compiler's optimizations won't magically make it good.

Quote:
They blindly rely on their tools and choose to do no investigations into how their tools or hardware function. Evidence being statements like "You cannot beat the compiler. Period."
Actually, that statement on my part has come from experience and investigation. I know how my compiler functions, and I know how my CPU functions. I'm by no means an expert, but I know enough to understand how they fit together - and to understand that until I am an expert, with as much experience in and knowledge of my particular CPU as the people who designed it, I won't be able to beat the compiler. Yeah, you can beat the compiler. But you'll waste more time in development than you'll ever get back from the execution speed gain.

[edit]
Also, when one makes absolute statements, it is often with the implied caveat that 'exceptions to this rule are so rare as to be inconsequential'.
I find your literalism tedious.
[/edit]

Quote:
Original post by ordered_disorder
What are you talking about, Deyja? I was giving some good trivia on what to do if you DON'T want your compiler to butcher your variables, modify and/or remove your code in its optimization zeal.


What the hell compilers have you been using? Let me know so I can avoid them at all costs, because a compiler should NOT butcher your code like you describe. However, I have had some math butchered by a certain compiler (that shall not be named), but those cases aren't par for the course at all, and they're bugs that should be reported. By the way you're talking about them here, it sounds like you get them every five minutes.

Quote:

It should be noted that the compiler can fail in more cases than just macro optimizations. Another example would be systems programming, where the compiler can disrupt and cause failure and undefined behavior at all levels, from the driver to low-level userland coding.


Sounds to me like you've been trying to run processor-specific optimisations for code that runs on a different processor.

Quote:

The compiler is a great tool, but if you believe it's all you need to be a good programmer, please don't program in any mission-critical sectors, or program my favorite games. I am tired of waiting for patches to fix bugs that should never have been, or having to buy $800 video card hardware because you think the latest in 3D technology is best programmed in C#.


I really, really, really doubt that a compiler would cause the kinds of bugs you seem to be talking about. 99% of game bugs are caused by logical errors, not compiler errors. Unless you're calling other programmers that enable optimisations incapable, which is beyond the scope of this discussion.

Quote:
On my last comment, what I mean is programmers these days are getting Lazy. They blindly rely on their tools and choose to do no investigations into how their tools or hardware function.


Really, blanket statements like that aren't going to help your rating.

Quote:
Quote:
On my last comment, what I mean is programmers these days are getting Lazy. They blindly rely on their tools and choose to do no investigations into how their tools or hardware function.

Really, blanket statements like that aren't going to help your rating.

You took what I said out of context; the rest of my quote is:
Quote:
Evidence being statements like "You cannot beat the compiler. Period."


Lol. I love this shit. Fuck you. I am done with this community. Enjoy conforming viewpoints and be an average programmer forever.
