Works with x64 debug, x64 release and x32 debug, but NOT with x32 release

Posted by Icebone1000


I'm using VS 2008 Pro. The release builds (all of them) have optimization on (favoring speed or size gives the same result).

It's the craziest behavior I've ever seen: VS is just skipping code it shouldn't skip, and because of that skip I get an access violation on a null pointer, because that pointer is assigned in the skipped code.

But it gets a lot crazier: the code that fails is the same code that succeeds twice after the failure.

Now to get a bit more concrete: I have a class with a method for creating a child window, and a method to bind a swap chain to a window (and create a device and set a render target to the swap chain back buffer (DX stuff)). What I'm doing is creating 3 windows (calling createDXwindow 3 times) and then calling bindSwapChain 3 times.
Keep in mind this works in all builds except x32 release with optimization on.

So, debugging with optimization on is a pain, but I figured out that the problem was in the bindSwapChain method. What I did was put message boxes on both the failure and success paths, and found that VS just didn't call any of them.

VS is just skipping my damn code!

So, to isolate things, I put a message box in every method/function inside the bindSwapChain method: one before each failing return and one before each succeeding return.
Guess what? Nothing. The damn thing displayed every success message box and didn't raise any access violation; the code just works like that ._.

Is this a known bug that I'm not aware of? It's kind of a serious problem. I know there's already VS 2010, but how have people lived with this?

Optimizations often result in the debugger "acting funny", such as showing the wrong values for watched variables, skipping code, etc. There's nothing wrong with Visual Studio, it's just that the code is optimized too much and the debugging information is not sufficient to show it accurately. As far as I know, most debuggers have this problem with optimized code.

The real problem is probably a nondeterministic bug hiding in your code (such as an uninitialized variable or heap corruption).

The Visual Studio 2008 compiler is quite good. Like all compilers and tools it has its quirks and flaws, but the optimizer will very rarely generate faulty code; it errs on the side of safety, and the more common complaint is that it misses optimizations that a human can easily see.


From the looks of it, you are almost certainly relying on timing-specific code. The compiler doesn't know it is a bug, and proceeds to perform what should be legal operations, except that because of your bug they are not.

As a first step, add assertions, null-checks, and sanity checks for any value you pull from elsewhere in the system. Good code is covered with them. This alone can expose many errors like the ones you described.


You mentioned that you followed a null pointer. At work that is a critical item. Every code submission gets a buddy check, and part of that check includes verifying that EVERY pointer is validated at least once before it is first used. If unchecked pointers get caught in a code review both the main programmer and the person who did a buddy check get called out publicly among the programmers. For a while we had a trophy of sorts (a very ugly one) that was 'awarded' and publicly displayed after that kind of breakage was submitted to the main code branch. The programmers who had the bad habit of using unchecked pointers very quickly corrected the habit.




Once you've found the errors you need to correct them.

If your system is single-threaded this usually isn't too hard. It is a matter of pushing the work back earlier if possible, or deferring the work until its prerequisites are ready.

If your system is multithreaded, you need to be absolutely certain that every byte that crosses a thread boundary has appropriate locking semantics. If your system is well written it is generally not hard to fix: in addition to the checks you do in a single-threaded world, well-written systems have easy-to-use wrappers for data that crosses those boundaries. If you are unfortunate enough to be working with a multithreaded code base that has played fast and loose with locking semantics, then it is time to take the sometimes-painful step of methodically replacing all boundary-crossing data and code with a better-written system that handles interlocked code and data correctly.

It's still too weird; I'm still trying to isolate the problem.

But assuming I'm doing something wrong (I admit it has to be my fault), how can adding message boxes to the code make it work fine (make it behave totally differently)? It's not "sometimes works": if I take the message boxes out, it fails again. Because of that I assumed VS stopped skipping code because of the message boxes.

Adding message boxes to different parts of the code makes it behave differently, like making methods fail or not.
The other thing is that it's the same code, called 3 times, and only the first call fails.

If I understand why/how that can happen, maybe I'll have a better clue where the problem is.


[quote name='Icebone1000']
But assuming I'm doing something wrong (I admit it has to be my fault), how can adding message boxes to the code make it work fine (make it behave totally differently)?
[/quote]

Because the machine code generated may now be totally different. The kinds of optimization a compiler can perform are really, really extreme.

Look for uninitialized variables (initialize everything at the point of definition, always, no exceptions). Add lots of asserts that remain enabled in release mode. Put them around the code you believe is misbehaving.

Share this post


Link to post
Share on other sites

[quote name='Icebone1000' timestamp='1325157732' post='4897753']
But assuming I'm doing something wrong (I admit it has to be my fault), how can adding message boxes to the code make it work fine (make it behave totally differently)?

Because the machine code generated may now be totally different. The kinds of optimization a compiler can perform are really, really extreme.

Look for uninitialized variables (initialize everything at the point of definition, always, no exceptions). Add lots of asserts that remain enabled in release mode. Put them around the code you believe is misbehaving.
[/quote]

Cool link. Now I see that illogical code can get optimized away entirely.

Quite a simple, stupid problem actually: I was testing whether a pointer parameter (passed by reference) was NOT NULL before assigning to it. I was thinking, "if the user didn't pass a pointer, don't assign it."
-_-"

94.7% of the time (a made-up value, but most likely very close to reality), when you have something that works in debug but not in release, it is due to an uninitialized variable. The reason it might work in x64 but not x32 comes down to which bits happen to be set. You need to go through and make sure you set EVERY variable to an initial value.
