how to delete char** ?!?


Hello! I have the following code:
Box::Box(..., char *img[FACE_COUNT]) {
    imgPaths = new char*[FACE_COUNT];
    imgPaths = img; // Array copy
}

The question is, how do I delete imgPaths in the destructor?!? I have tried a lot of things, and nothing works...

Assuming you use new char[...] when copying the array:

for( int i = 0; i < FACE_COUNT; ++i )
{
    delete [] imgPaths[i];
}
delete [] imgPaths;
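
(For reference, the copy that this cleanup assumes might look like the sketch below; it deep-copies each path, assuming they are null-terminated C strings.)

#include <cstring>

// Deep-copy each string so the Box owns the memory its destructor frees:
imgPaths = new char*[FACE_COUNT];
for( int i = 0; i < FACE_COUNT; ++i )
{
    imgPaths[i] = new char[std::strlen(img[i]) + 1];
    std::strcpy(imgPaths[i], img[i]);
}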


- Benny -

Quote:
Original post by benstr
Assuming you use new char[...] when copying the array:

for( int i = 0; i < FACE_COUNT; ++i )
{
    delete [] imgPaths[i];
}
delete [] imgPaths;


- Benny -


Thanks for the reply!
I still get an error! The code looks like this now:

char** imgPaths; // The declaration...

...

imgPaths = new char*[FACE_COUNT];
for (int i = 0; i < FACE_COUNT; i++) {
    imgPaths[i] = img[i];
}

...

for (int i = 0; i < FACE_COUNT; i++) {
    delete[] imgPaths[i];
}
delete[] imgPaths;

Try taking out this part:
for (int i = 0; i < FACE_COUNT; i++) {
    delete[] imgPaths[i];
}
This assumes something else keeps track of the memory those pointers point to, or that they point to globals; see the sketch below.
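
A minimal sketch of the resulting destructor, assuming the caller keeps the strings alive for the Box's lifetime:

Box::~Box() {
    // The Box allocated only the array of pointers, not the strings
    // themselves, so it frees only the array.
    delete[] imgPaths;
}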
What is the error you get?

Remember to set pointers to NULL after deleting them. It will save you debugging time! The CRT's memory allocator is quite good at reusing memory, so a stale pointer can easily end up pointing at someone else's live data.

It doesn't matter how advanced a programmer you think you are, this will save you time in the long run!
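
(A minimal sketch of the habit:)

delete[] imgPaths;
imgPaths = NULL; // deleting NULL later is a harmless no-op, and any
                 // accidental dereference now fails fast and consistently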


Cheers
Chris

Quote:
Original post by chollida1
Remember to set pointers to NULL after deleting them. It will save you debugging time! The CRT's memory allocator is quite good at reusing memory, so a stale pointer can easily end up pointing at someone else's live data.

It doesn't matter how advanced a programmer you think you are, this will save you time in the long run!

Cheers
Chris

Unless the NULLed pointer is essential to one of your algorithms, might I suggest a better alternative:
#ifdef DEBUG
// Cast so the assignment compiles for any pointer type (assumes 32-bit pointers):
#define UNINITIALISE(x) do{ *(unsigned long*)&(x) = 0xCCCDCCCD; } while(0)
#else
#define UNINITIALISE(x)
#endif

p = new int;
delete p;
UNINITIALISE(p);





Not only does this avoid wasting time NULLing pointers unnecessarily in your release builds (possibly affecting performance), but it also makes it obvious if you use a pointer in error after it has been deleted.
It will also still crash consistently if you try to dereference it, but it won't get automatically reallocated by any code that reallocates whenever it is NULL (assuming it shouldn't reach that code, of course).
Lastly, it allows you to distinguish between a pointer that has been finished with and one that is simply marked as temporarily unallocated (i.e. NULL) at that particular time. (It's also different from the "uninitialised variable" values MS uses.)
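
(For illustration, the kind of lazy-allocation code the sentinel guards against; buffer and BUFFER_SIZE here are hypothetical:)

if (buffer == NULL)
    buffer = new char[BUFFER_SIZE]; // a stale, NULLed pointer is silently "resurrected" here
buffer[0] = 'x';                    // ...whereas a 0xCCCDCCCD sentinel faults immediately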
I think they should have put this tip in the book "Writing Solid Code".

MS already use:
0xCCCCCCCC for uninitialised stack variables, 0xCDCDCDCD for freshly allocated debug heap memory, etc... http://www.docsultant.com/site2/articles%5Cdebug_codes.html

[Edited by - iMalc on December 20, 2004 12:23:38 AM]

Quote:
Original post by iMalc
Quote:
Original post by chollida1
Remember to set pointers to NULL after deleting them. It will save you debugging time! The CRT's memory allocator is quite good at reusing memory, so a stale pointer can easily end up pointing at someone else's live data.

It doesn't matter how advanced a programmer you think you are, this will save you time in the long run!

Cheers
Chris

Unless the NULLed pointer is essential to one of your algorithms, might I suggest a better alternative: *** Source Snippet Removed *** Not only does this avoid wasting time NULLing pointers unnecessarily in your release builds (possibly affecting performance), but it also makes it obvious if you use a pointer in error after it has been deleted.
It will also still crash consistently if you try to dereference it, but it won't get automatically reallocated by any code that reallocates whenever it is NULL (assuming it shouldn't reach that code, of course).
I think they should have put this tip in the book "Writing Solid Code".

MS already use:
0xCCCCCCCC for uninitialised stack variables, 0xCDCDCDCD for freshly allocated debug heap memory, etc... http://www.docsultant.com/site2/articles%5Cdebug_codes.html


Wow, that's really cool! Never seen that before.

Matt Hughson

Quote:
Original post by iMalc
*** Source Snippet Removed ***


Unless I'm making a really huge brain fart here, you're introducing a totally unnecessary loop construct there. Why not just:

#define UNINITIALIZE(x) { *(unsigned long*)&(x) = 0xBAADF00D; }

Still prevents you from doing dumb things like y = UNINITIALIZE(x) (syntax error) but doesn't introduce a wasted cmp instruction after the pointer is changed.

This is really being more anal than anything, but personally I find it very hard to work with code that uses constructs excessively just because it isn't explicitly wrong to do so. Cleaner is better.


Also, I'd personally be very wary of anyone who undefined such a macro in release code. It is not unusual for some bugs only to manifest themselves in release builds (due to certain code restructuring and optimizations), and it is a foolish assumption that you will never need safe pointer uninitialization in release code. Debugging an issue in a release build is infinitely easier when the code clearly accesses a sentinel pointer such as 0xBAADF00D or similar, as opposed to just accessing random memory (which it would do if you undefined that macro in release mode).

Arguing that undefining this macro helps performance is flawed as well. Releasing memory takes far more time than a simple memory write, because you have to do things like adjust the heap tracker and so on. The additional instruction or two is negligible. Further, if you are allocating and releasing memory often enough that such a tiny change does make a significant performance difference, you need to seriously re-evaluate your allocation practices.

Quote:
Original post by ApochPiQ
Quote:
Original post by iMalc
*** Source Snippet Removed ***


Unless I'm making a really huge brain fart here, you're introducing a totally unnecessary loop construct there. Why not just:

#define UNINITIALIZE(x) { *(unsigned long*)&(x) = 0xBAADF00D; }

Still prevents you from doing dumb things like y = UNINITIALIZE(x) (syntax error) but doesn't introduce a wasted cmp instruction after the pointer is changed.

This is really being more anal than anything, but personally I find it very hard to work with code that uses constructs excessively just because it isn't explicitly wrong to do so. Cleaner is better.

Yes, you could do that, and some people would be happy with it, as it works in 99% of cases. However, you'll find that the do{ }while(0) around a #define of this nature is very common practice, as it ensures the #define can be used everywhere without surprises. E.g. try it without the do while(0), or even without the {}, inside multiple levels of if-else statements: there will be special cases where you have to leave off the semicolon, for example, to get it to work where you would otherwise just write it normally (see the sketch below).

The loop is completely optimised out on every compiler you'll ever find, I imagine (it's essentially a single "branch never" instruction, which probably doesn't exist). Yes, cleaner would be nice if possible, but this way it always works as intended.
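
For example, a sketch of the classic pitfall (reuse() is just a placeholder):

// The {}-only UNINITIALIZE expands to "{ ... };", and the stray ';'
// terminates the if statement:
if (done)
    UNINITIALIZE(p);
else            // error: 'else' without a matching 'if'
    reuse(p);

// The do{ }while(0) form swallows the trailing ';' and behaves as a
// single statement, so the same code compiles as intended.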
Google this common (and more correct) practice if you like; lots of other smart gamedevers can tell you too.
Quote:

Also, I'd personally be very wary of anyone who undefined such a macro in release code. It is not unusual for some bugs only to manifest themselves in release builds (due to certain code restructuring and optimizations), and it is a foolish assumption that you will never need safe pointer uninitialization in release code. Debugging an issue in a release build is infinitely easier when the code clearly accesses a sentinel pointer such as 0xBAADF00D or similar, as opposed to just accessing random memory (which it would do if you undefined that macro in release mode).

Debugging in release mode is infinitely easier if you turn optimisations off too, etc., so what's your point? If you gotta do it, you gotta do it.
The whole point is that you will have picked up any errors of this kind in the debug build before you switch to the release build. You should have no remaining assert failures and should have tested each execution path when doing code coverage. Sure, problems can appear only in release or only in debug, but it shouldn't be this kind of bug, as you would have eliminated those already.
I am not suggesting that you never set pointers to NULL after deallocating them; in some cases that is exactly what you want to do, so that they are not freed a second time, for example. The macro I posted is only for when you should never be reading the value of the pointer or accessing its memory without reallocating it.
Quote:

Arguing that undefining this macro helps performance is flawed as well. Releasing memory takes far more time than a simple memory write, because you have to do things like adjust the heap tracker and so on. The additional instruction or two is negligible. Further, if you are allocating and releasing memory often enough that such a tiny change does make a significant performance difference, you need to seriously re-evaluate your allocation practices.
Very true. So why do they turn off clearing blocks to 0xCCCCCCCC, for example, in the release build then? Because you've already found and fixed any bugs it would help you find. If a bug of that nature is still present in your release build, clearing the pointer after use certainly isn't going to make your exe run any better, so why bother? The gain in speed and reduction in exe size may be small, but some people like it.

[Edited by - iMalc on December 20, 2004 12:15:44 AM]

Quote:
Original post by iMalc
E.g. try it without the do while(0), or even without the {}, inside multiple levels of if-else statements: there will be special cases where you have to leave off the semicolon, for example, to get it to work where you would otherwise just write it normally.


A quick Google gives lots of hideously ugly code examples. My personal convention is for such macros to take care of their own semicolons where needed, so no stand-alone macro is ever terminated by a semicolon in code. I find this much easier to read, which is probably why I've never had to bother with a do {} while(0) wrapper. Personal preference, I guess.


Quote:
Original post by iMalc
Debugging in release mode is infinitely easier if you turn optimisations off too, etc., so what's your point? If you gotta do it, you gotta do it.
The whole point is that you will have picked up any errors of this kind in the debug build before you switch to the release build. You should have no remaining assert failures and should have tested each execution path when doing code coverage. Sure, problems can appear only in release or only in debug, but it shouldn't be this kind of bug, as you would have eliminated those already.


Your utopian concept of all bugs being corrected in debug builds is nice and lovely. I, however, live and code in the real world. If a customer calls and complains that his software just crashed with a pointer exception at 0xBAADF00D I know instantly what is wrong - my code, accessing released memory. I can find and correct the problem often with only a short time of investigation. On the other hand, if they report some random memory address, I have a lot more work to do. If the bug only manifests itself in certain compiler setups, debugging can be a total nightmare. Turning off optimizations and/or returning to debug builds is, in a lot of cases I work with, not an option.



Quote:
Original post by iMalc
So why do they turn off clearing blocks to 0xCCCCCCCC, for example, in the release build then? Because you've already found and fixed any bugs it would help you find. If a bug of that nature is still present in your release build, clearing the pointer after use certainly isn't going to make your exe run any better, so why bother? The gain in speed and reduction in exe size may be small, but some people like it.


I doubt it. The real reason is far more likely related to performance. If I deallocate a 300KB slab of memory, setting every word to some "cleared" value is performance intensive and likely wasted. Of course you could do something like only clear allocated memory blocks of less than a certain size, but this becomes questionable and somewhat unreliable behavior for developers to work with.

Again, I wish your youthful dream of having all bugs fixed in release software were true, but I have to live with real life instead. More's the pity, because it's a very warm, fuzzy dream [wink] Of course clearing the pointer won't make execution any more stable, but it will make the job easier for developers to come back to and fix as needed.

Really the issue for me isn't the specific case of not clearing pointers when deallocated. The issue that I see here is a programmer who doesn't appreciate the benefits of designing code that is easy to debug and maintain in any scenario. It's not the specific example here, it's the underlying mentality.

Quote:
Original post by ApochPiQ
Your utopian concept of all bugs being corrected in debug builds is nice and lovely. I, however, live and code in the real world. If a customer calls and complains that his software just crashed with a pointer exception at 0xBAADF00D I know instantly what is wrong - my code, accessing released memory. I can find and correct the problem often with only a short time of investigation. On the other hand, if they report some random memory address, I have a lot more work to do. If the bug only manifests itself in certain compiler setups, debugging can be a total nightmare. Turning off optimizations and/or returning to debug builds is, in a lot of cases I work with, not an option.

You're missing the fact that if it's a large product (and in my case it is), then it doesn't matter much that you know it crashed with a BAADF00D pointer, as that won't in itself help fix the bug. In most cases the thing that helps most is knowing exactly what they did (so that you know where in the code to look), and/or getting their configuration files, and/or getting a minidump (which our company makes good use of, as they are very useful). I work at a decent-sized software company and find that bugs only showing up in certain configurations are very rare ... 1% or less kind of rare. If you're finding it worse than that then maybe you are using a dodgy compiler. I very much live in the real world, thank you.

Quote:
Again, I wish your youthful dream of having all bugs fixed in release software were true, but I have to live with real life instead. More's the pity, because it's a very warm, fuzzy dream [wink] Of course clearing the pointer won't make execution any more stable, but it will make the job easier for developers to come back to and fix as needed.

I never said anything about all bugs being fixed in release builds, but that if you have a problem then you can easily go back to your debug build and reproduce it there as well (where it is always going to be easier to find and fix). Don't you ever switch between the two? I believe that both should always run fine, so they should be used for their respective purposes. We regularly make use of both.

Quote:
Really the issue for me isn't the specific case of not clearing pointers when deallocated. The issue that I see here is a programmer who doesn't appreciate the benefits of designing code that is easy to debug and maintain in any scenario. It's not the specific example here, it's the underlying mentality.

On the contrary, the code I write is very easy to debug and maintain. More so than someone whose code needs to be debugged regularly using a release build, perhaps because the debug build might be broken? If you can track down the cause of a bug from a debug build (which you can 99% of the time) then you wouldn't use a release build to do so, would you? It'd be like going in with one hand tied behind your back! (Just try it in .NET!!!)[smile]

I'm very good and fast at debugging code; in fact I spent 3 years in a dedicated "product support" group whose main responsibility was maintaining and bug-fixing other people's code.
What I see is someone who writes code very defensively (which is a bad thing according to the book "Writing Solid Code"; have you read it, btw?) and doesn't make appropriate use of debug vs release builds.
My underlying mentality is no more wrong than yours. Different, yes, but wrong, no.

My apologies for contributing to turning this thread into an argument.

Quote:
Your utopian concept of all bugs being corrected in debug builds is nice and lovely.
Your utopian concept of taking iMalc for a noob is nice and lovely too. I don't think he ever claims anything beyond "it will help find bugs faster", and he's quite simply right. Whether you set the pointer to 0xBAADF00D or not makes very little difference; the program still crashed.

Quote:
Your utopian concept of all bugs being corrected in debug builds is nice and lovely. I, however, live and code in the real world. If a customer calls and complains that his software just crashed with a pointer exception at 0xBAADF00D I know instantly what is wrong - my code, accessing released memory. I can find and correct the problem often with only a short time of investigation. On the other hand, if they report some random memory address, I have a lot more work to do. If the bug only manifests itself in certain compiler setups, debugging can be a total nightmare. Turning off optimizations and/or returning to debug builds is, in a lot of cases I work with, not an option.
Personally, I'd rather not rely on comedic pointer addresses to trace my bugs; I like to use exceptions and logging. Just personal preference though :)


Quote:
I doubt it. The real reason is far more likely related to performance. If I deallocate a 300KB slab of memory, setting every word to some "cleared" value is performance intensive and likely wasted. Of course you could do something like only clear allocated memory blocks of less than a certain size, but this becomes questionable and somewhat unreliable behavior for developers to work with.
This is moot; we have already noted that these "tricks" are not used in release mode.

Quote:

Really the issue for me isn't the specific case of not clearing pointers when deallocated. The issue that I see here is a programmer who doesn't appreciate the benefits of designing code that is easy to debug and maintain in any scenario. It's not the specific example here, it's the underlying mentality.
You're flaming him for not setting his deleted pointers to 0xBAADF00D, wtf.

You say code should be "easy to debug and maintain in any scenario", and yet you claim to live in the real world...

EDIT: I'm sorry if this comes across as flamey; I get excited when I discuss :)

Cheers
Peace Out,
Danu

