jarod83

how to delete **char ?!?


Hello! I have the following code:
Box::Box(..., char *img[FACE_COUNT]) {
imgPaths = new char*[FACE_COUNT];
imgPaths = img; // Array copy
}

The question is: how do I delete imgPaths in the destructor?!? I have tried a lot of things; nothing works...
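(For reference, one way to make the destructor's job well-defined is for Box to deep-copy the strings it is given, so it only ever deletes memory it allocated itself. A sketch under that assumption — the `path` accessor and simplified constructor signature are illustrative, not from the original code:)

```cpp
#include <cassert>
#include <cstring>

const int FACE_COUNT = 6;

// Hypothetical sketch: Box deep-copies the strings it is given, so the
// destructor only deletes memory this class allocated itself. The
// `path` accessor is added for illustration.
class Box {
public:
    Box(char *img[FACE_COUNT]) {
        imgPaths = new char*[FACE_COUNT];
        for (int i = 0; i < FACE_COUNT; i++) {
            imgPaths[i] = new char[std::strlen(img[i]) + 1]; // own copy
            std::strcpy(imgPaths[i], img[i]);
        }
    }
    ~Box() {
        for (int i = 0; i < FACE_COUNT; i++)
            delete[] imgPaths[i];   // matches new char[...]
        delete[] imgPaths;          // matches new char*[FACE_COUNT]
    }
    const char *path(int i) const { return imgPaths[i]; }
private:
    char **imgPaths;
};
```

With this ownership scheme, every `new[]` in the constructor has exactly one matching `delete[]` in the destructor.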

Assuming you use new char[...] when copying the array:

for( int i = 0; i < FACE_COUNT; ++i )
{
    delete [] imgPaths[i];
}
delete [] imgPaths;


- Benny -

Quote:
Original post by benstr
Assuming you use new char[...] when copying the array:

for( int i = 0; i < FACE_COUNT; ++i )
{
    delete [] imgPaths[i];
}
delete [] imgPaths;


- Benny -


Thanks for the reply!
I still get an error! The code looks like this now:

char** imgPaths; // The declaration...

...

imgPaths = new char*[FACE_COUNT];
for (int i = 0; i < FACE_COUNT; i++) {
    imgPaths[i] = img[i];
}

...

for (int i=0; i<FACE_COUNT; i++) {
    delete[] imgPaths[i];
}
delete[] imgPaths;

Try taking out this part:
for (int i=0; i<FACE_COUNT; i++) {
    delete[] imgPaths[i];
}
Assuming you keep track of any memory that the pointers in that array point to, or that they point to globals.
What is the error you get?
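(In other words, if only the pointers were copied, the strings belong to someone else and only the outer array — the one this code itself allocated — should be deleted. A minimal sketch of that rule; the helper names copyPointers/destroyPaths are made up for this example:)

```cpp
#include <cassert>

const int FACE_COUNT = 6;

// Illustrative sketch: when only the pointers are copied, the strings
// are owned elsewhere, so we delete only the outer array that this
// code itself allocated with new char*[FACE_COUNT].
char **copyPointers(char *img[FACE_COUNT]) {
    char **paths = new char*[FACE_COUNT];
    for (int i = 0; i < FACE_COUNT; i++)
        paths[i] = img[i];  // pointer copy only; no ownership taken
    return paths;
}

void destroyPaths(char **paths) {
    // Do NOT delete paths[i] here - this code never allocated them
    delete[] paths;         // matches new char*[FACE_COUNT]
}
```

Deleting `paths[i]` here would be exactly the kind of error the original poster is hitting: freeing memory the class never allocated.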

Remember to set pointers to NULL after deleting them. It will save you debugging time!! The CRT's memory allocator is quite good at reusing memory.

It doesn't matter how advanced a programmer you think you are, this will save you time in the long run!!
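(That advice can be sketched as a tiny helper — the name `release` is made up for this example, not from the thread:)

```cpp
#include <cassert>
#include <cstddef>

// Sketch of the null-after-delete habit: a second delete on a null
// pointer is a defined no-op, and an accidental dereference of a null
// pointer crashes predictably instead of touching reused memory.
void release(int *&p) {
    delete p;
    p = NULL;
}
```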


Cheers
Chris

Quote:
Original post by chollida1
Remember to set pointers to NULL after deleting them. It will save you debugging time!! The CRT's memory allocator is quite good at reusing memory.

It doesn't matter how advanced a programmer you think you are, this will save you time in the long run!!

Cheers
Chris

Unless the NULLed pointer is essential to part of your algorithm, might I suggest a better alternative:
#ifdef DEBUG
// Type-pun through the pointer's storage: a bare (x) = 0xCCCDCCCD
// won't compile for a typed pointer in C++ (assumes 32-bit pointers)
#define UNINITIALISE(x) do{ *(unsigned long*)&(x) = 0xCCCDCCCD; } while(0)
#else
#define UNINITIALISE(x)
#endif

p = new int;
delete p;
UNINITIALISE(p); // p now holds the sentinel value, not NULL

Not only does this avoid wasting time NULLing pointers unnecessarily in your release builds (possibly affecting performance), but it makes it obvious if you mistakenly use a pointer after it is deleted.
It will also still crash consistently if you try to dereference it, but it won't get automatically reallocated by any code that reallocates whenever it is NULL (assuming it shouldn't reach that code, of course).
Lastly, it lets you distinguish between a pointer which has been finished with and one that is simply marked as temporarily unallocated (i.e. NULL) at that particular time. (It's also different from the "uninitialised variable" values MS puts in.)
I think they should have put this tip in the book "Writing solid code".

MS already uses:
0xCCCCCCCC for uninitialised stack variables, 0xCDCDCDCD for freshly allocated heap memory, etc... http://www.docsultant.com/site2/articles%5Cdebug_codes.html

[Edited by - iMalc on December 20, 2004 12:23:38 AM]

Quote:
Original post by iMalc
Quote:
Original post by chollida1
Remember to set pointers to NULL after deleting them. It will save you debugging time!! The CRT's memory allocator is quite good at reusing memory.

It doesn't matter how advanced a programmer you think you are, this will save you time in the long run!!

Cheers
Chris

Unless the NULLed pointer is essential to part of your algorithm, might I suggest a better alternative:*** Source Snippet Removed ***Not only does this avoid wasting time NULLing pointers unnecessarily in your release builds (possibly affecting performance), but it makes it obvious if you mistakenly use a pointer after it is deleted.
It will also still crash consistently if you try to dereference it, but it won't get automatically reallocated by any code that reallocates whenever it is NULL (assuming it shouldn't reach that code, of course).
I think they should have put this tip in the book "Writing solid code".

MS already use:
0xCCCCCCCC for uninitialised stack variables, 0xCDCDCDCD for freshly allocated heap memory, etc... http://www.docsultant.com/site2/articles%5Cdebug_codes.html


Wow, that's really cool! Never seen that before.

Matt Hughson

Quote:
Original post by iMalc
*** Source Snippet Removed ***


Unless I'm making a really huge brain fart here, you're introducing a totally unnecessary loop construct there. Why not just:

#define UNINITIALIZE(x) { *(unsigned long*)&(x) = 0xBAADF00D; }

Still prevents you from doing dumb things like y = UNINITIALIZE(x) (syntax error) but doesn't introduce a wasted cmp instruction after the pointer is changed.

This is really being more anal than anything, but personally I find it very hard to work with code that uses constructs excessively just because it isn't explicitly wrong to do so. Cleaner is better.


Also, I'd personally be very wary of anyone who compiles such a macro out in release code. It is not unusual for some bugs to manifest only in release builds (due to certain code restructuring and optimizations), and it is a foolish assumption that you will never need safe pointer uninitialization in release code. Debugging an issue in a release build is infinitely easier when the code clearly accesses a sentinel pointer such as 0xBAADF00D, as opposed to just accessing random memory (which it would do if the macro expands to nothing in release mode).

Arguing that undefining this macro helps performance is flawed as well. Releasing memory takes far more time than a simple memory write, because you have to do things like adjust the heap tracker and so on. The additional instruction or two is negligible. Further, if you are allocating and releasing memory often enough that such a tiny change does make a significant performance difference, you need to seriously re-evaluate your allocation practices.

Quote:
Original post by ApochPiQ
Quote:
Original post by iMalc
*** Source Snippet Removed ***


Unless I'm making a really huge brain fart here, you're introducing a totally unnecessary loop construct there. Why not just:

#define UNINITIALIZE(x) { *(unsigned long*)&(x) = 0xBAADF00D; }

Still prevents you from doing dumb things like y = UNINITIALIZE(x) (syntax error) but doesn't introduce a wasted cmp instruction after the pointer is changed.

This is really being more anal than anything, but personally I find it very hard to work with code that uses constructs excessively just because it isn't explicitly wrong to do so. Cleaner is better.

Yes, you could do that, and some people would be happy with it as it works in 99% of cases. However, you'll find that the do{ }while(0) around a #define of this nature is very common practice, as it ensures the #define can be used everywhere correctly. Try it without the do while(0), or also without the {}, inside multiple levels of if-else statements: you'll find special cases where you have to leave off the semicolon, for example, to get it to compile where you would otherwise just write the call normally.

The loop is completely optimised out on every compiler you'll ever find, I imagine (it's essentially a single "branch never" instruction, which probably doesn't exist). Yes, cleaner would be nice if possible, but this way it always works as intended.
[google] for this common (and more correct) practice if you like. Lots of other smart gamedevers can tell you too, though.
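(To see the special case concretely, here is a small compile-and-run sketch; RESET is a hypothetical macro standing in for UNINITIALISE:)

```cpp
#include <cassert>

// Illustration of the if/else pitfall described above. Because the
// body is wrapped in do{ }while(0), the semicolon after RESET(v)
// below does not orphan the else branch; with a bare { } block
// instead, that semicolon would make this function fail to compile.
#define RESET(x) do { (x) = 0; } while(0)

int demo(bool cond, int v) {
    if (cond)
        RESET(v);
    else
        v = -1;
    return v;
}
```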
Quote:

Also, I'd personally be very wary of anyone who undefined such a macro in release code. It is not unusual for some bugs only to manifest themselves in release builds (due to certain code restructuring and optimizations), and it is a foolish assumption that you will never need safe pointer uninitialization in release code. Debugging an issue in a release build is infinitely easier when the code clearly accesses a sentinel pointer such as 0xBAADF00D or similar, as opposed to just accessing random memory (which it would do if you undefined that macro in release mode).

Debugging in release mode is infinitely easier if you turn optimisations off too, so what's your point? If you gotta do it, you gotta do it.
The whole point is that you will have picked up any errors of this kind in the debug build before you switch to release. You should have no remaining assert failures and should have tested each execution path while doing code coverage. Sure, problems can appear only in release or only in debug, but it shouldn't be this kind of bug, as you would have eliminated them already.
I am not suggesting that you never set pointers to NULL after deallocating them; in some cases that is exactly what you want, so that they are not freed a second time, for example. The macro I posted is only for when you should never be reading the value of the pointer or accessing its memory without reallocating it.
Quote:

Arguing that undefining this macro helps performance is flawed as well. Releasing memory takes far more time than a simple memory write, because you have to do things like adjust the heap tracker and so on. The additional instruction or two is negligible. Further, if you are allocating and releasing memory often enough that such a tiny change does make a significant performance difference, you need to seriously re-evaluate your allocation practices.
Very true. So why do they turn off clearing blocks to 0xCCCCCCCC, for example, in release builds? Because you've found and fixed any bugs it would help you find. If a bug of that nature is still present in your release build, clearing the pointer after use certainly isn't going to make your exe run any better, so why bother? The overall increase in speed and decrease in exe size may be small, but some people like it.

[Edited by - iMalc on December 20, 2004 12:15:44 AM]
