
about "delete" and "delete []"



#1 xiajia   Members   -  Reputation: 163


Posted 14 January 2013 - 11:51 PM

code like this:
 

 

 

int * a = new int(0);
int * b = new int(0);
int * c = new int[2];
int * d = new int[2];
int * e = new int[1];
delete a;
delete []b;
delete c;
delete []d;
delete e;

 

The compiler does not report any error or warning (VS 2005 at warning level 4).
But according to the book, this use of "delete" and "delete []" is incorrect (the lines "delete []b;" and "delete c;").
Is that so? What problems will it cause?




#2 Bacterius   Crossbones+   -  Reputation: 8134


Posted 14 January 2013 - 11:59 PM

Well, it's undefined behaviour, so technically, while there is no guarantee it will work, there is no guarantee it will fail, either.

 

I suggest using it the right way; the reason it worked here is probably that int is a simple data type. If you were using a more complicated class, new[] would construct each element of the array as expected, and delete[] would call the destructor of each element, whereas delete would just free the array without calling the destructor for each element, which may result in a memory leak and possibly much worse errors. Again, just because it works doesn't mean it's correct.
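A minimal sketch of that difference for a type with a destructor (the Tracer class here is just an illustration, not code from the original post):

#include <cstdio>

struct Tracer
{
    ~Tracer() { std::puts("~Tracer()"); }
};

int main()
{
    Tracer* one = new Tracer;
    delete one;              // prints "~Tracer()" once, then frees the object

    Tracer* many = new Tracer[3];
    delete[] many;           // prints "~Tracer()" three times, then frees the block
    // Writing "delete many;" instead of "delete[] many;" above would be undefined
    // behaviour: at best only one destructor runs, at worst the heap is corrupted.
}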

 

Also, I suppose the lack of compiler warnings comes from the fact that it's not always possible to distinguish an array from a pointer to a single object, but I could be wrong on this.


Edited by Bacterius, 15 January 2013 - 01:02 AM.

The slowsort algorithm is a perfect illustration of the multiply and surrender paradigm, which is perhaps the single most important paradigm in the development of reluctant algorithms. The basic multiply and surrender strategy consists in replacing the problem at hand by two or more subproblems, each slightly simpler than the original, and continue multiplying subproblems and subsubproblems recursively in this fashion as long as possible. At some point the subproblems will all become so simple that their solution can no longer be postponed, and we will have to surrender. Experience shows that, in most cases, by the time this point is reached the total work will be substantially higher than what could have been wasted by a more direct approach.

 

- Pessimal Algorithms and Simplexity Analysis


#3 Hodgman   Moderators   -  Reputation: 27510


Posted 15 January 2013 - 12:54 AM

Is that so? What problems will it cause?

Maybe nothing. Maybe a memory leak. Maybe random memory corruption anywhere in your program. Maybe an instant crash.

 

 

Always make sure you use the correct 'delete' code depending on what kind of 'new' code was used.


Edited by Hodgman, 15 January 2013 - 01:26 AM.


#4 iMalc   Crossbones+   -  Reputation: 2258


Posted 15 January 2013 - 12:54 AM

That sort of thing can't be detected at compile-time, in the general case.
Consider:
void foo(int x)
{
    int *p;
    if ((x % 2) == 0)
        p = NULL;
    else if ((x % 3) == 0)
        p = new int(0);
    else if ((x % 5) == 0)
        p = new int[2];

    if ((x % 7) == 0)
        delete p;
    else if ((x % 11) == 0)
        delete[] p;
}
This will produce well-defined behaviour if I pass in say 4, 14, 21, 22, or 55, but will produce undefined behaviour if I pass in say 7, 33 or 35, and will leak memory if I pass in say 3.

Edited by iMalc, 15 January 2013 - 12:57 AM.

"In order to understand recursion, you must first understand recursion."
My website dedicated to sorting algorithms

#5 Cornstalks   Crossbones+   -  Reputation: 6966


Posted 15 January 2013 - 12:59 AM

delete what you new, and delete [] what you new [].

 

Welcome to the land of C++, where you can completely shoot your leg off and introduce massive bugs without any errors from the compiler. The compiler doesn't force you to Do The Right Thing (and in many cases, it can't deduce what the Right Thing is to enforce it even if it wanted to). It assumes you know what you should and should not do, and only enforces a very minimal set of rules (the other rules aren't enforced, but you're still expected to follow them, like this rule we're talking about here).

 

If you don't follow these rules, anything can happen, from your program continuing to work as if everything was fine (this is the worst case, because it hides the error), to crashing the program (this is the best case, because now it's obvious there's a problem), to a black hole opening up and swallowing us all (ok, maybe this is the worst case). It's called "undefined behavior."

 

Now, as for why you have to use delete [] when you use new [] (and can't just use delete for everything), the main reason that I can think of is memory bookkeeping. How does the system know how much memory to free when you say delete (or delete [])? There's some bookkeeping of memory done behind the scenes, and the implementation is allowed to do that bookkeeping pretty much however it wants. It's entirely possible that it uses different kinds of bookkeeping depending on whether you used new or new []. Thus, you need to make sure the right bookkeeping is undone when you call delete.

In combination with the bookkeeping described above, when you destroy an object its destructor gets called. delete signals that there is only one object being destroyed, and thus only one destructor needs to be called. However, delete [] signals that there are one or more objects being destroyed, and thus potentially many destructors need to be called. Mixing the two incorrectly is just asking for some bad mojo.

 

There might be additional reasons why it's important to match delete with new and delete [] with new [], but I'd consider these the important base reasons.

 

Ultimately, though, you can sum it up and just say "because the C++ standard says so."


Edited by Cornstalks, 15 January 2013 - 01:13 AM.

[ I was ninja'd 71 times before I stopped counting a long time ago ] [ f.k.a. MikeTacular ] [ My Blog ] [ SWFer: Gaplessly looped MP3s in your Flash games ]

#6 brainpoint   Members   -  Reputation: 89


Posted 15 January 2013 - 01:13 AM

the delete[] will call the object's destructor function



#7 ISDCaptain01   Members   -  Reputation: 1301


Posted 15 January 2013 - 01:28 AM

I use new[] and delete[] when I need to create and delete an allocated array.


Edited by ISDCaptain01, 15 January 2013 - 01:30 AM.


#8 L. Spiro   Crossbones+   -  Reputation: 12212


Posted 15 January 2013 - 01:33 AM

the delete[] will call the object's destructor function
The previously posted answers were meant just as much for you as for the original topic-poster.
Read them. Learn from them.


L. Spiro
It is amazing how often people try to be unique, and yet they are always trying to make others be like them. - L. Spiro 2011
I spent most of my life learning the courage it takes to go out and get what I want. Now that I have it, I am not sure exactly what it is that I want. - L. Spiro 2013
I went to my local Subway once to find some guy yelling at the staff. When someone finally came to take my order and asked, “May I help you?”, I replied, “Yeah, I’ll have one asshole to go.”
L. Spiro Engine: http://lspiroengine.com
L. Spiro Engine Forums: http://lspiroengine.com/forums

#9 mk1x86   Members   -  Reputation: 442


Posted 15 January 2013 - 03:13 AM

The only difference between delete and delete[] is that in the latter case the object destructor isn't called for the pointer address only, but for all subsequent "positions" in the array until the end of the memory block is reached. In theory it should be safe to always call delete[], since the memory block would have only sizeof(object). But it's better to follow the rules to ensure compiler compatibility. You should also consider using dynamic arrays like std::vector, which do the gritty things for you.



#10 SiCrane   Moderators   -  Reputation: 9386


Posted 15 January 2013 - 09:55 AM

Let's talk about one common implementation and what exactly happens when you mix delete and delete [] with this kind of implementation.

Let's say you allocate a single std::string object with regular new. The implementation will use ::operator new to allocate sizeof(std::string) bytes, construct the object in that space, and return a pointer to the newly constructed object (which will be the same as the pointer obtained from ::operator new). When things are deleted later with scalar delete the destructor is called on the pointed to object and the memory returned with ::operator delete.

Now let's say you allocate an array of N std::string objects with new []. When you call delete [] later the compiler needs to figure out how to call the destructor for N std::string objects. What happens with one exceptionally common implementation is that new [] actually allocates memory for N * sizeof(std::string) + sizeof(size_t) (along with some possible fudge factors due to alignment or whatnot) with ::operator new. It stuffs the number of elements in the array in the first sizeof(size_t) bytes, constructs the actual elements in the rest of the allocated memory, and returns a pointer to the first element, which is not the pointer obtained from ::operator new. When delete [] is called later, it takes the pointer you pass it, subtracts the size of the count from the pointer and uses that information to determine how many objects to destroy. The memory is then freed by passing the subtracted pointer to ::operator delete.

So what happens when you use scalar delete on a std::string array allocated with new []? It destroys the first element in the array, without calling the destructor for any of the other elements in the array and then passes the address of that element to ::operator delete, which is not the same pointer that came from ::operator new. Boom.

What happens when you use delete [] on a single std::string object allocated with new? It tries to interpret the memory preceding the allocated object as a count and then calls the destructor for that many objects, and then passes the address of that count to ::operator delete, which is still not the same pointer that came from ::operator new. Boom.

Mixing delete and delete [] isn't safe in either direction.
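To make this concrete, here is a rough sketch of that count-prefix scheme (the helper names are made up for illustration; a real implementation also has to deal with alignment padding and with constructors that throw):

#include <cstddef>
#include <new>
#include <string>

std::string* newStringArray(std::size_t n)
{
    void* raw = ::operator new(sizeof(std::size_t) + n * sizeof(std::string));
    *static_cast<std::size_t*>(raw) = n;                        // stash the element count
    std::string* first = reinterpret_cast<std::string*>(
        static_cast<char*>(raw) + sizeof(std::size_t));
    for (std::size_t i = 0; i < n; ++i)
        new (first + i) std::string();                          // construct each element in place
    return first;   // note: NOT the pointer that came from ::operator new
}

void deleteStringArray(std::string* first)
{
    char* raw = reinterpret_cast<char*>(first) - sizeof(std::size_t);
    std::size_t n = *reinterpret_cast<std::size_t*>(raw);       // read the count back
    while (n-- > 0)
        first[n].~basic_string();                               // destroy in reverse order
    ::operator delete(raw);                                     // free the original pointer
}

int main()
{
    std::string* a = newStringArray(3);
    a[0] = "hello";
    deleteStringArray(a);   // runs three destructors, then frees the original block
}

Call deleteStringArray on a pointer that came from plain new (or scalar delete on the pointer returned by newStringArray) and you read garbage as the count and hand ::operator delete an address it never gave out, which is exactly the "Boom" above.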

#11 Cornstalks   Crossbones+   -  Reputation: 6966


Posted 15 January 2013 - 11:27 AM

SiCrane mentions some very good points. I'm going to dive a little deeper into the internals of memory management and try to explain why the above is such a big problem.

 

There are various strategies for managing allocated memory. The allocator is what actually allocates the memory and frees it when you call new/delete. One very common allocation strategy is to request memory from the operating system when more is needed, but when it's freed, don't immediately hand it back to the operating system. Keep it around. Why? Well, chances are the programmer is going to allocate more memory some time soon. If you give it all back to the OS right now, you'll just have to request more from the OS soon anyway (and these can all be pretty slow operations), so if the allocator hangs onto it, when you request more memory the allocator has some already available to give you. Yay, increase in speed! (There are other practical reasons why, but this is a big one.) (And of course, if the allocator has a ton of free memory it's holding onto, it might decide to hand some of it back to the OS, but still keep a "reserve" of free memory hanging around.)

 

Now, as you allocate memory and free it, your memory becomes fragmented. If you looked at it as one continuous block of memory, you'd have free chunks and allocated chunks mixed all over. How does the allocator keep track of what's allocated and what's free? Remember, this is the allocator! It can't just ask some other allocator for memory and keep a nice std::vector<ptr> of free and allocated chunks of memory. It's a pain in the butt to do, but here's a common way it's done:

 

Linked lists. In memory. When you allocate memory, the allocator does something like this:

 

void* malloc(size_t bytes) // same idea for new/new[] (horribly simplified, though)
{
    void* ptr = RequestBytesFromOperatingSystem(bytes + HEADER_SIZE);
 
    WRITE_HEADER(ptr, bytes + HEADER_SIZE); // writes header info to ptr, note this is being horribly simplified
 
    return (char*)ptr + HEADER_SIZE; // skip the header and return the usable block to the user
}

 

 

The allocator allocates an extra few bytes to keep a header for each block of memory. This header usually says how large the block is, if the block is free or allocated and in use, if the next block (at ptr + bytes + HEADER_SIZE) is free or allocated and in use, and if the previous block is free or allocated and in use. If the block is free, the allocator might use some of the space in the block to store pointers to the previous and next free blocks (maintaining a linked list).
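As an illustration, such a header tag could be packed like this (the exact sizes, bit positions and names here are hypothetical, just mirroring the format sketched below):

#include <cstdio>

// Hypothetical header tag: upper 29 bits hold the block size, low 3 bits hold the flags.
const unsigned int PREV_FREE = 1u << 2;
const unsigned int NEXT_FREE = 1u << 1;
const unsigned int THIS_FREE = 1u << 0;

unsigned int makeTag(unsigned int blockSize, unsigned int flags)
{
    // blockSize is kept a multiple of 8, so its low 3 bits are always zero
    return (blockSize & ~7u) | (flags & 7u);
}

int main()
{
    unsigned int tag = makeTag(64, THIS_FREE | PREV_FREE);
    std::printf("size = %u bytes, this block free = %d, previous block free = %d\n",
                tag & ~7u, (tag & THIS_FREE) ? 1 : 0, (tag & PREV_FREE) ? 1 : 0);
}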

 

So now let's take this idea for memory blocks, and what SiCrane said above, and document them to figure out how a block of memory might look to the allocator

 

/*
Potential structure of a memory block allocated by new (that is, this block is allocated and in use):
Header Tag (4 bytes)
    Upper 29 bits: size of this block in memory
    Bit 2: 1 if previous block is free, 0 if previous block is allocated
    Bit 1: 1 if next block is free, 0 if next block is allocated
    Bit 0: 1 if this block is free, 0 if this block is allocated
Pointer into segment table (4 bytes) (also acts as padding to maintain 8-byte alignment, like many CPUs need)
    Instead of wasting the padding, we'll store which segment into our segment table this block should be
    inserted into when it's freed (so we don't have to compute it when we free it, yay speed!)
User data (N bytes)
    This is the memory the programmer is actually able to use

 
Potential structure of a memory block allocated by new [] (that is, this block is allocated and in use):
Header Tag (4 bytes)
    Upper 29 bits: size of this block in memory
    Bit 2: 1 if previous block is free, 0 if previous block is allocated
    Bit 1: 1 if next block is free, 0 if next block is allocated
    Bit 0: 1 if this block is free, 0 if this block is allocated
Number of elements (4 bytes) (also acts as padding to maintain 8-byte alignment, like many CPUs need)
    Instead of wasting the padding, we'll use it to store the number of elements in this array
User data (N bytes)
    This is the memory the programmer is actually able to use
 

Potential structure of a free block of memory (after it's been freed by delete or delete []):
Header Tag (4 bytes)
    Upper 29 bits: size of this block in memory
    Bit 2: 1 if previous block is free, 0 if previous block is allocated
    Bit 1: 1 if next block is free, 0 if next block is allocated
    Bit 0: 1 if this block is free, 0 if this block is allocated
Pointer to the next free block (4 bytes)
    This way we can maintain a linked list of free blocks
Block data (N bytes)
    Junk
*/
 
There are various allocation strategies and memory block formats, but the above encapsulates some common practices and ideas shared by many allocators.
 
Now, when the allocator tries to free a block of memory, it's going to use the block's header (the first 4 bytes of the block) and the following 4 bytes (what those bytes represent depends on whether you called new or new [], and whether you call delete or delete [] changes how those bytes are interpreted). If you call delete on memory that came from new [], the allocator will think the 4 bytes after the header are a pointer into the segment table (which they're not; those 4 bytes hold the number of elements in the array), and boom, you've successfully corrupted your memory (or hopefully crashed). If you call delete [] on memory that came from new, the allocator will think the 4 bytes after the header are the number of elements in the array (which they're not; those 4 bytes are a pointer into the segment table), and boom, you've successfully corrupted your memory (or hopefully crashed).
 
If different block formats and allocation strategies are used, it's possible to completely destroy the allocator's linked list of free/allocated blocks by calling the wrong delete, and now you've got memory leaks, you're overwriting data and double allocating the same blocks of memory for different uses, etc. Bad, bad, bad mojo.
 
Mixing delete with new [] or delete [] with new is not safe! Yes, it's possible the allocator implements new as new [1] (which would be a valid way to do it), and delete as delete [], but banking on that idea is like playing Russian roulette (with a gun that has more than one round in it).
 
Writing a correct, proper, fast, and efficient allocator is very hard work and involves lots of tricks. The last thing you need as an allocator is someone screwing up your tricks and hacks by freeing memory wrong.

Edited by Cornstalks, 15 January 2013 - 11:30 AM.

[ I was ninja'd 71 times before I stopped counting a long time ago ] [ f.k.a. MikeTacular ] [ My Blog ] [ SWFer: Gaplessly looped MP3s in your Flash games ]

#12 xiajia   Members   -  Reputation: 163


Posted 15 January 2013 - 09:39 PM

It sounds very interesting to use "new[1]" instead of "new" and at the same time use "delete[]" instead of "delete".

 



#13 xiajia   Members   -  Reputation: 163


Posted 15 January 2013 - 09:56 PM

If I want to implement a memory recovery function, this will become very complicated.

 

void freeMemory(void * p)
{
     //how to do
}



#14 Cornstalks   Crossbones+   -  Reputation: 6966


Posted 15 January 2013 - 10:05 PM

If I want to implement a memory recovery function, this will become very complicated.

 

void freeMemory(void * p)
{
     //how to do
}

You might want to a) start a new thread, b) specify the language, and c) explain things very clearly. Writing a memory allocator is a huge pain in the butt.


[ I was ninja'd 71 times before I stopped counting a long time ago ] [ f.k.a. MikeTacular ] [ My Blog ] [ SWFer: Gaplessly looped MP3s in your Flash games ]

#15 xiajia   Members   -  Reputation: 163


Posted 15 January 2013 - 10:45 PM

Well, let me illustrate with a specific example: smart pointers. The general method is as follows:

 

 

#include <cstdio>
#include <cstddef>

#define XDELETE(p) { if (p != NULL) { delete p; p = NULL; } }

template<class T> class _XSmartP;

template<class T> class _XBackP
{
private:
    friend class _XSmartP<T>;
    T *m_p;
    size_t m_counter;
    _XBackP(T *p)
        :m_p(p)
        ,m_counter(1)
    {
        printf("_XBackP constructor called!\n");
    }
    ~_XBackP()
    {
        XDELETE(m_p);
        printf("_XBackP destructor called!\n");
    }
};

template<class T> class _XSmartP
{
public:
    _XSmartP(T *p)
        :m_backP(new _XBackP<T>(p))
    {
        printf("_XSmartP constructor called! use = %d\n", (int)m_backP->m_counter);
    }
    _XSmartP(const _XSmartP& temp)
        :m_backP(temp.m_backP)
    {
        ++m_backP->m_counter;
        printf("_XSmartP copy constructor called! use = %d\n", (int)m_backP->m_counter);
    }
    _XSmartP<T>& operator=(const _XSmartP<T>& temp)
    {
        if (this == &temp) return *this;
        ++temp.m_backP->m_counter;
        if (--m_backP->m_counter == 0)
        {
            XDELETE(m_backP);
        }
        m_backP = temp.m_backP;
        return *this;
    }
    ~_XSmartP()
    {
        printf("_XSmartP destructor called! use = %d\n", (int)m_backP->m_counter);
        if (--m_backP->m_counter == 0)
        {
            XDELETE(m_backP);
        }
    }
    T *getPtr() const
    {
        return m_backP->m_p;
    }
    T getVal() const
    {
        return *m_backP->m_p;
    }
    void setVal(T val)
    {
        *m_backP->m_p = val;
    }
private:
    _XBackP<T> *m_backP;
};

 

 if use as follow:

 

 

_XSmartP<int> temp(new int[20]);

it will cause errors because "delete" is applied to memory from "new[]". If you want to avoid this problem, you need to define a similar structure for arrays. However, this will have a lot of duplicate code. Is there a way to get the best of both worlds?


Edited by xiajia, 15 January 2013 - 10:46 PM.


#16 Cornstalks   Crossbones+   -  Reputation: 6966


Posted 16 January 2013 - 12:50 AM

  1. Don't use _XSmartP or _XBackP. Identifiers beginning with an underscore followed by an uppercase letter are reserved for the implementation.
  2. This: XDELETE(p) { if (p != NULL) { delete p; p = NULL; } } is useless. delete already checks if the pointer is NULL. Calling delete (or delete []) on a NULL pointer does nothing, and is valid code. No need to check for NULL yourself before deleting.
  3. You may want to consider looking at C++'s smart pointers.
  4. Specifically, in C++11 std::unique_ptr was added with a partial template specialization to handle both arrays and non-arrays; you may want to look at it (basically, if you specialize things properly, you can say _XSmartP<int> temp(new int); and _XSmartP<int[]> temp(new int[20]); and the right deleter will be called; see the sketch after this list).
  5. Be aware that the usage for single pointers and arrays is a little different (for example, if you have a single pointer, you may not want to provide operator [], but if you have an array, you may want to provide operator [] but not operator * or operator -> (it's up to you in the end, though)).
  6. Sometimes, you just have to duplicate more code than you'd really like.
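For reference, this is how the standard library's array specialization mentioned in point 4 behaves (a small sketch, nothing more):

#include <memory>

int main()
{
    std::unique_ptr<int>   one(new int(0));    // calls delete when it goes out of scope
    std::unique_ptr<int[]> many(new int[20]);  // the T[] specialization calls delete[]

    *one = 1;       // the single-object form provides operator* and operator->
    many[5] = 42;   // the array form provides operator[] instead
}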

[ I was ninja'd 71 times before I stopped counting a long time ago ] [ f.k.a. MikeTacular ] [ My Blog ] [ SWFer: Gaplessly looped MP3s in your Flash games ]

#17 xiajia   Members   -  Reputation: 163


Posted 16 January 2013 - 09:40 AM

Thank you very much for your advice.
About 2: it's true that "delete" already checks if the pointer is "NULL". (I do not know whether all compilers do this, but VS2005 does.)
But "delete" does not set the pointer to "NULL":


 

int *p = new int[100];
delete []p;
delete []p; // this is a runtime error

The actual situation may be more complicated than this.

About 3: the linked content is very good (I have not tried it yet), but I've also seen "boost::scoped_ptr", "boost::shared_ptr", "boost::scoped_array", "boost::shared_array", "boost::weak_ptr" and "boost::intrusive_ptr", so it is all a bit confusing to me.


Edited by xiajia, 16 January 2013 - 09:51 AM.


#18 Brother Bob   Moderators   -  Reputation: 7774


Posted 16 January 2013 - 10:09 AM

Thank you very much for your advice.
About 2: it's true that "delete" already checks if the pointer is "NULL". (I do not know whether all compilers do this, but VS2005 does.)
But "delete" does not set the pointer to "NULL":


 
int *p = new int[100];
delete []p;
delete []p; // this is a runtime error

The actual situation may be more complicated than this.

The fact that it is an error to double-delete something is a good thing, not a bad thing. If you're ever trying to double-delete a pointer, you should want your program to say so immediately, because you have a fundamental design error that has to be fixed. You should never even get to the point in the first place where a pointer can be double-deleted. Setting the pointer to null hides that problem and you may never be aware of it.



#19 Cornstalks   Crossbones+   -  Reputation: 6966


Posted 16 January 2013 - 10:16 AM

About 2: it's true that "delete" already checks if the pointer is "NULL". (I do not know whether all compilers do this, but VS2005 does.)

The C++ standard requires this. All compilers should do this (and if they don't, it's a very serious bug in the compiler).

 

but "delete" not set the pointer to "NULL"

But to be honest, setting it to NULL here is kinda useless. There are 3 places XDELETE is called: the two destructors and the assignment operator. Setting a pointer to NULL in the destructor is useless because the object is going out of scope anyway. Setting it to NULL in the assignment operator might be useful and help detect accidental dereferencing after it's deleted, but it can also mask double-delete bugs (where the logic of the program isn't very good, and it double deletes the pointer (but since it's NULL the second time, nothing happens)). Setting it to NULL in the assignment operator is something I'd consider okay, but just be aware of its pros and cons.

 

int *p = new int[100];
delete []p;
delete []p; // this is a runtime error

The actual situation may be more complicated than this.

Indeed, it's a runtime error, but I'd argue that code has faulty logic and the error is a good thing that brings the faulty logic to your attention. Hiding the faulty logic only causes headaches down the road. That's why most debuggers set deleted/invalid pointers to some garbage value (like 0xDEADBEEF, 0xFCFCFCFC, 0xFFFFFFFF, etc), so that if you have faulty logic like that, it errors out rather than hides the poor logic.

 

About 3: the linked content is very good (I have not tried it yet), but I've also seen "boost::scoped_ptr", "boost::shared_ptr", "boost::scoped_array", "boost::shared_array", "boost::weak_ptr" and "boost::intrusive_ptr", so it is all a bit confusing to me.

They're different smart pointers with different usage semantics. With a little googling, you can find some useful resources explaining the differences. Interesting factoid: Boost has shared_ptr as well as shared_array (because of the delete vs delete [] problem you're describing in this thread), and each smart pointer has a few different accessor methods (operator* and operator-> vs operator[]).
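A quick sketch of that split (the Boost headers and class names are real; the usage here is just illustrative):

#include <boost/shared_ptr.hpp>
#include <boost/shared_array.hpp>

int main()
{
    boost::shared_ptr<int>   one(new int(0));    // calls delete when the last owner goes away
    boost::shared_array<int> many(new int[20]);  // calls delete[] when the last owner goes away

    *one = 1;       // shared_ptr provides operator* and operator->
    many[5] = 42;   // shared_array provides operator[] instead
}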


[ I was ninja'd 71 times before I stopped counting a long time ago ] [ f.k.a. MikeTacular ] [ My Blog ] [ SWFer: Gaplessly looped MP3s in your Flash games ]

#20 rip-off   Moderators   -  Reputation: 7641


Posted 16 January 2013 - 10:18 AM

It sounds very interesting to use "new[1]" instead of "new" and at the same time use "delete[]" instead of "delete".

This would incur unnecessary performance overhead in the "exceptionally common" implementation SiCrane described.

 

it will cause errors because "delete" is applied to memory from "new[]". If you want to avoid this problem, you need to define a similar structure for arrays. However, this will have a lot of duplicate code. Is there a way to get the best of both worlds?

One way is to support the idea of a "custom deleter", like shared_ptr<> does. For example, if you are managing a third party resource, such as an SDL_Surface, you need to use SDL_FreeSurface(). By supporting a custom deleter, you can support this usage, while also supporting the idea of a "shared array".
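For example, something along these lines (a sketch only; "image.bmp" is just a placeholder file name):

#include <memory>
#include "SDL.h"

int main(int argc, char* argv[])
{
    SDL_Init(SDL_INIT_VIDEO);
    {
        // The second constructor argument is the custom deleter: when the last owner
        // lets go of the surface, SDL_FreeSurface is called instead of delete.
        std::shared_ptr<SDL_Surface> surface(SDL_LoadBMP("image.bmp"), SDL_FreeSurface);
    }
    SDL_Quit();
    return 0;
}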

 

That said, I concur with Cornstalks' point #5. You might want to have a different client interface for a shared array.

 

One option is to have a private smart container which supports a custom deleter, and then expose public shared pointer and shared array classes, which provide a convenient client interface and handle the custom deleter themselves. The latter would be implemented in terms of the former, minimising code duplication.

 

Or as Cornstalks mentions in points #3 and #4, you could use the standard library.

 

About 2: it's true that "delete" already checks if the pointer is "NULL". (I do not know whether all compilers do this, but VS2005 does.)

It is required behaviour of a standards conforming compiler. All the popular compilers get this one right.

 

but "delete" not set the pointer to "NULL"

 

Neither does XDELETE:

int *one = new int(42);
int *two = one;
XDELETE(one);
XDELETE(two); // Uh oh!

XDELETE does not solve the general problem. There is a body of thought that such "safe delete" macros merely hide a certain category of pointer bug. The real solution is to understand the lifecycle of the objects involved and ensure it gets deleted once and only once.

 

However, if a given pointer is going to be pointed at another object, then it is correct to nullify it. This is not common though.

 

 

If you really do want to use XDELETE, despite Cornstalks' excellent point #2, you can and should provide it as a regular free function rather than a macro:

template<typename T>
void XDELETE(T *& pointer) {
    delete pointer;
    pointer = nullptr;
}





