
phil_t

Member Since 06 Oct 2006
Offline Last Active Today, 11:22 AM

Posts I've Made

In Topic: Warning conversion from size_t to int in x64

29 January 2016 - 02:26 PM


This is a problem of poor interface design for the container, not a problem with unsigned types. I would prefer something like this, using an output parameter for the index:

bool IndexOf( const T& x, size_t& index ) const;

Then the usage of the method is:

size_t index;
if ( container.IndexOf( x, index ) )
{
    … use index
}

 

In general I agree with "don't conflate error codes with values", but when it comes to array indices I'm not as convinced (after all, even the C++ standard library conflates valid iterators with errors).

 

The pattern above becomes cumbersome in the common case when you know IndexOf will succeed.


In Topic: Sorting with dependencies?

07 January 2016 - 02:31 PM

So, generally, this is just a sort with hierarchical sort criteria (which is pretty straightforward - just a single sorting pass comparing on multiple criteria).

 

Except that you have some need to group all trees together in one draw call. In that case, I would just make the trees a single object (from the point of view of the sorting algorithm), with a depth equal to the minimum depth of all trees.


In Topic: C++ VirtualFunctions [SOLVED]

24 December 2015 - 12:30 PM


but I think I was mostly thinking in terms of destructors or temporary objects

 

Agreed, no point in setting to null in those cases.

 


That's not really an issue in MSVC at least, since once you inspect the pointer it will show you just garbage data, which is enough to identify an invalid pointer - but null values are still easier to catch, I'll give you that.

 

Well, re garbage data... when looking at what the invalid pointer points to, the debugger will show you either

 

1) ????? if the memory has been unmapped from the process's address space (causes access violation on delete, easy to catch)

2) whatever was left there before (has it already been deleted? you don't really know... the data looks "correct")

3) whatever has been placed there by new stuff that has been allocated (you'll need to inspect the data to see if it's "correct" or not)

 

Of course, with the debug heap, case 2 will show 0xdddddddd (or whatever the marker is for freed memory)

 

 


Honestly, I'm more curious - why? Since I started using unique_ptr I don't want to use anything else; I'm still in the process of converting my old code because it makes things so much safer and more reliable, and produces way cleaner code. Unless I have very specific requirements for memory management, but yeah... is there anything in particular that you like about raw pointer management, or is it rather habit? ;)

 

Agreed, I can't think of any good reason to eschew smart pointers. They eliminate things like double-delete bugs and make intent clearer (assuming you're using the correct smart pointer for the job), with pretty much no drawbacks.


In Topic: C++ VirtualFunctions [SOLVED]

24 December 2015 - 10:47 AM


I would like to disagree. If you call delete a second time, you are likely to get an immediate crash on that line, or some sort of memory corruption. You might think this is a bad thing, but actually it can be a good thing. Every time something is deleted twice, there is some sort of logic error in the code. It's better to expose those issues and fix them than to carry them around forever and silently ignore them.

 

I definitely agree with "fail fast", but I disagree with your advice in this particular case. The problem is that passing an invalid pointer to delete doesn't always cause a crash right away (unless you're always running with Application Verifier, which isn't realistic). It can silently corrupt the heap, causing problems later, or it might cause no problems at all if something has already been reallocated at that memory location (not unlikely when space for an object of the same size is requested shortly after the delete, since there will be a nice hole in the heap there).

 

It also makes it harder to diagnose issues, because you have no idea if the pointer is "valid" when inspecting it in the debugger.

 

The other thing is that it is not uncommon to have code that, say, resets the state of an object, which would include deleting any members that were new'd. In that case there is no logic error, and you would need to store additional state to know whether a pointer is valid or not. So why not just set it to null?

 

But yes, definitely use smart pointers if you can (which do set their internal pointer to null when they delete it).


In Topic: Should I release vertex buffer?

22 December 2015 - 04:42 PM

COM objects are ref-counted, and the general convention is that retrieving a pointer to a COM interface increments the ref count on the object it points to. So yes, when you're done using it, you need to call Release on the vertex buffer returned from GetVertexBuffer to decrement the ref count. Otherwise you'll have a memory leak.

