Weird problem

I have a weird problem here. I am working on a program, and it works when I run it from inside VC++ .NET, but when I run the executable from outside VC it does not behave the same. I am doing some image processing, and I get different results if I don't run from inside the IDE. What would cause this? Aren't I running the exact same application? Also, this is in release mode.
Also, I have some ugly memory allocations. I'm pretty sure they're fine, but here's the code for them:
bool AllocateReconMemory()
{
	FilterCoeff = new float[1 + NumPixelsX];
	SpreadingTerm = new float[2 * NumPixelsX];
	InputData_ZeroPadded = new float[2 * NumPixelsX];

	RawImage = new float*[NumPixelsX + 1];
	for (int i = 0; i < NumPixelsX + 1; i++)
		RawImage[i] = new float[NumPixelsZ];

	ProcessedImage = new float**[ImageDimX];
	for (int i = 0; i < ImageDimX; i++)
		ProcessedImage[i] = new float*[ImageDimX];
	for (int i = 0; i < ImageDimX; i++)
		for (int j = 0; j < ImageDimX; j++)
			ProcessedImage[i][j] = new float[NumPixelsZ];

	return true;
}

void FreeReconMemory()
{
	delete [] FilterCoeff;
	delete [] SpreadingTerm;
	delete [] InputData_ZeroPadded;

	for (int i = 0; i < (NumPixelsX + 1); i++)
		delete [] RawImage[i];
	delete [] RawImage;

	for (int i = 0; i < ImageDimX; i++)
		for (int j = 0; j < ImageDimX; j++)
			delete [] ProcessedImage[i][j];
	for (int i = 0; i < ImageDimX; i++)
		delete [] ProcessedImage[i];
	delete [] ProcessedImage;
}
Erm.. what's up?
Yes it should be the same version, but how exactly is it different?

BTW: I am scared and confused by your memory allocations (particularly the second pointer allocations(?)).
What's wrong with
RawImage = new float[(NumPixelsX + 1) * NumPixelsZ];
instead? 1D arrays make much more sense to manage.
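With the flat layout, access looks roughly like this (just a sketch; the row-major x/z ordering is my assumption, not something from your code):

// one contiguous block instead of an array of row pointers
float* RawImage = new float[(NumPixelsX + 1) * NumPixelsZ];

// writing pixel (x, z) - this replaces RawImage[x][z]
RawImage[x * NumPixelsZ + z] = value;

// and freeing it becomes a single call
delete [] RawImage;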
Why the +1 on x?
OK, yes, I would love to make them 1D arrays, but I am interfacing with lots of old code, and I don't want to have to change all the old code to use 1D arrays.
OK, it appears to be having trouble with the freeing of the memory, but only sometimes does it cause problems. It doesn't crash when run outside of the IDE; it only seems to make my output images a little darker, and then finally just black. I know my memory allocation code looks hideous, but isn't it correct? Also, if I don't deallocate the memory, it works fine, so the problem must be there, but how is it wrong? Someone shed some light on this. Thanks.
Well, I took another look. The deallocation does look correct, yes. (I... ergh... think. Could somebody else back me up on this?)
It's probably more of a common-sense problem, like accidentally accessing the memory after it's been freed? Check your program's run structure thoroughly.
(In fact, chuck the whole thing out and rewrite it all neatly, object-based - you'll save yourself so much trouble in the long run.)
'Fraid I can't say anything else other than that, and how damn ugly it is :/ soz
Nope, I don't try to access the memory after it has been freed.
The amount of memory the function is allocating is more than:
4 * NumPixelsZ * NumPixelsX * NumPixelsX
bytes, which for NumPixelsX and NumPixelsZ both being, say, 500 is around half a GB (I'm guessing what the real values might be). I'd say it is at least possible that the allocation is failing.
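Spelled out with those guessed values (just to show where the half-GB figure comes from):

// 4 * NumPixelsZ * NumPixelsX * NumPixelsX, with 500 plugged in for both (a guess)
const double approxGB = 4.0 * 500 * 500 * 500 / (1024.0 * 1024.0 * 1024.0);  // about 0.47 GB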

Initialise all of the pointers to NULL where they are defined (or in the constructor). Check that each pointer is not NULL after allocation, or use the option to make allocation failures throw an exception if possible.
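For example, something along these lines (just a sketch of the idea, using your variable names; nothrow new gives you the NULL-check variant, the default new gives you the exception variant):

#include <new>  // std::nothrow, std::bad_alloc

// NULL-check variant: nothrow new returns NULL on failure instead of throwing
FilterCoeff = new(std::nothrow) float[1 + NumPixelsX];
if (FilterCoeff == NULL)
	return false;  // allocation failed - bail out instead of carrying on

// exception variant: the default new throws std::bad_alloc on failure
try
{
	RawImage = new float*[NumPixelsX + 1];
}
catch (std::bad_alloc&)
{
	return false;
}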

Take out the two unnecessary duplicate "for (int i = 0; i < ImageDimX; i++)" loops, and deallocate everything in the reverse order to allocation (my personal preference):

You can reduce the number of calls to new if you want, by piggybacking allocations of the same type (which I've started to do below). Pros: better cache usage, speed, deallocation simplicity, etc. Cons: you have to make it obvious to anyone maintaining your code that some things must not be deleted separately.
#include <cstring>  // std::memset

#ifdef DEBUG
// fill a freed pointer with a recognisable garbage pattern so stale accesses are obvious
#define UNINITIALISE_PTR(x) do{ std::memset(&(x), 0xCD, sizeof(x)); }while(0)
#else
#define UNINITIALISE_PTR(x)
#endif

bool AllocateReconMemory()
{
	FilterCoeff = new float[1 + 5 * NumPixelsX];
	SpreadingTerm = FilterCoeff + (1 + NumPixelsX);         // piggybacked allocation
	InputData_ZeroPadded = SpreadingTerm + 2 * NumPixelsX;  // piggybacked allocation

	RawImage = new float*[NumPixelsX + 1];
	for (int i = 0; i < NumPixelsX + 1; i++)
		RawImage[i] = new float[NumPixelsZ];

	ProcessedImage = new float**[ImageDimX];
	for (int i = 0; i < ImageDimX; i++)
	{
		ProcessedImage[i] = new float*[ImageDimX];
		for (int j = 0; j < ImageDimX; j++)
			ProcessedImage[i][j] = new float[NumPixelsZ];
	}
	return true;
}

void FreeReconMemory()
{
	for (int i = 0; i < ImageDimX; i++)
	{
		for (int j = 0; j < ImageDimX; j++)
			delete [] ProcessedImage[i][j];
		delete [] ProcessedImage[i];
	}
	delete [] ProcessedImage;
	UNINITIALISE_PTR(ProcessedImage);

	for (int i = 0; i < NumPixelsX + 1; i++)
		delete [] RawImage[i];
	delete [] RawImage;
	UNINITIALISE_PTR(RawImage);

	delete [] FilterCoeff;  // also frees SpreadingTerm and InputData_ZeroPadded
	UNINITIALISE_PTR(FilterCoeff);
}
You should also add a lot of asserts to your array access lines of code to ensure that you're not writing past the end or beginning (if you haven't already).
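Something like this (a sketch; the index names x, z and value are invented for illustration):

#include <cassert>

// guard a write into RawImage before it happens
assert(x >= 0 && x <= NumPixelsX);   // RawImage has NumPixelsX + 1 rows
assert(z >= 0 && z < NumPixelsZ);
RawImage[x][z] = value;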

My UNINITIALISE_PTR macro ensures that you can detect any attempt to access the memory through that pointer after it has been freed (in your debug version, of course).

NumPixelsX + 1 definitely seems logically wrong. The + 1 shouldn't need to be there. If it is absolutely necessary, there should at least be a comment in the code to explain why it is present.

Hope that helps. It's not ideal, but an improvement anyway.
"In order to understand recursion, you must first understand recursion."
My website dedicated to sorting algorithms
I don't see anything wrong with the code you've posted (except for the complete lack of exception safety). Most likely you're writing beyond the bounds of your arrays somewhere else in the program.

iMalc: Piggybacking allocations like that seems like a questionable optimisation to me on a modern PC. I would advise against using it until a profiler shows that such array accesses are a bottleneck because to my mind it is liable to cause deletion errors (as you noted) and provides questionable benefit at the cost of a great deal of code clarity. Also, why do you use a macro to uninitialise a pointer rather than an inline function? It seems overly vulnerable to name-stomping by another library.
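(For instance, something like this templated helper - just a sketch of what I mean:)

#include <cstring>  // std::memset

// inline replacement for the UNINITIALISE_PTR macro (sketch only)
template <typename T>
inline void uninitialise_ptr(T*& ptr)
{
#ifdef DEBUG
	std::memset(&ptr, 0xCD, sizeof(ptr));  // same recognisable garbage fill as the macro
#else
	(void)ptr;  // no-op in release builds
#endif
}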

Enigma
I knew someone would say that :-)

Absolutely! It can seem a questionable optimisation, but it has its place. Simple code allocating a 2D array can get by with just two allocations instead of potentially thousands. It can make a big performance difference to BOTH the allocation and the deallocation, as well as to cache access patterns, and it reduces OS memory tracking overhead.
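For instance, a two-allocation 2D array could look something like this (just a sketch, with made-up function names):

// rows-by-cols 2D array built from only two allocations
float** MakeImage2D(int rows, int cols)
{
	float** image = new float*[rows];      // allocation 1: the row pointers
	image[0] = new float[rows * cols];     // allocation 2: one contiguous data block
	for (int i = 1; i < rows; i++)
		image[i] = image[0] + i * cols;    // point each row into the block
	return image;                          // still usable as image[i][j]
}

void FreeImage2D(float** image)
{
	delete [] image[0];  // the data block
	delete [] image;     // the row pointers
}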

I probably shouldn't have mentioned it, though, as it should really only be used when genuinely needed; the loss of maintainability is quite possibly not worth it most of the time. This was a bad example of its usage anyway.

Yeah that could be an inline function I suppose.
"In order to understand recursion, you must first understand recursion."
My website dedicated to sorting algorithms

