
Lil_Lloyd

Member Since 01 Feb 2006
Offline Last Active Jun 05 2013 07:41 AM

Topics I've Started

My custom mem allocator isn't working because....

18 February 2013 - 08:21 AM

...my line of code in the free function isn't accessing the value I expect!

I keep the free blocks of memory in a doubly linked list; used blocks also carry size information but are not kept in the list.
 
struct MallocStruct{
	MallocStruct*  next;
	MallocStruct*  prev;
	unsigned char* buffer;
	size_t size;
};

const int MAX_BYTES = 16 * 1024 * 1024;

static MallocStruct* freeListHeader;
static unsigned char DataBuffer[MAX_BYTES];
 

void  InitMalloc(){
    freeListHeader = (MallocStruct*)DataBuffer;
    freeListHeader->next   = NULL;
    freeListHeader->prev   = NULL;
    freeListHeader->size   = MAX_BYTES - sizeof(MallocStruct);
    freeListHeader->buffer = (unsigned char*)(freeListHeader + sizeof(MallocStruct));
}
 
void* MyMalloc(int size){

	MallocStruct** ptr = &freeListHeader;

	//round up size
	if(size % 4 != 0){
		size = (float)size/4 + 1;
		size *= 4;
	}

	while(size + sizeof(MallocStruct) > (*ptr)->size){
		ptr = &(*ptr)->next;

		if(*ptr == NULL) return NULL;
	}
	
	int newSize = (*ptr)->size - (size + sizeof(MallocStruct));
	MallocStruct* next = (*ptr)->next;
	MallocStruct* prev = (*ptr)->prev;

	//now change the pointer in the list, and get a pointer to this current location

	(*ptr)->next   = NULL;
	(*ptr)->prev   = NULL;
	(*ptr)->size   = size;
	(*ptr)->buffer = (unsigned char*)((*ptr) + sizeof(MallocStruct));

	unsigned char* data = (*ptr)->buffer;

	//update the freelist pointer
	unsigned char* ptr3 = (*ptr)->buffer + (*ptr)->size;
	*ptr = (MallocStruct*)ptr3;
	(*ptr)->buffer = (unsigned char*)((*ptr) + sizeof(MallocStruct));
	(*ptr)->size = newSize;
	(*ptr)->next = next;
	(*ptr)->prev = prev;

	if((*ptr)->next != NULL) (*ptr)->next->prev = *ptr;
	if((*ptr)->prev != NULL) (*ptr)->prev->next = *ptr;

	return data;
}


void  MyFree(void* data){
	//create a free node, uncoalesced
	MallocStruct* newFree = (MallocStruct*)((unsigned char*)data - sizeof(MallocStruct));
	int size = newFree->size + sizeof(MallocStruct);

	//now insert it into the linked list
	MallocStruct** ptr = &freeListHeader;

	//case 1 - no prev for freeListHeader and address below
	if((*ptr)->prev == NULL && newFree < *ptr){
		(*ptr)->prev  = newFree;
		newFree->next = *ptr;
		newFree->size = size;
		*ptr = newFree;
		return;
	}

	//case 2 - no next for the freeListHeader and address higher
	if((*ptr)->next == NULL && newFree > *ptr){
		(*ptr)->next = newFree;
		newFree->prev = *ptr;
		newFree->size = size;
		return;
	}

	//case3 other wise needs to be inserted into a list somewhere...
	while(newFree > (*ptr)->next){

		//if next is NULL we need to go to case 4 - inserting at the end
		if((*ptr)->next == NULL) break;

		ptr = &((*ptr)->next);

	}

	//case 4 - end of the list
	if((*ptr)->next == NULL){
		(*ptr)->next = newFree;
		newFree->prev = (*ptr);
		newFree->size = size;
		return;
	}

	//back to case 3 - list insertion
	newFree->next = (*ptr)->next;
	(*ptr)->next->prev = newFree;
	(*ptr)->next = newFree;
	newFree->prev = *ptr;
	newFree->size = size;
}

More specifically, the lines
 
MallocStruct* newFree = (MallocStruct*)((unsigned char*)data - sizeof(MallocStruct));
int size = newFree->size + sizeof(MallocStruct);
 
aren't extracting the size value I expect: newFree->size is being read as 0, so the size is recorded as 16, the size of a MallocStruct alone. I don't understand why at all!
 
Someone please explain my hopefully silly and trivial mistake.

Frustum culling - culling nearby objects?

07 February 2013 - 12:07 PM

I have a quadtree terrain, but nearby quads are being culled. Like so:

[screenshots not preserved]
I was thinking it might be my frustum extraction code? This is in OpenGL, by the way, in case matrix layout has any bearing on the problem, and I am using the glm mat4x4 and vec4 types to represent my projection matrices and frustum planes.

 

void Frustum::Extract(const vec3& eye,const mat4x4& camMatrix)
{
	m_pos = eye;

	m_modView = camMatrix;
	m_projMatrix = GraphicsApp::GetInstance()->GetProjection();
	mat4x4 MVPMatrix = m_projMatrix * m_modView;

	for(int plane = 0; plane < 3; plane ++){
		int index = plane * 2;
		m_planes[index] = MVPMatrix[3] - MVPMatrix[plane];
		NormalizePlane(index);
	}

	for(int plane = 0; plane < 3; plane ++){
		int index = plane * 2 + 1;
		m_planes[index] = MVPMatrix[3] + MVPMatrix[plane];
		NormalizePlane(index);
	}
	
}

void  Frustum::NormalizePlane(int index){
  float normFactor = m_planes[index][0] * m_planes[index][0] + m_planes[index][1] * 
                     m_planes[index][1] + m_planes[index][2] * m_planes[index][2];
  m_planes[index] /= normFactor;
}


Tutorial: an XBOX360 pad Class using GLFW

06 February 2013 - 08:05 AM

Hi

 

I always wanted to get some form of feedback on the only tutorial I've ever written! It's for people who want to use Xbox 360 pads with GLFW (an OpenGL windowing framework).

Please enjoy:

http://www.lloydcrawley.com/reading-input-from-an-xbox-360-pad-using-glfw-pt-1/


INSANE quadtree bugs/artefacts

06 February 2013 - 08:02 AM

Hi people

I've been working on a quad tree demo for some time and used this gentleman's code as a study to learn from: [link not preserved]

However, my results for the island scene look like this:

[screenshot not preserved]
I started by following the base code loosely, but to get the rendering of the quadtree correct I ended up adhering quite closely to the original terrain files, as I was slowly being driven crazy. Even now, when I can see very little difference between the two sets of code, there must be some fault somewhere.

 

The important code is as follows:

 

int halfWidth  = m_width/2;
int halfHeight = m_height/2;

//init verts
vec3* vertData = new vec3[m_numVertices];

for(int z = 0; z < m_height; z++){
	for(int x = 0; x < m_width; x++){

		int index = z * m_width + x;

		int imgIndexZ = z;
		int imgIndexX = x;
		if(z == imgHeight){
			imgIndexZ--;
		}
		if(x == imgWidth){
			imgIndexX--;
		}

		int imgIndex = imgIndexZ * m_width + imgIndexX;

		vertData[index].x = (x - halfWidth) * LANDSCAPE_SCALE;
		vertData[index].y = (float)data[imgIndex] * HEIGHT_SCALE;
		vertData[index].z = (z - halfHeight) * LANDSCAPE_SCALE;
	}
}

 

The above is how I init my vertex values, with data being the image data holding the height values. The land mass itself renders fine if I render it as a whole block, without a quadtree, using an indexed VAO and GL_TRIANGLES.

 

The index-generating function for the terrain chunks, however, if I go the alternative route, is as follows:

 

int heightmapDataPosX = posX;
int heightmapDataPosY = posY;
unsigned int offset = (unsigned int)pow(2.0f, (float)(lod));
unsigned int div = (unsigned int)pow(2.0f, (float)(depth+lod));
	
int heightmapDataSizeX = HMsizeX/(div) + 1;
int heightmapDataSizeY = HMsizeY/(div) + 1;

int nHMWidth   = heightmapDataSizeX;
int nHMHeight  = heightmapDataSizeY;
int nHMOffsetX = heightmapDataPosX;
int nHMOffsetY = heightmapDataPosY;

GLuint nHMTotalWidth = HMsizeX;
GLuint nHMTotalHeight = HMsizeY;

m_indOffsetW[lod] = nHMWidth*2;
m_indOffsetH[lod] = nHMHeight-1;
GLuint numIndices = (nHMWidth-3)*(nHMHeight-4)*2;
m_indices[lod] = new GLuint[numIndices]; 
m_numIndices[lod] = numIndices;

int index = 0;

for(GLuint y=1; y< nHMHeight-3; y++)
{
  for(GLuint x=1; x< nHMWidth-2; x++)
  {
    GLuint id0 = COORD(x*(offset) + nHMOffsetX, y*(offset) + nHMOffsetY);
    GLuint id1 = COORD(x*(offset) + nHMOffsetX, y*(offset) + nHMOffsetY+(offset));
    m_indices[lod][index++] = id0;
    m_indices[lod][index++] = id1;
  }
}

 

I have to alter the termination conditions for y and x, and numIndices, because otherwise I get a memory access error for some reason, even though the number of vertices should be OK...

 

I know it's masochistic, but if anyone can help me I'd be so grateful! I've been going crazy for days... My github repo is here: https://github.com/LloydGhettoWolf/TitanicTerrain/tree/workingversion


Naivete Challenged: Interleaving arrays causes slow performance?

28 January 2013 - 08:06 AM

Ok, so I was thinking about how to speed up my OpenGL application, and remembered hearing something about interleaving array data. To my simple mind it seemed that when I pass attributes to a shader in separate buffer objects, some kind of jumping around in the video card's memory would occur; so if I interleaved the data for every vertex, this should provide the locality-of-reference benefit of a cache, and things should speed up.

 

However, after some experimenting it caused a decline of 2 frames per second in my overall speed. So WHY does this happen? I'm clearly drinking from the wrong bottle of GL goodness.

