# Rendering Variable Heightmaps

## Recommended Posts

tritone    126
Hello, I am working on a project to create a 3D graph of some image data; basically it is a heightmap. The input data contains intensity (height), and from its position in the buffer I can calculate an X and Z coordinate. The data is at least 256 x 256 and can go up to 1024 x 1024. My question is: what is the best way to render this? I'd like to be able to render new data on the fly, so a display list is out. I think an octree might work, but there would be a lot of overhead if the input data changes and I have to recreate the octree. Currently I am rendering this with triangle strips in a vertex array (see below), and getting a measly 5 FPS with a 512 x 512 dataset (although I am not doing any frustum culling or anything like that). I'd also like to be able to run this on as many cards as possible. Here's the code I am using to create the array of triangle strips:
```cpp
// Build the vertex array as one continuous strip, alternating the X
// direction each row so consecutive rows connect.
for (int iZIndex = 0; iZIndex < (m_iHeight - 1); iZIndex++)
{
    if (bSwitch)
    {
        // Right-to-left pass over this row pair
        for (int iXIndex = (m_iWidth - 1); iXIndex >= 0; iXIndex--)
        {
            SetVertexColor(m_fArray[iXIndex][iZIndex + 1], iCount);
            m_pVertices[iCount].x = iXIndex;
            m_pVertices[iCount].y = m_fArray[iXIndex][iZIndex + 1];
            m_pVertices[iCount].z = iZIndex + 1;
            iCount++;

            SetVertexColor(m_fArray[iXIndex][iZIndex], iCount);
            m_pVertices[iCount].x = iXIndex;
            m_pVertices[iCount].y = m_fArray[iXIndex][iZIndex];
            m_pVertices[iCount].z = iZIndex;
            iCount++;
        }
    }
    else
    {
        // Left-to-right pass; run all the way to m_iWidth so both
        // directions emit the same number of vertices per row
        for (int iXIndex = 0; iXIndex < m_iWidth; iXIndex++)
        {
            SetVertexColor(m_fArray[iXIndex][iZIndex], iCount);
            m_pVertices[iCount].x = iXIndex;
            m_pVertices[iCount].y = m_fArray[iXIndex][iZIndex];
            m_pVertices[iCount].z = iZIndex;
            iCount++;

            SetVertexColor(m_fArray[iXIndex][iZIndex + 1], iCount);
            m_pVertices[iCount].x = iXIndex;
            m_pVertices[iCount].y = m_fArray[iXIndex][iZIndex + 1];
            m_pVertices[iCount].z = iZIndex + 1;
            iCount++;
        }
    }

    bSwitch = !bSwitch;
}
```


Then I set up the vertex arrays:

```cpp
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);

// Interleaved layout: position and color live in the same CVector3,
// so both pointers use the same stride
glVertexPointer(3, GL_FLOAT, sizeof(CVector3), &(m_pVertices->x));
glColorPointer(3, GL_FLOAT, sizeof(CVector3), &(m_pVertices->R));
```


And finally I render:

```cpp
glBegin(GL_TRIANGLE_STRIP);

for (int i = 0; i < m_iTotalPoints; i++)
{
    glArrayElement(i);
}

glEnd();
```
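As a quick sanity check (my own sketch, not part of the project code): with both row directions covering all m_iWidth columns, the strip-building loop emits 2 * m_iWidth vertices per row pair, so m_iTotalPoints must equal the value below or the render loop reads past the array. The same count also lets the per-vertex glArrayElement loop collapse into a single glDrawArrays call, which avoids one GL function call per vertex:

```cpp
// Expected vertex count for the strip layout built above:
// (height - 1) row pairs, each emitting 2 * width vertices.
int totalStripVertices(int width, int height)
{
    return (height - 1) * 2 * width;
}

// The glBegin/glArrayElement/glEnd loop is then equivalent to:
//   glDrawArrays(GL_TRIANGLE_STRIP, 0,
//                totalStripVertices(m_iWidth, m_iHeight));
```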


Any help or ideas would be greatly appreciated! Thanks. [Edited by - tritone on August 4, 2004 12:42:38 PM]

zedzeek    529
>>Currently I am rendering this with triangle strips in a vertex array (see below), and getting a measly 5FPS with a 512 x 512 dataset (although I am not doing any frustum culling or anything like that). I'd also like to be able to run this on as many cards as possible.<<

That's a lot of data to draw every frame. Break the terrain up into patches of, say, 16x16 or 32x32, and put those in a quadtree or octree (a quadtree is normally used for terrain).
Traverse the quadtree, testing whether each node is on screen (if so, check its children all the way down to the quadtree leaves); if the leaves are visible, send them to the card to be drawn.

i.e. only send to the card what needs to be drawn (not the whole scene) and you'll see a large framerate increase.
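The traversal described above can be sketched roughly like this (all names are illustrative, and the visibility predicate stands in for a real bounding-box-vs-frustum test):

```cpp
#include <functional>
#include <vector>

// Half-open grid extent of a terrain patch: columns [x0, x1), rows [z0, z1).
struct Region { int x0, z0, x1, z1; };

struct QuadNode
{
    Region region;
    std::vector<QuadNode> children;  // empty => leaf patch
};

// Recursively split the grid in four until a node is no bigger than
// patchSize on a side; those leaves are the renderable patches.
QuadNode buildQuadtree(Region r, int patchSize)
{
    QuadNode node{r, {}};
    if (r.x1 - r.x0 <= patchSize && r.z1 - r.z0 <= patchSize)
        return node;
    int mx = (r.x0 + r.x1) / 2, mz = (r.z0 + r.z1) / 2;
    node.children.push_back(buildQuadtree({r.x0, r.z0, mx, mz}, patchSize));
    node.children.push_back(buildQuadtree({mx, r.z0, r.x1, mz}, patchSize));
    node.children.push_back(buildQuadtree({r.x0, mz, mx, r.z1}, patchSize));
    node.children.push_back(buildQuadtree({mx, mz, r.x1, r.z1}, patchSize));
    return node;
}

// Walk the tree; skip whole subtrees whose region fails the visibility
// test, and collect the surviving leaf patches for drawing.
void collectVisible(const QuadNode& n,
                    const std::function<bool(const Region&)>& visible,
                    std::vector<Region>& out)
{
    if (!visible(n.region))
        return;  // entire subtree is off-screen
    if (n.children.empty())
        out.push_back(n.region);
    else
        for (const QuadNode& c : n.children)
            collectVisible(c, visible, out);
}
```

In place of the stand-in predicate you would test each node's bounding box against the view frustum; only the leaf patches that survive get sent to the card.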
