Alessandro

Improving performance: advice needed

This topic is 2538 days old, which is more than the 365-day threshold we allow for new replies.


I have a terrain grid made of 1024×1024 quads (1,048,576 quads). I'm currently using vertex arrays, no culling (the whole terrain has to be visible), and no GLSL shaders:

glVertexPointer( 3, GL_FLOAT, 0, verts );
glEnableClientState( GL_VERTEX_ARRAY );
glDrawElements( GL_TRIANGLES, numIndices, GL_UNSIGNED_INT, indices );

I'm using a "degraded" vertex array set (1/16th of the above) during camera operations, so that moving or rotating the camera stays fast.
However, for operations where I can't use the degraded set, such as deforming the terrain (I wrote a routine that lets me push and pull vertices with the mouse), performance drops to about 15-20 fps. I suppose that is expected with such a large poly count and my current implementation.

So I'd like some suggestions on techniques that would let me improve performance (the terrain grid has to stay fixed, so no octree or similar). Thanks
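For reference on the numbers involved: each quad in the grid becomes two triangles, so the index buffer passed to glDrawElements holds six GL_UNSIGNED_INT entries per quad. A sketch of how such a grid index buffer is typically generated (a hypothetical helper, not the code from this post):

```c
/* Hypothetical helper: build a GL_TRIANGLES index buffer for a grid of
 * quadsX x quadsY quads laid over a (quadsX+1) x (quadsY+1) vertex grid,
 * stored row-major. Writes 6 indices per quad and returns the total count. */
static unsigned buildGridIndices(unsigned quadsX, unsigned quadsY,
                                 unsigned *indices)
{
    unsigned n = 0;
    for (unsigned y = 0; y < quadsY; ++y) {
        for (unsigned x = 0; x < quadsX; ++x) {
            unsigned v0 = y * (quadsX + 1) + x;  /* bottom-left  */
            unsigned v1 = v0 + 1;                /* bottom-right */
            unsigned v2 = v0 + (quadsX + 1);     /* top-left     */
            unsigned v3 = v2 + 1;                /* top-right    */
            /* two triangles per quad, consistent winding */
            indices[n++] = v0; indices[n++] = v1; indices[n++] = v2;
            indices[n++] = v2; indices[n++] = v1; indices[n++] = v3;
        }
    }
    return n;
}
```

For a 1024×1024 grid this yields 6,291,456 indices, which is why per-frame work on the full-resolution mesh is expensive.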

Try using VBOs.
Introduction, examples, and tips: http://www.opengl.org/wiki/Vertex_Buffer_Object

For dynamic data: http://www.opengl.org/wiki/VBO_-_more#Dynamic_VBO

I get a crash to desktop when running the program after converting it to VBOs. I hope you can give me some suggestions; details below.

Here is the vertex array draw() version that works just fine:


void draw()
...
glEnableClientState( GL_VERTEX_ARRAY );
glVertexPointer(3, GL_FLOAT, 0, verts);
glDrawElements( GL_TRIANGLES, numIndices, GL_UNSIGNED_INT, indices );
...


To implement VBOs, I did the following:


void init()
{
    GLuint vboId = 0; // ID of VBO for the vertex array
    GLuint iboId = 0; // ID of VBO for the index array
}

void createVBOs()
{
    glGenBuffersARB(1, &vboId);
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, vboId);
    glBufferDataARB(GL_ARRAY_BUFFER_ARB, sizeof(verts), verts, GL_DYNAMIC_DRAW_ARB); // tried also with GL_STATIC_DRAW_ARB

    glGenBuffersARB(1, &iboId);
    glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER_ARB, iboId);
    glBufferDataARB(GL_ELEMENT_ARRAY_BUFFER_ARB, sizeof(indices), indices, GL_DYNAMIC_DRAW_ARB); // tried also with GL_STATIC_DRAW_ARB
}

void draw()
{
    ...
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, vboId); // bind vertex buffer
    glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER_ARB, iboId); // bind index buffer
    glEnableClientState( GL_VERTEX_ARRAY ); // activate vertex array
    glVertexPointer(3, GL_FLOAT, 0, 0);
    glDrawElements( GL_TRIANGLES, numIndices, GL_UNSIGNED_INT, 0 );
    glDisableClientState( GL_VERTEX_ARRAY ); // deactivate vertex array
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, 0); // unbind vbo (this does not delete it)
    glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER_ARB, 0); // unbind ibo
    ...
}


Any help would be very much appreciated. Thanks

"I get a crash to desktop .. glBindBufferARB" -- Have you checked the ARB functions are available? Have you initialised GLEW properly?

Don't use "glBufferData" because that's a GL operation which will probably block your GL pipeline until completion, map the buffer, write to it, do something else, then unmap and then draw -- this turns it into an OS operation and that should get parallelised by your OSes memory control -- the memory transfers should run in the background via a PCI DMA transfer. It'll only block if you try and access the memory before the txn has completed.

Also, lay your vertex data out differently and use a shader to combine it: give each vertex two sets of attributes, one for the XY position and one for the height. Why? Your XYs will never change, so this minimises the amount of changeable data that has to be sent over the bus. Make sure you set the usage hints on the two VBOs appropriately.
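A minimal sketch of that split on the CPU side (a hypothetical layout, assuming the height lives in Z): the xy stream would go into a GL_STATIC_DRAW VBO uploaded once, the height stream into a GL_DYNAMIC_DRAW VBO, and a vertex shader recombines the two attributes into the final position.

```c
/* Hypothetical helper: split an interleaved XYZ vertex array into a
 * static XY stream and a dynamic height stream. Only `height` ever
 * needs re-uploading when the terrain is deformed; `xy` never changes. */
static void splitVerts(const float *xyz, unsigned numVerts,
                       float *xy, float *height)
{
    for (unsigned i = 0; i < numVerts; ++i) {
        xy[2 * i + 0] = xyz[3 * i + 0]; /* x: static */
        xy[2 * i + 1] = xyz[3 * i + 1]; /* y: static */
        height[i]     = xyz[3 * i + 2]; /* z: the only part edits change */
    }
}
```

With this layout, a terrain edit only touches the height VBO, which is a third of the data that an interleaved XYZ upload would move.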


Don't use "glBufferData" because that's a GL operation which will probably block your GL pipeline until completion, map the buffer, write to it, do something else, then unmap and then draw


"glBufferData" is OK. You're thinking of "glMapBuffer".

glBufferData is perfectly fine at load time; there's nothing drawing yet so there's no harm in blocking. If you were to update the buffer at runtime you would need to look at alternatives.
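If you do update at runtime, one alternative (my assumption, not something stated in the thread) is to re-upload only the rows of vertices the deformation touched, via glBufferSubData or glMapBufferRange. Computing the byte range for a row-major XYZ vertex grid is simple arithmetic:

```c
/* Hypothetical helper: compute the byte offset and size of the vertex
 * rows [firstRow, lastRow] in a row-major grid of vertsPerRow XYZ float
 * vertices, suitable for passing to
 * glBufferSubData(GL_ARRAY_BUFFER, offset, size, data). */
static void dirtyRowRange(unsigned firstRow, unsigned lastRow,
                          unsigned vertsPerRow,
                          unsigned long *offset, unsigned long *size)
{
    const unsigned long rowBytes =
        (unsigned long)vertsPerRow * 3ul * sizeof(float);
    *offset = firstRow * rowBytes;                 /* start of first dirty row */
    *size   = (lastRow - firstRow + 1ul) * rowBytes; /* inclusive row span */
}
```

For a small brush this uploads a few rows instead of the whole 1025×1025-vertex buffer each frame.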

Regarding glMapBuffer, you really need to have a read of this: http://www.stevestreeting.com/2007/03/16/glmapbuffer-how-i-mock-thee/
