problems with GL_ARB_vertex_buffer_object

Started by
1 comment, last by plasm 20 years, 8 months ago
Hello, I have some problems with VBOs on Nvidia graphics cards. With VBOs I get about 10% of the performance I had using VAR. I suspect the drop is caused by glBufferDataARB always placing the buffer in GPU memory, regardless of which usage enum I pass. When updating the geometry in the VBO I need to map the buffer and read from / write to it a lot, randomly. Readback from GPU memory on my GeForce4 is very slow, and that's what causes the slowdown. I doubt that always using GPU memory is the intended behavior. I tried every other allowed enum; none of them had any effect on performance. With standard vertex arrays, updating the geometry in main memory, the framerate is 10 times higher.

Is this behavior known? Will it be fixed so that, say, GL_DYNAMIC_DRAW_ARB places the buffer in DMA-accessible memory instead of GPU memory? In case I messed up, here are some snippets from my code:

.....
glGenBuffersARB(2, m_iVBOBuffers);

glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER_ARB, m_iVBOBuffers[1]);
glBufferDataARB(GL_ELEMENT_ARRAY_BUFFER_ARB, m_iStripIndices * sizeof(unsigned int), &m_pStrips[1], GL_STATIC_DRAW_ARB);

glBindBufferARB(GL_ARRAY_BUFFER_ARB, m_iVBOBuffers[0]);
glBufferDataARB(GL_ARRAY_BUFFER_ARB, m_iFFVert * sizeof(phVertexArrayElement), m_pVertexArray, GL_DYNAMIC_DRAW_ARB);

glNormalPointer(GL_FLOAT, sizeof(phVertexArrayElement), (const GLvoid *)sizeof(phVertex));
glEnableClientState(GL_NORMAL_ARRAY);
glVertexPointer(3, GL_FLOAT, sizeof(phVertexArrayElement), (const GLvoid *)NULL);
glEnableClientState(GL_VERTEX_ARRAY);

......
m_pVertexArray = (phVertexArrayElement *)glMapBufferARB(GL_ARRAY_BUFFER_ARB, GL_READ_WRITE_ARB);
... (reading/writing from/to the mapped buffer *a lot*; when profiled, this part of the code gets 13 times faster when not using VBOs on my GeForce4 4400. Vertex coordinates as well as their normals are updated at least 30 times per second, usually more often)
glUnmapBufferARB(GL_ARRAY_BUFFER_ARB);

......
glDrawElements(GL_TRIANGLE_STRIP, m_iStripIndices, GL_UNSIGNED_INT, NULL);

At *m_pVertexArray, vertex coordinates and their normals are stored interleaved. The topology of the geometry is static; only the vertices and normals change over time. I checked every single GL function call with glGetError(): there are no errors, and glUnmapBufferARB(GL_ARRAY_BUFFER_ARB) returns true.

As mentioned above, the best performance I have had so far came from partially using GL_NV_vertex_array_range, allocating the VAR buffer with wglAllocateMemoryNV() using parameters that place it in DMA-accessible memory. Is there a way to allocate this type of memory without relying on extensions? I didn't activate the rest of the VAR extension, since that (surprisingly) cost performance; just using memory from this special range sped up the rendering.

If anyone can confirm my observations or show me where I messed up, I would be glad. I read the other threads here regarding VBOs and they didn't help me.

Michael
http://oss.sgi.com/projects/ogl-sample/registry/
You should NEVER read back from a VBO, no matter where the buffer is allocated. Even if your data is in AGP memory, it will still be slow, since that is an uncached area. Another thing slowing you down is random access; that is a big no-no when writing to AGP.

If your data is not too big, you might want to keep a copy of it in local (system) memory. Then update that copy and upload it to the VBO in one big chunk.

You should never let your fears become the boundaries of your dreams.

This topic is closed to new replies.
