

This topic is now archived and is closed to further replies.


problems using GL_ARB_vertex_buffer_object


I posted this message accidentally in the NeHe forum; sorry if posting it here too upsets anyone, but this board was the intended location.

Hello, I have some problems with VBOs on NVIDIA graphics cards. With VBOs I get about 10% of the performance I had using VAR. I assume the drop is caused by glBufferDataARB always placing the buffer in GPU memory, regardless of which usage enum I pass. When updating the geometry in the VBO I need to map the buffer and read from and write to it a lot, at random locations. Readback from GPU memory on my GeForce4 is pretty poor, and that's what causes the slowdown. I doubt that glBufferDataARB always using GPU memory is the intended behavior. I tried every other allowed usage enum; none of them had any effect on performance. When using standard vertex arrays and updating the geometry in main memory, the framerate is ten times higher.

Is this behavior known? Will it be fixed so that, for example, GL_DYNAMIC_DRAW_ARB places the buffer in DMA-accessible memory instead of GPU memory?

In case I messed up, here are some snips from my code:

```cpp
// setup
glGenBuffersARB(2, m_iVBOBuffers);

glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER_ARB, m_iVBOBuffers[1]);
glBufferDataARB(GL_ELEMENT_ARRAY_BUFFER_ARB,
                m_iStripIndices * sizeof(unsigned int),
                &m_pStrips[1], GL_STATIC_DRAW_ARB);

glBindBufferARB(GL_ARRAY_BUFFER_ARB, m_iVBOBuffers[0]);
glBufferDataARB(GL_ARRAY_BUFFER_ARB,
                m_iFFVert * sizeof(phVertexArrayElement),
                m_pVertexArray, GL_DYNAMIC_DRAW_ARB);

// with a VBO bound, the "pointer" argument is a byte offset into the buffer
glNormalPointer(GL_FLOAT, sizeof(phVertexArrayElement),
                (const GLvoid*)sizeof(phVertex));
glEnableClientState(GL_NORMAL_ARRAY);
glVertexPointer(3, GL_FLOAT, sizeof(phVertexArrayElement), (const GLvoid*)0);
glEnableClientState(GL_VERTEX_ARRAY);

// ...

while (AppRuns) {  // just a meta-loop construct, not present in the source in this form
    m_pVertexArray = (phVertexArrayElement*)glMapBufferARB(GL_ARRAY_BUFFER_ARB,
                                                           GL_READ_WRITE_ARB);

    // ... reading/writing from/to the mapped buffer *a lot*. This part of
    // the code, when profiled, gets 13 times faster when not using VBOs on
    // my GeForce4 4400. Vertex coordinates as well as their normals are
    // updated at least 30 times per second, usually more often.

    glUnmapBufferARB(GL_ARRAY_BUFFER_ARB);
    // ...
    glDrawElements(GL_TRIANGLE_STRIP, m_iStripIndices, GL_UNSIGNED_INT, NULL);
    SwapBuffers();
}
```

At *m_pVertexArray, vertex coordinates and their normals are stored interleaved. The topology of the geometry is static; only the vertices and the normals change over time. I checked every single GL function call with glGetError(): there are no errors, and glUnmapBufferARB(GL_ARRAY_BUFFER_ARB) returns true.

As mentioned above, I got the best performance to date by partially using GL_NV_vertex_array_range: allocating the VAR buffer with wglAllocateMemoryNV(), with parameters that result in DMA-accessible memory being used. Is there a way to allocate this type of memory without relying on extensions? I didn't activate the rest of the VAR extension, since that (surprisingly) resulted in a loss of performance; just using memory from this special range sped up the rendering.

If anyone can confirm my observations, or show me where I messed up, I would be glad. I read the other threads here regarding VBOs and they didn't help me.

Michael
