jakesee

GL_ELEMENT_ARRAY_BUFFER_ARB with client-side index pointer


Hi,

I've been trying to get VBOs working for many hours... hope someone can help.
I want to use non-VBO (client-side) indices into a height-field VBO of vertices using this struct:


struct Vertex3
{
Vector3 position;
Vector3 normal;
Color color;
};

// creation
int count = 1201 * 1201; // USGS DEM file
Vertex3* data = new Vertex3[count];
int size = count * sizeof(Vertex3);
glGenBuffersARB(1, &vboid);
glBindBufferARB(GL_ARRAY_BUFFER_ARB, vboid);
glBufferDataARB(GL_ARRAY_BUFFER_ARB, size, data, GL_STATIC_DRAW_ARB);

// drawing
glBindBufferARB(GL_ARRAY_BUFFER_ARB, vboid);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, sizeof(Vertex3), (const GLvoid*)0);
glEnableClientState(GL_NORMAL_ARRAY);
glNormalPointer(GL_FLOAT, sizeof(Vertex3), (const GLvoid*)sizeof(Vector3));
glEnableClientState(GL_COLOR_ARRAY);
glColorPointer(4, GL_UNSIGNED_BYTE, sizeof(Vertex3), (const GLvoid*)(sizeof(Vector3) * 2));

// unbind
glBindBufferARB(GL_ARRAY_BUFFER_ARB, 0);
glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER_ARB, 0);

// really draw
// What is the correct method here?
glDrawElements(GL_TRIANGLES, index_count, GL_UNSIGNED_INT, indices); // ERROR!





Thanks for helping!

[Edited by - jakesee on November 11, 2010 10:52:17 AM]

Brother Bob:

Assuming you don't have a version 3.0 or later core-profile rendering context, you can just pass the index pointer to glDrawElements. Your code, however, is wrong, since you put the vertex arrays in an element array buffer. You need to put them in an array buffer (you want GL_ARRAY_BUFFER, not GL_ELEMENT_ARRAY_BUFFER; the latter is for the index array).
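
Roughly, a minimal sketch of what that looks like (reusing vboid, data, size, index_count and indices from your post):

// vertex data goes into an array buffer
glBindBufferARB(GL_ARRAY_BUFFER_ARB, vboid);
glBufferDataARB(GL_ARRAY_BUFFER_ARB, size, data, GL_STATIC_DRAW_ARB);

// at draw time the attribute pointers read from the bound array buffer...
glBindBufferARB(GL_ARRAY_BUFFER_ARB, vboid);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, sizeof(Vertex3), (const GLvoid*)0);

// ...while no element array buffer is bound, so the last argument of
// glDrawElements is treated as a client-side pointer to the indices
glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER_ARB, 0);
glDrawElements(GL_TRIANGLES, index_count, GL_UNSIGNED_INT, indices);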

Hi Brother Bob,

I have re-tried the whole thing and I updated my original post to reflect the latest (more correct) version of the code.

I followed http://www.opengl.org/wiki/VBO_-_just_examples closely, but I still can't figure it out.

I have ensured that vboid is non-zero (1) and that all the pointers are non-NULL as well. But since it crashes at glDrawElements(), and there are so many possible things that can go wrong, is there any way I can get OpenGL to tell me which item is wrong?

@Brother Bob,

indices is an array of unsigned int, indexing into data.

unsigned int* indices = new unsigned int[index_count];

indices and index_count are generated by an LOD algorithm.

The indices should be correct, because everything works without the VBO.
It is only when the VBO is used (i.e. with glBindBuffer) that glDrawElements() crashes.

samoth:

On a different note, what you are doing does not make much sense. One uses vertex buffer objects to make use of faster uploads and to coalesce the PCIe (or whatever bus) transfer latency with async transfers (and async draw calls).

Using client-side indices will make your application stall in the same way client-side vertex data would in the first place. It does not help much that some of the data is transferred via a VBO if the index data is not.

Using a client-side array means the driver is forced to do the data transfer at the exact time of the draw call and block your application until the transfer is 100% finished.
That is because the client owns the data, so the server does not know when the data is valid except at that exact moment; the data could become invalid the next microsecond.
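
For comparison, a rough sketch of keeping the index data server-side too (reusing the names from your post; iboid is just a made-up name):

GLuint iboid;
glGenBuffersARB(1, &iboid);
glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER_ARB, iboid);
// GL_DYNAMIC_DRAW_ARB may fit better if the LOD algorithm regenerates the indices often
glBufferDataARB(GL_ELEMENT_ARRAY_BUFFER_ARB, index_count * sizeof(unsigned int), indices, GL_STATIC_DRAW_ARB);

// with an element array buffer bound, the last argument of glDrawElements
// becomes a byte offset into that buffer instead of a client pointer
glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER_ARB, iboid);
glDrawElements(GL_TRIANGLES, index_count, GL_UNSIGNED_INT, (const GLvoid*)0);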

@samoth

Well... I had already considered that I might be doing it wrong, but I wasn't sure that storing multiple sets of LOD indices as VBOs made justifiable sense either. In any case, testing that theory requires getting the VBO working in the first place!

Now, back to the present: I managed to get the VBO working, thanks to Brother Bob's confirmation that the code was not wrong. For your information, I tested it, and storing just the vertices in a VBO already gives a significant performance increase.

@Brother Bob
Thanks again for your help/confirmation. It turns out that the main problem, I believe, was that I was rendering to/from two different contexts (I had two windows) when I ran the code. I initially also did not use glew/GLee/etc. for extension loading; instead I defined my own entry points following NeHe's tutorial, although I'm not sure whether that affected anything.

I have since switched to GLEW and am able to make it work in single-window mode. However, I would still like to get it working for more than one window.

Would you (or anyone) mind dropping a reply saying whether glewInit() needs to be done for each wglCreateContext or for each wglMakeCurrent?
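
Roughly what I mean, as a hypothetical sketch (hdc1/hdc2 stand for the device contexts of my two windows; this is not my actual code):

HGLRC rc1 = wglCreateContext(hdc1);
HGLRC rc2 = wglCreateContext(hdc2);

wglMakeCurrent(hdc1, rc1);
glewInit(); // is calling it once here enough...

wglMakeCurrent(hdc2, rc2);
glewInit(); // ...or does it have to be repeated for every context / every wglMakeCurrent?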

Thanks for all the help!
