Trouble with glDrawRangeElements

I've been having trouble using glDrawRangeElements as I thought I should be able to. I have the VBO generated and populated. I can draw the entire object just fine. Now, I want to be able to draw, say, only the first half or second half of the object. So I tried modifying the start, end, and count parameters. In my original code that draws the entire object properly, I have:

glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER, Triangles.VBOName);
glDrawRangeElements(GL_TRIANGLES, 0, Triangles.IndexCurrentSize, Triangles.IndexCurrentSize, GL_UNSIGNED_INT, 0);
I'm trying to modify it to do something like:

glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER, Triangles.VBOName);
glDrawRangeElements(GL_TRIANGLES, start_buffer, stop_buffer, stop_buffer - start_buffer, GL_UNSIGNED_INT, 0);
start_buffer is where in the buffer I want to start drawing, stop_buffer is where I want to end, and stop_buffer - start_buffer should be the count, the size of the buffer to draw. I've ensured that start and stop are multiples of 3. When I try this, stuff draws, but not the triangles I expect. Any ideas?
You misunderstand the usage: start specifies the minimum vertex index used and end the maximum. They don't specify the range of indices you want drawn.

If you only want to draw half, simply set the index pointer to the middle of the indices (0 is the first index) and divide the count by 2.

HTH
So, to draw the first half of the object, would I do:
glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER, Triangles.VBOName);
glDrawRangeElements(GL_TRIANGLES, 0, Triangles.IndexCurrentSize, Triangles.IndexCurrentSize / 2, GL_UNSIGNED_INT, Triangles.IndexCurrentSize / 2);

I'm a bit confused, because I think one of the books I have misrepresents what these parameters do.
Quote:Original post by Mantear
So, to draw the first half of the object, would I do:
glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER, Triangles.VBOName);
glDrawRangeElements(GL_TRIANGLES, 0, Triangles.IndexCurrentSize, Triangles.IndexCurrentSize / 2, GL_UNSIGNED_INT, Triangles.IndexCurrentSize / 2);

I'm a bit confused, because I think one of the books I have misrepresents what these parameters do.


Setting the pointer like that probably won't work; try this instead (what you want is the offset in bytes, not the index into the array):

(char*)NULL + ((Triangles.IndexCurrentSize / 2) * sizeof(unsigned int))


There is a macro for this too

#define BUFFER_OFFSET(i) ((char *)NULL + (i))
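
For example, drawing the second half of the triangles could then look like this (a sketch only, assuming Triangles.IndexCurrentSize is the total number of unsigned-int indices; the start/end values here are the loose ones discussed further down):

// The offset into the element buffer is in bytes, so scale by sizeof(unsigned int).
glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER, Triangles.VBOName);
glDrawRangeElements(GL_TRIANGLES,
                    0,                              // start: min vertex index used
                    Triangles.IndexCurrentSize,     // end: conservative max vertex index
                    Triangles.IndexCurrentSize / 2, // count: number of indices to draw
                    GL_UNSIGNED_INT,
                    BUFFER_OFFSET((Triangles.IndexCurrentSize / 2) * sizeof(unsigned int)));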


HTH
Great, that worked just like I wanted. Thanks.

I'm still a little confused as to what some of the parameters are supposed to accomplish. If the last parameter lets you give the offset, that's pretty much saying where to start, and if the count parameter (parameter 4) says how much to draw, what's the purpose of the start and end parameters (parameters 2 and 3)?
'count' gives the number of entries in the index array; however, the index array may well reference the same vertex more than once.

The memory offset gives you the starting point in the index buffer, i.e. where in memory the first index you want drawn is located.

Now, the start and end parameters give the min and max vertex indices that are in use in that portion of the index array.

For example, if you were drawing a cube with quads your index array would look like this :

(each group of 4 indices making up one quad, in the order: front, top, back, bottom, left, right)
[0,1,2,3],[0,3,5,4],[5,4,7,6],[6,1,2,7],[0,4,7,1],[3,5,6,2]

Now, that would give you a start of 0 and an end of 7 but a count of 24.
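
Translated into the actual call, that cube would be drawn roughly like so (a sketch, assuming those 24 unsigned-int indices are already in the bound element buffer):

// start = 0 (lowest vertex index used), end = 7 (highest), count = 24 indices total
glDrawRangeElements(GL_QUADS, 0, 7, 24, GL_UNSIGNED_INT, 0);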

The reason it's done this way is indeed subtle, and worthwhile to know. Short answer: it's about avoiding a performance hit.

Obviously, the minimum and maximum indices used could easily be computed by OpenGL itself, just by iterating over the relevant portion of the index array. The reason OpenGL doesn't do this for you is performance.

As long as you provide the min and max values, it knows exactly what portion of the vertex array will be used, without ever having to read the vertex array from memory. OpenGL can then set up a DMA session to send that portion of the array straight from memory to the GPU, without ever passing through the CPU.

Specifying 0 for the min and the total number of vertices for the max will never lead to incorrect results; it just may cause a minor performance hit, as the GPU reads more data than necessary. For ease of debugging, I highly suggest not trying to tighten these values until everything else is working.
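
(If you do want to tighten start/end later, the scan described above is only a few lines. A hypothetical helper, where indices/count describe the slice of the index array you're about to draw:)

// Find the min and max vertex index referenced by a slice of an index array.
void ComputeIndexRange(const unsigned int *indices, size_t count,
                       unsigned int *minIndex, unsigned int *maxIndex)
{
    *minIndex = indices[0];
    *maxIndex = indices[0];
    for (size_t i = 1; i < count; ++i)
    {
        if (indices[i] < *minIndex) *minIndex = indices[i];
        if (indices[i] > *maxIndex) *maxIndex = indices[i];
    }
}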
My reason for doing this was so I could load multiple versions of an object into one set of buffers. At run time, I can choose which version to display by going to the offset in the buffers where that version starts, and then drawing from the buffers only until that version of the object is done. I may not easily know which vertex data I'm going to be using, so I end up just specifying the whole vertex array anyway, but I can still have it draw just the section I want.
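
(A sketch of that bookkeeping, with a hypothetical ObjectRange struct recording where each version's indices start and how many it has; BUFFER_OFFSET is the macro from earlier in the thread:)

// One record per object version stored in the shared buffers.
struct ObjectRange
{
    size_t  firstIndex; // position of this version's first index in the element buffer
    GLsizei indexCount; // number of indices belonging to this version
};

void DrawObject(const ObjectRange &obj, GLuint elementVBO, GLuint vertexCount)
{
    glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER, elementVBO);
    // Conservative start/end covering the whole vertex array: correct, just not optimal.
    glDrawRangeElements(GL_TRIANGLES, 0, vertexCount - 1, obj.indexCount,
                        GL_UNSIGNED_INT,
                        BUFFER_OFFSET(obj.firstIndex * sizeof(unsigned int)));
}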

Thanks for the info guys.
