Using VBOs with multiple glDrawArrays

19 comments, last by web383 11 years, 10 months ago
Hello,

I'm trying to use VBOs but realized the theory is not quite clear to me. I was hoping somebody could answer a few basic questions. As an example, suppose I want to draw 2 triangles: on the first I want to enable the coordinates and the colors; on the second I want to enable the coordinates, textures, and normals.

My thought was to pack everything into a single buffer large enough to contain all data. The coordinates, texture coordinates, normals, and colors are all in separate arrays. Since the two triangles do not use the same information, my coordinates array contains xyz for 6 points; my colors array contains rgb for 3 points; the normals array also contains information for 3 points; and the texture coordinates also contains information for 3 points.

The "stride" in all calls to glXXXPointer is set to zero as coordinates, normals, etc. are all packed.

1) Do I need to assume that all vertices share all attributes - i.e., coordinates, normals, textures, etc. - as far as copying data to the VBO with, say, glBufferSubData?

2) My plan is to call glDrawArrays twice: once for the first triangle and once for the second. But glDrawArrays requires a starting point. Does this mean I have to "make room" in the VBO for coordinates, colors, normals, and textures for ALL points in the model, even if they won't ever be referenced (because, say, texcoords won't be enabled when drawing the first triangle)?

3) Of course, my model is much more complicated than just 2 triangles - I usually have on the order of about 1M geometric primitives, with all types of combinations for normals, textures, colors, etc. Sometimes some are on, and sometimes they are off. I would like to copy all my data to the VBO so I can quickly rotate the model that is already in the GPU. I'm just not clear whether I have to make a worst-case scenario assumption that all vertices always have all attributes.

4) I could also have separate buffers for each attribute: one buffer for coordinates; one for textures; one for color, etc. But if I have textures on a single triangle out of 1M triangles, would this mean I have to assume that all triangles have textures so that glDrawArrays can find the texture properly?

I hope I made my problem clear. If somebody could clear this up for me, I would appreciate it.

Thanks!
Do I need to assume that all vertices share all attributes - i.e., coordinates, normals, textures, etc. - as far as copying data to the VBO with, say, glBufferSubData?

Yes.
Stride, in glXXXPointer, is used to specify the byte offset between consecutive vertices, yes, but a vertex in glXXXPointer's case is the whole trio: position, normal, and color. Correct usage is as follows, assuming you have a vertex struct in the style of:


typedef struct vertex_t
{
float x, y, z;
float nx, ny, nz;
unsigned char r, g, b, a;
} Vertex;


You can specify the byte offset for each attribute of the vertex by using glXXXPointer as such (28 = sizeof(Vertex)):

// glVertexPointer reads 3 floats at ptr for the position,
// then moves 28 bytes ahead to the next vertex's position
glVertexPointer(3, GL_FLOAT, 28, ptr);
// glNormalPointer always assumes 3 components per normal and
// otherwise works just like glVertexPointer. The first 12 bytes
// of each vertex hold the position, so you tell OpenGL to start
// 12 bytes into the array for the normal data, then move
// 28 bytes ahead to the next normal
glNormalPointer(GL_FLOAT, 28, ptr + 12);
// same as before: read 4 unsigned bytes as color data, skipping
// 12 bytes of position plus 12 bytes of normal, then move
// 28 bytes ahead to the next color
glColorPointer(4, GL_UNSIGNED_BYTE, 28, ptr + 24);


In the case of VBOs, the pointer argument is a byte offset into the bound buffer rather than a real pointer, so you can replace "ptr + ##" with "((char*)NULL + (##))", typically wrapped in a macro.
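
Purely as an illustration, here is how that looks against a bound VBO (BUFFER_OFFSET and vbo are names made up for this example; vbo would have been created earlier with glGenBuffers/glBufferData):

#define BUFFER_OFFSET(bytes) ((char*)NULL + (bytes))

glBindBuffer(GL_ARRAY_BUFFER, vbo);
glVertexPointer(3, GL_FLOAT, 28, BUFFER_OFFSET(0));
glNormalPointer(GL_FLOAT, 28, BUFFER_OFFSET(12));
glColorPointer(4, GL_UNSIGNED_BYTE, 28, BUFFER_OFFSET(24));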

To draw, simply enable what you want to be drawn on the client side before you issue your drawing command.
So, in your triangle example:


glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);

// draw first triangle

glEnableClientState(GL_NORMAL_ARRAY);

// draw second triangle

glDisableClientState(GL_NORMAL_ARRAY);
glDisableClientState(GL_COLOR_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);
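
For completeness, the "draw" comments above would expand to plain glDrawArrays calls. A sketch, assuming the first triangle's vertices sit at indices 0-2 of the arrays and the second triangle's at indices 3-5:

// first triangle: 3 vertices starting at index 0
glDrawArrays(GL_TRIANGLES, 0, 3);
// ... enable/disable the extra arrays as shown above ...
// second triangle: 3 vertices starting at index 3
glDrawArrays(GL_TRIANGLES, 3, 3);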


EDIT: Missed your 3rd question.

I'm just not clear whether I have to make a worst-case scenario assumption that all vertices always have all attributes.

If you were drawing objects without color, or normals, or whatever, just disable them on the client side before you draw; the absent data will be skipped over and never looked at. If you have, say, two different structs for vertices, like ColorPoint and BlankPoint, just specify different byte offsets and strides when calling glXXXPointer. It helps when you're worried about memory consumption: no reason to allocate a ton of space if you aren't going to use any of it. :)
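
For example (just a sketch of the ColorPoint/BlankPoint idea; the struct layouts and the colorBase/blankBase char* pointers are made up for illustration, and would become byte offsets once the data lives in a VBO):

typedef struct { float x, y, z; unsigned char r, g, b, a; } ColorPoint;  /* 16 bytes */
typedef struct { float x, y, z; } BlankPoint;                            /* 12 bytes */

/* colored batch: positions + colors, 16-byte stride */
glVertexPointer(3, GL_FLOAT, sizeof(ColorPoint), colorBase);
glColorPointer(4, GL_UNSIGNED_BYTE, sizeof(ColorPoint), colorBase + 12);

/* plain batch: positions only, no color array enabled, 12-byte stride */
glVertexPointer(3, GL_FLOAT, sizeof(BlankPoint), blankBase);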
hopper.dustin@gmail.com
glVertexPointer (3, GL_FLOAT, 28, ptr);

Don't do this.

There are a number of reasons why, including compiler packing rules and the size of a float on your chosen platform, not to mention that if you ever want to add another member to your Vertex struct you will need to go through EVERY - SINGLE - PLACE where it might be used in a gl*Pointer call and change the stride.

Yuck yuck yuck.

Do this instead:

glVertexPointer (3, GL_FLOAT, sizeof (Vertex), ptr);

Safe, portable, clean, robust, has been used in OpenGL code all over the world for 16+ years.
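
The same goes for the attribute offsets - offsetof keeps them tied to the struct as well. A sketch, using the Vertex struct from the earlier post and the same ptr base (a char* base pointer, or a byte offset when a VBO is bound):

#include <stddef.h>   /* offsetof */

glVertexPointer (3, GL_FLOAT, sizeof (Vertex), ptr + offsetof (Vertex, x));
glNormalPointer (GL_FLOAT, sizeof (Vertex), ptr + offsetof (Vertex, nx));
glColorPointer (4, GL_UNSIGNED_BYTE, sizeof (Vertex), ptr + offsetof (Vertex, r));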

Direct3D has need of instancing, but we do not. We have plenty of glVertexAttrib calls.

You can specify the byte offset for each attribute of the vertex by using glXXXPointer as such (28 = sizeof(Vertex)):
So why not put it in your sample code and keep everything unambiguous then? The OP has already expressed some confusion about the usage here - providing sample code containing hard-coded vertex sizes (and creating a risk that the mistake will be propagated via copy/paste) is not wise in the circumstances.

Direct3D has need of instancing, but we do not. We have plenty of glVertexAttrib calls.

First of all, thanks to everybody that responded! I truly appreciate it!

Now to the specifics:

1) I actually have multiple calls - possibly thousands or even millions! - to store the data in the vbos. Then, during the rendering, I need to traverse this data. The model is usually the same, but the view may change. I had thought of creating just a single struct that could potentially hold everything like you described. My problem is that during the storage phase the data is not packed: I have coordinates in one array, normals in another, texture coordinates in another, etc. So if I'm going to store the data in a uniform way then I need to use CPU time to move the data around into my struct format. I have found that sometimes my machines end up CPU-bound - NOT GPU bound! - so this moving actually consumes a lot of time.

So what I did was the following:

(a) For each "packet" of information to be stored in the VBO, I keep on the heap a few flags (packed bits) that indicate which attributes are present, plus an int with the number of points. I then call glBufferSubData to copy that piece into the GPU, doing this sequentially for the coordinates, normals, colors, etc. without having to actually touch the data. I also keep track of each packet's offset from the beginning of the VBO and store it along with the flags and point count.

(b) During the rendering I only enable the client states for whatever I will need, then call glXXXPointer for each entity using the offset I stored before, assuming a stride of zero, followed by glDrawArrays. I would say that the average size of each packet is around 1000 points (at least I make an effort to buffer the incoming data this way before sending it to the VBO, as long as the flags do not change).

Wouldn't this be a reasonable way of accommodating the different options?
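
Roughly, the rendering side of (b) looks like the sketch below (simplified; the Packet struct and the HAS_* flag names are just made up to illustrate what I store per packet):

typedef struct {
    unsigned flags;        /* which attributes this packet carries (one bit per attribute) */
    int      numPoints;
    size_t   coordOffset;  /* byte offsets into the VBO, recorded during glBufferSubData */
    size_t   normalOffset;
    size_t   colorOffset;
} Packet;

/* rendering one packet p */
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, (char*)NULL + p->coordOffset);
if (p->flags & HAS_NORMALS) {
    glEnableClientState(GL_NORMAL_ARRAY);
    glNormalPointer(GL_FLOAT, 0, (char*)NULL + p->normalOffset);
}
if (p->flags & HAS_COLORS) {
    glEnableClientState(GL_COLOR_ARRAY);
    glColorPointer(3, GL_FLOAT, 0, (char*)NULL + p->colorOffset);
}
glDrawArrays(GL_TRIANGLES, 0, p->numPoints);
/* disable whatever was enabled before the next packet if its flags differ */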

2) Another "related" question: I'm using both a Windows and a Linux machine. Both have NVIDIA cards that, according to NVIDIA, support OpenGL 4 or higher. My question: the "default" location for the header files and the libraries are such that, on both machines, I only see OpenGL version 1.1. Is there a way to get header files and libraries I can use that would automatically support the version of OpenGL claimed by the NVIDIA card? I would assume that once the card is installed these libaries would be made available, but I haven't been able to find them anywhere. If anybody has a clue to this problem I would appreciate any help. I'm currently stuck on Windows and I have to use hacks and OpenGL extensions on Linux to test all this. There has to be a better, cleaner way.

Thanks again to all!!

glVertexPointer (3, GL_FLOAT, 28, ptr);

Don't do this.

Do this instead:

glVertexPointer (3, GL_FLOAT, sizeof (Vertex), ptr);

Safe, portable, clean, robust, has been used in OpenGL code all over the world for 16+ years.

Don't do this, glVertexPointer() is deprecated (but the use of sizeof is of course still valid). You should use at least OpenGL 3, which means using glVertexAttribPointer() instead. That also means you have to define your own shaders. While it is some extra work, you can probably get better performance out of it.
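
For example, the position/normal/color setup from earlier would look roughly like this with generic attributes (a sketch; the attribute indices 0, 1 and 2 have to match whatever your own vertex shader declares):

glBindBuffer(GL_ARRAY_BUFFER, vbo);
glEnableVertexAttribArray(0);   // position
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (char*)NULL + 0);
glEnableVertexAttribArray(1);   // normal
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (char*)NULL + 12);
glEnableVertexAttribArray(2);   // color, normalized from bytes to 0..1
glVertexAttribPointer(2, 4, GL_UNSIGNED_BYTE, GL_TRUE, sizeof(Vertex), (char*)NULL + 24);
glDrawArrays(GL_TRIANGLES, 0, 3);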


1) I actually have multiple calls - possibly thousands or even millions! - to store the data in the vbos. Then, during the rendering, I need to traverse this data. The model is usually the same, but the view may change. I had thought of creating just a single struct that could potentially hold everything like you described. My problem is that during the storage phase the data is not packed: I have coordinates in one array, normals in another, texture coordinates in another, etc. So if I'm going to store the data in a uniform way then I need to use CPU time to move the data around into my struct format. I have found that sometimes my machines end up CPU-bound - NOT GPU bound! - so this moving actually consumes a lot of time.

It is common to reorganize data before sending it to the VBO. If you pack all vertex attributes together, near each other in memory for each vertex, you usually get better performance.
One idea to ease the CPU-bound problem is to use two threads: one that controls OpenGL, and one that prepares the data. Note however that OpenGL itself is not thread safe, so all OpenGL calls should go through one thread only.
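
As for the reorganization itself, a minimal sketch, assuming separate float arrays for coordinates, normals and colors like you described, and interleaving them into the Vertex struct from the first reply (all the array names here are made up):

for (int i = 0; i < numPoints; ++i) {
    interleaved[i].x  = coords[3*i+0];
    interleaved[i].y  = coords[3*i+1];
    interleaved[i].z  = coords[3*i+2];
    interleaved[i].nx = normals[3*i+0];
    interleaved[i].ny = normals[3*i+1];
    interleaved[i].nz = normals[3*i+2];
    interleaved[i].r  = (unsigned char)(colors[3*i+0] * 255.0f);   /* float 0..1 -> byte */
    interleaved[i].g  = (unsigned char)(colors[3*i+1] * 255.0f);
    interleaved[i].b  = (unsigned char)(colors[3*i+2] * 255.0f);
    interleaved[i].a  = 255;
}
glBufferData(GL_ARRAY_BUFFER, numPoints * sizeof(Vertex), interleaved, GL_STATIC_DRAW);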

2) Another "related" question: I'm using both a Windows and a Linux machine. Both have NVIDIA cards that, according to NVIDIA, support OpenGL 4 or higher. My question: the "default" location for the header files and the libraries are such that, on both machines, I only see OpenGL version 1.1. Is there a way to get header files and libraries I can use that would automatically support the version of OpenGL claimed by the NVIDIA card? I would assume that once the card is installed these libaries would be made available, but I haven't been able to find them anywhere. If anybody has a clue to this problem I would appreciate any help. I'm currently stuck on Windows and I have to use hacks and OpenGL extensions on Linux to test all this. There has to be a better, cleaner way.
[/quote]
This is the way OpenGL works: newer versions of the API are not defined in the standard headers. I also have a project running on both Windows and Linux, and I use the glfw library to help me set up an OpenGL context (it works on both). To get access to later versions of OpenGL, I would recommend using glew; you include that instead of gl.h. This package is also available for both Linux and Windows.

See more information at http://www.opengl.org/wiki/Getting_Started
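
In code, the setup boils down to something like this (a sketch; the GLFW header name differs between GLFW versions, and glewInit must be called after the context exists):

#include <GL/glew.h>    /* must be included before any other GL header */
#include <GL/glfw.h>

/* ... open the window / create the OpenGL context with GLFW first ... */

if (glewInit() != GLEW_OK) {
    /* the extension entry points could not be loaded - bail out */
}
/* from here on glGenBuffers, glBufferData, glVertexAttribPointer, etc. are available */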
Current project: Ephenation.
Sharing OpenGL experiences: http://ephenationopengl.blogspot.com/
larspensjo,

Thanks for your comments. I streamlined my data and downloaded glew. All gl calls work fine up to the point that I try to use the very first VBO-related function: glGenBuffers. I get a crash inside this function. Given that up to this point everything worked fine I can only assume that there's a problem with this function in glew-1.6.0 Win64.
Since glew's responsibility is only to load function pointers, you can easily see if glew is to blame: check if the function pointer is null even though you have a rendering context that provides the function. Otherwise the error is in your code.
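
Something as small as this, right after glewInit, tells you which side the problem is on (a sketch):

#include <stdio.h>

if (glewInit() != GLEW_OK)
    printf("glewInit failed\n");
else if (glGenBuffers == NULL)
    printf("glGenBuffers was not loaded - no current context, or the driver does not expose it\n");
else
    printf("glGenBuffers is available\n");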
Brother Bob,

Thanks for the help. What I noticed is that if I print the value of glBegin before and after glewInit the value doesn't change. So I assume the glBegin pointer I'm using is still the default Windows pointer. The symbol glGenBuffers does not exist.

Clearly, I'm doing something wrong. The interesting thing is that I have no missing symbols when I link my application. I understand I'm still supposed to link with opengl32.lib. I am also including the libraries glew32.lib glew32mx.lib glew32mxs.lib glew32s.lib.

Given that the function glewInit is being called, would you have any clue what I may be doing wrong?

Thanks.

This topic is closed to new replies.
