
Using VBOs with multiple glDrawArrays


20 replies to this topic

#1 amtri   Members   -  Reputation: 175


Posted 21 May 2012 - 01:06 PM

Hello,

I'm trying to use VBOs but realized the theory is not quite clear to me. I was hoping somebody could answer a few basic questions. As an example, suppose I want to draw 2 triangles: on the first I want to enable the coordinates and the colors; on the second I want to enable the coordinates, textures, and normals.

My thought was to pack everything into a single buffer large enough to contain all the data. The coordinates, texture coordinates, normals, and colors are all in separate arrays. Since the two triangles do not use the same information, my coordinates array contains xyz for 6 points; my colors array contains rgb for 3 points; the normals array contains data for 3 points; and the texture coordinates array contains data for 3 points.

The "stride" in all calls to glXXXPointer is set to zero as coordinates, normals, etc. are all packed.

1) Do I need to assume that all vertices share all attributes - i.e., coordinates, normals, textures, etc. - as far as copying data to the VBO with, say, glBufferSubData?

2) My plan is to call glDrawArrays twice: once for the first triangle and once for the second. But glDrawArrays requires a starting point. Does this mean I have to "make room" in the VBO for coordinates, colors, normals, and textures for ALL points in the model, even if they will never be referenced (because, say, texture coordinates won't be enabled when drawing the first triangle)?

3) Of course, my model is much more complicated than just 2 triangles - I usually have on the order of 1M geometric primitives, with all types of combinations of normals, textures, colors, etc. Sometimes some are on, and sometimes they are off. I would like to copy all my data to the VBO so I can quickly rotate the model that is already on the GPU. I'm just not clear whether I have to make a worst-case assumption that all vertices always have all attributes.

4) I could also have separate buffers for each attribute: one buffer for coordinates, one for texture coordinates, one for colors, etc. But if I have textures on a single triangle out of 1M triangles, would this mean I have to assume that all triangles have texture coordinates so that glDrawArrays can find them properly?

I hope I made my problem clear. If somebody could clear this up for me I would appreciate it.

Thanks!


#2 trotlinebeercan   Members   -  Reputation: 186


Posted 22 May 2012 - 10:31 PM

Do I need to assume that all vertices share all attributes - i.e., coordinates, normals, textures, etc. - as far as copying data to the VBO with, say, glBufferSubData?


Yes.
Stride, in glXXXPointer, is used to specify the byte offset between consecutive vertices, where a "vertex" in glXXXPointer's case means the whole bundle of attributes: position, normal, and color. Correct usage is as such, assuming you have a vertex struct in the style of:

typedef struct vertex_t
{
  float x, y, z;
  float nx, ny, nz;
  unsigned char r, g, b, a;
} Vertex;

You can specify the byte offset for each attribute of the vertex by using glXXXPointer as such (28 = sizeof(Vertex)):
// glVertexPointer is doing exactly this: read the next 3 floats at (ptr),
// then move 28 bytes ahead to the next vertex's position data
glVertexPointer(3, GL_FLOAT, 28, ptr);
// glNormalPointer always reads 3 components per normal and otherwise
// works exactly like glVertexPointer; the first 12 bytes of each
// vertex hold the position data, so you tell OpenGL the normal data
// starts 12 bytes in, then it moves 28 bytes to the next normal
glNormalPointer(GL_FLOAT, 28, ptr + 12);
// same as before: read the next 4 bytes as color data, skipping the
// 12 bytes of position plus 12 bytes of normal data, then move 28
// bytes ahead to the next color
glColorPointer(4, GL_UNSIGNED_BYTE, 28, ptr + 24);

In the case of VBOs, you replace "ptr+##" with a byte offset into the bound buffer, conventionally written as a macro like "((char*)NULL + (##))".

To draw, simply enable what you want to be drawn on the client side before you issue your drawing command.
So, in your triangle example:

glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);

// draw first triangle

glEnableClientState(GL_NORMAL_ARRAY);

// draw second triangle

glDisableClientState(GL_NORMAL_ARRAY);
glDisableClientState(GL_COLOR_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);

EDIT: Missed your 3rd question.

I'm just not clear whether I have to make a worst-case scenario assumption that all vertices always have all attributes.


If you are drawing objects without colors, or normals, etc., just disable them on the client side before you draw; the missing data will be skipped over and never looked at. If you have, say, two different structs for vertices, like ColorPoint and BlankPoint, just specify different byte offsets and strides when calling glXXXPointer. This helps when you're worried about memory consumption: there is no reason to allocate a ton of space if you aren't going to use it. :)

Edited by trotlinebeercan, 22 May 2012 - 10:36 PM.

hopper.dustin@gmail.com

#3 mhagain   Crossbones+   -  Reputation: 7422


Posted 23 May 2012 - 06:03 AM

glVertexPointer (3, GL_FLOAT, 28, ptr);

Don't do this.

There are a number of reasons why, including compiler packing rules and the size of a float on your chosen platform, not to mention that if you ever want to add another member to your Vertex struct you will need to go through EVERY - SINGLE - PLACE where it might be used in a gl*Pointer call and change the stride.

Yuck yuck yuck.

Do this instead:

glVertexPointer (3, GL_FLOAT, sizeof (Vertex), ptr);

Safe, portable, clean, robust, has been used in OpenGL code all over the world for 16+ years.

It appears that the gentleman thought C++ was extremely difficult and he was overjoyed that the machine was absorbing it; he understood that good C++ is difficult but the best C++ is well-nigh unintelligible.


#4 trotlinebeercan   Members   -  Reputation: 186


Posted 23 May 2012 - 09:21 AM

You can specify the byte offset for each attribute of the vertex by using glXXXPointer as such (28 = sizeof(Vertex)):



#5 mhagain   Crossbones+   -  Reputation: 7422


Posted 23 May 2012 - 10:03 AM

You can specify the byte offset for each attribute of the vertex by using glXXXPointer as such (28 = sizeof(Vertex)):

So why not put it in your sample code and keep everything unambiguous then? The OP has already expressed some confusion about the usage here - providing sample code containing hard-coded vertex sizes (and creating a risk that the mistake will be propagated via copy/paste) is not wise in the circumstances.


#6 amtri   Members   -  Reputation: 175


Posted 23 May 2012 - 12:10 PM

First of all, thanks to everybody who responded! I truly appreciate it!

Now to the specifics:

1) I actually have multiple calls - possibly thousands or even millions! - to store the data in the VBOs. Then, during rendering, I need to traverse this data. The model is usually the same, but the view may change. I had thought of creating just a single struct that could potentially hold everything, like you described. My problem is that during the storage phase the data is not packed: I have coordinates in one array, normals in another, texture coordinates in another, etc. So if I'm going to store the data in a uniform way then I need to use CPU time to move the data around into my struct format. I have found that sometimes my machines end up CPU-bound - NOT GPU-bound! - so this moving actually consumes a lot of time.

So what I did was the following:

(a) For each "packet" of information sent to be stored to the VBO I store in the heap a few flags - packed bits - that indicate what is present and an int with the number of points. I then call glBufferSubData to copy that piece into the GPU. I do this sequentially for the coordinates, normals, colors, etc. without having to actually touch the data. I also keep track of the offset from the beginning of the vbo for each packet and store that information along with the flags and number of points.

(b) During the rendering I only enable the clients for whatever I will need, then call glXXXPointer for each entity using the offset I stored before, assuming a stride of zero, followed by glDrawArrays. I would say that the average size of each packet is around 1000 points (at least I make an effort to buffer the incoming data this way before sending it to the VBO, as long as the flags do not change).

Wouldn't this be a reasonable way of accommodating the different options?

2) Another "related" question: I'm using both a Windows and a Linux machine. Both have NVIDIA cards that, according to NVIDIA, support OpenGL 4 or higher. My question: the "default" location for the header files and the libraries is such that, on both machines, I only see OpenGL version 1.1. Is there a way to get header files and libraries that automatically support the version of OpenGL claimed by the NVIDIA card? I would assume that once the card is installed these libraries would be made available, but I haven't been able to find them anywhere. If anybody has a clue about this problem I would appreciate any help. I'm currently stuck on Windows and I have to use hacks and OpenGL extensions on Linux to test all this. There has to be a better, cleaner way.

Thanks again to all!!

#7 larspensjo   Members   -  Reputation: 1526


Posted 04 June 2012 - 07:09 AM

glVertexPointer (3, GL_FLOAT, 28, ptr);

Don't do this.

Do this instead:

glVertexPointer (3, GL_FLOAT, sizeof (Vertex), ptr);

Safe, portable, clean, robust, has been used in OpenGL code all over the world for 16+ years.

Don't do this either: glVertexPointer() is deprecated (though the use of sizeof is of course still valid). You should use at least OpenGL 3, which means using glVertexAttribPointer() instead. That also means you have to define your own shaders. While it is some extra work, you can probably get better performance out of it.

1) I actually have multiple calls - possibly thousands or even millions! - to store the data in the vbos. Then, during the rendering, I need to traverse this data. The model is usually the same, but the view may change. I had thought of creating just a single struct that could potentially hold everything like you described. My problem is that during the storage phase the data is not packed: I have coordinates in one array, normals in another, texture coordinates in another, etc. So if I'm going to store the data in a uniform way then I need to use CPU time to move the data around into my struct format. I have found that sometimes my machines end up CPU-bound - NOT GPU bound! - so this moving actually consumes a lot of time.

Reorganizing the data before sending it to the VBO is the common approach. If you pack all of a vertex's attributes together, near each other in memory, you usually get better performance.
One idea to solve the CPU-bound problem is to use two threads: one that controls OpenGL, and one that prepares data. Notice however that OpenGL in itself is not thread-safe, so all OpenGL calls should go through only one thread.

2) Another "related" question: I'm using both a Windows and a Linux machine. Both have NVIDIA cards that, according to NVIDIA, support OpenGL 4 or higher. My question: the "default" location for the header files and the libraries are such that, on both machines, I only see OpenGL version 1.1. Is there a way to get header files and libraries I can use that would automatically support the version of OpenGL claimed by the NVIDIA card? I would assume that once the card is installed these libraries would be made available, but I haven't been able to find them anywhere. If anybody has a clue to this problem I would appreciate any help. I'm currently stuck on Windows and I have to use hacks and OpenGL extensions on Linux to test all this. There has to be a better, cleaner way.

This is the way OpenGL works. Newer versions of the API are not defined in the standard headers. I also have a project running on both Windows and Linux, and I use the glfw library to help me set up an OpenGL context (it works on both). To get access to later versions of OpenGL, I recommend glew; you include it instead of gl.h. This package is also available for both Linux and Windows.

See more information at http://www.opengl.org/wiki/Getting_Started
Current project: Ephenation.
Sharing OpenGL experiences: http://ephenationopengl.blogspot.com/

#8 amtri   Members   -  Reputation: 175


Posted 04 June 2012 - 12:17 PM

larspensjo,

Thanks for your comments. I streamlined my data and downloaded glew. All gl calls work fine up to the point that I try to use the very first VBO-related function: glGenBuffers. I get a crash inside this function. Given that up to this point everything worked fine I can only assume that there's a problem with this function in glew-1.6.0 Win64.

#9 Brother Bob   Moderators   -  Reputation: 7779


Posted 04 June 2012 - 12:36 PM

Since glew's responsibility is only to load function pointers, you can easily see if glew is to blame: check if the function pointer is null even though you have a rendering context that provides the function. Otherwise the error is in your code.

#10 amtri   Members   -  Reputation: 175


Posted 04 June 2012 - 01:55 PM

Brother Bob,

Thanks for the help. What I noticed is that if I print the value of glBegin before and after glewInit the value doesn't change. So I assume the glBegin pointer I'm using is still the default Windows pointer. The symbol glGenBuffers does not exist.

Clearly, I'm doing something wrong. The interesting thing is that I have no missing symbols when I link my application. I understand I'm still supposed to link with opengl32.lib. I am also linking the libraries glew32.lib, glew32mx.lib, glew32mxs.lib, and glew32s.lib.

Given that the function glewInit is being called, would you have any clue what I may be doing wrong?

Thanks.

#11 Brother Bob   Moderators   -  Reputation: 7779


Posted 04 June 2012 - 02:15 PM

What do you mean that the symbol does not exist? You said it compiled fine, so clearly the symbol, as defined in the context of a programming language, exists, or the program wouldn't even compile. Or do you mean that you get a null pointer and thus conclude that the function does not exist? In that case, as I mentioned in my previous post, you need to ensure that you have a rendering context that provides the function when you initialize glew. That is probably one of the most common errors, but without much description of what you're doing, I can only guess.

#12 larspensjo   Members   -  Reputation: 1526


Posted 04 June 2012 - 03:09 PM

I use glfw (another excellent cross-platform support library for OpenGL) to create my context. My main looks as follows:
    glfwOpenWindowHint(GLFW_OPENGL_DEBUG_CONTEXT, gDebugOpenGL);

    if (!glfwOpenWindow(fWindowWidth, fWindowHeight, 0, 0, 0, 0, 16, 1, fFullScreen ? GLFW_FULLSCREEN : GLFW_WINDOW)) {
        glfwTerminate();
        fprintf(stderr, "Failed to open GLFW window\n");
        exit(EXIT_FAILURE);
    }
    glfwSetWindowTitle("Test");
    // Initialize glew
    GLenum err = glewInit();
    if (err != GLEW_OK) {
        // problem: glewInit failed, something is seriously wrong
        printf("Fail to init glew: Error: %s\n", glewGetErrorString(err));
        return EXIT_FAILURE;
    }
    // Check version of OpenGL
    int major, minor, revision;
    glfwGetGLVersion(&major, &minor, &revision);
    if (....)



#13 amtri   Members   -  Reputation: 175


Posted 04 June 2012 - 03:18 PM

Brother Bob,

What I mean is that, in Visual Studio, if I try to display the value of "glGenBuffers" I get the message that the symbol does not exist. Most probably this means that glGenBuffers is a macro defined somewhere in glew that will eventually point to a function pointer in its definition. And I wouldn't be surprised that you are right that the (eventual) pointer to the function that is supposed to resolve the symbol glGenBuffers is not being properly set.

I compiled my application using the /E option - this resolves all macros - and the final code shows a call to __glewGenBuffers:

__glewGenBuffers (1,&id);

But if I try to print the value of __glewGenBuffers I get

__glewGenBuffers CXX0017: Error: symbol "__glewGenBuffers" not found

Any ideas?

#14 amtri   Members   -  Reputation: 175


Posted 04 June 2012 - 03:26 PM

Hmm...

Some more information: it turns out glewInit was returning an error once I checked its return value: GLEW_ERROR_NO_GL_VERSION (value = 1).

Do I still need to include GL/gl.h? This appears to be an include file error...

#15 amtri   Members   -  Reputation: 175


Posted 04 June 2012 - 04:12 PM

Solved... I was calling glewInit before creating the context.

#16 Brother Bob   Moderators   -  Reputation: 7779


Posted 04 June 2012 - 04:21 PM

I was just looking into some options, but apparently you solved it meanwhile. What bothers me though is that in the code you showed, you do call it after creating the window, and consequently the rendering context. Was that not the code you really had?

#17 amtri   Members   -  Reputation: 175


Posted 04 June 2012 - 04:53 PM

Brother Bob,

I never really posted any code. I'm not calling glut, so I had to place this call after the proper wgl function call.

#18 Brother Bob   Moderators   -  Reputation: 7779


Posted 04 June 2012 - 05:07 PM

Oh, sorry, larspensjo got a piece of code in the middle of our posts that I assumed was yours. My mistake.

#19 web383   Members   -  Reputation: 736


Posted 05 June 2012 - 10:52 AM

I'd like to make some suggestions based on my experience.

First of all, I'm a bit confused because you are stating that you are calling glDrawArrays with VBOs, which is incorrect.
With a VBO, you will:
1. create the VBO via glGenBuffers()
2. bind the VBO via glBindBuffer()
3. copy data to the VBO via glBufferData() or glBufferSubData()
4. render the VBO via glDrawElements() or glDrawRangeElements()


Do I need to assume that all vertices share all attributes - i.e., coordinates, normals, textures, etc. - as far as copying data to the VBO with, say, glBufferSubData

If you want to render everything with a single draw call, then yes. If not, you can pack multiple vertex types within a single buffer and call glDrawRangeElements() instead of glDrawElements(). I personally don't use glXXXPointer; instead I call glEnableVertexAttribArray() and glVertexAttribPointer(). I'm using shaders... I'm not sure if you are.


My problem is that during the storage phase the data is not packed: I have coordinates in one array, normals in another, texture coordinates in another, etc. So if I'm going to store the data in a uniform way then I need to use CPU time to move the data around into my struct format.

You don't HAVE to do this. In fact, you can keep each vertex attribute in a separate buffer; glXXXPointer or glVertexAttribPointer just needs to point at the correct location in memory, with a stride only if applicable. Interleaved vertex data usually gives a little better performance, but realize you don't have to interleave. It is particularly nice to keep position data separate so it can be used in a depth-only rendering pass; there is no reason to submit uvs, normals, or colors during that pass.


You can try doing something like this:

// create a VBO
glGenBuffers()

// bind the vbo
glBindBuffer()

// copy your data with multiple vertex types
glBufferData()

// render
foreach geometry to render
{
	 // bind appropriate vertex attribute
	 foreach vertex attribute
	 {
	 	 if(HasAttribute())
	 	 {
	 	 	 glEnableVertexAttribArray()
	 	 	 glVertexAttribPointer()
	 	 }
	 	 else
	 	 {
	 	 	 glDisableVertexAttribArray()
	 	 }
	 }

	 // render it
	 glDrawRangeElements()
}

I hope this helps.

Edited by web383, 05 June 2012 - 02:57 PM.


#20 mhagain   Crossbones+   -  Reputation: 7422


Posted 05 June 2012 - 03:45 PM

....you are stating that you are calling glDrawArrays with VBO's, which is incorrect.


Ehhh - no. glDrawArrays is perfectly 100% legal to use with a VBO. You may be referring to a GL_ELEMENT_ARRAY_BUFFER in which case, yes, we're talking about glDraw(Range)Elements, but if no GL_ELEMENT_ARRAY_BUFFER is currently bound (or even if one is) you can still use glDrawArrays.

glEnableVertexAttribArray (0);
glBindBuffer (GL_ARRAY_BUFFER, vbo);
glVertexAttribPointer (0, ....);
glDrawArrays (....);

"Eppur si muove".
