I need help setting up VBOs in my code

Started by MARS_999 · 47 comments, last by MARS_999 20 years, 7 months ago
quote:Original post by MARS_999
Do I need to call glVertexPointer() every time in my rendering function? I call it in my initialization function.


In theory the driver is allowed to move the buffer around as much as it likes, so calling it every frame would be safer, even though I wouldn't think that's the problem (as long as you set the pointer AFTER finishing setting up the buffer, and not before or in the middle of it).

quote:
And no, my indices aren't below 65k; there are 132k of them, so what am I to do about that? My vertex array isn't below 65k either.


Then it would be time to check whether your card can handle indices higher than 65k. One of the glGet functions will return the max index (glGetIntegerv with GL_MAX_ELEMENTS_INDICES gives the recommended maximum).

quote:
Vertex terrain[256][256] = {0};


I might be wrong, but I'd double-check that initializer. (For what it's worth, in C++ `= {0}` explicitly initializes only the first element, but the remaining elements of the aggregate are value-initialized to zero as well, so the whole array does end up zeroed.)

quote:
So my indices would have to be reduced to 65k? How is that going to work?


Your terrain is made of more than 130k triangles, and you probably shouldn't render it all at once anyway (how would you remove the parts you can't see?). Having smaller parts also means lower indices.
f@dz | http://festini.device-zero.de
Technically VBO doesn't have index limits. However, the NV_vertex_array_range(2) extension did. It seems the nVidia drivers are still a bit buggy when it comes to indices in a VBO that exceed the VAR limits. For clarity, the limits are:

65536 for <= GF2.
1048576 or thereabouts for >= GF3.

Newer drivers should eliminate this problem.

Trienco:

You can just cull indices. So effectively, you have one large vertex array and a quadtree (or whatever) of indices. But I agree with reducing the size of vertex arrays. Split the terrain up into multiple "chunks", each with its own vertex array.

You have to remember that you're unique, just like everybody else.
If at first you don't succeed, redefine success.
Wait... I just noticed that you're using ATI hardware. I have no idea if it even has index limits with VAO; if so, it could have the same problem with VBO as nVidia does.



You have to remember that you're unique, just like everybody else.
If at first you don't succeed, redefine success.
quote:Original post by python_regious
Trienco:

You can just cull indices. So effectively, you have one large vertex array and a quadtree (or whatever) of indices. But I agree with reducing the size of vertex arrays. Split the terrain up into multiple "chunks", each with its own vertex array.


I'd probably just leave the vertex buffer as it is, adapt the index buffer, and change the offset into the vertex buffer. But I thought I'd rather not talk about how to do that, as it might be a little confusing. Also, I'm not sure if I should advise saving memory by "spreading" your indices over half the vertex buffer.

f@dz | http://festini.device-zero.de
glGetBufferParameterivARB(GL_ARRAY_BUFFER_ARB, GL_BUFFER_SIZE_ARB, &temp);

After using this function I got 786,432 as the value. Well, that is the size of my array: 256*256*12 = 786,432. What does glGetBufferParameterivARB do exactly? Does it tell me the size of my object, or the maximum size my object can be? Also, should I be using glIndexPointer() for my index array? I thought that was for colors? I've now seen an example that uses it for index arrays, but I am not using it. Thanks

[edited by - Mars_999 on August 20, 2003 2:10:57 AM]
Now when I run my code in debug mode and reduce my array size to 128x128, I am well within 65k, and my program doesn't crash at glDrawElements() anymore, but it crashes when it comes to SwapBuffers()??? It completely locks my system up now. Any ideas? Here is my code:

void CTerrain::DrawTerrain(unsigned int *texture)
{
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, vertex_buffer);
    glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER_ARB, index_buffer);
    /*
    glActiveTextureARB(GL_TEXTURE0_ARB);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, texture[GRASS]);
    glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

    glActiveTextureARB(GL_TEXTURE1_ARB);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, texture[GRASS_DETAIL]);
    glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_ARB);
    glTexEnvf(GL_TEXTURE_ENV, GL_RGB_SCALE_ARB, 2);
    */
    glEnable(GL_LIGHTING);
    glEnable(GL_LIGHT1);
    glEnable(GL_RESCALE_NORMAL);
    glEnableClientState(GL_VERTEX_ARRAY);
//  glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glEnableClientState(GL_NORMAL_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, terrain);
//  glTexCoordPointer(2, GL_FLOAT, 0, tex_coord);
    glNormalPointer(GL_FLOAT, 0, normal);
//  glClientActiveTextureARB(GL_TEXTURE0_ARB);
//  glClientActiveTextureARB(GL_TEXTURE1_ARB);

    for(int z = 0; z < MAP_Z - 1; z++)
        glDrawElements(GL_TRIANGLE_STRIP, MAP_X * 2, GL_UNSIGNED_INT, &indexs[z * MAP_X * 2]);

//  glDisable(GL_TEXTURE_2D);
//  glActiveTextureARB(GL_TEXTURE0_ARB);
//  glDisable(GL_TEXTURE_2D);
    glDisable(GL_NORMALIZE);
    glDisable(GL_LIGHT1);
    glDisable(GL_LIGHTING);
    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_NORMAL_ARRAY);
//  glDisableClientState(GL_TEXTURE_COORD_ARRAY);
}


Here is my rendering function:
void Render(CTerrain &terrain)
{
    float radians = float(PI * (move_x - 90.0f) / 180.0f);
    camera_x = look_x + sin(radians) * move_y;
    camera_y = look_y + move_y / 2.0f;
    camera_z = look_z + cos(radians) * move_y;
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();
    //glRotatef(30.0f, 1.0f, 0.0f, 0.0f);  // top down
    gluLookAt(camera_x, camera_y, camera_z, look_x, look_y, look_z, 0.0, 1.0, 0.0);
    glCallList(terrain.skybox_list);
    terrain.DrawTerrain(textures);
    terrain.SetupLighting();
    SwapBuffers(ghdc);
}


Thanks
After a glBindBufferARB, the pointer arguments to glVertexPointer, glNormalPointer, and glDrawElements are interpreted as byte offsets into the bound buffer, not client addresses. So pass NULL (offset 0) instead of your client-side arrays (`terrain`, `normal`, `indexs`).

So as it is written so shall it be done.
oss.sgi.com registry examples.
Game Core
I am now starting to get pissed! Why in the ***k is this so hard to implement? We are talking about like 3 functions here, and it's way harder to get working than vertex arrays. I'm not sure, but I think I saw that my card's max index count is 65k, so that would mean 65k elements for my index array, right? Well, that shouldn't be a problem anymore since I reduced my map to 64x64, and now when I run my code my screen is totally hosed up, like my index array isn't being read correctly. Get this: now my fps is 10fps!!!! WTF. Anyone who is willing to look over my code, I am more than happy to send it. Maybe someone with a lot more experience can figure it out, but I am about to dump VBOs because I am not seeing an fps increase but a HUGE decrease. And yes, I have looked at the extension registry. The only thing I am not doing, from what I can see, is using dynamic memory. Thanks
Never ever look at the fps as long as your screen is just displaying garbage. If you're out of bounds with your indices or whatever, then you're lucky it doesn't crash; expecting it to be fast on top of that would be too much.
f@dz | http://festini.device-zero.de
quote:Original post by Trienco
Never ever look at the fps as long as your screen is just displaying garbage. If you're out of bounds with your indices or whatever, then you're lucky it doesn't crash; expecting it to be fast on top of that would be too much.


But I am not out of bounds anymore. I reduced from 256x256 to 64x64 to be way under the 65k limit. The program will run, but when I exit I get a crash. When it's running, I switch to wireframe mode and see that all my polygons start from a center point and render out from that point? This makes no sense, because I haven't changed anything since I got this working with vertex arrays. Any ideas? Thanks

[edited by - Mars_999 on August 21, 2003 11:08:28 AM]

This topic is closed to new replies.
