How do I determine the maximum VBO size?

7 comments, last by TheChubu 11 years, 1 month ago

I was playing around with some old maze generation code and tried to generate a maze 10,000 x 10,000 cells in size. The algorithm had no trouble generating the maze, but sending the VBO with 400,000,000+ vertices to the graphics card caused a crash. I didn't think of it at the time, but at 24 bytes per vertex that totals over 9.6 billion bytes of data. This exercise was, of course, unnecessary and a waste of time, but it did bring up something I hadn't considered before: how does one determine whether the data being sent will fit into the graphics card's RAM prior to sending it? If sending too much data can cause a crash, then it is important to know beforehand. I know that my graphics card has xx total VRAM, but not all of that space is available to me.
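(A quick back-of-the-envelope check, using the figures above - 400,000,000 vertices at 24 bytes each - confirms the total:)

#include <cstdint>
#include <cstdio>

int main()
{
    // Rough size estimate before calling glBufferData:
    // vertex count times bytes per vertex.
    const std::uint64_t vertexCount    = 400000000ULL;
    const std::uint64_t bytesPerVertex = 24;
    const std::uint64_t totalBytes     = vertexCount * bytesPerVertex;

    std::printf("VBO would need %llu bytes (~%.1f GB)\n",
                static_cast<unsigned long long>(totalBytes),
                totalBytes / 1e9);
    return 0;
}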

Does OpenGL provide some mechanism to get this information? I am aware that it would be dependent on the card manufacturer, but the driver should make this information available to the API, I would think. I've searched OpenGL's documentation, but I cannot seem to find anything. Maybe this is something that is only available through the OS?

There isn't really a fixed amount that is available.

The card has some fixed total storage, given by the amount of video RAM. From that you need to subtract the space taken by shaders, textures, vertex buffers, and any framebuffers, and that total is shared by every application using the GPU.

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]

I googled this and didn't find any documentation regarding a limit on VBO size. I also read that if there's not enough memory for a VBO, then it will be placed in system memory.

Also, I wanted to ask: are those 400,000,000+ primitives always within your viewing frustum? I really hope you're culling everything your camera isn't facing!

There are several ways to get information about memory allocation through the API. NVX_gpu_memory_info and ATI_meminfo are two such extensions.
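(A minimal sketch of querying those two extensions, assuming glewInit() has already run on a current context and the loader exposes the extension flags and tokens:)

#include <GL/glew.h>
#include <cstdio>

void printGpuMemoryInfo()
{
    if (GLEW_NVX_gpu_memory_info)
    {
        GLint totalKb = 0, availableKb = 0;
        glGetIntegerv(GL_GPU_MEMORY_INFO_TOTAL_AVAILABLE_MEMORY_NVX, &totalKb);
        glGetIntegerv(GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX, &availableKb);
        std::printf("NVIDIA: %d KB total, %d KB currently available\n", totalKb, availableKb);
    }
    else if (GLEW_ATI_meminfo)
    {
        // Returns four values: total free, largest free block,
        // total auxiliary free, largest auxiliary free block (all in KB).
        GLint vboFree[4] = { 0, 0, 0, 0 };
        glGetIntegerv(GL_VBO_FREE_MEMORY_ATI, vboFree);
        std::printf("AMD: %d KB free for VBOs (largest block %d KB)\n", vboFree[0], vboFree[1]);
    }
    else
    {
        std::printf("No vendor memory-info extension available.\n");
    }
}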

I tried to summarize main aspects of those extensions in OpenGL Insights, Chapter 38, pg.535-540.

There is a limit on the size of objects in graphics card memory. It depends on the graphics card's memory size, driver policy, and probably the architecture. Different vendors also expose different policies: NVIDIA, for example, won't draw objects that cannot fit into dedicated graphics memory, while AMD allows drawing directly from shared system memory. This shouldn't be accepted as absolute truth; it's just the behavior of the drivers and cards I have tested.

In any case, splitting gigantic VBOs into smaller chunks enables more efficient memory management. At the cost of some performance, the sum of dedicated and shared memory can then be used for storing graphics objects.
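(A rough sketch of that chunking idea - the function name and chunk size below are arbitrary, not from any particular codebase:)

#include <GL/glew.h>
#include <algorithm>
#include <cstddef>
#include <vector>

// Upload the vertex array in fixed-size slices, one VBO per slice,
// instead of making a single enormous allocation.
std::vector<GLuint> uploadInChunks(const float* vertices, std::size_t vertexCount,
                                   std::size_t floatsPerVertex,
                                   std::size_t verticesPerChunk = 1u << 20)
{
    std::vector<GLuint> vbos;
    for (std::size_t first = 0; first < vertexCount; first += verticesPerChunk)
    {
        const std::size_t count = std::min(verticesPerChunk, vertexCount - first);

        GLuint vbo = 0;
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER,
                     static_cast<GLsizeiptr>(count * floatsPerVertex * sizeof(float)),
                     vertices + first * floatsPerVertex,
                     GL_STATIC_DRAW);
        vbos.push_back(vbo);
    }
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    return vbos; // each chunk can then be drawn, or culled, independently
}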

Thanks for the replies. This was me being bored and not an actual project. If it had been an actual project I would have divided up the maze into zones and only displayed the zones visible in the frustum.

I am curious, though, why the program crashed at glBufferData with a bad_alloc exception. If the VBO could have been placed in system memory, of which I have 16 GB, there shouldn't have been an issue. Odd. Maybe a driver issue?

If it was a 32-bit program then I'd expect a failure, but definitely not a crash; even if 64-bit you should not have crashed - glBufferData is specified to generate GL_OUT_OF_MEMORY if the requested size can't be allocated. Almost certainly a driver bug (although I don't expect there are too many people creating > 9 GB buffers, so the code path for this may not be robustly tested in any driver!).
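(A minimal sketch of checking that error path, assuming a GLEW-loaded context; a conforming driver should set GL_OUT_OF_MEMORY rather than crash:)

#include <GL/glew.h>
#include <cstdio>

// Request the allocation, then check glGetError() for GL_OUT_OF_MEMORY.
bool tryAllocate(GLuint vbo, GLsizeiptr bytes)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, bytes, nullptr, GL_STATIC_DRAW);

    const GLenum err = glGetError();
    if (err == GL_OUT_OF_MEMORY)
    {
        std::fprintf(stderr, "Could not allocate %lld bytes\n",
                     static_cast<long long>(bytes));
        return false;
    }
    return err == GL_NO_ERROR;
}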

Direct3D has need of instancing, but we do not. We have plenty of glVertexAttrib calls.

That is what I was thinking. Thanks for the answer.

That 9 GB VBO cannot be allocated by any means. Shared system memory is not the same as system memory; take a look at your graphics card's control panel - the shared pool is probably less than 2 GB. Second, transferring data from CPU memory to GPU memory goes through two phases: copying from application memory space to driver memory space, and copying from driver memory space to the device. Allocating objects that are bigger than the dedicated or shared graphics memory makes no sense in any case. That's probably why the vendor (you didn't mention which) has "forgotten" to catch the exception, as mhagain said.

Well... I tried to allocate 16 million vertices with 11 floats each (704 MB), and my GTX 560 Ti with 1 GB failed miserably :D So I wouldn't even try with 400 million.

"I AM ZE EMPRAH OPENGL 3.3 THE CORE, I DEMAND FROM THEE ZE SHADERZ AND MATRIXEZ"

My journals: dustArtemis ECS framework and Making a Terrain Generator

This topic is closed to new replies.
