How do I determine the maximum VBO size?



#1 MarkS   Prime Members   -  Reputation: 882


Posted 21 February 2013 - 06:22 PM

I was playing around with some old maze generation code and tried to generate a maze 10000 x 10000 cells in size. The algorithm had no trouble generating the maze, but sending the VBO with 400,000,000+ vertices to the graphics card caused a crash. I didn't think of it at the time, but at 24 bytes per vertex, that totals over 9.6 billion bytes of data. This exercise was, of course, unnecessary and a waste of time, but it did bring up something that I hadn't considered before: how does one determine whether the data being sent will fit into the graphics card's RAM prior to sending it? If sending too much data will cause a crash, then it is important to know beforehand. I know that my graphics card has xx total VRAM, but not all of that space is available to me.
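For reference, a rough back-of-the-envelope estimate of that upload, assuming the 24-byte vertex described above and a hypothetical four vertices per cell (the exact vertex layout isn't given in the post):

// Size estimate for the maze VBO described above (illustrative numbers only).
#include <cstdint>
#include <cstdio>

int main()
{
    const std::uint64_t cells          = 10000ull * 10000ull; // 100,000,000 cells
    const std::uint64_t vertsPerCell   = 4;                   // assumption: one quad per cell
    const std::uint64_t bytesPerVertex = 24;                  // e.g. position + normal as 6 floats

    const std::uint64_t totalBytes = cells * vertsPerCell * bytesPerVertex;
    std::printf("%llu bytes (~%.1f GB)\n",
                static_cast<unsigned long long>(totalBytes), totalBytes / 1e9);
    // Prints: 9600000000 bytes (~9.6 GB) -- far more than the VRAM on any 2013-era card.
}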

 

Does OpenGL provide some mechanism to get this information? I am aware that it would be dependent on the card manufacturer, but the driver should make this information available to the API, I would think. I've searched OpenGL's documentation, but I cannot seem to find anything. Maybe this is something that is only available through the OS?




#2 swiftcoder   Senior Moderators   -  Reputation: 9994


Posted 21 February 2013 - 09:31 PM

Does OpenGL provide some mechanism to get this information? I am aware that it would be dependent on the card manufacturer, but the driver should make this information available to the API, I would think. I've searched OpenGL's documentation, but I cannot seem to find anything. Maybe this is something that is only available through the OS?

There isn't really a fixed amount that is available. 

 

The card has some fixed total storage, given by the amount of video RAM. From that you need to subtract the space taken by shaders, textures, vertex buffers, and any framebuffers. And that usage comes from every application sharing the GPU, not just yours.


Tristam MacDonald - Software Engineer @Amazon - [swiftcoding]


#3 blueshogun96   Crossbones+   -  Reputation: 916


Posted 22 February 2013 - 01:48 AM

I googled this and didn't find any documentation regarding a limit on VBO size.  I also read that if there's not enough memory for a VBO, then it will be placed in system memory.  

 

Also, I wanted to ask: are those 400,000,000+ primitives always within your viewing frustum? I really hope you're culling everything your camera isn't facing!


Follow Shogun3D on the official website: http://shogun3d.net

 


 

"Yo mama so fat, she can't be frustum culled." - yoshi_lol


#4 Aks9   Members   -  Reputation: 861


Posted 22 February 2013 - 03:38 PM

There are several ways to get information about memory allocation through the API. NVX_gpu_memory_info and ATI_meminfo are some of the extensions for that.

I tried to summarize the main aspects of those extensions in OpenGL Insights, Chapter 38, pp. 535-540.
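For later readers, a minimal sketch of querying those two extensions, assuming the relevant extension strings have already been checked. The token values are the ones published in the NVX_gpu_memory_info and ATI_meminfo specs, and both report sizes in kilobytes:

// Query free video memory via vendor extensions (sketch; assumes a current GL context).
#include <GL/gl.h>
#include <cstdio>

// Tokens from the extension specs, in case the local glext.h lacks them.
#ifndef GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX
#define GL_GPU_MEMORY_INFO_DEDICATED_VIDMEM_NVX         0x9047
#define GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX 0x9049
#endif
#ifndef GL_VBO_FREE_MEMORY_ATI
#define GL_VBO_FREE_MEMORY_ATI                          0x87FB
#endif

// Caller determines hasNvx/hasAti by checking the GL_EXTENSIONS string.
void PrintFreeVideoMemory(bool hasNvx, bool hasAti)
{
    if (hasNvx) {
        GLint dedicatedKb = 0, availableKb = 0;
        glGetIntegerv(GL_GPU_MEMORY_INFO_DEDICATED_VIDMEM_NVX, &dedicatedKb);
        glGetIntegerv(GL_GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX, &availableKb);
        std::printf("NV: %d KB dedicated, %d KB currently available\n",
                    dedicatedKb, availableKb);
    } else if (hasAti) {
        // Four values: total free, largest free block, auxiliary free, largest auxiliary block.
        GLint vboMem[4] = { 0, 0, 0, 0 };
        glGetIntegerv(GL_VBO_FREE_MEMORY_ATI, vboMem);
        std::printf("AMD: %d KB free for buffer objects (largest block %d KB)\n",
                    vboMem[0], vboMem[1]);
    } else {
        std::printf("No memory-info extension exposed by this driver.\n");
    }
}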

 

There is a limit on the size of objects in graphics card memory. It depends on the graphics card's memory size, driver policy, and probably the architecture. Different vendors also expose different policies: NV, for example, won't draw objects that cannot fit into dedicated graphics memory, while AMD allows drawing directly from shared system memory. This shouldn't be taken as absolute truth; it's just the behavior of the drivers and cards I have tested.

 

In any case, splitting gigantic VBOs into smaller chunks enables more efficient memory management. At the cost of reduced performance, the sum of dedicated and shared memory can then be used for storing graphics objects.
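A minimal sketch of that chunking idea, assuming a flat array of 24-byte vertices; the vertex struct, chunk size, and function name are illustrative, not from the post:

#include <GL/gl.h>
#include <algorithm>
#include <cstddef>
#include <vector>

struct Vertex { float position[3]; float normal[3]; };  // 24 bytes, as in the original post

// Upload one huge vertex array as several moderately sized VBOs instead of a single giant one.
// Assumes glGenBuffers/glBindBuffer/glBufferData (GL 1.5+) are available.
std::vector<GLuint> UploadInChunks(const std::vector<Vertex>& vertices,
                                   std::size_t verticesPerChunk = 1u << 20)  // about 25 MB per chunk
{
    std::vector<GLuint> vbos;
    for (std::size_t first = 0; first < vertices.size(); first += verticesPerChunk) {
        const std::size_t count = std::min(verticesPerChunk, vertices.size() - first);

        GLuint vbo = 0;
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER,
                     static_cast<GLsizeiptr>(count * sizeof(Vertex)),
                     vertices.data() + first,
                     GL_STATIC_DRAW);
        vbos.push_back(vbo);  // each chunk gets its own draw call later
    }
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    return vbos;
}

Each chunk is still a full VBO, so drawing needs one call per chunk, but the driver can now page individual chunks in and out instead of juggling one 9.6 GB block.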



#5 MarkS   Prime Members   -  Reputation: 882


Posted 22 February 2013 - 04:45 PM

Thanks for the replies. This was me being bored, not an actual project. If it had been an actual project, I would have divided the maze into zones and only displayed the zones visible in the frustum.

I am curious, though: why did the program crash at glBufferData with a bad_alloc exception? If the VBO could have been placed in system memory, of which I have 16 GB, there shouldn't have been an issue. Odd. Maybe a driver issue?

#6 mhagain   Crossbones+   -  Reputation: 7976


Posted 22 February 2013 - 05:10 PM

If it was a 32-bit program then I'd expect a failure, but definitely not a crash; even at 64-bit you should not have crashed, because glBufferData is specified to generate GL_OUT_OF_MEMORY if the requested size can't be allocated. Almost certainly a driver bug (although I don't expect that there are too many people creating > 9 GB buffers, so the code path for this may not be robustly tested in any driver!).
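A minimal sketch of checking for that specified error path explicitly instead of relying on an exception (the function name and usage are illustrative):

#include <GL/gl.h>
#include <cstdio>

// Attempt an allocation of the requested size and report GL_OUT_OF_MEMORY cleanly.
// Assumes the buffer object was already created with glGenBuffers.
bool TryAllocateBuffer(GLuint vbo, GLsizeiptr sizeInBytes)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    while (glGetError() != GL_NO_ERROR) { }            // clear any stale errors first
    glBufferData(GL_ARRAY_BUFFER, sizeInBytes, nullptr, GL_STATIC_DRAW);

    const GLenum err = glGetError();
    if (err == GL_OUT_OF_MEMORY) {
        std::printf("Allocation of %lld bytes failed: GL_OUT_OF_MEMORY\n",
                    static_cast<long long>(sizeInBytes));
        return false;
    }
    return err == GL_NO_ERROR;
}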


It appears that the gentleman thought C++ was extremely difficult and he was overjoyed that the machine was absorbing it; he understood that good C++ is difficult but the best C++ is well-nigh unintelligible.


#7 MarkS   Prime Members   -  Reputation: 882


Posted 22 February 2013 - 05:10 PM

If it was a 32-bit program then I'd expect a failure, but definitely not a crash; even at 64-bit you should not have crashed, because glBufferData is specified to generate GL_OUT_OF_MEMORY if the requested size can't be allocated. Almost certainly a driver bug (although I don't expect that there are too many people creating > 9 GB buffers, so the code path for this may not be robustly tested in any driver!).

That is what I was thinking. Thanks for the answer.

#8 Aks9   Members   -  Reputation: 861


Posted 23 February 2013 - 05:31 AM

Thanks for the replies. This was me being bored, not an actual project. If it had been an actual project, I would have divided the maze into zones and only displayed the zones visible in the frustum.

I am curious, though: why did the program crash at glBufferData with a bad_alloc exception? If the VBO could have been placed in system memory, of which I have 16 GB, there shouldn't have been an issue. Odd. Maybe a driver issue?

That 9 GB VBO cannot be allocated by any means. First, shared system memory is not the same as system memory; take a look at your graphics card's control panel and you'll see it is probably less than 2 GB. Second, transferring data from CPU memory to GPU memory goes through two phases: copying from application memory space to driver memory space, then copying from driver memory space to the device. Allocating objects that are bigger than the dedicated or shared graphics memory makes no sense either way. That's probably why vendors (you didn't mention which) have "forgotten" to catch the exception, as mhagain said.
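To illustrate the two-phase point, one common pattern is to allocate the buffer storage with no initial data and then copy it over in pieces, so the driver never has to stage one enormous client-side block at once. A sketch with illustrative chunk sizes; note that the total allocation still has to fit in graphics memory, so this does not rescue a 9 GB buffer:

#include <GL/gl.h>
#include <algorithm>
#include <cstddef>

// Allocate GPU storage up front, then copy the vertex data in pieces so the
// driver only stages a modest block at a time.
void StreamVertices(GLuint vbo, const void* data, std::size_t totalBytes,
                    std::size_t chunkBytes = 16u << 20)   // 16 MB pieces
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, static_cast<GLsizeiptr>(totalBytes),
                 nullptr, GL_STATIC_DRAW);                // storage only, no data yet

    const char* src = static_cast<const char*>(data);
    for (std::size_t offset = 0; offset < totalBytes; offset += chunkBytes) {
        const std::size_t count = std::min(chunkBytes, totalBytes - offset);
        glBufferSubData(GL_ARRAY_BUFFER,
                        static_cast<GLintptr>(offset),
                        static_cast<GLsizeiptr>(count),
                        src + offset);
    }
}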



#9 TheChubu   Crossbones+   -  Reputation: 4353


Posted 23 February 2013 - 05:58 AM

Well... I tried to allocate 16 million vertices, 11 floats each (704 MB), and my GTX 560 Ti with 1 GB failed miserably :D So I wouldn't even try with 400 million.


"I AM ZE EMPRAH OPENGL 3.3 THE CORE, I DEMAND FROM THEE ZE SHADERZ AND MATRIXEZ"

 

My journals: dustArtemis ECS framework and Making a Terrain Generator




