I was playing around with some old maze generation code and tried to generate a maze 10000 x 10000 cells in size. The algorithm had no trouble generating the maze, but sending the VBO with 400,000,000+ vertices to the graphics card caused a crash. I didn't think of it at the time, but at 24 bytes per vertex, that totaled over 9.6 billion bytes of data. This exercise was, of course, unnecessary and a waste of time, but it did bring up something I hadn't considered before: how does one determine whether the data being sent will fit into the graphics card's RAM prior to sending it? If sending too much data causes a crash, then it's important to know beforehand. I know that my graphics card has xx total VRAM, but not all of that space is available to me.
Does OpenGL provide some mechanism to get this information? I am aware that it would be dependent on the card manufacturer, but the driver should make this information available to the API, I would think. I've searched OpenGL's documentation, but I cannot seem to find anything. Maybe this is something that is only available through the OS?