
OpenGL: Calculate GPU memory usage


__SKYe

So, I'm thinking of calculating the amount of VRAM in use.

 

Now, I don't intend to track every single byte in VRAM (nor do I think that's possible); I was just thinking of keeping track of how much memory I upload to the GPU, to get an estimate of the VRAM in use (by keeping track of the amount of memory that textures, VBOs, other buffers, etc. occupy).

 

Of course, I'd have to keep packing, compression, etc. in mind. My question is mostly about the color/depth/stencil/etc. buffers.

 

 

Say I create a single-buffered window with a 32-bit (RGBA) color depth and a 16-bit depth buffer.

 

I know that the color buffer occupies 4 bytes per pixel, and has the same size as the window (say 800x600).

The depth buffer occupies 2 bytes per pixel and also has the same size as the window.
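
In concrete numbers, a back-of-the-envelope sketch (assuming tightly packed buffers with no driver padding, which is not guaranteed):

    #include <cstddef>

    // Estimated footprint of the default framebuffer for an 800x600 window:
    // RGBA8 color at 4 bytes/pixel, 16-bit depth at 2 bytes/pixel.
    const std::size_t width  = 800;
    const std::size_t height = 600;
    const std::size_t colorBytes = width * height * 4; // 1,920,000 bytes (~1.83 MB)
    const std::size_t depthBytes = width * height * 2; //   960,000 bytes (~0.92 MB)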

 

My first question would be: if I switch to fullscreen mode at a bigger resolution (say 1280x800), am I correct to assume that the buffers are expanded to match the new window size?

And if I return to windowed mode again, assuming the buffers are once again resized to match the window size, does the memory the buffers occupy in VRAM actually shrink, or does part of the memory that was used for the larger (fullscreen) size simply go unused?

 

Also, for double buffering, the window requires 2 color buffers, basically doubling the memory needed for the color buffers, right?

I also assume that using double buffering and support for stereoscopic 3D means 4 color buffers (back left & right, front left & right).
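
Putting these assumptions together, a rough helper for the estimate might look like this (purely a sketch; it assumes tightly packed buffers and no hidden driver copies):

    #include <cstddef>

    // Hypothetical estimate of the default framebuffer's VRAM footprint.
    // numColorBuffers: 1 = single-buffered, 2 = double-buffered,
    // 4 = double-buffered stereo (front/back x left/right).
    std::size_t EstimateFramebufferBytes(std::size_t width, std::size_t height,
                                         std::size_t colorBytesPerPixel,
                                         std::size_t depthBytesPerPixel,
                                         std::size_t numColorBuffers)
    {
        return width * height *
               (colorBytesPerPixel * numColorBuffers + depthBytesPerPixel);
    }

For example, EstimateFramebufferBytes(800, 600, 4, 2, 2) gives 4,800,000 bytes for the double-buffered case.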

 

Another question would be: does the stencil buffer use its own memory, or is it shared with another buffer (say, the depth buffer)?

 

Also, I've read somewhere that multisampling requires a buffer per sample (say, 4x multisampling would require 4 buffers). If anyone could enlighten me on this, I'd appreciate it very much.

 

I'm not yet accounting for user-created buffers (like the geometry buffers used for deferred shading, or a custom floating-point depth buffer for HDR).

 

As a last note, is it correct to assume that all these things created by the graphics API (OpenGL in this case) reside in VRAM, or are there exceptions?

 

If something is not clear, I'll be happy to explain.

 

Thanks in advance.

__SKYe

Oh no, I'm talking about calculating the amount of VRAM used from within the game/application.

Something like what you do with a custom memory manager to keep track of how much RAM you've allocated and where the allocations are going.

 

GPU-Z is a standalone utility for getting GPU information, right?

__SKYe

Thanks, that's helpful for knowing whether you're within the VRAM budget.

 

But while it's helpful to know the amount of memory used, it still won't help much with what I'm after.

 

You see, what I mean to do is basically keep track of how much memory I allocate for the various types of data that reside in VRAM.

This can be textures, vertex/normal/texcoord data, the color/depth/stencil buffers, etc.

 

Again, I don't mean to track every single byte of memory on the GPU; I just want to keep an "in-house" estimate of the VRAM used.

 

As an example, think of how you would create your own memory (normal RAM) tracker, to get stats like:

 

General:   2312083KB

Audio:         195421KB

MAP:       29383467KB

 

etc...

 

And the VRAM equivalent could be:

 

Color Buffers:     503918214KB

Depth Buffer:        78216928KB

Textures:        23487654303KB

Geometry:          489279345KB

 

etc...
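
A minimal sketch of a tracker that could produce stats like these (the categories and names are just illustrative):

    #include <cstddef>

    // Minimal "in-house" VRAM tracker: one running byte count per category.
    enum class VramCategory { ColorBuffers, DepthBuffer, Textures, Geometry, Count };

    class VramTracker
    {
    public:
        void Add(VramCategory cat, std::size_t bytes)    { m_bytes[(int)cat] += bytes; }
        void Remove(VramCategory cat, std::size_t bytes) { m_bytes[(int)cat] -= bytes; }
        std::size_t Bytes(VramCategory cat) const        { return m_bytes[(int)cat]; }

    private:
        std::size_t m_bytes[(int)VramCategory::Count] = {};
    };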

 

Note that I don't mean to query the GPU for the amount of data I sent, or for the memory in use.

What I'd do is calculate the amount of memory a resource will use when I create/upload it on/to the GPU.

 

Again, imagine I load an R8G8B8A8 128x128 (uncompressed) texture from disk and upload it to the GPU. Here I'd calculate the memory required for the texture (which would be 128*128*4 bytes, assuming the internal storage is also RGBA, etc.) and perhaps add it to a counter of the total texture memory used.
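
With the tracker sketched above, that could look like this (assuming the driver really stores the texture at 4 bytes per texel, with no padding and no mipmaps; 'pixels' and 'tracker' are assumed to already exist):

    // Upload a 128x128 RGBA8 texture and record its estimated size.
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 128, 128, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels); // 'pixels' loaded from disk

    tracker.Add(VramCategory::Textures, 128 * 128 * 4); // 65,536 bytes

A full mipmap chain would add roughly a third on top of that (about 87,381 bytes total).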

 

I know that the driver can evict textures from VRAM if you upload more than the available memory, among other issues, so it's just an estimate.

 

Sorry if the explanation isn't clear.

 

Thanks again.

Matias Goldberg

The logic behind all your reasoning is correct.

 

However, the driver and hardware internals can screw up your calculations completely.

 

Even though you request one front buffer and two back buffers, the GPU may internally keep something like three copies of them for pipelining, avoiding stalls, etc., and you have no way of knowing what the driver is doing behind your back (that's the main point of the yet-unreleased Mantle: the API doesn't do stuff behind your back).

The duplication may go even further if the GPU is in CrossFire/SLI mode.

 

For example, on many GeForce architectures (actually, AFAIK all of them), if you request an FBO with a float R16 format, the driver will internally use a float R32, since their GPUs don't natively support R16 (this may change in the future, and you don't always know this kind of stuff; it's only known because F32/F16 are very used and abused formats, and NVIDIA has stated this explicitly many times in their GDC courses).
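
One way to keep an estimate honest in the face of promotions like that is to budget with a per-format worst case. A sketch (the table values are assumptions you'd tune per target hardware, not something you can query):

    #include <cstddef>
    // Assumes an OpenGL header providing GLenum and the format enums is included.

    // Hypothetical worst-case bytes-per-pixel table for budgeting, padding
    // formats the hardware may silently promote (e.g. R16F stored as R32F).
    std::size_t WorstCaseBytesPerPixel(GLenum internalFormat)
    {
        switch (internalFormat)
        {
            case GL_R16F:    return 4; // assume promotion to 32-bit float
            case GL_R32F:    return 4;
            case GL_RGBA8:   return 4;
            case GL_RGBA16F: return 8;
            default:         return 4; // conservative fallback
        }
    }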

 

As for 32-bit colour framebuffers with 16-bit depth buffers, forget about it. It's been a very long time since I've seen a GPU that can mix colour and depth buffers of different bit depths (use all 16-bit or all 32-bit). It either fails or internally just uses the largest bpp.

 

Same issue with the stencil buffer: on some architectures it is shared with the depth buffer, i.e. 24 bits for depth, 8 bits for stencil.

On other architectures they're separate (32 bits for depth, or 24 for depth + 8 unused, and then another buffer with 8 bits for stencil).
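
For what it's worth, the packed case is exactly what you get when you explicitly ask for a combined depth-stencil attachment (a sketch, assuming GL 3.0+ / ARB_framebuffer_object, with 'width' and 'height' defined elsewhere):

    // One renderbuffer holding 24 bits of depth + 8 bits of stencil,
    // i.e. 4 bytes per pixel for both together.
    GLuint rbo = 0;
    glGenRenderbuffers(1, &rbo);
    glBindRenderbuffer(GL_RENDERBUFFER, rbo);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, width, height);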

 

As for MSAA: basically, 4x antialiasing means 4x the resolution of a colour and depth buffer, plus a buffer at 1x resolution to resolve into. How many resolve buffers there are depends on how you tell the card to do it.
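
In numbers, for the earlier 800x600 example with RGBA8 colour and a 24/8 depth-stencil buffer (assuming no compression tricks, which real GPUs do use):

    multisampled colour:        800 * 600 * 4 bytes * 4 samples = 7,680,000 bytes
    multisampled depth/stencil: 800 * 600 * 4 bytes * 4 samples = 7,680,000 bytes
    resolve colour at 1x:       800 * 600 * 4 bytes             = 1,920,000 bytes

That's roughly 17.3 MB, versus about 3.8 MB for the same buffers without MSAA.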

 

So, bottom line: you can calculate your "theoretical" requirement, but the actual number on a particular system is completely dependent on the driver and the GPU architecture.

