Is there a way I can find how much video memory my program is using and...

This topic is 3407 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.

If you intended to correct an error in the post then please contact us.

Recommended Posts

A quick and dirty way to find the amount of VRAM available when your program starts is to make 1 MB textures/render targets until creation fails. If you can make 64 1 MB textures, your card has 64 MB.
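That probe is just a counting loop. Here's a minimal Python sketch of the idea; `try_alloc` is a hypothetical stand-in for the real allocation call (in OpenGL you'd create a 1 MB texture and check glGetError; in D3D you'd check the HRESULT), not an actual graphics API:

```python
def probe_vram_mb(try_alloc, max_mb=4096):
    """Allocate 1 MB resources until allocation fails; the number that
    succeeded approximates the VRAM size in MB (per the post above)."""
    handles = []
    for _ in range(max_mb):
        handle = try_alloc()
        if handle is None:  # allocation failed: we've (supposedly) hit the limit
            break
        handles.append(handle)
    # Real code would free the probe allocations here before continuing.
    return len(handles)

# Toy allocator simulating a 64 MB card, just to show the loop's behavior:
budget = {"free_mb": 64}
def fake_alloc():
    if budget["free_mb"] == 0:
        return None
    budget["free_mb"] -= 1
    return object()

print(probe_vram_mb(fake_alloc))  # → 64
```

As the replies point out, a real driver may happily page textures out of VRAM, so the loop can succeed far past the physical limit; the sketch only illustrates the counting logic.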

Quote:
Original post by beebs1
nVidia PerfHUD should be able to tell you this, and a lot more besides.

GPU PerfStudio is a similar tool from ATI/AMD.


I use an NVIDIA card and have certainly looked at PerfHUD, but that's only for D3D applications. I couldn't find the total amount of memory used by the program with gDEBugger either (I could have missed the feature though; there is a lot of info there).

Quote:
Original post by Numsgil
A quick and dirty way to find the amount of VRAM available when your program starts is to make 1 MB textures/render targets until creation fails. If you can make 64 1 MB textures, your card has 64 MB.


I tested an application that was using more video memory (about 70 MB) than the card had (32 MB), and it still worked, just very slowly. The textures were loaded okay; it just ran very slowly (I guess it was running in software). So I'm not sure this would be of help...

[Edited by - Deliverance on August 12, 2008 4:15:23 PM]

Quote:
Original post by Numsgil
A quick and dirty way to find the amount of VRAM available when your program starts is to make 1 MB textures/render targets until creation fails. If you can make 64 1 MB textures, your card has 64 MB.


Ya, that won't work. Your graphics driver will regularly move textures in and out of VRAM, so it is possible to allocate more texture memory than there is VRAM.

Your method would probably work if you used a feature that is guaranteed to be located in VRAM, such as pbuffers. However, that doesn't make this technique any less offensive (for obvious reasons).

There's something that will fail if you're out of VRAM. I thought it was textures, but I might be wrong. Try creating a hardware buffer; I know for sure those fail. The trick is to find something that can only be done in VRAM, create a couple of them until it fails, and then calculate the amount of VRAM from that.

Incidentally this trick comes from Game Coding Complete 2nd Edition. If anyone can think of a better way to have a nice dialog pop up when the user tries to run your game that says "sorry chump, you need more VRAM", I'd like to hear it, too :)

I tried the NVIDIA PerfSDK, because the HUD is only for D3D. The problem is it cannot display the amount of video memory used; it says 0 all the time. Here's a similar post with the same problem: http://developer.nvidia.com/forums/index.php?showtopic=233. So are we both doing something wrong? I tried collecting the info with the Perfmon solution and it too says 0 for video memory usage.

Quote:
Original post by Numsgil
There's something that will fail if you're out of VRAM. I thought it was textures, but I might be wrong. Try creating a hardware buffer; I know for sure those fail. The trick is to find something that can only be done in VRAM, create a couple of them until it fails, and then calculate the amount of VRAM from that.
Speaking for DirectX (I don't know about OpenGL), there's nothing that can only be created in hardware - the driver can do whatever the hell it likes with resources.

Quote:
Original post by Numsgil
Incidentally this trick comes from Game Coding Complete 2nd Edition. If anyone can think of a better way to have a nice dialog pop up when the user tries to run your game that says "sorry chump, you need more VRAM", I'd like to hear it, too :)
That's a really bad idea, and it's things like that which cause compatibility problems. What happens for people who have integrated graphics cards that don't have any VRAM at all? They won't be able to play your game at all, even if they have the equivalent of an 8800 GTS. If someone has an old card, they should still be allowed to play your game, even if it does run slowly.

Quote:
Original post by Deliverance
So, is there a valid solution to querying the amount of video memory used by an OpenGL application or not? :D
You can just total up the size of all your resources. A 256x256, 32-bit texture would be 256*256*4 = 262144 bytes = 256KB.

Whether the resource is actually in video memory entirely depends on the driver - that sort of number can only come direct from the driver, if your driver manufacturer provides some mechanism to access it (an API or control panel applet or whatever).
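Totalling resource sizes like that is easy to wrap in a helper. A Python sketch of the arithmetic; the 4/3 factor for a full mip chain is my addition (the standard geometric-series approximation), not from the post:

```python
def texture_bytes(width, height, bytes_per_pixel=4, mipmapped=False):
    """Approximate in-memory size of a texture. A full mip chain adds
    roughly 1/3 on top of the base level (series 1 + 1/4 + 1/16 + ...)."""
    base = width * height * bytes_per_pixel
    return base * 4 // 3 if mipmapped else base

# The example from the post: a 256x256, 32-bit texture.
print(texture_bytes(256, 256))                  # → 262144 (256 KB)
print(texture_bytes(256, 256, mipmapped=True))  # → 349525 (~341 KB)
```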

Quote:
Original post by Deliverance
So, is there a valid solution to querying the amount of video memory used by an OpenGL application or not? :D

No. The term "video memory" may not have a meaningful definition, and even if it does, it's not a particularly useful basis for deciding between several ways of doing things. Instead, allow the user to explicitly choose their texture quality, geometry detail, etc.

EDIT: as a shipping product, that is. If you're just testing your app, use PerfSDK. Yes, it does work with OpenGL... I think it's GLExpert that gives you that info.
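The "let the user choose" approach boils down to a settings table rather than any VRAM query. A minimal sketch; the preset names and values here are made up for illustration:

```python
# Hypothetical quality presets; the specific values are illustrative only.
QUALITY_PRESETS = {
    "low":    {"max_texture_size": 512,  "geometry_detail": 0.5},
    "medium": {"max_texture_size": 1024, "geometry_detail": 0.75},
    "high":   {"max_texture_size": 2048, "geometry_detail": 1.0},
}

def settings_for(choice):
    """Look up the user's chosen preset, falling back to 'medium'."""
    return QUALITY_PRESETS.get(choice, QUALITY_PRESETS["medium"])

print(settings_for("high")["max_texture_size"])  # → 2048
```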

Quote:
Original post by Evil Steve
Quote:
Original post by Numsgil
There's something that will fail if you're out of VRAM. I thought it was textures, but I might be wrong. Try creating a hardware buffer; I know for sure those fail. The trick is to find something that can only be done in VRAM, create a couple of them until it fails, and then calculate the amount of VRAM from that.
Speaking for DirectX (I don't know about OpenGL), there's nothing that can only be created in hardware - the driver can do whatever the hell it likes with resources.


In theory the driver can do whatever it likes. In practice, there are resources that must be created in hardware, and if you've already filled your VRAM, trying to create them will fail. For our game at work, we were crashing because we were out of VRAM (yes, it's a DX game); changing the load order (loading textures last) fixed that problem. I ran into it a few years ago as well when I was creating a vertex buffer for deformable terrain. There are definitely things that no amount of system RAM can handle.

Quote:

Quote:
Original post by Numsgil
Incidentally this trick comes from Game Coding Complete 2nd Edition. If anyone can think of a better way to have a nice dialog pop up when the user tries to run your game that says "sorry chump, you need more VRAM", I'd like to hear it, too :)
That's a really bad idea, and it's things like that which cause compatibility problems. What happens for people who have integrated graphics cards that don't have any VRAM at all? They won't be able to play your game at all, even if they have the equivalent of an 8800 GTS. If someone has an old card, they should still be allowed to play your game, even if it does run slowly.


If your customer is running integrated Intel, VRAM is the least of your worries :) Last I heard, integrated graphics cards were broken under OpenGL, and Intel had no intention of fixing them any time soon. Maybe that's changed? All you're checking is that the game won't crash if it tries to create X MB of resource type Y. It's just a sanity check, and you can do whatever you like with the information from that sanity check (even something crazy like letting the initialization routines continue).

Also, old cards can't always run new games. Shader models in particular make it impossible (well, there might be a way to force the driver into software emulation mode, but no one could play your game like that) to run new games. Sure, you could just program your game to blithely start initializing things and handle an error after 10 seconds of loading to let the user know that you can't create the mesh for SamuraiWarrior92 for some reason. I just don't want to be the one to man the tech support stations when everyone with a 256 MB card who tries to run your game complains that their SamuraiWarrior92 mesh is broken. A nice message that says "this game requires a 512 MB, Shader Model 4 card, and your card appears to be 256 MB, Shader Model 4, which is insufficient" is always far friendlier than whatever error messages programmers write for themselves inside the loading routines ("Error creating mesh "BigHonkinSplosion": invalid alternate tree mask") :)

Quote:
Original post by fpsgamer
Quote:
Original post by Numsgil
A quick and dirty way to find the amount of VRAM available when your program starts is to make 1 MB textures/render targets until creation fails. If you can make 64 1 MB textures, your card has 64 MB.


Ya, that won't work. Your graphics driver will regularly move textures in and out of VRAM, so it is possible to allocate more texture memory than there is VRAM.

Your method would probably work if you used a feature that is guaranteed to be located in VRAM, such as pbuffers. However, that doesn't make this technique any less offensive (for obvious reasons).


Not to mention that the GPU can also handle things a bit differently than you expect: nothing prevents the driver or even the GPU from compressing some textures or (for example) the z-buffer. So this method doesn't work very well.

If you are using D3D, there is a function on the IDirect3DDevice9 interface that gets an approximate value of the onboard memory size. It's not very trustworthy, but at least you'll get an idea.

Quote:
Original post by Numsgil
In theory the driver can do whatever it likes. In practice, there are resources that must be created in hardware, and if you've already filled your VRAM, trying to create them will fail. For our game at work, we were crashing because we were out of VRAM (yes, it's a DX game); changing the load order (loading textures last) fixed that problem. I ran into it a few years ago as well when I was creating a vertex buffer for deformable terrain. There are definitely things that no amount of system RAM can handle.
What if your application is alt+tabbed away from and the OS decides to take over the VRAM (as Vista does)? What if the next version of Windows introduces virtual VRAM in the same way we have virtual memory for system RAM?

Quote:
Original post by Numsgil
If your customer is running integrated Intel, VRAM is the least of your worries :) Last I heard, integrated graphics cards were broken under OpenGL, and Intel had no intention of fixing them any time soon. Maybe that's changed? All you're checking is that the game won't crash if it tries to create X MB of resource type Y. It's just a sanity check, and you can do whatever you like with the information from that sanity check (even something crazy like letting the initialization routines continue).
That's true. My point was more about future-proofing your code, though. What if Intel fixes their integrated chipsets and they become the equivalent of an 8800 GTS, while still not having any real VRAM?

Quote:
Original post by Numsgil
Also, old cards can't always run new games. Shader models in particular make it impossible (well, there might be a way to force the driver into software emulation mode, but no one could play your game like that) to run new games. Sure, you could just program your game to blithely start initializing things and handle an error after 10 seconds of loading to let the user know that you can't create the mesh for SamuraiWarrior92 for some reason. I just don't want to be the one to man the tech support stations when everyone with a 256 MB card who tries to run your game complains that their SamuraiWarrior92 mesh is broken.
Your code should always check each graphics allocation to make sure it succeeds. There are many, many things that could go wrong; total resource size is just one of them. If something fails and you don't handle it properly, there's a good chance your customer will see a lovely "The application has performed an illegal operation and will be terminated" dialog box, which is never a good thing.

Quote:
Original post by Numsgil
A nice message that says "this game requires a 512 MB, Shader Model 4 card, and your card appears to be 256 MB, Shader Model 4, which is insufficient" is always far friendlier than whatever error messages programmers write for themselves inside the loading routines ("Error creating mesh "BigHonkinSplosion": invalid alternate tree mask") :)
Again, VRAM size has nothing to do with it. Checking the supported shader version is a separate issue, and is perfectly fine to do. If you fail to create a mesh for some reason, you may be able to find out why (D3D gives you multiple error codes you can check), and if not, you can simply display a message that doesn't have to be quite as low-level as that. In any case, if it's failing to load a mesh, that's going to be a more serious problem - allocating a buffer for vertices shouldn't really be a problem (again, in D3D it's in system memory till you use it; I assume OpenGL has something similar).

Quote:
Original post by Emmanuel Deloget
Not to mention that the GPU can also handle things a bit differently than you expect: nothing prevents the driver or even the GPU from compressing some textures or (for example) the z-buffer. So this method doesn't work very well.

If you are using D3D, there is a function on the IDirect3DDevice9 interface that gets an approximate value of the onboard memory size. It's not very trustworthy, but at least you'll get an idea.
GetAvailableTextureMem. Although that includes AGP memory and all sorts of other stuff. I've seen this report more than double the actual VRAM size before.


My main point is that you should never refuse to run unless you either have an error, or you've identified something that you absolutely need and you're absolutely sure the card can't do (E.g. shader version). Doing otherwise is quite likely to break your game on future hardware.

Quote:
Original post by Evil Steve
Quote:
Original post by Numsgil
In theory the driver can do whatever it likes. In practice, there are resources that must be created in hardware, and if you've already filled your VRAM, trying to create them will fail. For our game at work, we were crashing because we were out of VRAM (yes, it's a DX game); changing the load order (loading textures last) fixed that problem. I ran into it a few years ago as well when I was creating a vertex buffer for deformable terrain. There are definitely things that no amount of system RAM can handle.
What if your application is alt+tabbed away from and the OS decides to take over the VRAM (as Vista does)? What if the next version of Windows introduces virtual VRAM in the same way we have virtual memory for system RAM?


It shouldn't take any appreciable time to create 512 1 MB vertex buffers (or whatever), so alt+tabbing away during that 1 ms startup test is something of an edge case you can ignore. If in the future applications use virtual VRAM for everything, then your test won't fail and the user will be able to play your game on potentially inferior technology. Nothing lost, just nothing gained either. At least you can be sure you won't ever get a "could not create hardware buffer" error during loading or (heaven forbid) inside your game loop. At least not from something like not enough VRAM (removing the video card during play is probably a source of error regardless :))

It's not a perfect test; I said it was quick and dirty. It just lets you protect against cards you know your game won't run on.

Quote:
That's true. My point was more about future-proofing your code, though. What if Intel fixes their integrated chipsets and they become the equivalent of an 8800 GTS, while still not having any real VRAM?


Again, your test will pass (512 1 MB hardware buffers created successfully). Worst case, it's a false negative.

Quote:

Your code should always check each graphics allocation to make sure it succeeds. There are many, many things that could go wrong; total resource size is just one of them. If something fails and you don't handle it properly, there's a good chance your customer will see a lovely "The application has performed an illegal operation and will be terminated" dialog box, which is never a good thing.


Yes, you should definitely always check against errors. But unless you're somehow different from any coder I've seen, your error messages are not meant for human eyes. If you can do some quick and dirty tests at program start to check for common problems, you can save at least a large portion of your customers a lot of headache. "Oh, I just need a better video card. That sucks, but I know what that means" vs. "What the crap is arggm.qhm? What's a tree mask? This game sucks and I'm going to spam every forum I know to let people know not to waste their money on this crap."

Quote:
Original post by Numsgil
Again, VRAM size has nothing to do with it. Checking the supported shader version is a separate issue, and is perfectly fine to do. If you fail to create a mesh for some reason, you may be able to find out why (D3D gives you multiple error codes you can check), and if not, you can simply display a message that doesn't have to be quite as low-level as that. In any case, if it's failing to load a mesh, that's going to be a more serious problem - allocating a buffer for vertices shouldn't really be a problem (again, in D3D it's in system memory till you use it; I assume OpenGL has something similar).


They're all just different sides of the same issue. I'm just saying you should do a sanity check before you try loading levels, because error messages meant for you, a programmer, and error messages meant for me, a gamer, are not the same thing. Keep your customers away from coder-written error messages. If your sanity checks fail to indicate an error on a specific system where there is one, that's too bad, but something you can live with. Just make sure the card/driver won't fail doing things you know your game will ask it to do - especially if that error would occur during gameplay. Then your game is suddenly buggy when it was the hardware's fault and something you could have tested for at program start (or game init dialog, wherever you want it).

Quote:
Original post by Evil Steve
Quote:
Original post by Deliverance
So, is there a valid solution to querying the amount of video memory used by an OpenGL application or not? :D
You can just total up the size of all your resources. A 256x256, 32-bit texture would be 256*256*4 = 262144 bytes = 256KB.

Whether the resource is actually in video memory entirely depends on the driver - that sort of number can only come direct from the driver, if your driver manufacturer provides some mechanism to access it (an API or control panel applet or whatever).


A quick gotcha on NVIDIA cards: if your texture is not a power of two, NVIDIA will round it up to the next power of two once it gets on the card, meaning the size you calculate the texture to take up and what it actually takes up will be different.
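If that rounding behavior applies, the padded footprint is easy to estimate. A Python sketch, assuming (per the post, not verified for every card) that each dimension is rounded up to a power of two:

```python
def next_pow2(n):
    """Smallest power of two >= n."""
    p = 1
    while p < n:
        p *= 2
    return p

def padded_texture_bytes(width, height, bytes_per_pixel=4):
    """Estimated on-card footprint if the driver pads each dimension
    up to a power of two, as described above for NVIDIA cards."""
    return next_pow2(width) * next_pow2(height) * bytes_per_pixel

# A 200x200, 32-bit texture: you'd calculate 200*200*4 = 160000 bytes,
# but padded to 256x256 it would occupy 262144 bytes on the card.
print(padded_texture_bytes(200, 200))  # → 262144
```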

Quote:
Original post by Numsgil
It shouldn't take any appreciable time to create 512 1 MB vertex buffers (or whatever), so alt+tabbing away during that 1 ms startup test is something of an edge case you can ignore. If in the future applications use virtual VRAM for everything, then your test won't fail and the user will be able to play your game on potentially inferior technology. Nothing lost, just nothing gained either. At least you can be sure you won't ever get a "could not create hardware buffer" error during loading or (heaven forbid) inside your game loop.

It's not a perfect test; I said it was quick and dirty. It just lets you protect against cards you know your game won't run on.

Quote:
That's true. My point was more about future-proofing your code, though. What if Intel fixes their integrated chipsets and they become the equivalent of an 8800 GTS, while still not having any real VRAM?


Again, your test will pass (512 1 MB hardware buffers created successfully). Worst case, it's a false negative.
Ok, fair points [smile]

Quote:
Original post by Numsgil
Yes, you should definitely always check against errors. But unless you're somehow different from any coder I've seen, your error messages are not meant for human eyes. If you can do some quick and dirty tests at program start to check for common problems, you can save at least a large portion of your customers a lot of headache. "Oh, I just need a better video card. That sucks, but I know what that means" vs. "What the crap is arggm.qhm? What's a tree mask? This game sucks and I'm going to spam every forum I know to let people know not to waste their money on this crap."
I have two error messages. At the point the error occurs, detailed information is logged to a file, and a user-displayable error message is set. The function then returns an error code, which eventually finds its way back up to the main / load / init code, where the human-readable error message is read, and it could add "See error.log for technical details", or something similar.
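That two-tier pattern (technical detail to a log, friendly message to the user) can be sketched like this in Python; the class and function names are made up for illustration, not from any particular engine:

```python
class ResourceError(Exception):
    """Carries both a user-facing message and a technical detail string."""
    def __init__(self, user_message, detail):
        super().__init__(user_message)
        self.user_message = user_message  # what the dialog box shows
        self.detail = detail              # what goes into error.log

def create_vertex_buffer(mesh_name, succeeded):
    # `succeeded` stands in for the result of the real allocation call.
    if not succeeded:
        raise ResourceError(
            "Failed to load game resources. See error.log for technical details.",
            "vertex buffer allocation failed for mesh '%s'" % mesh_name)

try:
    create_vertex_buffer("SamuraiWarrior92", succeeded=False)
except ResourceError as e:
    print(e.user_message)    # the gamer sees this...
    # write e.detail to error.log: the programmer sees that
```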

Quote:
Original post by Numsgil
They're all just different sides of the same issue. I'm just saying you should do a sanity check before you try loading levels, because error messages meant for you, a programmer, and error messages meant for me, a gamer, are not the same thing. Keep your customers away from coder-written error messages. If your sanity checks fail to indicate an error on a specific system where there is one, that's too bad, but something you can live with. Just make sure the card/driver won't fail doing things you know your game will ask it to do - especially if that error would occur during gameplay. Then your game is suddenly buggy when it was the hardware's fault and something you could have tested for at program start (or game init dialog, wherever you want it).
As above, the errors can be niceified.

I see your points; I'm not sure what I was really thinking. My point is (as you seem to agree) that false negatives are OK, but false positives really aren't - that's what breaks compatibility. Checking how much VRAM is available is something that is likely to report a false positive (saying you have 0 MB of VRAM when you have an integrated chipset, for instance), and is of no use to the programmer. This is exactly why GetAvailableTextureMem() doesn't just give you VRAM - it gives you the amount of texture memory available (as reported by the driver, and rounded), which is much more useful, but is not the same as the amount of VRAM available, and can be completely unrelated.

Quote:
Original post by Evil Steve
Checking how much VRAM is available is something that is likely to report a false positive (saying you have 0 MB of VRAM when you have an integrated chipset, for instance), and is of no use to the programmer.


Yeah, it would be nice if the APIs were set up to give you performance bottleneck points. Ignoring the whole issue of VRAM, AGP memory, etc., just say something like: when your assets go north of 256 MB, expect performance degradation; and you are currently using X MB of resources.

Quote:

This is exactly why GetAvailableTextureMem() doesn't just give you VRAM - it gives you the amount of texture memory available (as reported by the driver, and rounded), which is much more useful, but is not the same as the amount of VRAM available, and can be completely unrelated.


Yeah, GetAvailableTextureMem() is approximately worthless.

