Archived

This topic is now archived and is closed to further replies.

InsaDe

Video memory usage


Recommended Posts

Guest Anonymous Poster
Not possible.

Guest Anonymous Poster
Let me elaborate. It is not possible, nor is it necessary. Whether you are using OpenGL or Direct3D, you don't talk directly to the video card. You always talk to the driver, and it is the driver's responsibility to put things in the proper memory. If video memory fills up, the driver will start putting things in AGP memory. If AGP fills up or is not available, it will start looking at system memory, and so on. Your only responsibility is to keep your resources within a reasonable limit. Almost any card made within the last year or so will have at least 64 megs. So if you want to target newer machines and stay within a safe limit, keep your resources at around 64 megs. If you want to support even older machines (TNT2), keep it around 32 megs.

"Almost any card made within the last year or so will have at least 64 megs. So if you want to target newer machines and be in a safe limit, keep your resources at around 64 megs. If you want to support even older machines (TNT2), keep it around 32 megs."

Not if your engine can scale itself to the hardware; in that case, it _needs_ to know.

If you just do this: "If you want to support even older machines (TNT2), keep it around 32 megs",
it will run on older hardware, but won't take any advantage of modern hardware.

I'm interested in this too. I haven't looked deeply into it yet, but I read somewhere on these boards that you can use Windows functions to ask the driver directly, or something similar (can't remember).

Perhaps a search on MSDN would help.

EDIT:
No offence, but this statement: "It is not possible, nor is it necessary"
is far from true... (at least the last part, and I'm almost sure that you _can_ get information on the first one; at the very least the driver knows, and I've seen a few programs that report these figures.)

[edited by - sBibi on January 12, 2004 11:09:15 AM]

EDIT(2):
"Whether you are using OpenGL or Direct3D, you don't talk directly to the video card. You always talk to the driver, and it is the driver's responsibility to make put things in the proper memory"

but you might _not_ want the driver to put some "things" somewhere else than in vram...

[edited by - sBibi on January 12, 2004 11:11:33 AM]

I'm pretty sure (in DX8 at least) there was a call on either the D3D object or the D3DDevice for querying texture memory... but it included AGP memory and other memory. There was no call to get the video card's actual memory size.

In DDraw, however, I'm pretty sure there IS a call to get the memory... If I remember correctly, the value is shown in Caps Viewer under DDraw. (checked... yup, it's there)

People have mentioned that it's annoying not to be able to query the amount of memory, and MS apparently has listened and will add an appropriate call in future versions of DX (I think HerbM mentioned this a long time ago). Until then you can init DX7 DDraw, query the memory, kill DDraw, then start D3D8 or D3D9.
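The "init DDraw7, query, shut it down" trick might look something like this. This is a Windows-only sketch of `IDirectDraw7::GetAvailableVidMem`; treat the exact flags and error handling as assumptions rather than a tested implementation:

```cpp
#include <windows.h>
#include <ddraw.h>
// link with ddraw.lib and dxguid.lib

// Returns the total local video memory in bytes, or 0 on failure.
DWORD QueryLocalVidMem()
{
    LPDIRECTDRAW7 dd = NULL;
    if (FAILED(DirectDrawCreateEx(NULL, (void**)&dd, IID_IDirectDraw7, NULL)))
        return 0;

    DDSCAPS2 caps;
    ZeroMemory(&caps, sizeof(caps));
    // ask only about memory on the card itself, not AGP
    caps.dwCaps = DDSCAPS_VIDEOMEMORY | DDSCAPS_LOCALVIDMEM;

    DWORD total = 0, freeMem = 0;
    dd->GetAvailableVidMem(&caps, &total, &freeMem);

    dd->Release();   // kill DDraw before starting up D3D8/D3D9
    return total;
}
```

Call this once at startup, before creating the D3D device.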

I might be completely wrong, but with OpenGL at least you could try the following:


int numtex = 0;
GLuint t[10000];
GLboolean resident[10000]; // glAreTexturesResident takes GLboolean*, not GLuint*
glGenTextures(10000, t);

while (numtex < 10000)
{
    glBindTexture(GL_TEXTURE_2D, t[numtex]);
    // not sure if this works with a null pointer; if not, you might have to
    // specify a fake tex image. GL_RGB5 (2 bytes per texel) and 256x256 are
    // chosen to work on almost any hardware, down to a Voodoo (at least I think so)
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB5, 256, 256, 0, GL_RGB, GL_UNSIGNED_BYTE, 0);
    numtex++;
    // stop as soon as one of the textures created so far is no longer resident
    if (!glAreTexturesResident(numtex, t, resident))
        break;
}

glDeleteTextures(10000, t);

// each is a 256x256 texture at 2 bytes per texel (GL_RGB5)
int freevidmeminbytes = numtex * 256 * 256 * 2;


I think I saw a glAreTexturesResident demo on delphi3d.net once, and it worked perfectly for me, so this should give you fairly accurate results.

Keep in mind though that you need a valid OpenGL RC and texturing enabled when calling this piece of code.

edit: sp

[edited by - LousyPhreak on January 12, 2004 12:18:09 PM]

If you're using DirectX, if I remember correctly you can call GetCaps() on the DirectDraw interface to get the total and free memory for the video card. However, don't make too many assumptions about what you can do with this memory. All of it may or may not be available for textures, vertices, etc... i.e. it tells you that memory is free, but not what type of memory it is.



[edited by - mauman on January 12, 2004 12:43:07 PM]

DirectX's value is just an estimation and might well be wrong.

The problem with the question is: how do you define video memory?

If you're thinking of texture storage, it can be spread across many types of memory, AGP (or what about integrated chipsets?), in many formats, compressed or not, 16-bit or 32-bit; you don't know what the driver is doing. It'd be completely useless, except as a very, very, very rough estimation.

If you want your engine to scale with the video card, I'd rather base it on the type or family of video card than try to find the amount of video memory.

Y.

I think there's no point checking how much there is.

What if the card doesn't use the conventional notion of VRAM, AGP RAM and System RAM, or their associated traits?

What about onboard graphics that don't have integrated VRAM? Perhaps the driver will return the amount of system RAM there is. Do you really want to fill that up?

If you want to make a scalable engine, you could try the Neverwinter Nights approach and have different texture sets for different amounts of VRAM. And ask the user which one to use.

I can't stress that enough - ask the user.


