madgallagher

OpenGL Strange Video Memory Increase


Hi Guys

 

I have a weird issue in OpenGL. I am rendering a lot of tris (millions) using vertex arrays.

Here is the issue.

 

If I load up the tris and display them shaded with a constant color, the video memory usage shows around 600MB.

 

If I then change the color array values so that the color varies per vertex, the memory usage increases to around 1GB.

If I then spin the model around, the memory usage slowly decreases to around 600MB (after a couple of minutes).

 

But if I load up the model straight away with varied shading, the memory usage is only 600MB.

 

This is on Linux, so is it a driver issue? Or is there something more obvious I am neglecting?

 

I am using nvidia-smi to check the graphics memory usage.

 

Cheers in advance for any help!


My first question is "Are you using the Linux drivers from your graphics card manufacturer or Nouveau?" If you are using Nouveau, I recommend you get the official Linux drivers for your video card before moving on.

 

Once that's all done and it still doesn't work, then my next question is "Are you using any type of culling?" If you are, it's possible that when switching from the constant color to the varied shading, the culling gets reset and isn't re-run until you move the camera (spin the model) or load the model right away with the varied shading. You'll need to find a way to make sure the culler is "always active". If you are attempting to use deferred shading, there could also be some issues there, but I wouldn't be able to help you much as I'm just starting to learn about deferred shading with OpenGL myself.

 

Also, have you tried testing it with a lower-poly model (maybe in the thousands of tris)?

 

Lastly, what version of OpenGL are you using and what is your graphics card?


Hi

 

I am using the latest Linux driver from NVIDIA. The card is pretty old. It's an FX3800, but we see the same issue on an FX4000.

The same routines are used for both types of shading. I'm simply changing the values in the color array. Nothing else.

 

We get the same "doubling" of memory usage on smaller models too. It's as if, when the color array is updated, the memory is not re-allocated on the card properly.


I'm simply changing the values in the color array.

How are you doing this? Is it a VBO or a client-side vertex array? What hints was it created with?
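
For reference, a minimal sketch of the two approaches being distinguished here; the array and buffer names are hypothetical, and it assumes GL 1.5+ headers with GL_GLEXT_PROTOTYPES defined, as on Linux:

#define GL_GLEXT_PROTOTYPES
#include <GL/gl.h>

/* Client-side array: the driver reads from this pointer at draw time
   and is free to cache its own copy of the data in video memory. */
static GLfloat colors[3 * 3]; /* hypothetical: 3 vertices, RGB each */

void use_client_array(void)
{
    glEnableClientState(GL_COLOR_ARRAY);
    glColorPointer(3, GL_FLOAT, 0, colors);
}

/* VBO: the storage is owned by the driver, and the last argument to
   glBufferData is the usage hint being asked about (GL_STATIC_DRAW for
   set-once data, GL_DYNAMIC_DRAW for data modified repeatedly). */
void use_vbo(void)
{
    GLuint vbo;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(colors), colors, GL_DYNAMIC_DRAW);
    glEnableClientState(GL_COLOR_ARRAY);
    glColorPointer(3, GL_FLOAT, 0, (const GLvoid *)0); /* offset into the bound VBO */
    glBindBuffer(GL_ARRAY_BUFFER, 0);
}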

 

the memory usage slowly decreases to around 600MB (after a couple of minutes).

Is this actually a problem? If the driver doesn't need that memory for other tasks right now, then it's OK for it to delay releasing it.

Edited by Hodgman


OK, the render code is basically as follows:

 

glShadeModel(GL_SMOOTH);
glEnable(GL_COLOR_MATERIAL);
/* note: each glColorMaterial call replaces the previous mode,
   so only the last one (GL_DIFFUSE) is actually in effect here */
glColorMaterial(GL_FRONT_AND_BACK, GL_AMBIENT);
glColorMaterial(GL_FRONT_AND_BACK, GL_SPECULAR);
glColorMaterial(GL_FRONT_AND_BACK, GL_DIFFUSE);

glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(....);
glEnableClientState(GL_NORMAL_ARRAY);
glNormalPointer(...);
glEnableClientState(GL_COLOR_ARRAY);
glColorPointer(....);
glDrawElements(........);

 

So, I am just using arrays, not VBOs. All I do is update the values in the color array that glColorPointer points to before I come into this routine.

 

The problem is that, on a 1GB graphics card, if I am already using 600MB and then change the colors, it swamps the card and the system starts lagging badly because it is trying to double the memory usage.


By your use of glColorPointer() and the FX3800/4000, I'm guessing you're using OpenGL 2.0. I personally don't have much experience with anything earlier than OpenGL 3.1, so please take my help with a grain of salt.

 

Do you disable the client state after you are finished writing to it, or at program close? If the latter, try disabling it after you are finished drawing everything. I'm not sure whether or not it will help, but it's worth a shot in my opinion.
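
In other words, something like this at the end of each frame's drawing (a sketch; the glDrawElements arguments are placeholders):

glDrawElements(GL_TRIANGLES, numIndices, GL_UNSIGNED_INT, indices); /* placeholder args */
/* disable the client states as soon as the draw has been issued,
   rather than waiting until program shutdown */
glDisableClientState(GL_COLOR_ARRAY);
glDisableClientState(GL_NORMAL_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);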

 

Is it possible for you to use VBOs rather than client-side arrays? My understanding is that, even back in OpenGL 2, it was more efficient to use buffer objects than plain arrays.
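
A sketch of that route, updating the per-vertex colors in place (colorVbo, numVerts, and newColors are hypothetical names). Note that re-calling glBufferData on an existing buffer asks the driver for fresh storage and can leave the old allocation alive until the GPU is finished with it, whereas glBufferSubData overwrites the storage that is already there:

/* Re-upload new per-vertex colors into the same VBO storage,
   avoiding a second allocation on the card. */
void update_colors(GLuint colorVbo, GLsizei numVerts, const GLfloat *newColors)
{
    glBindBuffer(GL_ARRAY_BUFFER, colorVbo);
    glBufferSubData(GL_ARRAY_BUFFER, 0, numVerts * 3 * sizeof(GLfloat), newColors);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
}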


Yes, sorry, I forgot to include the glDisableClientState calls. These are all called after the tris are drawn each time.

 

I added VBOs and got the same effect, even when deleting the VBO. Even simply using glBegin/glEnd for the rendering (shock horror!) causes the same effect.
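
For what it's worth, the deletion looks something like this (colorVbo is a hypothetical name); drivers are allowed to recycle freed storage lazily, so the figure reported by nvidia-smi will not necessarily drop immediately:

glBindBuffer(GL_ARRAY_BUFFER, 0); /* make sure the buffer is unbound */
glDeleteBuffers(1, &colorVbo);    /* the name becomes invalid at once, but
                                     the driver may release the actual
                                     video memory later */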

 

It is kind of driving me crazy...


It's driving me crazy that I don't know how else I can help you...

 

Just one quick question: are you using C++, C, or another language? Also, if you are using C++ or C, are you using any libraries like GLEW or SDL?


It's all in C, baby. I am not using anything like GLEW or SDL. It's pretty basic stuff.

 

Do you think it is a driver issue that is out of my control?
