When you load a texture into OpenGL as a 32-bit image while running in a 16-bit display mode, is the texture in OpenGL's memory still 32-bit? I am working on a project where we load a large (1024x1024, 32-bit) image for the terrain, and rendering is somewhat slow. We are only using that one texture for the terrain. When we scale down the image, the game runs faster. We aren't doing any mip-mapping either. We are on GeForce 2's with 32MB of memory, so I am sure the texture should be making it into video RAM. Does anyone know what the problem might be? I'm aware a 1024x1024 image is quite large, but with these cards shouldn't it be fine? I'm not the one working on the rendering API, however... so please excuse any ignorance. Any thoughts or tips would be great.
Depends on what internal format you ask for (which the driver might not obey anyway, but it tries to get close). If you load it with GL_RGB while in a 16-bit display mode, it'll most likely give you an RGB5 texture. Try a higher number, e.g. an internal format of GL_RGB8.
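In code, the internal format is the third argument to glTexImage2D. A minimal sketch (the function name and the `pixels` pointer are illustrative; it assumes a current GL context and 1024x1024 RGB data loaded elsewhere):

```c
#include <GL/gl.h>

/* Sketch only: assumes a current GL context and that `pixels`
   points at 1024x1024 tightly-packed 8-bit RGB data. */
void upload_terrain_texture(const void *pixels)
{
    /* Explicitly request 8 bits per channel. A bare GL_RGB lets the
       driver choose, and in a 16-bit display mode it will often hand
       back an RGB5 texture instead. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8,
                 1024, 1024, 0, GL_RGB, GL_UNSIGNED_BYTE, pixels);
}
```

Note the flip side: if speed matters more than quality, explicitly asking for a 16-bit internal format like GL_RGB5 halves the texture's footprint and bandwidth.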