ATI initial rendering delay of up to multiple seconds

5 comments, last by vNeeki 12 years ago
This is a problem I have been facing for a long time, and I could not find a solution for it. The situation is the following:

1) Create some textures with glTexImage2D and fill them with image data
2) Render a scene where not all objects are immediately visible (culling)

Now, the first time a bunch of previously invisible objects turns visible, the frame update hangs for up to multiple seconds, depending on the number of objects that are now rendered but were not before. I narrowed the problem down to the textures. If an object is rendered with a texture that has not yet been used for rendering, this delay happens. If I force a texture that has already been used for rendering, no such delay happens at all.
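In practice this kind of "forcing" can also be done up front: right after loading, draw something tiny once with each texture. A minimal warm-up sketch, assuming a legacy fixed-function context (textureId is a placeholder, not engine code):

/* Draw one tiny triangle with the texture bound so the driver has to
   finish any deferred upload/conversion work right now. */
glBindTexture(GL_TEXTURE_2D, textureId);
glEnable(GL_TEXTURE_2D);
glBegin(GL_TRIANGLES);
glTexCoord2f(0.0f, 0.0f); glVertex2f(0.0f, 0.0f);
glTexCoord2f(1.0f, 0.0f); glVertex2f(0.001f, 0.0f);
glTexCoord2f(0.0f, 1.0f); glVertex2f(0.0f, 0.001f);
glEnd();
glFinish(); /* block until the driver is actually done */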

Thus, on ATI, for some reason an awful delay happens the first time a texture is used for rendering.

Does anybody have an idea what could cause this delay? It doesn't show up on nVidia at all with the same code. Is there some call I have to make to force ATI to ready the texture for rendering, even if the texture has not been used yet? I really need a solution for this, because random delays of up to several seconds during gameplay are just unacceptable.

Life's like a Hydra... cut off one problem just to have two more popping out.
Leader and Coder: Project Epsylon | Drag[en]gine Game Engine

That's like magic... whenever I post a question here after weeks of looking for an answer, I stumble across the solution soon after.

Figured out what causes the problem. I disabled mip mapping while using compression (hence GL_COMPRESSED_RGB_ARB respectively GL_COMPRESSED_RGBA_ARB), and the initial delay vanished. So the conclusion is that if a compressed format is used together with mip mapping, ATI doesn't compress the data during glTexImage2D but delays the compression until the texture is rendered for the first time. With either compression disabled or mip mapping disabled, the delay vanished. So you can't do both (compression and mip mapping) on ATI without running into the delay problem.
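In code terms, the combination that triggers the delay looks roughly like this. A sketch assuming the legacy GL_GENERATE_MIPMAP path for mip mapping (width, height and pixels are placeholders):

GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);

/* Problematic on ATI: compressed internal format plus mip mapping.
   Compression appears to be deferred until the first draw. */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGB_ARB, width, height, 0,
             GL_RGB, GL_UNSIGNED_BYTE, pixels);

/* Workaround A: keep mip mapping, drop compression. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, width, height, 0,
             GL_RGB, GL_UNSIGNED_BYTE, pixels);

/* Workaround B: keep compression, drop mip mapping. */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_FALSE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGB_ARB, width, height, 0,
             GL_RGB, GL_UNSIGNED_BYTE, pixels);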

Life's like a Hydra... cut off one problem just to have two more popping out.
Leader and Coder: Project Epsylon | Drag[en]gine Game Engine


This is interesting, as I had a similar problem. May I ask if there is a solution for doing decent mip mapping via a software method?
I would recommend always performing the creation of mip-maps and the compression of your texture data at data-build time, instead of at run-time.
There are many libraries to help with this task, such as the following (a short libsquish sketch follows the links):
http://code.google.com/p/nvidia-texture-tools/
http://code.google.com/p/libsquish/
http://developer.amd.com/tools/compress/Pages/default.aspx
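As an illustration of the build-time path, a minimal sketch using libsquish from the list above (rgbaPixels, width and height are placeholders, and the DXT1 choice is an assumption):

#include <squish.h>
#include <vector>

/* Offline, at data-build time: compress an RGBA8 image to DXT1 once
   and store the resulting blocks in the game data. */
std::vector<unsigned char> compressToDxt1(const unsigned char *rgbaPixels,
                                          int width, int height)
{
    const int flags = squish::kDxt1;
    std::vector<unsigned char> blocks(
        squish::GetStorageRequirements(width, height, flags));
    squish::CompressImage(rgbaPixels, width, height, blocks.data(), flags);
    return blocks;
}

/* At run time, upload the pre-compressed blocks directly so the driver
   has nothing left to compress:
   glCompressedTexImage2D(GL_TEXTURE_2D, level, GL_COMPRESSED_RGB_S3TC_DXT1_EXT,
                          width, height, 0, (GLsizei)blocks.size(), blocks.data());
*/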

Thanks, although it doesn't explain how to perform that particular task.

Figured that much out myself too. Looks like I have to do the compression on the software side. I had this plan anyway, so that's not much of a bummer. I downloaded libsquish already but could not yet put it to use. Coming soon.

@vNeeki:
Create all mip map levels in software, either with your own code or an existing library. Since it's down-sampling by a factor of 2 (hence 4 pixels into 1), a simple box filter is enough. Once done, you can upload each level with glTexImage2D, passing 0, 1, ..., N as the level argument (the second parameter) to write the appropriate mip map level; see the sketch below.
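A minimal sketch of that approach, assuming square power-of-two RGB8 textures and a GL header plus a bound GL_TEXTURE_2D (halveImageRGB and uploadWithMipMaps are illustrative names, not engine code):

#include <algorithm>
#include <vector>

/* Box filter: average each 2x2 block of an RGB8 image into one pixel. */
static void halveImageRGB(const unsigned char *src, int size, unsigned char *dst)
{
    const int half = size / 2;
    for (int y = 0; y < half; y++)
        for (int x = 0; x < half; x++)
            for (int c = 0; c < 3; c++) {
                int sum = src[((2*y    ) * size + 2*x    ) * 3 + c]
                        + src[((2*y    ) * size + 2*x + 1) * 3 + c]
                        + src[((2*y + 1) * size + 2*x    ) * 3 + c]
                        + src[((2*y + 1) * size + 2*x + 1) * 3 + c];
                dst[(y * half + x) * 3 + c] = (unsigned char)((sum + 2) / 4);
            }
}

/* Upload levels 0..N; the mip level is the second argument of glTexImage2D. */
static void uploadWithMipMaps(unsigned char *pixels, int size)
{
    std::vector<unsigned char> next(size * size * 3);
    for (int level = 0; ; level++) {
        glTexImage2D(GL_TEXTURE_2D, level, GL_RGB8, size, size, 0,
                     GL_RGB, GL_UNSIGNED_BYTE, pixels);
        if (size == 1) break;
        halveImageRGB(pixels, size, next.data());
        size /= 2;
        std::copy(next.begin(), next.begin() + size * size * 3, pixels);
    }
}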

Life's like a Hydra... cut off one problem just to have two more popping out.
Leader and Coder: Project Epsylon | Drag[en]gine Game Engine


Thanks. May I ask if you could recommend any decent library for the box-filter mipmap generation? All of my textures have POT dimensions (if that helps).
