Archived

This topic is now archived and is closed to further replies.

Mr Lane

Internal Texture Colour Depth


I am having a bit of a debate with someone regarding the colour depth that APIs like D3D and OpenGL use when loading textures into RAM. We were talking about how the Unreal Engine uses both 8-bit paletted images and 32-bit RGBA images (and DXTC, but that's another issue). Tutorials about using textures in Unreal always say that you should use paletted images when you don't need an alpha channel and 256 colours are sufficient.

My friend claims that you should always use 32-bit colour images, because internally D3D and OpenGL store textures as 32-bit anyway, so 8-bit images take up just as much RAM as 32-bit ones and only save 75% of the disk space. Is this true? Can you load images in D3D (and OpenGL) as just 8-bit, or will they always end up being 32-bit in RAM anyway?

Also, it is my understanding that DXTC textures are only decompressed into video RAM, not system RAM. Is this also correct?

DirectX and OpenGL both store textures in any format that the card supports. For example, you can specifically ask DirectX to create a texture in a pixel format like R5G6B5, X8R8G8B8, and many more.

Any half-decent app will be able to convert between formats if DirectX or OpenGL don't do that already, since not all cards support all formats. If a card doesn't support the 8-bit paletted format (and not many do), the app should convert it. Converting straight to 32-bit would waste a lot of memory for no extra gain, so I'd guess an app would try to choose the next available format with the lowest possible depth.

If a card stores a texture in any format smaller than 32-bit, it uses less memory.

AFAIK DXTC textures are stored in compressed format on the card, but only if the card supports it. If not, the D3DX functions will decompress them and they're stored at full size.

Even if D3D does convert to 32-bit when it loads the file, you will still get smaller files by saving them as paletted images. The point that I think the Unreal guy is trying to make is that in many cases you gain NOTHING by saving an image as full 24/32-bit as opposed to paletted. The latter also (often) retains MORE color data than 16-bit or other formats.

