C0lumbo

Why don't modern GPUs support palettized textures?


I'm just removing support for palettized textures from an engine at the moment as we're unlikely to release a game on any hardware that supports them.

 

It got me thinking though... why have palettized textures lost so much popularity that they're effectively non-existent these days?

 

I get that block compression is going to produce better quality and size results in typical cases, and I get that you could cobble together your own inefficient texture LUT support in a fragment shader.

 

But it seems to me that there are some textures where a 256 colour 8888 palette would do a much better job than, e.g. a DXT5 or a PVRTC4. There are also cases where palette switching would be a handy memory-saving trick, e.g. for applying team colours to a sprite.
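A rough CPU-side C++ sketch of that palette-swap idea (the struct and field names here are hypothetical, purely for illustration): only the 256-entry palette changes per team, while the 8-bit index data is shared.

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical layout: one shared 8-bit index plane for the sprite,
// plus a small per-team palette of 256 RGBA8888 entries (1 KB each).
struct PalettizedSprite {
    int width = 0, height = 0;
    std::vector<uint8_t> indices;               // width * height bytes, shared by every team
};

using Palette = std::array<uint32_t, 256>;      // 256 entries * 4 bytes = 1 KB

// Decode with whichever team's palette is active. Swapping team colours
// touches only the 1 KB palette; the index data stays identical.
std::vector<uint32_t> decode(const PalettizedSprite& s, const Palette& teamPalette) {
    std::vector<uint32_t> rgba(s.indices.size());
    for (std::size_t i = 0; i < s.indices.size(); ++i)
        rgba[i] = teamPalette[s.indices[i]];
    return rgba;
}
```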

 

And GPU support for palettization seems simple (from my naive standpoint as a user of GPUs rather than a designer of one!): a 256 colour 32-bit palette would only take 1K of texture cache, and texture unit hardware already copes with far more complex encodings than a straightforward lookup table.
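Some back-of-the-envelope numbers for that, as a quick C++ sketch (assumed byte rates: RGBA8888 at 4 bytes/texel, 8-bit palettized at 1 byte/texel plus a 1 KB palette, DXT5/BC3 at 1 byte/texel, PVRTC 4bpp at 0.5 bytes/texel):

```cpp
#include <cstdio>

int main() {
    const int sizes[] = {64, 256, 2048};
    for (int n : sizes) {
        const long long texels = 1LL * n * n;
        const long long rgba8  = texels * 4;        // uncompressed RGBA8888: 4 bytes/texel
        const long long pal8   = texels + 256 * 4;  // 8-bit indices + 256-entry 32-bit palette (1 KB)
        const long long dxt5   = texels;            // DXT5/BC3: 1 byte/texel
        const long long pvrtc4 = texels / 2;        // PVRTC 4bpp: 0.5 bytes/texel
        std::printf("%4dx%-4d  rgba8 %8lld  pal8 %8lld  dxt5 %8lld  pvrtc4 %8lld  (bytes)\n",
                    n, n, rgba8, pal8, dxt5, pvrtc4);
    }
}
```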

 

I guess I understand why texture palettes fell out of vogue, but I don't understand why they are completely dead. I think I'm missing something.

Doing it in a shader sounds perfectly reasonable and not inefficient at all. Where did you get that impression from?


Well, if it was done in HW, you could get bilinear filtering/etc 'for free', whereas in the shader, you need to perform 4 index lookups, then 4 palette lookups, then calculate the bilinear weights and combine the palette entries... Let alone trilinear or anisotropic filtering...
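A minimal CPU-side C++ sketch of that per-sample work, assuming an 8-bit index plane and a 256-entry RGBA palette (a fragment-shader version would have to do the equivalent four index fetches, four palette fetches and the blend by hand):

```cpp
#include <algorithm>
#include <array>
#include <cmath>
#include <cstdint>
#include <vector>

struct Rgba { float r, g, b, a; };

// Hypothetical palettized image: 8-bit indices plus a 256-entry palette.
struct Image {
    int w, h;
    std::vector<uint8_t> indices;      // w * h index values
    std::array<Rgba, 256> palette;
};

static Rgba lerp(const Rgba& x, const Rgba& y, float t) {
    return { x.r + (y.r - x.r) * t, x.g + (y.g - x.g) * t,
             x.b + (y.b - x.b) * t, x.a + (y.a - x.a) * t };
}

// Bilinear sample at (u, v) in [0,1]^2: four index lookups, four palette
// lookups, then the usual two-axis blend -- the work a shader has to do
// manually, whereas fixed-function hardware filters its native formats for free.
Rgba sampleBilinear(const Image& img, float u, float v) {
    float x = u * img.w - 0.5f, y = v * img.h - 0.5f;
    int x0 = (int)std::floor(x), y0 = (int)std::floor(y);
    float fx = x - x0, fy = y - y0;

    auto fetch = [&](int px, int py) {
        px = std::clamp(px, 0, img.w - 1);          // clamp addressing mode
        py = std::clamp(py, 0, img.h - 1);
        uint8_t idx = img.indices[py * img.w + px]; // index lookup
        return img.palette[idx];                    // palette lookup
    };

    Rgba top = lerp(fetch(x0, y0),     fetch(x0 + 1, y0),     fx);
    Rgba bot = lerp(fetch(x0, y0 + 1), fetch(x0 + 1, y0 + 1), fx);
    return lerp(top, bot, fy);
}
```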

But I guess enough games stopped using them, that demand for the HW was weak enough to phase those formats out.


> Well, if it was done in HW, you could get bilinear filtering/etc 'for free', whereas in the shader, you need to perform 4 index lookups, then 4 palette lookups, then calculate the bilinear weights and combine the palette entries... Let alone trilinear or anisotropic filtering...
>
> But I guess enough games stopped using them, that demand for the HW was weak enough to phase those formats out.

 

That is why they need to introduce a 'texture shader' stage: a shader stage that would feed unfiltered texels into the texture cache, which a shader could then sample from.


 

> That is why they need to introduce a 'texture shader' stage: a shader stage that would feed unfiltered texels into the texture cache, which a shader could then sample from.

We don't necessarily need a new stage for that; access to groupshared (LDS) memory outside of compute shaders would go a long way.

 

 

> We don't necessarily need a new stage for that; access to groupshared (LDS) memory outside of compute shaders would go a long way.

That would help too, and it's something that would be necessary for a texture shader stage anyway (things like DCT/wavelet/block decompression almost require it) IMHO. But the key is having something integrated with the texture cache: the texture shader outputs unfiltered texels, the shader doing the sampling gets filtered texels, and the cache is the link between the two that keeps you from having to recompute the same texel repeatedly. The driver also has better knowledge of when to stall a core/SMP/whatever they're calling it now to execute a texture shader, how much cache it should allocate, etc. Then you could do all sorts of cool real-time decompression schemes and procedural texturing.


> Doing it in a shader sounds perfectly reasonable and not inefficient at all. Where did you get that impression from?

 

As Hodgman says, filtering would be an issue.

 

Also, I work mainly on mobile GPUs, where dependent texture reads are a lot more expensive than non-dependent texture reads on common hardware. A dependent texture read is one where the UVs are affected by work done in the fragment shader. Non-dependent texture reads are faster because the GPU can fetch the texture data before invoking the shader, so the shader never has to wait on it. I believe it's not an issue on desktop GPUs because they're good at hiding stalls regardless of whether the fragment shader generates or messes around with the UVs.


I suppose texture size could be a factor too in the death of texture palettes. As texture sizes get larger and larger, the quality of block compression remains unchanged, but the ability of a 256 colour palette to do a reasonable job diminishes. Especially in the face of texture atlases, where unconnected textures with very different colours are munged together.


> I suppose texture size could be a factor too in the death of texture palettes. As texture sizes get larger and larger, the quality of block compression remains unchanged, but the ability of a 256 colour palette to do a reasonable job diminishes. Especially in the face of texture atlases, where unconnected textures with very different colours are munged together.

Exactly.

A 64x64 texture would probably gain more by using palettes over BC1 compression. But as the resolution goes higher, the compressed version will almost always win. There's no way a 2048x2048 palette texture would be better than a compressed version.
Let's remember that paletted textures were popular for their small size. A full 256x256x32bpp texture is 0.25MB. Considering today's GPUs with 1GB+ of VRAM and >100GB/s of bandwidth, the extra transistor space dedicated to decoding paletted textures is totally not worth it.

Not to mention mipmapping is an issue (the only downsampling filter that produces good results is a point filter, as any other filter will generate new colours).

Edited by Matias Goldberg
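To illustrate the mipmapping point above, a rough CPU-side C++ sketch (hypothetical helper names): any filter other than point sampling produces averaged colours that usually aren't in the palette, so a palettized mip chain has to snap each averaged texel back to the nearest palette entry, which is where the quality goes.

```cpp
#include <array>
#include <cstdint>
#include <vector>

struct Rgb { int r, g, b; };

// Find the palette index whose colour is closest to 'c' (squared distance).
// This re-quantization step is where quality is lost: the averaged colour
// produced by the box filter usually isn't one of the 256 palette entries.
static uint8_t nearestEntry(const std::array<Rgb, 256>& pal, const Rgb& c) {
    int best = 0;
    long long bestD = -1;
    for (int i = 0; i < 256; ++i) {
        long long dr = pal[i].r - c.r, dg = pal[i].g - c.g, db = pal[i].b - c.b;
        long long d = dr * dr + dg * dg + db * db;
        if (bestD < 0 || d < bestD) { bestD = d; best = i; }
    }
    return (uint8_t)best;
}

// Build the next mip level of an index plane: decode each 2x2 block through
// the palette, box-average it, then snap the result back to a palette entry.
std::vector<uint8_t> downsampleMip(const std::vector<uint8_t>& indices,
                                   int w, int h, const std::array<Rgb, 256>& pal) {
    int nw = w / 2, nh = h / 2;
    std::vector<uint8_t> out(nw * nh);
    for (int y = 0; y < nh; ++y)
        for (int x = 0; x < nw; ++x) {
            Rgb sum{0, 0, 0};
            for (int dy = 0; dy < 2; ++dy)
                for (int dx = 0; dx < 2; ++dx) {
                    const Rgb& p = pal[indices[(2 * y + dy) * w + (2 * x + dx)]];
                    sum.r += p.r; sum.g += p.g; sum.b += p.b;
                }
            out[y * nw + x] = nearestEntry(pal, {sum.r / 4, sum.g / 4, sum.b / 4});
        }
    return out;
}
```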
