Why don't modern GPUs support palettized textures?


A 64x64 texture would probably gain more from a palette than from BC1 compression, but as the resolution goes higher, the compressed version will almost always win. There's no way a 2048x2048 palettized texture would beat a compressed one.
Well, it would be better if it encodes, for example, the 16 customizable colors (in 16 shades each) that make up a character's personalized clothes, and you draw 300 characters, each with individually colored clothes, each indexing into a different part of the palette, all sampling the exact same texture.

But... that is admittedly a very contrived case which not a lot of people will need every day (and if they need it, they can implement it in the shader!).
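For what it's worth, that per-character palette trick is easy enough to do manually. Here's a minimal pixel-shader sketch (HLSL, with hypothetical resource names and layout), assuming an R8_UINT index texture shared by all characters and a palette texture with one 256-color row per clothing variant:

```hlsl
// Hypothetical resources:
//   IndexTex   - R8_UINT texture of palette indices, shared by every character
//   PaletteTex - R8G8B8A8 texture, one 256-color row per clothing variant
Texture2D<uint>   IndexTex   : register(t0);
Texture2D<float4> PaletteTex : register(t1);

cbuffer PerCharacter : register(b0)
{
    uint PaletteRow; // which variant's row this character indexes into
};

float4 PSMain(float4 pos : SV_Position, float2 uv : TEXCOORD0) : SV_Target
{
    // Point-load the index; filtering across raw indices would blend meaningless values.
    uint2 dim;
    IndexTex.GetDimensions(dim.x, dim.y);
    uint index = IndexTex.Load(int3(uv * dim, 0));

    // Resolve the index against this character's palette row.
    return PaletteTex.Load(int3(index, PaletteRow, 0));
}
```

Note the Load instead of Sample: palette indices can't be bilinearly filtered, so any filtering has to happen after the lookup (or you live with point sampling), which is part of what fixed-function palette hardware handled for you.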


That is why they need to introduce a 'texture shader' stage: a shader stage that would feed unfiltered texels into the texture cache, which other shaders could then sample from.

We don't necessarily need a new stage for that; access to groupshared (LDS) memory outside of compute shaders would go a long way :(
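For reference, the compute-side version of this is already doable today; the wish above is to get the same groupshared access from pixel shaders. A minimal sketch (hypothetical names) that stages a 256-entry palette in LDS and decodes one 16x16 tile of indices per thread group:

```hlsl
// Hypothetical resources: preload the palette into groupshared memory (LDS),
// then decode one tile of the index texture per group.
Texture2D<uint>          IndexTex : register(t0);
StructuredBuffer<float4> Palette  : register(t1); // 256 entries
RWTexture2D<float4>      Decoded  : register(u0);

groupshared float4 gsPalette[256];

[numthreads(16, 16, 1)]
void CSMain(uint3 dtid : SV_DispatchThreadID, uint gi : SV_GroupIndex)
{
    // 16x16 = 256 threads per group: each thread caches one palette entry.
    gsPalette[gi] = Palette[gi];
    GroupMemoryBarrierWithGroupSync();

    // Resolve this texel's index against the cached palette.
    uint index = IndexTex.Load(int3(dtid.xy, 0));
    Decoded[dtid.xy] = gsPalette[index];
}
```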

I'd be really happy with either :(

Didn't hardware palettized textures also have dedicated memory for the lookup table? That would help out with performance as well.


We don't necessarily need a new stage for that; access to groupshared (LDS) memory outside of compute shaders would go a long way

Trying to make async shaders harder to implement, I see.

-potential energy is easily made kinetic-

