Indexed textures with ATI cards
It seems that the glColorTableEXT extension is unsupported by the ATI cards (even recent ones).
Can anyone confirm this?
If it's true, is there a way to use paletted textures without rebuilding a regular buffer?
I found this for you, which suggests that for ATI Radeon cards you need a workaround.
3. ATI Radeon deficiency
The porting was a success, though a few changes were made to overcome ATI's lack of glColorTableExt. This deficiency requires either four times more texture memory or a complete reload of the texture whenever the color map changes.
HardVR had implemented the changing of the color map using a call to glColorTableExt. This allowed us to specify only the intensity value for a 3D texture. With glColorTableExt and a GL_INTENSITY8 (one byte per texel) specification to glTexImage3DEXT, a 3D texture can be specified at one fourth of the memory cost currently required on the Radeon. The Radeon requires that each texel of the 3D texture be described by four bytes (red, green, blue, and alpha values). Minor modifications had to be made to incorporate the new requirement.
The specification of four bytes per texel is a serious drawback, since the Radeon's memory is limited to 64MB. Theoretically, high performance could only be achieved with data sets of 256x256x256 or smaller. This implementation implies that a 512x512x512 data set would require at least 512MB of memory (512x512x512x4 bytes).
Bajaj et al. introduced an easy implementation of a wavelet-based encoding that provides fast decoding to random data access as well as fairly high compression rates [6]. Since our implementation has 3D textures with each texel mapping to a set of constant 256 colors, a good compression ratio can be achieved from packing the four bytes per channel 3D texture. {Does ATI support compression?}
Source: http://www.kovey.com/research/volviz/3dpc.html
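The "complete reload" the quoted text mentions is exactly the "rebuilding a regular buffer" step from the original question: without glColorTableEXT, each one-byte index must be expanded through the 256-entry RGBA palette on the CPU before uploading with glTexImage3D. A minimal sketch of that expansion (function and buffer names are mine, not from the source):

```c
#include <stddef.h>

/* Expand an indexed texture into an RGBA8 buffer using a 256-entry
 * palette. 'out' must hold n * 4 bytes -- this is the 4x memory cost
 * the quoted text complains about. Names are illustrative. */
void expand_indices_to_rgba(const unsigned char *indices, size_t n,
                            const unsigned char palette[256][4],
                            unsigned char *out)
{
    for (size_t i = 0; i < n; ++i) {
        const unsigned char *c = palette[indices[i]];
        out[i * 4 + 0] = c[0];  /* red   */
        out[i * 4 + 1] = c[1];  /* green */
        out[i * 4 + 2] = c[2];  /* blue  */
        out[i * 4 + 3] = c[3];  /* alpha */
    }
}
```

Whenever the color map changes, this has to be rerun over the whole volume and the result re-uploaded (e.g. via glTexSubImage3D), which is the reload cost described above.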
ATI supports S3TC, which offers a fixed 8:1 compression (relative to 32-bit RGBA) with the basic DXT1 variant (1-bit alpha), though this won't address the color table change issue...
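To put numbers on that ratio: DXT1 packs each 4x4 block of texels into 8 bytes, i.e. 0.5 bytes per texel, versus 4 bytes per texel for RGBA8. A quick sketch of the size calculation (helper name is mine):

```c
#include <stddef.h>

/* DXT1 stores each 4x4 texel block in 8 bytes; dimensions are
 * rounded up to a whole number of blocks. Helper name is
 * illustrative, not from any API. */
size_t dxt1_size_bytes(size_t width, size_t height)
{
    size_t blocks_x = (width + 3) / 4;
    size_t blocks_y = (height + 3) / 4;
    return blocks_x * blocks_y * 8;
}
```

For example, a 256x256 slice is 256x256x4 = 262144 bytes as RGBA8 but only 32768 bytes as DXT1, the 8:1 saving mentioned above.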
Quote:If it's true, is there a way to use paletted textures without rebuilding a regular buffer?
Try this: create a texture containing your palette, e.g. a 256x1 texture, with GL_NEAREST filtering.
Bind it,
then you can index into it with glTexCoord1f((float(index) + 0.5f) / 256.0f) — glTexCoord1i(index) won't work, since texture coordinates are normalized to [0, 1].
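Spelling out the texcoord mapping in that trick: with a 256x1 palette texture and GL_NEAREST filtering, the coordinate for palette entry index should land on the texel center, i.e. (index + 0.5) / 256. A minimal sketch (function name is mine):

```c
/* Map a palette index (0..255) to the 1D texture coordinate that
 * samples the center of that texel in a 256x1 palette texture.
 * Assumes GL_NEAREST filtering and clamped wrapping. */
float palette_index_to_texcoord(int index)
{
    return ((float)index + 0.5f) / 256.0f;
}
```

The result would be passed to glTexCoord1f per vertex (or baked into a texcoord array); the +0.5 offset avoids sampling on a texel boundary, where rounding could pick the neighboring palette entry.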