best texture appearance

Started by
7 comments, last by Aks9 12 years, 11 months ago
Hello.
I need to display some architectural models, and I would like to display them as well as possible in OpenGL 2.0. I'm starting to use mipmapped textures, but what are the other "secrets" for better display, in particular for the textures?
Thanks.
Take a look at: multisampling, anisotropic filtering and DXT1 (S3TC) compression. :)
Ways to improve visual quality of textures (specifically NOT including shading, as this is a separate topic):

* Anisotropic filtering is very important to avoid blurry textures on surfaces that are rendered at a shallow view angle. Although anisotropic filtering may reduce performance, the extreme increase in visual quality is more often than not worth it. You can select different levels of anisotropy; try to find a trade-off between performance and quality. Beyond anisotropic filtering, other filtering techniques can be implemented in shaders, often higher-order alternatives to the usual bilinear filter the hardware provides (for example bi-cubic or spline/polynomial filters). These can dramatically increase texture quality when the texture is being magnified, but will hit performance very hard.

* Colour space. This is still too often ignored, yet it can make a considerable difference, especially when working on an HDR render engine. If available, and if your engine is appropriately calibrated, you should consider using sRGB textures and either sRGB or FP framebuffers. The net result will be a much better representation of dark areas in the textures and fewer banding artifacts on darker gradients.
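For example, a minimal sketch of the sRGB path (the texture object, `width`, `height` and the `pixels` pointer are placeholders; requires EXT_texture_sRGB / GL 2.1+ and ARB_framebuffer_sRGB):

// Upload with an sRGB internal format so the hardware linearizes texel values when sampling.
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB8_ALPHA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);

// Convert linear shader output back to sRGB when writing to the framebuffer.
glEnable(GL_FRAMEBUFFER_SRGB);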

Multisampling does not affect the quality of textures at all. And DXT texture compression will reduce texture quality in favour of lower memory requirements and less bandwidth usage.
Multisampling affects the overall quality of the rendering. Of course it has nothing to do with textures specifically; it was general advice.
DXT is a lossy compression, so it does reduce quality, BUT he can use a 4x larger texture with less memory consumption, and it will look more pleasant than a lower-resolution one. By the way, the textures used in most cases are created from JPEG or similarly compressed formats, so their quality is already reduced.
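For reference, uploading already-compressed DXT1 blocks is a single call (a sketch only; `dxt1Data` is assumed to come from an offline compressor, e.g. a DDS file, and requires GL_EXT_texture_compression_s3tc):

// DXT1 stores each 4x4 texel block in 8 bytes.
GLsizei imageSize = ((width + 3) / 4) * ((height + 3) / 4) * 8;
glBindTexture(GL_TEXTURE_2D, tex);
glCompressedTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGB_S3TC_DXT1_EXT,
                       width, height, 0, imageSize, dxt1Data);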

We agree on other suggestions. ;)


DXT is a lossy compression, so it does reduce quality, BUT he can use a 4x larger texture with less memory consumption, and it will look more pleasant than a lower-resolution one.

Yes, this is often done and generally good advice. However, he could just as well use 4x larger textures without DXTC. He specifically asked about texture quality, so resolution is a given. At the same resolution, DXTC will (sometimes considerably) reduce image quality.

If compression is acceptable for the OP, it might be worth looking into BC7 (aka BPTC under OpenGL) rather than DXTC, if the extension is available (it often has significantly higher quality).
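If the extension is there, the upload itself is no harder than with S3TC (a sketch; `bc7Data` is assumed to hold blocks produced by an offline BC7 compressor):

// BC7/BPTC stores each 4x4 texel block in 16 bytes (requires GL_ARB_texture_compression_bptc).
GLsizei imageSize = ((width + 3) / 4) * ((height + 3) / 4) * 16;
glCompressedTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGBA_BPTC_UNORM_ARB,
                       width, height, 0, imageSize, bc7Data);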


By the way, the textures used in most cases are created from JPEG or similarly compressed formats, so their quality is already reduced.

It's a bad idea to transcode one lossy compression to another. Artifacts are not generally cumulative, but can exhibit exponential behaviour. Jpeg removes information based on a human psychovisual model. The reconstruction of this missing information creates very specific patterns which, while being mostly invisible to the human eye, can seriously throw off the image analyzing stage of subsequent lossy compressions, especially when they operate with a totally different algorithm (such as DXTC does). The result being suboptimal use of available bits as the DXTC compressor tries to encode what are essentially jpeg decoding artifacts. Although it highly depends on the type of input material, using a jpeg as DXTC source can result in significantly reduced texture quality compared to using a lossless source image. And that even if the jpeg and the lossless source images look identical to the human eye.
Every game I've worked on has either stored the textures directly in DXT format or in a lossless compression format for non-compressed textures, generated at build-time from PSDs or uncompressed TGAs.

Decoding JPEGs and encoding DXT at runtime results in bad quality and long loading times, so should only be used if you're desperate to reduce your storage costs as much as possible (in which case you should probably use one of the JPEG successors like JPEG XR or JPEG2000 or a custom DCT+huffman format etc ;) )
This thread starts with a pretty naive question, but it turns out to be a very interesting one! ;)

However, he could just as well use 4x larger textures without DXTC. He specifically asked about texture quality, so resolution is a given. At the same resolution, DXTC will (sometimes considerably) reduce image quality.


Using a 4x larger texture without compression results in a 4x greater memory footprint and lower speed (a smaller footprint enables more efficient caching).


If compression is acceptable for the OP, it might be worth looking into BC7 (aka BPTC under OpenGL) rather than DXTC, if the extension is available (it often has significantly higher quality).


Although BPTC is a more advanced algorithm and improves the quality of compressed textures with sharp edges (fast transitions), it is very rarely supported. I'm using the latest NV hardware and have no problem with that, but... Take a look at the following site: report


Only 6% of the tested platforms support BPTC, while 94% support S3TC (a.k.a. DXT):


ARB_texture_compression_bptc - 6%
GL_EXT_texture_compression_s3tc - 94% :cool:
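If in doubt, support can be checked at runtime; a rough sketch (on a GL 2.0 context the extension string can be searched directly, though a robust check should match whole tokens):

// Needs <string.h> for strstr.
const char* extensions = (const char*)glGetString(GL_EXTENSIONS);
int hasS3TC = extensions && strstr(extensions, "GL_EXT_texture_compression_s3tc") != NULL;
int hasBPTC = extensions && strstr(extensions, "GL_ARB_texture_compression_bptc") != NULL;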

It's a bad idea to transcode one lossy compression to another. Artifacts are not generally cumulative, but can exhibit exponential behaviour. Jpeg removes information based on a human psychovisual model. The reconstruction of this missing information creates very specific patterns which, while being mostly invisible to the human eye, can seriously throw off the image analyzing stage of subsequent lossy compressions, especially when they operate with a totally different algorithm (such as DXTC does). The result being suboptimal use of available bits as the DXTC compressor tries to encode what are essentially jpeg decoding artifacts. Although it highly depends on the type of input material, using a jpeg as DXTC source can result in significantly reduced texture quality compared to using a lossless source image. And that even if the jpeg and the lossless source images look identical to the human eye.



I have to make the reason for this "transcoding" more obvious. Being involved with very large textures that arrive over a network connection, we have to deal with already compressed images. I'm working on massive terrain rendering engines, and textures can easily exceed tens of GB. In this case, storage space as well as update speed is of crucial importance. The source data is usually in ECW, MrSID or JPEG format. On the other hand, GPUs cannot deal with such formats directly, so the data has to be decoded or transcoded before being sent to the GPU.



Not necessarily. There are several fast DXT compression algorithms using:

- CPU (Real-Time DXT Compression by J.M.P. van Waveren, 2006)

- GPU and OpenGL (Compress YCoCg-DXT, NVIDIA OpenGL SDK 10 Code Samples, 2008)

- GPU and CUDA (High Quality DXT Compression using CUDA by Ignacio Castaño, 2007) - not as fast as previous two but with higher quality compression

Although DXT compression has a constant compression ratio, the quality depends on the chosen coefficients; better estimates require more time. But even the real-time approaches presented above offer pretty good quality.
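For completeness, the simplest (though slowest) option is to let the driver compress on upload and read the blocks back for caching; a minimal sketch, assuming `decodedJpegPixels` holds the decoded RGB image:

// Ask the driver to compress the incoming RGB data to DXT1 on upload.
glTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGB_S3TC_DXT1_EXT,
             width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, decodedJpegPixels);

// Read the compressed blocks back so they can be cached on disk and
// re-uploaded later with glCompressedTexImage2D (needs <vector>).
GLint compressedSize = 0;
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_COMPRESSED_IMAGE_SIZE, &compressedSize);
std::vector<unsigned char> blocks(compressedSize);
glGetCompressedTexImage(GL_TEXTURE_2D, 0, &blocks[0]);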

...so should only be used if you're desperate to reduce your storage costs as much as possible (in which case you should probably use one of the JPEG successors like JPEG XR or JPEG2000 or a custom DCT+huffman format etc )


Yes, JPEG XR has much better quality. I'll test the decompression speed of the libraries I have. Thank you for the advice! ;)
Thanks.
Summarizing, my problem is:
1) I only have .jpg textures, and .png when I need alpha (semi-transparent textures). I would like to look into JPEG XR; is it an external tool? What is it exactly?
2) Is it hard to create and use an anisotropic filter with OpenGL, also without compression? Are there some C++ OpenGL examples to start from?
3) From my point of view, texture compression/decompression is crucial, especially with the current increase in GPU hardware power, but I find only poor documentation on it. Is it the same for DirectX?
Can you suggest something easy to start with, and a book or link to study it in depth?


Many thanks

Thanks.
Summarizing, my problem is:
1) I only have .jpg textures, and .png when I need alpha (semi-transparent textures). I would like to look into JPEG XR; is it an external tool? What is it exactly?
2) Is it hard to create and use an anisotropic filter with OpenGL, also without compression? Are there some C++ OpenGL examples to start from?
3) From my point of view, texture compression/decompression is crucial, especially with the current increase in GPU hardware power, but I find only poor documentation on it. Is it the same for DirectX?
Can you suggest something easy to start with, and a book or link to study it in depth?

1) For JPEG XR check Wikipedia or Google. They'll explain it much better than me. :rolleyes:


2) No! It is not hard, but it is computation intensive, so choose the anisotropy factor wisely. The maximum supported value can be retrieved with the following query:

glGetIntegerv(GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, &MaxAnisotropy);

The setting is as simple as the following:

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, MaxAnisotropy);

Anisotropy has nothing to do with compression.
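As a starting point for your question about C++ examples, here is a rough sketch of a complete mipmapped, anisotropically filtered texture setup on GL 2.0 (`pixels`, `width` and `height` are assumed to come from your image loader, e.g. a decoded .jpg or .png; requires GL_EXT_texture_filter_anisotropic):

GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);

// Trilinear filtering with mipmaps.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

// Anisotropic filtering, clamped to the hardware maximum.
GLfloat maxAniso = 1.0f;
glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, &maxAniso);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, maxAniso);

// Automatic mipmap generation on GL 2.0 (deprecated in GL 3+, where
// glGenerateMipmap would be used instead); must be set before the upload.
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);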


3) I don't agree that there is only poor documentation. There are many books, white papers, tutorials, etc. Just use Google...

This topic is closed to new replies.
