Mipmaps gamma correction



#1 DwarvesH   Members   -  Reputation: 510


Posted 13 March 2014 - 10:37 AM

I decided to try out run-time mipmap creation. Saving files in a format other than DDS, with or without mipmaps, results in a smaller install and faster load times, since unpacking the file is faster than reading a large file from disk, and generating the mipmaps should take virtually zero time if the GPU applies a linear filter. I do hope this is done on the GPU. Is there any authoritative information on this topic? I couldn't find a clear answer on Google.

 

Rendering with gamma correction is fairly easy. All you need to do is make sure that your diffuse textures are read as sRGB textures. All generated non-diffuse textures should be read as plain textures, since their content is linear (normal maps, generated gloss maps, and so on). And unless you are writing a debug mode that stays fully linear, you should also write to sRGB render targets.
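For reference, "read as sRGB" means the hardware decodes the stored values to linear light before filtering and shading, and "write as sRGB" re-encodes on output. A minimal sketch of the standard transfer functions (IEC 61966-2-1), in Python for illustration:

```python
def srgb_to_linear(c):
    """Decode one sRGB channel value in [0, 1] to linear light."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Encode one linear-light channel value in [0, 1] as sRGB."""
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

# Mid-gray in sRGB is much darker in linear light:
print(round(srgb_to_linear(0.5), 3))  # 0.214
```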

 

This worked perfectly until I started generating my mipmaps at run time. Gamma correction must happen before the texture is filtered trilinearly, yet all my test images failed: they looked exactly how you would expect if the mips had been generated from the raw sRGB values and then sampled with gamma correction.
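The order matters because averaging encoded values is not the same as averaging light. A small illustration (Python, using the standard sRGB curves) of averaging a black texel and a white texel, as a minification filter would:

```python
def srgb_to_linear(c):
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

black, white = 0.0, 1.0

# Wrong: average the raw sRGB values, then treat the result as sRGB.
naive = (black + white) / 2                                    # 0.5

# Right: decode to linear, average the light, re-encode.
correct = linear_to_srgb((srgb_to_linear(black) + srgb_to_linear(white)) / 2)

print(naive, round(correct, 3))  # 0.5 0.735
```

The wrongly filtered mip is noticeably too dark, which is exactly the artifact described above.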

 

I tried all the combinations of texture-creation flags, and below in my texture-loading function you can see the only combination that produced correct rendering:

        // Loads a texture and lets D3D9 auto-generate the mip chain.
        // gc == true tags the source data as sRGB so the mips are
        // filtered in linear space.
        public Texture LoadTextureMIP(string path, bool gc) {
            Usage u = Usage.AutoGenerateMipMap;
            // SrgbIn: decode sRGB to linear before the filter runs.
            Filter f = gc ? Filter.Linear | Filter.SrgbIn : Filter.Linear;
            path = Paths.Textures + path;
            var a = ImageInformation.FromFile(path);
            // The same filter is used for both the resize and the mip filter.
            return Texture.FromFile(Device, path, a.Width, a.Height, 1, u,
                                    a.Format, Pool.Managed, f, f, 0);
        }

The gc parameter controls whether the mips should be generated with gamma correction. I find it counterintuitive that using just Filter.SrgbIn gave correct results, so I may be doing something wrong; I expected to need Filter.SrgbOut as well.

 

There is one caveat, though: the textures created by this function can no longer be sampled as sRGB textures. They look too dark, as if they had been gamma corrected twice. They must be sampled as linear and written as sRGB.

 

Which led me to believe that the textures are now stored in linear format in VRAM. That makes sense, since they are read as sRGB but never re-encoded on output. And that is a massive no-no: there is a good reason that simply storing everything in linear space is not the accepted solution to gamma correction. Storing linear data at 8 bits per channel loses badly needed precision in the deep blacks.
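That precision loss is easy to quantify. Counting how many of the 256 8-bit codes land below a given linear intensity shows how much resolution each encoding devotes to the darks (a rough sketch; the 1% cutoff is chosen arbitrarily):

```python
def srgb_to_linear(c):
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

CUTOFF = 0.01  # "deep black" region: the darkest 1% of linear intensity

# Codes of an 8-bit *linear* texture that fall in the deep blacks:
linear_codes = sum(1 for c in range(256) if c / 255 < CUTOFF)

# Codes of an 8-bit *sRGB* texture that decode into the same region:
srgb_codes = sum(1 for c in range(256) if srgb_to_linear(c / 255) < CUTOFF)

print(linear_codes, srgb_codes)  # 3 26
```

An 8-bit sRGB texture spends roughly 26 of its 256 codes on the darkest 1% of linear intensities, while an 8-bit linear texture spends only 3, which is exactly why 8-bit linear storage crushes the blacks.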

 

So did I butcher my deep-black precision in order to get correctly filtered mip levels? Or did I misunderstand the situation? What is the best way to auto-generate mips with correct gamma behavior?

 

Or should I just avoid this headache and force the content importer to generate all mips offline?



#2 Mona2000   Members   -  Reputation: 1656


Posted 13 March 2014 - 10:52 AM

Generate them offline. It saves you all this trouble, gives more control to the artist, and lets you use better filtering (e.g. a Kaiser filter).
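For illustration, here is a toy 1-D sketch of Kaiser-windowed downsampling (using NumPy's `np.kaiser` window; a real mip generator would apply a separable 2-D kernel, and would do the filtering in linear space):

```python
import numpy as np

def kaiser_downsample_2x(signal, taps=8, beta=4.0):
    """Downsample a 1-D signal by 2 with a Kaiser-windowed kernel,
    a sharper alternative to the plain 2-tap box average."""
    kernel = np.kaiser(taps, beta)
    kernel /= kernel.sum()                       # unity DC gain
    filtered = np.convolve(signal, kernel, mode="same")
    return filtered[::2]                         # keep every other sample

out = kaiser_downsample_2x(np.ones(16))
print(len(out))  # 8
```

The wider support (8 taps here versus the box filter's 2) is what reduces the aliasing in small mip levels.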



#3 Ohforf sake   Members   -  Reputation: 2046


Posted 13 March 2014 - 03:09 PM

Keep in mind that the entire mip tail is only about 33% of the original size, so you save very little by computing it at run time. You usually also want to compress your textures (all mip levels, that is) with DXT1 or something similar, which is also a lot easier to do offline.
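The 33% figure is just the geometric series 1/4 + 1/16 + 1/64 + ..., which converges to 1/3. For example, summing the tail of a 1024x1024 texture:

```python
base = 1024
tail = 0
size = base // 2
while size >= 1:
    tail += size * size   # each mip level has a quarter of the pixels
    size //= 2

ratio = tail / (base * base)
print(tail, round(ratio, 4))  # 349525 0.3333
```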

#4 DwarvesH   Members   -  Reputation: 510


Posted 13 March 2014 - 03:10 PM

Doing them offline is the obvious choice. But even if I go with precomputed mips, I would still rather generate them live and cache the result, to take into account the exact sRGB LUT of the monitor that the shader version would use. So I would have to write a generator that takes the full-size image as sRGB input, manually computes the linear samples (I need to determine the exact formula the standard linear filter uses), renders the result to a quarter-size sRGB texture, and repeats the process until it reaches a 1x1 texture. Ideally, each mip level would be computed from the highest-resolution map rather than from the next-larger level, to avoid error propagation.
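That generator can be sketched in a few lines. This is the simple sequential form, each level built from the previous one with a 2x2 box filter applied in linear space (building every level directly from the full-resolution image, as suggested above, would avoid accumulating error):

```python
def srgb_to_linear(c):
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

def next_mip(img):
    """img: square 2-D list of sRGB values in [0, 1], power-of-two size.
    Returns the next mip level, filtered in linear space."""
    n = len(img) // 2
    out = [[0.0] * n for _ in range(n)]
    for y in range(n):
        for x in range(n):
            lin = sum(srgb_to_linear(img[2 * y + dy][2 * x + dx])
                      for dy in (0, 1) for dx in (0, 1)) / 4.0
            out[y][x] = linear_to_srgb(lin)
    return out

def build_mip_chain(img):
    chain = [img]
    while len(chain[-1]) > 1:
        chain.append(next_mip(chain[-1]))
    return chain

# A 2x2 black/white checkerboard averages to ~0.735 in sRGB, not 0.5:
chain = build_mip_chain([[0.0, 1.0], [1.0, 0.0]])
print(round(chain[-1][0][0], 3))  # 0.735
```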

 

Way too much work.

 

Everything I have read says that mipmap filtering should work with proper gamma. So either I am doing something wrong, the Intel HD sucks, or it is yet again a DirectX 9 issue :).



#5 Adam_42   Crossbones+   -  Reputation: 3208


Posted 13 March 2014 - 05:59 PM

If you're worried about file sizes, what you almost certainly want to do is compress the textures with either DXT1 or DXT5. DXT1 doesn't support alpha; DXT5 does, but is double the size. A DXT1 texture is effectively 4 bits per pixel and stays compressed on the GPU, which means it can also improve performance by saving memory bandwidth.
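For a sense of scale, the fixed block layout makes the sizes easy to compute (DXT1 stores each 4x4 block in 8 bytes, DXT5 in 16). A quick sketch:

```python
def dxt1_size(w, h):
    """DXT1: each 4x4 block is 8 bytes, i.e. 4 bits per pixel."""
    return ((w + 3) // 4) * ((h + 3) // 4) * 8

def dxt5_size(w, h):
    """DXT5 adds an 8-byte alpha block per 4x4 block: 8 bits per pixel."""
    return ((w + 3) // 4) * ((h + 3) // 4) * 16

rgba = 1024 * 1024 * 4   # uncompressed 32-bit RGBA: 4 MiB
print(dxt1_size(1024, 1024), rgba // dxt1_size(1024, 1024))  # 524288 8
```

So a 1024x1024 texture drops from 4 MiB to 512 KiB with DXT1, an 8x saving that applies in VRAM as well as on disk.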

 

Since they are relatively slow to compress, you want to do it offline using something like squish. If you're really worried about file size, they will also benefit from being put in a zip file, as the texture compression works on independent 4x4-pixel blocks for ease of hardware decompression, leaving redundancy a general-purpose compressor can still exploit.

 

The only downside is that the compression is lossy so it may not be suitable for some textures. In my experience the compression artefacts are most noticeable on detailed 2D textures.



#6 DwarvesH   Members   -  Reputation: 510


Posted 14 March 2014 - 05:19 AM

My primary concern is gamma correctness.

 

Getting a small boost to load times is at most a welcome yet unlikely bonus.

 

Since I'm concerned with render quality, I won't go with texture compression unless I go over my minimum VRAM requirement. DXT1 has horrible quality, and I don't want to bother with YCoCg and its possible gamma ramifications; that is just asking for trouble. And I can't find a compiled build of NVIDIA Texture Tools 2.1.

 

And combining high-quality SSAA with texture compression seems self-defeating, quality-wise.

 

I did not manage to fine-tune my auto-mip creation parameters to give good results, and auto-mipped textures are surprisingly volatile. So I'm going with a content pipeline that enumerates resources, their properties, and their origin. It is up to the pipeline to determine whether a conversion should be done at run time or loaded from a cache.

 

I tried implementing this using C# and NVIDIA Texture Tools, but the C# bindings are not up to date: the DLL I got after installing 2.0.8 does not export nvttCreateCompressor and nvttDestroyCompressor. I really don't have time to contact the team about that right now.

 

So I'm going with offline generation for now; at least the results are correct. I also managed to invoke the compressor as an external process from code.



#7 cgrant   Members   -  Reputation: 1434


Posted 14 March 2014 - 12:09 PM

If you are using DX9, there was no directive as to whether linearization should occur before or after filtering, so you may end up with incorrect results depending on how the driver implements the texture fetch and filter.



#8 L. Spiro   Crossbones+   -  Reputation: 23975


Posted 14 March 2014 - 09:59 PM

> Since I'm concerned with render quality, I won't go with texture compression unless I go over my minimum VRAM requirement.

You know that Battlefield 3 and Battlefield 4 use DXT quite heavily (possibly exclusively), right?

> And I can't find a 2.1 compiled nvidia texture tools.

My LSDxt DXT Compressor tool is heavily based off the nVidia Texture Tools as far as options go (the compression algorithm is completely new).
It supports most features from nVidia Texture Tools, including Kaiser mipmap filtering and the ability to save in formats other than DXT* if you do not like the quality of DXT compression.
It also handles gamma correction for you.
It is offline and command-line, so you can easily integrate it into your tool chain.


L. Spiro

#9 DwarvesH   Members   -  Reputation: 510


Posted 15 March 2014 - 10:25 AM

 

> My LSDxt DXT Compressor tool is heavily based off the nVidia Texture Tools as far as options go (the compression algorithm is completely new). [...] It also handles gamma correction for you.

 

 

Nice tool you have there! If it works with all those parameters, it is a lot better than NVIDIA Texture Tools! For now I have integrated the latter, but I will test yours out too.

 

As for DXT, I'll probably start using it once I go over 1 GiB of texture space.





