PNG decoder and the gAMA chunk

Hey guys,

So I have written a PNG file decoder plugin for my engine. I tested it against the official test suite (http://www.schaik.com/pngsuite2011/pngsuite.html), and of the 174 files, all but 18 work just fine. So it's basically done.

The last 18 have to do with testing the gAMA ancillary chunk. I'm not sure what to do with this one. In the context of loading images for use as parts of materials, do I want to correct the images for gamma, or should I just ignore the chunk? My gut feeling is that I should leave the IDAT data alone; I don't think the artists would want their images modified like that. But I'm not 100% sure. What do you guys think?
Creation is an act of sheer will
If you are using libpng, you can handle gAMA chunks with the following approach:


double FileGamma;
double DisplayGamma = 2.2; // Normally you would query this from the OS, but 2.2 is a common value

// Fall back to a default file (inverse) gamma of 1/2.2 if the gAMA chunk is missing,
// or alternatively skip the png_set_gamma() call entirely.
if (!png_get_gAMA(PngPtr, InfoPtr, &FileGamma))
    FileGamma = 0.45455;

png_set_gamma(PngPtr, DisplayGamma, FileGamma);


Works quite well.
Latest project: Sideways Racing on the iPad

If you are using libpng, you can handle gAMA chunks with the following approach:

I am not. For various reasons, I rolled my own decoder. The only library I used was zlib to decompress the IDAT chunk. And even that I heavily modified.

It's not a question of how to decode the chunk (that's trivial). It's a question of whether or not I should. If you were a game artist and created an image saved as a PNG file, would you want the game engine to modify the raw RGB data based on the gamma chunk on a "per machine" basis, or would you rather the engine used the image (as part of a material) as created? That's what I'm asking.
Creation is an act of sheer will
Ah, gotcha. Personally, I think it's a very good idea to apply gamma after decompression. The colours will look somewhat off if the artist used a platform with a different gamma from the target system. I remember the headaches with Apple's old gamma (1.8) vs. PC gamma (2.2), which created a noticeable difference when gamma correction wasn't handled properly. More importantly, artists are more likely to calibrate the gamma on their machines, resulting in "non-default" exponent values - assuming their software stores the calibrated gAMA in the first place. And herein lies the weakness of the PNG format: the application writing the file must know the system gamma before saving it, and so must the decoder on the other end, which means certain applications just cheat and use whatever constant they see fit. As a decoder, though, you cannot make any assumptions other than treating gAMA as correct.

However, there are some interesting problems, particularly when texturing with PNG images. If the rendering device applies its own custom gamma to the frame buffer, should we apply that same display gamma to textures immediately after decompression? That could result in doubled-up gamma correction. Or should we render textures without display gamma and let the display driver take care of it? What if the texture is in fog? Or behind a transparent object? I think in that case we need to undo the artist/file gamma, transform the image into sRGB, and let the display take care of the rest.
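To make that last step concrete, here's a rough sketch of "undo the file gamma, re-encode as sRGB" - purely an illustration assuming 8-bit samples, with a made-up helper name, not code from any particular decoder:

#include <math.h>
#include <stddef.h>
#include <stdint.h>

/* Linear -> sRGB encoding (the standard piecewise sRGB transfer function). */
static double linear_to_srgb(double v)
{
    return (v <= 0.0031308) ? 12.92 * v
                            : 1.055 * pow(v, 1.0 / 2.4) - 0.055;
}

/* Undo the gamma recorded in the gAMA chunk (gAMA stores the encoding
 * exponent, e.g. 0.45455 for a 1/2.2-encoded file), then re-encode as sRGB.
 * Hypothetical helper; operates in place on 8-bit samples. */
static void convert_file_gamma_to_srgb(uint8_t *samples, size_t count, double file_gamma)
{
    for (size_t i = 0; i < count; ++i) {
        double encoded = samples[i] / 255.0;
        double linear  = pow(encoded, 1.0 / file_gamma); /* decode to linear light */
        samples[i] = (uint8_t)(linear_to_srgb(linear) * 255.0 + 0.5);
    }
}

In practice you would build a 256-entry lookup table per file gamma instead of calling pow() per sample, and skip the conversion entirely when the file gamma is already ~1/2.2.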
Latest project: Sideways Racing on the iPad

Ah, gotcha. Personally, I think it's a very good idea to apply gamma after decompression. The colours will look somewhat off if the artist used a platform with a different gamma from the target system. I remember the headaches with Apple's old gamma (1.8) vs. PC gamma (2.2), which created a noticeable difference when gamma correction wasn't handled properly.


Okay, fair enough. I'll go ahead and add in a chunk handler for this in the decoder. Thanks for the advice.
Creation is an act of sheer will
The standard at all the places I've worked is that all textures (except "data textures", like normal maps) are authored and stored in sRGB (~gamma 2.2).

When you load the texture into the game, the data should be left unchanged (still in gamma 2.2), because if you convert it at this point, you'll lose precision and get banding. The reason we store textures in gamma space is that it's a kind of compression -- it lets us get perceptually better results with fewer bits. If we stored our textures in linear space, we'd need to use 16-bit channels instead of 8-bit channels to get them looking as good.

When rendering, the shaders/texture-samplers read from the texture and convert the texture data from sRGB (~gamma 2.2) into linear (gamma 1.0). All the shading math is done in linear space, and then the rendered results are converted from linear (gamma 1.0) into the user's gamma space (usually sRGB / ~gamma 2.2).
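For reference, the conversions the samplers and render targets are doing are essentially the standard sRGB transfer functions below - just a sketch of the math, not any particular engine's code. The note at the end is also why 8-bit linear storage bands in the shadows:

#include <math.h>

/* sRGB -> linear: what an sRGB texture sampler effectively computes on read. */
static float srgb_to_linear(float c)
{
    return (c <= 0.04045f) ? c / 12.92f
                           : powf((c + 0.055f) / 1.055f, 2.4f);
}

/* Linear -> sRGB: what an sRGB render target effectively computes on write. */
static float linear_to_srgb(float v)
{
    return (v <= 0.0031308f) ? 12.92f * v
                             : 1.055f * powf(v, 1.0f / 2.4f) - 0.055f;
}

/* The smallest nonzero 8-bit code is 1/255 ~= 0.0039 if stored linearly, but
 * the same code interpreted as sRGB decodes to ~0.0003 linear -- roughly 13x
 * finer steps near black, which is where banding is most visible. */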

So, I wouldn't want my texture loader to go and "correct" my textures -- that should be done by the renderer.
Thanks Hodgman. I hadn't considered adjusting the gamma settings in the shader, but that's a pretty good idea. At first I was a bit concerned about any performance penalties associated with doing pixel conversions in the shader, but upon further investigation, it looks like both OGL and D3D natively support sRGB, and it's even supported in the video hardware. Good stuff.
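For example, the OpenGL path looks roughly like this, as far as I can tell (a sketch assuming a GL 3.0+ context and already-decoded 8-bit RGBA pixels; the function name is just a placeholder):

#include <GL/gl.h> /* or your extension loader of choice */

/* Upload a colour texture as sRGB so the sampler converts sRGB -> linear on
 * read. Data textures (normal maps etc.) would use GL_RGBA8 instead so they
 * are left untouched. */
GLuint CreateSrgbTexture(const unsigned char *pixels, int width, int height)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB8_ALPHA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    glGenerateMipmap(GL_TEXTURE_2D);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    return tex;
}

/* On output, enable linear -> sRGB conversion on writes to an sRGB-capable
 * framebuffer with: glEnable(GL_FRAMEBUFFER_SRGB); */

D3D has the equivalent via its sRGB texture and render-target formats.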

I'll have to do more research (this is very definitely not my area of expertise), but it seems like this is the way to go.

[EDIT]
Okay, after a bit more thought, here is my tentative plan. Hodgman, you said the standard where you've worked is to author in sRGB. Sadly, I cannot make that assumption. So what I propose is this:

If the PNG has no gAMA chunk, I will assume sRGB and leave the RGB alone (what else can you do, right?). If it does have a gAMA chunk and the value already corresponds to 2.2 (i.e. a stored gAMA of roughly 1/2.2), we again leave it alone. If it is anything else, we convert the RGB data over to sRGB. In all cases, we end up with sRGB to feed to the shaders.
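In pseudo-C, something like this (the type and helper names are placeholders, not my actual decoder):

#include <math.h>

typedef struct Image {              /* placeholder type for illustration        */
    int    HasGama;                 /* nonzero if a gAMA chunk was present      */
    double Gama;                    /* raw gAMA value (the encoding exponent)   */
    /* ... pixel data ... */
} Image;

/* Re-encodes the RGB samples as sRGB (e.g. like the helper sketched earlier). */
void ConvertFileGammaToSrgb(Image *image);

#define SRGB_FILE_GAMMA 0.45455     /* 1/2.2, i.e. "already effectively sRGB"   */
#define GAMMA_EPSILON   0.01

void ApplyGammaPolicy(Image *image)
{
    /* No gAMA chunk: assume the file is already sRGB and leave the RGB alone. */
    if (!image->HasGama)
        return;

    /* gAMA says ~1/2.2: again, leave it alone. */
    if (fabs(image->Gama - SRGB_FILE_GAMMA) < GAMMA_EPSILON)
        return;

    /* Anything else: convert the RGB data over to sRGB. */
    ConvertFileGammaToSrgb(image);
}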

Does that seem a reasonable plan?

That may cause issues with data channels. The decoder has no way of knowing what is data and what is color. I'm not really sure how to handle that. Any suggestions?
Creation is an act of sheer will

