I was reading the following article: http://john-chapman-graphics.blogspot.hu/2013_01_01_archive.html
The last two paragraphs made me curious:
For a deferred renderer there is a pitfall which programmers should be aware of. If we linearize a non-linear input texture, then store the linear result in a g-buffer prior to the lighting stage we will lose all of the low-intensity precision benefits of having non-linear data in the first place. The result of this is just horrible - take a look at the low-intensity ends of the gradients in the left image below:
Clearly we need to delay the gamma correction of input textures right up until we need them to be linear. In practice this means writing non-linear texels to the g-buffer, then gamma correcting the g-buffer as it is read at the lighting stage. As before, the driver can do the work for us by using an sRGB format for the appropriate g-buffer targets, or correcting them manually.
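To make the quoted precision argument concrete, here is a small numeric sketch of my own (not from the article): if a texel is decoded from sRGB to linear and the linear value is then quantized back to 8 bits on the g-buffer write, many of the dark shades collapse onto the same stored value.

```python
def srgb_to_linear(s):
    """Standard sRGB decode (EOTF) for s in [0, 1]."""
    return s / 12.92 if s <= 0.04045 else ((s + 0.055) / 1.055) ** 2.4

# Linearize each of the 64 darkest 8-bit sRGB codes, then quantize the
# linear result back to 8 bits, as an RGBA8 g-buffer write would.
dark_codes = {round(srgb_to_linear(c / 255) * 255) for c in range(64)}
print(len(dark_codes))  # far fewer than 64 distinct shades survive
```

That collapse at the low-intensity end is exactly the banding visible in the article's left-hand gradient image.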
Every other resource I've read says you should declare your input diffuse textures as *_SRGB and your g-buffer color texture as non-sRGB, which contradicts this.
Which is correct? Should the diffuse textures be sRGB, or should the g-buffer color texture be sRGB (effectively delaying the conversion until it is sampled in the lighting stage)?
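To check the article's suggestion numerically, here is another small sketch of my own (assuming the lighting pass computes in floating point): storing the raw sRGB byte in an 8-bit g-buffer and decoding it to linear only when the lighting pass samples it loses nothing, because the decode is monotonic and the decoded value is never re-quantized to 8 bits.

```python
def srgb_to_linear(s):
    """Standard sRGB decode (EOTF) for s in [0, 1]."""
    return s / 12.92 if s <= 0.04045 else ((s + 0.055) / 1.055) ** 2.4

# Store the raw sRGB byte in the g-buffer and linearize only at the
# lighting stage; the decode is strictly increasing, so no two of the
# 256 possible codes collapse onto the same linear value.
linear_at_lighting = {srgb_to_linear(c / 255) for c in range(256)}
print(len(linear_at_lighting))  # all 256 input shades remain distinct
```

So the article's scheme does preserve the low-intensity precision, which is what makes the contradiction with the usual "sRGB inputs, non-sRGB g-buffer" advice confusing to me.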