
### #ActualTelanor

Posted 27 August 2012 - 08:29 PM

> Regarding sRGB and LogLuv: when using an sRGB texture, it will go through a linear->gamma transform when writing, and a gamma->linear transform when sampling, so it shouldn't have much of an impact, but it can mean that the value you sample later is slightly different from the value you wrote originally. This transform is only designed to store RGB "color" information in a perceptually efficient way (distributing more bits to darker colors), where it doesn't matter if there is a slight change in the values.
> However, LogLuv splits the L component over two 8-bit channels, and if either of those channels is slightly modified, then when you reconstruct the original 16-bit value, it will be wildly different. This makes it more of a "data" texture than a "color" texture, and so sRGB encoding should not be used.
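To make that difference concrete, here is a small illustrative sketch (Python, not from the thread; the exact numbers are only indicative): an 8-bit sRGB round trip perturbs a color value by roughly one quantization step, which is harmless for color data, but the same one-step perturbation applied to the high byte of a LogLuv-style packed 16-bit L value shifts the reconstruction by 256.

```python
# Illustrative sketch: why a small sRGB round-trip error is harmless for
# color data but catastrophic for LogLuv-packed data.

def srgb_encode(x):
    # linear -> gamma (standard sRGB transfer function)
    return 12.92 * x if x <= 0.0031308 else 1.055 * x ** (1 / 2.4) - 0.055

def srgb_decode(x):
    # gamma -> linear (inverse transfer function)
    return x / 12.92 if x <= 0.04045 else ((x + 0.055) / 1.055) ** 2.4

def quantize8(x):
    # storage in an 8-bit channel rounds to steps of 1/255
    return round(x * 255) / 255

# A color value survives the encode/quantize/decode round trip with only
# a tiny error (on the order of one quantization step):
v = 0.5
err = abs(srgb_decode(quantize8(srgb_encode(v))) - v)

# But if the two 8-bit halves of a 16-bit L value get the same treatment,
# a one-step change in the high byte shifts the reconstructed L by 256:
L = 0x1234                    # 16-bit luminance, split as in LogLuv
hi, lo = L >> 8, L & 0xFF
hi_perturbed = hi + 1         # the kind of off-by-one the transform can cause
L_reconstructed = (hi_perturbed << 8) | lo

print(err)                    # small: roughly 1e-3
print(L_reconstructed - L)    # huge: 256
```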

Just to clarify, then: if I want to keep the "gamma-correctness", I need to sample from an sRGB surface, do my calculations in linear space, LogLuv-encode the result, and then, after the HDR processing is done, write to an sRGB surface?

Also, this one part of MJP's sample has me very confused. The hardware downscale function takes an RT and renders it into another RT, 1/2 the size of the source RT, three times. The comments say it's meant to downsample to an RT 1/16th the size, so what's the purpose of rendering three 1/2-size RTs?

```csharp
/// <summary>
/// Downscales the source to 1/16th size, using hardware filtering
/// </summary>
/// <param name="source">The source to be downscaled</param>
/// <param name="result">The RT in which to store the result</param>
protected void GenerateDownscaleTargetHW(RenderTarget2D source, RenderTarget2D result)
{
    IntermediateTexture downscale1 = GetIntermediateTexture(source.Width / 2, source.Height / 2, source.Format);
    PostProcess(source, downscale1.RenderTarget, scalingEffect, "ScaleHW");

    IntermediateTexture downscale2 = GetIntermediateTexture(source.Width / 2, source.Height / 2, source.Format);
    PostProcess(downscale1.RenderTarget, downscale2.RenderTarget, scalingEffect, "ScaleHW");
    downscale1.InUse = false;

    IntermediateTexture downscale3 = GetIntermediateTexture(source.Width / 2, source.Height / 2, source.Format);
    PostProcess(downscale2.RenderTarget, downscale3.RenderTarget, scalingEffect, "ScaleHW");
    downscale2.InUse = false;

    PostProcess(downscale3.RenderTarget, result, scalingEffect, "ScaleHW");
    downscale3.InUse = false;
}
```
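For what it's worth, the "1/16th" in the comment is just four successive 2x passes, since (1/2)^4 = 1/16 — assuming each pass renders into a target half the size of its *input*, which is how a chained hardware downscale would reach 1/16th. A minimal sketch of that arithmetic (sizes and names are illustrative, not from the sample):

```python
# Illustrative sketch: four chained 2x downscales take a source to 1/16th
# of its original size in each dimension.

def downscale_chain(width, height, passes=4):
    """Return the target size after each of `passes` successive 2x downscales."""
    sizes = []
    for _ in range(passes):
        width, height = width // 2, height // 2
        sizes.append((width, height))
    return sizes

sizes = downscale_chain(1024, 768)
print(sizes)        # [(512, 384), (256, 192), (128, 96), (64, 48)]
print(sizes[-1])    # (64, 48) == (1024/16, 768/16)
```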


