
Emulating SetGammaRamp in a shader



#1 Necrolis   Members   -  Reputation: 1370


Posted 20 March 2014 - 02:06 PM

I've been trying to figure out a way to map a gamma ramp generation function I have (obtained through reverse engineering) to an HLSL/GLSL function/operator, in an effort to emulate the "look and feel" of an older game I fiddle with in my spare time.

 

However, I'm failing to get anywhere because I'm not sure how the gamma ramp set by IDirect3DDevice9::SetGammaRamp gets used when outputting a pixel. What I'm looking for is: if I have the RGB tuple "x", what operations are performed on x's channels, using the ramp, that yield the final pixel rendered to the back buffer?

 

The ramp generation looks like so if it helps in any way:

// Fills pRamp[0..254] with 0-65535 ramp values.
void GenGammaRamp(long dwGamma, double fContrast, double* pRamp)
{
    double fGamma        = (double)dwGamma;
    double fFractGamma   = 0.01 * fGamma;   // gamma as a fraction of 100
    double fGammaPercent = 100.0 / fGamma;  // exponent of the power curve
    double fRampEnd      = 255.0;
    double fMaxGamma     = 65535.0;         // full-scale WORD output
    double fGammaFactor  = 1.0 / fRampEnd;  // normalizes the ramp index to 0-1
    for(double fRamp = 0.0; fRamp < fRampEnd; fRamp += 1.0)
    {
        // linear term: the normalized index scaled by gamma, clamped to full scale
        double fGammaStep = fGammaFactor * fRamp * fMaxGamma * fFractGamma;
        fGammaStep = fGammaStep > fMaxGamma ? fMaxGamma : fGammaStep;
        // blend the power curve and the linear term by the contrast percentage
        double fFinalGamma = (pow(fGammaFactor * fRamp, fGammaPercent) * fMaxGamma * (100.0 - fContrast)
                            + fGammaStep * fContrast) * 0.01;
        pRamp[(int)fRamp] = fFinalGamma;
    }
}

(The values get converted back to 8/16/32-bit integers just before they are sent off to the driver.)




#2 Nik02   Crossbones+   -  Reputation: 2937


Posted 21 March 2014 - 02:39 AM

I remember seeing an article (written within the last year or so) by AMD or NVidia stating that modern hardware generally implements the output gamma correction as a piecewise linear approximation for performance.

 

But yes, in general, a gamma curve is implemented by raising the input signal to the power of the gamma value.
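
As an illustration, that plain power curve in a post-process pixel shader would look roughly like this (a minimal HLSL sketch; the sampler and gGamma names are placeholders, not anything from this thread):

// Minimal sketch of a power-curve gamma applied as a post-process.
// 'sceneSampler' and 'gGamma' are hypothetical names.
sampler2D sceneSampler;
float gGamma; // e.g. 2.2

float4 GammaPS(float2 uv : TEXCOORD0) : COLOR0
{
    float4 c = tex2D(sceneSampler, uv);
    c.rgb = pow(c.rgb, gGamma); // raise the input signal to the gamma power
    return c;
}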


Edited by Nik02, 21 March 2014 - 02:45 AM.

Niko Suni


#3 Hodgman   Moderators   -  Reputation: 31968


Posted 21 March 2014 - 06:06 AM

How does your pRamp/GenGammaRamp stuff get used? The D3DGAMMARAMP is based around WORD values, not doubles, plus you seem to be generating an array of 255 values instead of 256.
 
Usually it would look something like:

D3DGAMMARAMP ramp;
for( int i=0; i!=256; ++i )
{
  double f = i/255.0;//scale from 0-255 to 0-1 range
 
  f = pow(f,2.2);//some kind of 'gamma' modification done here. Could be anything!
 
  int truncated = (int)(f*65535);//scale from 0-1 range to 0-65535 range
  WORD w = truncated > 65535 ? 65535
                             : (truncated < 0 ? 0 : truncated);
  ramp.red[i] = ramp.green[i] = ramp.blue[i] = w;
}
device.SetGammaRamp(..., &ramp);
 
If you wanted to do this in a shader, you'd take the array of 256 'gamma' values, and store them in a 256px * 1px texture (you could use D3DFMT_R32F and store them as floats in the 0-1 range). You'd then use a post-processing shader like this:

float3 coords = myColor.rgb;//treat the colour as a texture coordinate
coords = coords * 0.99609375 + 0.001953125;
//^^^ scale by 255/256 and offset by half a texel (0.5/256) so that 0.0 maps to
//the center of the left texel and 1.0 to the center of the right texel
//sample the texture 3 times to convert each channel to the value in the gamma ramp:
myColor.r = tex2D( theTexture, float2(coords.r, 0) ).r;
myColor.g = tex2D( theTexture, float2(coords.g, 0) ).r;
myColor.b = tex2D( theTexture, float2(coords.b, 0) ).r;
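
For completeness, filling that 256px * 1px texture on the CPU side could look something like this (a hedged D3D9 sketch; pDevice and rampValues are placeholder names, and error handling is omitted):

// Hypothetical sketch: upload 256 ramp values, normalized to the 0-1 range,
// into a 256x1 D3DFMT_R32F texture for the shader above.
IDirect3DTexture9* pRampTex = NULL;
pDevice->CreateTexture(256, 1, 1, 0, D3DFMT_R32F, D3DPOOL_MANAGED, &pRampTex, NULL);

D3DLOCKED_RECT lr;
pRampTex->LockRect(0, &lr, NULL, 0);
float* texels = (float*)lr.pBits;
for(int i = 0; i != 256; ++i)
    texels[i] = (float)(rampValues[i] / 65535.0); // 0-65535 -> 0-1
pRampTex->UnlockRect(0);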

Edited by Hodgman, 21 March 2014 - 06:13 AM.


#4 Necrolis   Members   -  Reputation: 1370


Posted 21 March 2014 - 07:33 AM

 

How does your pRamp/GenGammaRamp stuff get used? The D3DGAMMARAMP is based around WORD values, not doubles, plus you seem to be generating an array of 255 values instead of 256.
 

It gets mapped back to a WORD before being used to fill the D3DGAMMARAMP (the reason for the size mapping is that there is also a palette-based software renderer, but it ignores the gamma ramp); as for the off-by-one error, that's probably my fault somewhere along the line, so thanks for catching that :)

 

As for how it gets mapped: it's literally cast to a WORD, as the range mapping is done inside GenGammaRamp (I folded in the range value, fMaxGamma, because I'm only concerned with the D3D gamma; originally this was a parameter):

double GammaTable[256];
D3DGAMMARAMP ramp;
GenGammaRamp(myGammaValue, myContrastValue, GammaTable);

for(int i = 0; i < 256; i++)
{
    WORD Gamma = (WORD)GammaTable[i];
    ramp.red[i] = ramp.green[i] = ramp.blue[i] = Gamma; 
}
pD3ddevice->SetGammaRamp(0,D3DSGR_NO_CALIBRATION,&ramp);

 

If you wanted to do this in a shader, you'd take the array of 256 'gamma' values, and store them in a 256px * 1px texture (you could use D3DFMT_R32F and store them as floats in the 0-1 range). You'd then use a post-processing shader like this:

float3 coords = myColor.rgb;//treat the colour as a texture coordinate
coords = coords * 0.99609375 + 0.001953125;
//^^^ scale by 255/256 and offset by half a texel (0.5/256) so that 0.0 maps to
//the center of the left texel and 1.0 to the center of the right texel
//sample the texture 3 times to convert each channel to the value in the gamma ramp:
myColor.r = tex2D( theTexture, float2(coords.r, 0) ).r;
myColor.g = tex2D( theTexture, float2(coords.g, 0) ).r;
myColor.b = tex2D( theTexture, float2(coords.b, 0) ).r;

 

Ah, so it is just a straight "scaled index". Originally I had tried using "Out.Color = pow(In.Color, gGamma/2.2)", but I had no clue how to add in the contrast, and it also washed out the colors very quickly as opposed to the original ramp.
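
For reference, GenGammaRamp also reduces to a closed form that could be evaluated directly in the shader instead of going through a LUT (a hedged HLSL translation, derived by dividing the 0-65535 ramp through by 65535; gGamma and gContrast stand in for dwGamma and fContrast):

float3 ApplyGammaRamp(float3 x, float gGamma, float gContrast)
{
    float3 curve = pow(x, 100.0 / gGamma);      // the fGammaPercent power term
    float3 lin   = saturate(x * gGamma * 0.01); // the clamped fGammaStep term
    // blend by the contrast percentage (0-100), as in GenGammaRamp
    return 0.01 * (curve * (100.0 - gContrast) + lin * gContrast);
}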

 

I'm already using 1D LUTs to emulate 8-bit palettized color, so technically I should be able to remap the palette LUT to account for the gamma, if I understand this correctly; though I think it's probably best to first have it working with the double LUT. Your note about the texel centering reminds me that I didn't do this for my palette LUTs, so that fixes something else as well :)
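
Pre-composing the gamma into the palette could then look something like this (a hypothetical sketch; Palette is whatever 8-bit RGB entry array the software renderer uses, and GammaTable is the 0-65535 table from above):

// Hypothetical sketch: bake the 0-65535 gamma table into an 8-bit palette
// so the palette LUT itself accounts for gamma (65535 / 255 == 257).
for(int i = 0; i < 256; ++i)
{
    Palette[i].r = (BYTE)(GammaTable[Palette[i].r] / 257.0);
    Palette[i].g = (BYTE)(GammaTable[Palette[i].g] / 257.0);
    Palette[i].b = (BYTE)(GammaTable[Palette[i].b] / 257.0);
}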


Edited by Necrolis, 21 March 2014 - 07:35 AM.


#5 kauna   Crossbones+   -  Reputation: 2863


Posted 21 March 2014 - 01:41 PM

You could use a small volume texture (32^3, for example) to perform all of the color corrections you want to do.

 

Check the sample here: http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter24.html

 

You can implement saturation, desaturation, hue, contrast, brightness, gamma, etc. with it, and on the shader side it requires only a single lookup into the volume texture. So it is kind of an extension of the ideas in the earlier posts.
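
On the shader side, that single lookup could look roughly like this (a minimal HLSL sketch for a 32^3 LUT; lutSampler is a placeholder, and the scale-and-offset mirrors the texel-centering note in the posts above):

sampler3D lutSampler;

float3 CorrectColor(float3 c)
{
    const float N = 32.0; // LUT dimension
    // map 0-1 to the centers of the first and last texels along each axis
    float3 coords = c * ((N - 1.0) / N) + 0.5 / N;
    return tex3D(lutSampler, coords).rgb;
}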

 

Cheers!





