
Gamma from sRGB Texture to Human Brain


18 replies to this topic

#1 skytiger   Members   -  Reputation: 258


Posted 30 March 2013 - 01:45 PM

Trying to understand the full gamma pipeline from sRGB Texture to Human Brain



To simplify discussion I am using these terms:

LIGHTEN = gamma encode / correct = pow(linear, 1 / 2.2)

DARKEN = gamma decode = pow(linear, 2.2)

LINEAR = color data in a color space where a human perceives 50% as halfway between 0% and 100%

LIGHT = gamma encoded color data
DARK = gamma decoded color data


The non-linear transfer functions in the pipeline are:


sRGB Gamma Encode = pow(linear, 1 / 2.2) // LIGHTEN

CRT Gamma Response = pow(corrected, 2.2) // DARKEN

Human Perception of Luminance = pow(luminance, 1 / 2.2) // LIGHTEN
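
In HLSL terms, LIGHTEN and DARKEN are just these two helpers (a minimal sketch using the simple pow(2.2) approximation rather than the exact piecewise sRGB curve; the names are only for this discussion):

// Approximate gamma 2.2 helpers -- illustrative names, not a standard API
float3 Lighten(float3 c)    // gamma encode: linear -> gamma space
{
    return pow(c, 1.0 / 2.2);
}

float3 Darken(float3 c)     // gamma decode: gamma space -> linear
{
    return pow(c, 2.2);
}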



The pipeline:

------------------------------------------------------------------------
ELEMENT                     CHANGE      STATE
------------------------------------------------------------------------
SRGB FILE                               LIGHT

GAMMA DECODING SAMPLER      DARKEN                      // sRGBTexture

LINEAR WORKSPACE                        LINEAR          // Interpolation and Blending

GAMMA CORRECTION            LIGHTEN                     // sRGBWrite

LIGHT LEAVING GPU                       LIGHT           // Gamma Corrected

LIGHT LEAVING CRT           DARKEN                      // CRT Gamma Response

LIGHT ENTERING EYE                      LINEAR          // Luminance

HUMAN PERCEPTION            LIGHTEN                     // We are sensitive to dark

BRAIN                                   LIGHT           // This should be LINEAR ...
------------------------------------------------------------------------

However I rearrange this pipeline, I can't get LINEAR in both the BRAIN
and the LINEAR WORKSPACE.



Here is another attempt:


------------------------------------------------------------------------
ELEMENT                     CHANGE      STATE
------------------------------------------------------------------------
SRGB FILE                               LINEAR

GAMMA DECODING SAMPLER      DARKEN                      // sRGBTexture

LINEAR WORKSPACE                        DARK            // THIS DOESN'T WORK NOW

GAMMA CORRECTION            LIGHTEN                     // sRGBWrite

LIGHT LEAVING GPU                       LINEAR          // Gamma Corrected

LIGHT LEAVING CRT           DARKEN                      // CRT Gamma Response

LIGHT ENTERING EYE                      DARK            // Luminance

HUMAN PERCEPTION            LIGHTEN                     // We are sensitive to dark

BRAIN                                   LINEAR          // This works!
------------------------------------------------------------------------


Can anybody illuminate things for me? Preferably gamma corrected ...

 


Edited by skytiger, 30 March 2013 - 01:52 PM.



#2 Hodgman   Moderators   -  Reputation: 30387


Posted 30 March 2013 - 05:22 PM

Where did you learn that human perception of light is a gamma 2.2 curve?
I haven't heard this before, so maybe this assumption is the key to your confusion?

Btw, it's not important to this discussion, but the gamma 2.2 curve is slightly different from the sRGB curve ;)
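
For reference, the exact sRGB transfer functions are piecewise: a short linear toe near black, then a power segment with exponent 2.4. A sketch in HLSL, using the constants from the sRGB specification:

// Exact sRGB curves (piecewise) -- a sketch for comparison with plain gamma 2.2
float3 SrgbEncode(float3 c)   // linear -> sRGB
{
    return (c <= 0.0031308) ? 12.92 * c
                            : 1.055 * pow(c, 1.0 / 2.4) - 0.055;
}

float3 SrgbDecode(float3 s)   // sRGB -> linear
{
    return (s <= 0.04045) ? s / 12.92
                          : pow((s + 0.055) / 1.055, 2.4);
}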

#3 skytiger   Members   -  Reputation: 258


Posted 31 March 2013 - 02:24 AM

This is the best explanation I have found:

See "Intensity" and "Lightness"
http://www.poynton.com/PDFs/Rehabilitation_of_gamma.pdf



Human Perception works in units of (Photometric) Lightness
"Lightness" is Photometric Luminance adjusted for the non-linear
perception of "brightness"


sRGB Texture is in units proportional to Lightness

Rendering equation works in units proportional to (Radiometric) Radiance
which are almost identical to (Photometric) Luminance within the range
of visible light

The monitor has an overall non-linear response which is roughly the
inverse of the Luminance to Lightness transfer function


I now believe the 4 transfer functions to be:

1) Lightness, the Human perception of Luminance (LIGHTEN)

2) Monitor Response (DARKEN)

3) Gamma Correction (LIGHTEN)

4) Gamma Decoding (DARKEN)

(Very) roughly speaking they are all curves of pow(x, 2.2) or pow(x, 1 / 2.2)



New Pipeline:

Camera (LIGHTEN)
    Input    = Radiometric Radiance (Light)
    Output    = Photometric Lightness (sRGB Texture)

Gamma Decoding Sampler (DARKEN)
    Input    = Photometric Lightness (sRGB Texture)
    Output    = Radiometric Radiance

Shader works in units proportional to Radiometric Radiance
(which is functionally the same as Photometric Luminance)

Gamma Encoding (LIGHTEN)
    Input    = Radiometric Radiance
    Output    = Photometric Lightness

Monitor (DARKEN)
    Input    = Photometric Lightness
    Output    = Radiometric Radiance (Light)

Human Perception (LIGHTEN)
    Input    = Radiometric Radiance (Light)
    Output    = Photometric Lightness


Gamma Correction solves 3 problems:

1) Precision distribution of 8 bit per channel texture
2) Non-linear response of Monitor
3) Non-linear response of Human Perception

 

 


 



#4 skytiger   Members   -  Reputation: 258


Posted 31 March 2013 - 04:12 AM

Actually my previous pipeline is wrong, consider:

a human perceives a radiometric radiance that she calls "50% gray"
she records this with a camera that gamma corrects the value CORRECTED = pow(GRAY, 1 / 2.2)
now she views this sRGB Picture on her monitor
her monitor performs GRAY = pow(CORRECTED, 2.2)
and again she perceives "50% gray"

(this works because humans can adjust contrast based on context)

However, as humans are more sensitive to differences between dark shades than between light ones,
we need to store more detail about the dark shades.

THAT is where the human non-linear response is relevant to gamma correction:
according to the CIE, the human response is approximately logarithmic,
so it can be approximated with the same gamma of 2.2.

Luckily gamma correction solves 2 problems with the same solution:

1) Monitor Gamma
2) Human Sensitivity to Dark Shades

 


So sRGB can be used by cameras only because of a COINCIDENCE!

 

This then leads on to the discussion of whether a human can determine the
midpoint between "black" and "white" and how to construct a gradient
that LOOKS linear to a human being.

MIDPOINT

50% Radiance        // Physically half way
18% Radiance        // Perceptually half way?
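
A quick sanity check on those two candidates (approximate values):

pow(0.50, 1 / 2.2) ≈ 0.73                                    // 50% radiance encodes well above the middle of the gamma range
pow(0.18, 1 / 2.2) ≈ 0.46                                    // 18% radiance encodes close to the middle
CIE L* of 18% luminance = 116 * pow(0.18, 1 / 3) - 16 ≈ 49.5 // roughly 50 on the 0-100 Lightness scale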

Photoshop creates a slightly S-shaped "linear" gradient in linear color
which is then gamma corrected.

Maybe Adobe thinks that, whilst a human can adjust for their eyes' own
non-linearity, they still need a bit of help ...

 

New Pipeline:

 

Camera (LIGHTEN)
    Input   = Radiometric Radiance (Light)
    Output  = Gamma Corrected Radiance (sRGB Texture)

Gamma Decoding Sampler (DARKEN)
    Input   = Gamma Corrected Radiance (sRGB Texture)
    Output  = Radiometric Radiance

Shader works in units proportional to Radiometric Radiance
(which is functionally the same as Photometric Luminance)

Gamma Encoding (LIGHTEN)
    Input   = Radiometric Radiance
    Output  = Gamma Corrected Radiance

Monitor (DARKEN)
    Input   = Gamma Corrected Radiance
    Output  = Radiometric Radiance (Light)

Human (NOCHANGE)
    Input   = Radiometric Radiance (Light)
    Output  = Estimation of Radiometric Radiance (Perception of Light)


Edited by skytiger, 31 March 2013 - 04:21 AM.


#5 galop1n   Members   -  Reputation: 226


Posted 31 March 2013 - 04:58 AM

You are overthinking this.

 

Luminance is a physical value. If a surface receives an amount A of light from one source and an amount B of light from a second source, the luminance reflected will be A + B - epsilon (some energy is converted to heat). That's our linear space when rendering and accumulating lighting.

 

Brightness is a perceptual value. There is a bijective function between brightness and luminance. The purpose of brightness is to linearize the human perception of luminance. The best brightness formula is that of L* from the CIELAB color space (a color space designed to simplify color comparison with regard to human perception); the curve is split into two segments, the first a simple line, which then continues into a cube root (see Wikipedia: http://en.wikipedia.org/wiki/Lab_color_space).
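
A sketch of that L* curve in HLSL (constants taken from the CIELAB definition; Y is relative luminance in 0..1 with the white point at 1, and L* comes out in 0..100):

// CIE L* (Lightness) from relative luminance Y in [0,1], white point Yn = 1
float CieLightness(float Y)
{
    const float cutoff = 0.008856;                  // (6/29)^3
    float f = (Y > cutoff) ? pow(Y, 1.0 / 3.0)      // cube-root segment
                           : 7.787 * Y + 16.0 / 116.0; // linear segment near black
    return 116.0 * f - 16.0;                        // L* in [0,100]
}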

 

pow(x, 1/2.2) and pow(x, 1/3) are quite similar, and the 2.2 value comes from the physical phosphor response to electrical stimulation. Modern LCDs just simulate the 2.2 gamma curve for compatibility purposes :(

 

The gamma curve was handy for two reasons. First, it is close to the human brightness perception, which was handy and simplified the CRT hardware. Second, because the human eye is better at distinguishing low luminance values (luminance, not brightness; remember that brightness is a linear mapping of your perception), we need more bits to map the low luminance values, and applying pow(x, 1/2.2) before quantization to 8 bits is a solution :)

 

That's all.



#6 skytiger   Members   -  Reputation: 258


Posted 31 March 2013 - 05:43 AM

Yes

 

I agree with what you say

 

But I think that few people realise that:

 

"gamma correction" on a digital camera

is actually lightness (aka brightness) correction

it has NOTHING to do with monitor gamma

and is used to ensure sufficient precision for the dark shades in an 8 bpp image (to avoid banding)

 

whereas:

 

gamma correction on a gpu

is really gamma correction

it has NOTHING to do with human perception

and is used to solve the problem of non-linear monitor response

 

the fact that you can use the same gamma for 2 completely different purposes is purely a COINCIDENCE

 

and has the useful side effect that direct display of the gamma-corrected image on a monitor looks OK

 

 

PS

 

"Luminance" is not a physical value, it is a Photometric value

"Radiance" is a physical value

 

They are almost identical within the visible spectrum of light

so for our purposes they are the same ...

 

But Luminance is adjusted for the non-linear response of the human eye based on wavelength

 

Renderers work with Radiance - the mapping from Radiance to Luminance happens inside the human eye

 

Outside of the visible spectrum Luminance is always ZERO

 

luminance = radiance * (is_visible(wavelength) ? 1 : 0);


Edited by skytiger, 31 March 2013 - 08:05 AM.


#7 wintertime   Members   -  Reputation: 1714


Posted 31 March 2013 - 07:24 AM

That whole thing is so screwed up. There are a huge number of stages in the pipeline, from light source to object to camera to camera electronics to computer input to input drivers to input application to image editor to application to texture to texture read to shader to framebuffer to the signal from the video card to monitor electronics to screen to light to eye to nerve signal to brain interpretation, where some kind of gamma correction could be applied or deferred to a later stage, not only out of need but also out of guesstimating what later stages might do and trying to reverse that. And at each point a different meaning is applied, out of some historical context, or just because every OS wants to apply another gamma value to improve looks, or because the user changed a graphics driver setting because something looked wrong (very likely with all those stages and an unlimited number of image sources), and now everything else is wrong in the opposite direction.

 

One would first need to know where the texture comes from.

If it's from a camera, was some gamma applied to make it linear, or was it obfuscated according to NTSC because in the old days it was better to send analog signals that way?

If it's from an image editor, had the artist calibrated their computer and monitor correctly to get a linear response overall? Did the bitmap then get saved with the correct settings and the correct gamma metadata added, or was just a reverse guess applied by the image editing program, which can't know the driver or monitor settings?

Was there some stage to convert the image to sRGB space, and did it correctly account for what gamma values had already been applied?

And so on ...

The best thing would be to forget about all this cruft and just have the camera translate light into a linear space, only use that in all intermediate stages, and then have the LCD apply a correction for the actual response curve of the screen. It just won't happen soon.



#8 Hodgman   Moderators   -  Reputation: 30387


Posted 31 March 2013 - 07:56 AM

  1. Author content in real world radiometric ratios (e.g. albedo).
  2. Encode content in 8-bit sRGB, because it happens to be a readily available 'compression' method for storage, which optimizes for human-perceptual difference in values.
  3. Decode content to floating-point linear RGB, so that we can do math correctly.
  4. Do a whole bunch of stuff in linear RGB, ending up with final linear radiometric RGB wavelength intensity values.
  5. Clamp these into a 0-1 range using a pleasing exposure function -- copy what cameras do so that the results feel familiar to the viewer.
  6. Encode these floating-point values using the colour-space of the display (let's say 8-bit "gamma 2.2" for a CRT -- x^(1/2.2), but it could be sRGB, "gamma 1.8", or something else too!).
  7. The display then does the opposite transform (e.g. x^2.2) upon transmitting the picture out into the world.
  8. The resulting radiometric output of the display is then equivalent to the radiometric values we had at step 5 (but with a different linear scale).
  9. These values are then perceived by the viewer, in the same way that they perceive the radiometric values presented by the real-world.
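
A rough HLSL sketch of steps 5 and 6 above (the exposure operator and the output curve here are arbitrary example choices, not the only options):

// Step 5: map HDR linear radiance into 0..1 with a pleasing exposure function
// (a basic Reinhard-style curve is used here purely as an example)
float3 Expose(float3 hdrLinear, float exposure)
{
    float3 scaled = hdrLinear * exposure;
    return scaled / (1.0 + scaled);
}

// Step 6: encode for the display (assuming a plain gamma 2.2 monitor here;
// substitute the exact sRGB curve, 1.8, 2.4, etc. as the display requires)
float3 EncodeForDisplay(float3 ldrLinear)
{
    return pow(saturate(ldrLinear), 1.0 / 2.2);
}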

The actual curve/linearity of their perception is irrelevant to us (us = people trying to display a radiometric image in the same way as the real world). All that matters to us is that the values that are output by the display are faithful to the radiometric values we calculated internally -- we care about ensuring that the display device outputs values that match our simulation. If the display does that correctly, then the image will be perceived the same as a real image.

The actual way that perception occurs is of much more interest to people trying to design optimal colour-spaces like sRGB ;)

 

 

luminance = radiance * (is_visible(wavelength) ? 1 : 0);

That should be:

luminance = radiance * weight(wavelength);

where weight returns a value from 0 to 1, depending on how well perceived that particular wavelength is by your eyes.

The XYZ colour space defines weighting curves for the visible wavelengths based on average human values (the exact values differ from person to person).

The weighting function peaks at the red, green and blue wavelengths (which are the individual peaks of the weighting functions for each type of cone cell), which is why we use them as our 3 simulated wavelengths. For low-light scenes, we should actually use a 4th colour, which is the wavelength where the rod-cell's peak responsiveness lies. For a true simulation, we should render all the visible wavelengths and calculate the weighted sum at the end for display, but that's way too complicated ;)
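
(With only 3 simulated wavelengths, that weighted sum collapses in practice to a dot product with the standard luminance weights -- a sketch using the Rec.709/sRGB coefficients:)

// Photometric luminance from linear (not gamma-encoded) RGB,
// using the Rec.709 / sRGB luminance weights
float Luminance(float3 rgbLinear)
{
    return dot(rgbLinear, float3(0.2126, 0.7152, 0.0722));
}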

 

One would first need to know where the texture comes from.
If it's from a camera, was some gamma applied to make it linear, or was it obfuscated according to NTSC because in the old days it was better to send analog signals that way?
If it's from an image editor, had the artist calibrated their computer and monitor correctly to get a linear response overall? Did the bitmap then get saved with the correct settings and the correct gamma metadata added, or was just a reverse guess applied by the image editing program, which can't know the driver or monitor settings?

In a professional environment:
* When taking photos with a camera, a colour chart is included in the photo so that the colours can be correctly calibrated later, in spite of anything the camera is doing.
* Artists' monitors are all calibrated to correctly display sRGB inputs (i.e. they perform the sRGB to linear curve, overall, across the OS/Driver/monitor).
* Any textures saved by these artists are then assumed to be in the sRGB curved encoding.

The best thing would be to forget about all this cruft and just have the camera translate light into a linear space, only use that in all intermediate stages

If we did all intermediate storage/processing in linear space, we'd have to use 16 bits per channel everywhere.
sRGB / gamma 2.2 are nice, because we can store images in 8 bits per channel without noticeable colour banding, due to them being closer to a perceptual colour space and allocating more bits towards dark colours, where humans are good at differentiation.

 

... but yes, in an ideal world, all images would be floating point HDR, and then only converted at the last moment, as required by a particular display device.


Edited by Hodgman, 31 March 2013 - 08:23 AM.


#9 galop1n   Members   -  Reputation: 226


Posted 31 March 2013 - 08:18 AM

As a side note, with DXGI on Windows we can now create swap chains accepting a linear back buffer (64 bpp = 4 * half float). So the final pow(x, 1/2.2) is not required anymore; it becomes the driver's concern to do it if the monitor needs it. The only care is then to create textures with an sRGB format to let the GPU linearize the values within the texture fetch. As Hodgman said, it all comes down to artists using calibrated monitors and Photoshop set up with the right curve.



#10 Matias Goldberg   Crossbones+   -  Reputation: 3399


Posted 31 March 2013 - 08:20 AM

LIGHT ENTERING EYE                      LINEAR          // Luminance

HUMAN PERCEPTION            LIGHTEN                     // We are sensitive to dark

BRAIN                                   LIGHT           // This should be LINEAR ...

Where did you get the idea that the brain works this way? Do you have some evidence?
Gamma correction was intended to be applied to CRT monitors to compensate for the non-linear relationship between voltage and brightness. It was never intended to describe how the human eye & brain interpret light particles/waves.

Not all CRT monitors have a gamma of 2.2 (IIRC old Mac monitors had a 1.8 gamma), but it is believed to be a safe assumption to apply 2.2 to all of them. And LCDs/LEDs just mimic that behavior for backwards compatibility.

It was a happy coincidence that saving the file at 8 bits per channel in gamma space preserved the range of colours we notice more, but in reality people did it because there was not enough processing power to do a linear->gamma conversion every time you wanted to open a picture (and also because of a lack of knowledge at the time by most of us).

#11 skytiger   Members   -  Reputation: 258


Posted 31 March 2013 - 08:33 AM

See the CIE definition of "Lightness"

 

http://www.poynton.com/PDFs/Rehabilitation_of_gamma.pdf

 

http://en.wikipedia.org/wiki/Lightness

 

>> It was never intended to be applied on how the human eye & brain interprets light particles/waves

 

Exactly my point ... but a texture sampled by DirectX (or a JPEG opened in Photoshop) does not need gamma correction - it needs lightness correction

 

The gamma correction is only needed later ...



#12 Hodgman   Moderators   -  Reputation: 30387


Posted 31 March 2013 - 08:44 AM

Exactly my point ... but a texture sampled by DirectX (or a JPEG opened in Photoshop) does not need gamma correction - it needs lightness correction

Can you explain what you mean there?

 

We use "gamma correction" to mean going from a linear colour space to a curved one, or vice versa.

If the texture is saved in the (curved) sRGB colour space, but we want it to be in a linear RGB space so we can perform math on it, then we need "gamma correction" (meaning, we apply the sRGB->linear curve).

 

Later, we want to display our linear RGB values on a monitor, which likely uses a curved colour space, so we again need "gamma correction" (meaning, we apply a linear->sRGB curve, or a linear->"gamma2.2" curve, or a linear->"gamma1.8" curve, or a linear->"gamma2.4" curve, etc, depending on the monitor).


Edited by Hodgman, 31 March 2013 - 08:48 AM.


#13 Matias Goldberg   Crossbones+   -  Reputation: 3399


Posted 31 March 2013 - 08:57 AM

But I think that few people realise that:
 
"gamma correction" on a digital camera
is actually lightness (aka brightness) correction
it has NOTHING to do with monitor gamma
and is used to ensure sufficient precision for the dark shades in an 8 bpp image (to avoid banding)

You do realize that...?
  • The GPU gems 3 article (main reference for many) talks about the banding when storing in an image file
  • DX Documentation also talks about it.
  • Digital Cameras store in 2.2 for compatibility; as I said, historically we stored in gamma space because:
    • It just worked (lack of knowledge)
    • Lack of processing power (it's cheaper to store in gamma space and then send directly to the monitor 'as is')
    • We NOW still do that, but now consciously, because of the banding (lightness correction as you want to call it)
    • When a Digital Camera wants to be picky about this stuff (preserving colours, ignore compatibility) it uses a custom colour profile.
    • There's A LOT more involved in going from an image captured by a CCD sensor to the final JPEG file (ISO, aperture, shutter speed, tone mapping, de-noising come to mind).
  • The main way to deal with "preserving contrast and details" (aka. directly messing with lightness) is in the tone mapping. See Reinhard's original paper.


#14 skytiger   Members   -  Reputation: 258


Posted 31 March 2013 - 09:29 AM

Yes

 

I have read a lot of stuff about gamma correction

and my own pipeline is meant to be gamma correct all the way through

 

The reason for my post was to pin down once and for all the exact and unambiguous terminology and reasons for everything - which I haven't seen in one place yet

 

When I was thinking about gamma correction and its relationship to an HDR pipeline I am working on ... it didn't make sense

 

Also when considering the concept of a "linear gradient" in multiple color spaces ... things didn't make sense

 

Now it does ... the key is understanding the difference between "lightness correction" and "gamma correction"

and the fact that there are more relevant color spaces than I first thought:

 

what the human brain perceives is clearly relevant, and this has been standardized as CIELAB

where the L stands for Lightness which is a key concept missing from most gamma correction texts

 

I believe that the S shaped "linear gradient" in Photoshop is related to CIELAB

 

Also that DCC tools shouldn't be gamma correcting content - they should be lightness correcting it

 

If you had asked me a month ago if I understood gamma correction I would have said "yes"

 

But I didn't *really*

 

And I still think the answer to "how to create a linear gradient (in the human brain)" is not straightforward at all ...



#15 samoth   Crossbones+   -  Reputation: 4783


Posted 31 March 2013 - 09:30 AM

Color perception in the eye is in no way linear, and it is not even RGB. In reality, the three types of receptors work on something like "3 somewhat overlapping, wiggly, asymmetric bell curves", of which one is roughly some kind of blue (S), one is a kind of green (M), and one is some greenish-yellow (L).

 

The brain transforms this information into what you see as "colors" (including red, green, and blue), but there is no simple (linear, logarithmic, or similar) mapping corresponding to perception.

 

What's of practical interest is what your brain thinks is the correct color in a typical situation, and what you can represent well on a computer (preferably having the hardware do any necessary transforms). sRGB works well for "monitor in a somewhat dim room", and that is why you want to use it.


Edited by samoth, 31 March 2013 - 09:31 AM.


#16 skytiger   Members   -  Reputation: 258


Posted 31 March 2013 - 09:57 AM

Can you explain what you mean there?



We use "gamma correction" to mean going from a linear colour space to a curved one, or vice versa.

If the texture is saved in the (curved) sRGB colour space, but we want it to be in a linear RGB space so we can perform math on it, then we need "gamma correction" (meaning, we apply the sRGB->linear curve).

 

The purpose of "gamma correcting" an albedo texture in Photoshop is to preserve detail in the dark areas which humans are sensitive to

it has NOTHING to do with monitor gamma

 

This human response curve is "Lightness" which is standardized by CIE as "how a standard human perceives Luminance"

and is the basis for the CIELAB color space

 

So DCC tools should be "Lightness correcting" to preserve information (avoid banding) according to the CIE transfer function for human perception

 

They don't have to concern themselves with Monitor gamma at all!

That is taken care of by sRGBWrite (or similar) at the end of the pipeline

 

HLSL shaders should be doing this:

 

float3 linearColour = LightnessDecode(tex2D(s, t));

 

(a sudden thought)

 

float3 linearColour = CIELABDecode(tex2D(s, t));

 

not this:

 

float3 linearColour = GammaDecode(tex2D(s, t));

 

 

The difference is probably so minor, nobody would notice
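
(For illustration, a hypothetical LightnessDecode could be the inverse of the CIE L* curve -- a sketch, assuming the stored value is interpreted as L*/100:)

// Hypothetical LightnessDecode: inverse of CIE L*, mapping a stored value v
// in [0,1] (interpreted as L* / 100) back to relative luminance Y in [0,1]
float3 LightnessDecode(float3 v)
{
    float3 f = (v * 100.0 + 16.0) / 116.0;
    return (f > 6.0 / 29.0) ? f * f * f
                            : (f - 16.0 / 116.0) / 7.787;
}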

 

But at least gamma makes sense to me now ...



#17 skytiger   Members   -  Reputation: 258


Posted 31 March 2013 - 10:51 AM

I created a real linear gradient in Photoshop, then converted it to the LAB color space, then saved it

and compared it with the sRGB linear gradient ... there is a very slight difference in the darker shades

 

Also here is a blogpost about projecting LAB onto a Cylinder so you can create perceptually correct gradients:

 

http://vis4.net/blog/posts/avoid-equidistant-hsv-colors/



#18 Tasty Texel   Members   -  Reputation: 1295


Posted 01 April 2013 - 11:14 PM

Ok, let's see if I finally got it right:

When I output a gradient which is the result of a mathematical linear interpolation between 0 and 1, then I'll perceive it to be linear, since the sRGB curve of the monitor (coincidentally) matches the human perception of lightness quite well.

As soon as I add gamma correction to this gradient I'll no longer perceive it to be linear; instead it will be photometrically linear, which leads to the effect that a value of 0.5 will be perceived to be as bright as a pattern of alternating black and white stripes viewed from some distance, since (ideally) the monitor will emit the same number of photons on average in both cases.
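
(A small sketch of the two gradients being compared, as a pixel shader fragment; u stands for the horizontal texture coordinate in 0..1:)

// Gradient comparison sketch; u = horizontal texcoord in [0,1]
float3 GradientColour(float u, bool gammaCorrect)
{
    // Raw ramp: the stored value is u itself; after the monitor's ~2.2
    // response this ends up looking roughly perceptually uniform
    float3 raw = float3(u, u, u);

    // Gamma-corrected ramp: treat u as linear radiance and encode it, so the
    // light leaving the monitor ramps linearly; 0.5 then matches the mean
    // luminance of a fine black/white stripe pattern
    float3 corrected = pow(raw, 1.0 / 2.2);

    return gammaCorrect ? corrected : raw;
}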


Edited by Bummel, 01 April 2013 - 11:15 PM.


#19 skytiger   Members   -  Reputation: 258


Posted 02 April 2013 - 12:52 AM

Would the CIE standard observer recognise a radiometric linear gradient as such?

 

Would they perceive the midpoint radiance as such?

 

Currently the CIE has no standard for gradient perception

 

I believe that a human can partially compensate for their own non-linearity

 

And that to create a perceptually linear gradient you need a mostly radiometrically linear gradient with a slight S curve (darks darker, midpoint at 50%, lights lighter)

 

This document calls radiometric linearity "uniformity"

and perceptual linearity "homogeneity"

http://oled100.eu/pdf/OLED100.eu_Newsletter_February_2010.pdf

 

At work we have quality control people trained to "see"

They can see stuff other people can't

 

I can't think of a reason for a normal human being to care about a linear gradient ... not in the way we care about predicting gravity (to catch a ball) etc.





