Chris_F

Efficient 24/32-bit sRGB to linear float image conversion on CPU


Does anyone know of an efficient way of converting 24/32-bit sRGB to linear floating point on the CPU? I don't have access to a CPU with AVX2 instructions yet, but I am intrigued by the new gather instructions. I was thinking these could be used for this type of conversion, as in the example below. The LUT would be 256 × 4 bytes, so I imagine it would fit entirely into the L1 data cache.

#include <immintrin.h>

__m256 RGBA8toRGBA32F(const char* pixel_data, const float* LUT)
{
    // Load 8 bytes (2 RGBA8 pixels), zero-extend each byte to a 32-bit index, gather floats from the LUT.
    return _mm256_i32gather_ps(LUT, _mm256_cvtepu8_epi32(_mm_loadl_epi64((const __m128i*)pixel_data)), 4);
}
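A rough sketch of how it might be used over a whole RGBA8 buffer (the loop and names are hypothetical; note this also pushes the alpha channel through the LUT, which you probably wouldn't want for real sRGB data):

#include <immintrin.h>
#include <stddef.h>

/* Hypothetical usage sketch: converts pixel_count RGBA8 pixels to RGBA32F,
   two pixels (8 channels) per iteration. Alpha goes through the same LUT
   here, which usually isn't desirable; a tail for an odd pixel count
   would be handled separately. */
void convert_image(const char* src, float* dst, size_t pixel_count, const float* LUT)
{
    for (size_t i = 0; i + 2 <= pixel_count; i += 2)
    {
        __m256 linear = RGBA8toRGBA32F(src + i * 4, LUT);
        _mm256_storeu_ps(dst + i * 4, linear);
    }
}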

IYP

Read http://en.wikipedia.org/wiki/Gamma_correction#Windows.2C_Mac.2C_sRGB_and_TV.2Fvideo_standard_gammas and, from the second paragraph of http://en.wikipedia.org/wiki/SRGB:

"Unlike most other RGB color spaces, the sRGB gamma cannot be expressed as a single numerical value. The overall gamma is approximately 2.2, consisting of a linear (gamma 1.0) section near black, and a non-linear section elsewhere involving a 2.4 exponent and a gamma (slope of log output versus log input) changing from 1.0 through about 2.3."

In other words, sRGB does not use a single gamma value: the GPU uses a table to convert output RGB to sRGB, and the article itself uses 2.4 as the exponent.
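So the 256-entry LUT from the first post can't be filled from a single pow(c, 2.2); it needs the piecewise curve. A rough sketch of how it could be built (the function name is just an example):

#include <math.h>

/* Builds the 256-entry sRGB -> linear LUT using the standard piecewise
   sRGB transfer function: a linear segment near black and a 2.4-exponent
   power curve elsewhere. */
void build_srgb_to_linear_lut(float lut[256])
{
    for (int i = 0; i < 256; ++i)
    {
        float c = i / 255.0f;
        lut[i] = (c <= 0.04045f) ? c / 12.92f : powf((c + 0.055f) / 1.055f, 2.4f);
    }
}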
