Image Processing

Hey everybody, two (simple) questions for the advanced OpenGL programmers:

1. If I have a rendering context, I am bound to its pixel format. If I pass in floating-point values they are clamped to [0, 1]; if I pass in bytes everything is fine. But how does GL handle these values internally? If I put in 0xFE for a flat-shaded polygon and then read out the back buffer, is it guaranteed that I get the same value back, or is there some rounding?

2. Does somebody have a simple example source snippet for doing offscreen image processing with shaders? I only need the projection and modelview matrices and the coordinates for the screen-aligned quad, plus the appropriate shader, to make texels = pixels. Or does somebody know an integration / summation shader? I mean one where you put in a texture and at the end get a 1x1 texture containing the sum of all pixel values from the source texture.

Thanks!
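For reference, here is a minimal sketch of the texels = pixels setup question 2 is asking about, assuming legacy fixed-function matrices and a texture that is the same size as the viewport (the function name and the sizes are just placeholders):

/* Sketch only: assumes the bound texture is the same size as the viewport. */
void draw_fullscreen_quad(int width, int height)
{
    glViewport(0, 0, width, height);

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0.0, (double)width, 0.0, (double)height, -1.0, 1.0); /* one GL unit = one pixel */

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex2f(0.0f, 0.0f);
    glTexCoord2f(1.0f, 0.0f); glVertex2f((float)width, 0.0f);
    glTexCoord2f(1.0f, 1.0f); glVertex2f((float)width, (float)height);
    glTexCoord2f(0.0f, 1.0f); glVertex2f(0.0f, (float)height);
    glEnd();
}

With this projection and GL_NEAREST filtering, each fragment center falls exactly on one texel center, so the fragment shader sees one texel per output pixel.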
I'll only answer the first part:
For 0 and 255, you SHOULD get the same value back. Anything in between is not guaranteed.
GPUs like the GeForce 2 and 3 used to convert color values to 12-bit precision before interpolating. Nowadays it's probably done with 32-bit precision on NVIDIA, and ATI probably interpolates as 32-bit float as well.
I would never rely on a GPU's precision. They are made for rendering.
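If you want to check this on your own hardware, a quick sketch of the round trip (flat-shaded draw, then glReadPixels; window setup and error checking omitted, pixel position chosen arbitrarily):

/* Sketch: draw with a known 8-bit color and read the same pixel back. */
unsigned char in_color[3] = { 0xFE, 0x40, 0x7F };
unsigned char out_color[3] = { 0, 0, 0 };

glDisable(GL_DITHER);   /* dithering would change the stored value */
glDisable(GL_LIGHTING);
glColor3ub(in_color[0], in_color[1], in_color[2]);
/* ... draw a flat-shaded quad covering the pixel that is read back ... */

glReadBuffer(GL_BACK);
glPixelStorei(GL_PACK_ALIGNMENT, 1);
glReadPixels(10, 10, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, out_color);
/* compare in_color and out_color to see whether the round trip is exact */

On an 8-bit-per-channel framebuffer with dithering and lighting disabled this usually comes back bit-exact, but as noted above that is not something the spec guarantees for arbitrary values.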
OK, I assumed that applies to interpolation. But what if I am using flat shading with only one color, 8 bits per channel?
I implemented a shader program for dense matrix multiplication some years ago. It sped up the calculation by about 20 times compared with a CPU matrix multiplication, so I can tell you that it is feasible to do the summation using a shader.
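For the summation shader, the usual approach is a ping-pong reduction: each pass sums a 2x2 block of the source texture into one pixel of a half-sized render target, and you repeat until a 1x1 texture is left. A sketch of the per-pass fragment shader (old-style GLSL; halfTexel would be set to 0.5 / sourceWidth and 0.5 / sourceHeight, and the quad is drawn with the texels = pixels setup over the smaller target):

uniform sampler2D srcTex;
uniform vec2 halfTexel; /* half a texel of the SOURCE texture, in texture coordinates */

void main()
{
    vec2 uv = gl_TexCoord[0].xy;
    gl_FragColor = texture2D(srcTex, uv + vec2(-halfTexel.x, -halfTexel.y))
                 + texture2D(srcTex, uv + vec2( halfTexel.x, -halfTexel.y))
                 + texture2D(srcTex, uv + vec2(-halfTexel.x,  halfTexel.y))
                 + texture2D(srcTex, uv + vec2( halfTexel.x,  halfTexel.y));
}

Render each pass into a floating-point texture through an FBO so the partial sums are not clamped to [0, 1]; after log2(size) passes the remaining 1x1 texture holds the sum of all source texels.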

And there is even more advanced research using the GPU for matrix decomposition ( http://www.cs.unc.edu/~geom/Numeric/svd/ ). I remember they posted their paper together with the source code on their website. Try searching with Google; maybe you can find it.
