This one seems interesting. I need to render XYZ points into a 2D texture where each pixel represents a result. The data comes in as GL_POINTS and is amplified by a geometry shader into 6 points, one per target pixel. Blending is GL_ONE/GL_ONE, so results hitting the same pixel are summed. The problem is that the result is incorrect. Basically the following happens:

layout( points ) in;
layout( points, max_vertices=6 ) out;

ivec3 tc1U = inPoint % ivec3( pOutputWidth ); // inPoint is ivec3
ivec3 tc1V = inPoint / ivec3( pOutputWidth );
vTC1 = vec2( tc1U.x, tc1V.x ) * pTCTransform.xy + pTCTransform.zw;
// and so forth, 6 times

pTCTransform is (2/outputWidth, 2/outputHeight, -1, -1), mapping pixel indices in the range (0,0)-(outputWidth,outputHeight) to NDC (-1,-1)-(1,1). In one particular case the output size is (256,37). Some rows have the correct result (compared against the same calculation done on the CPU) while other rows are incorrect (e.g. 1 row correct, 2 rows incorrect, 2 rows correct, and so forth). With some values of outputHeight it works correctly, with others again it does not.

Is OpenGL point rendering not pixel-precise? If not, how can you do pixel-precise rendering, i.e. render one point primitive to exactly one pixel at a predefined location (x,y)?