Drawing single-pixel GL_POINT after 3D transformations

Started by drhex
8 comments, last by deadc0deh 4 years, 10 months ago

Hi, OpenGL beginner here. I'm trying to visualize a cloud of points whose X, Y, Z coordinates are sent to a VBO on the GPU and drawn with glDrawArrays(GL_POINTS, ...). A vertex shader applies the combined model-view-projection matrix, which lets me rotate, place and scale the cloud, position and aim a camera, and choose my field of view. After the mandatory perspective divide, the vertices/points end up where they should on the screen and the fragment shader provides the correct color.

But OpenGL draws pixel-sized squares rather than points. Depending on how the fractional on-screen coordinates fall on or between the integer pixel coordinates, my points smear out to affect 1-4 pixels each. The points in my model are supposed to be very small, so each should affect only a single pixel.

What I have tried to solve the problem:
1) Adjust the coordinates before they are sent to the GPU: At first, when I was using matrices so simple that I was effectively doing 2D graphics, it was trivial to do minor adjustments to X and Y to make the points end up at the center of pixels, but I want to make full use of the possibilities of 3D and the model-view-projection trio.
2) Adjust the coordinates in the fragment shader. There is a gl_FragCoord.xyzw variable containing the on-screen coordinates, but it is read-only.
3) Looking for a programmable "rasterization shader" that runs between the vertex and fragment shader, but this step in the graphics pipeline seems to be hardwired.
4) Selecting a smaller point size with glPointSize() (see the sketch after this list). Using an argument of 10 gives me bigger points, so I know the call has an effect. glGetFloatv with GL_POINT_SIZE_MIN gives 0 and GL_SMOOTH_POINT_SIZE_GRANULARITY gives 0.125, but setting the point size to 0 or 0.125 gives the same result as setting it to 1. glEnable(GL_POINT_SMOOTH) did not help either.
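For reference, a minimal sketch of the point-size state queries mentioned in item 4, assuming a plain GL 3.x context created with GLEW/GLFW (GL_ALIASED_POINT_SIZE_RANGE applies when point smoothing is off, GL_SMOOTH_POINT_SIZE_RANGE when it is on):

GLfloat aliased[2], smoothRange[2];
glGetFloatv(GL_ALIASED_POINT_SIZE_RANGE, aliased);     // supported range with point smoothing disabled
glGetFloatv(GL_SMOOTH_POINT_SIZE_RANGE, smoothRange);  // supported range with point smoothing enabled
printf("aliased %g..%g, smooth %g..%g\n", aliased[0], aliased[1], smoothRange[0], smoothRange[1]);
glPointSize(1.0f);  // sizes <= 0 are invalid and generate GL_INVALID_VALUE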

Graphics card: GeForce GT 560M from 2011, capable of OpenGL versions up to 4.6. NVIDIA driver 390.77 on Linux.
7 hours ago, drhex said:
But OpenGL draws pixel-sized squares rather than points. Depending on how the fractional on-screen coordinates fall on or between the integer pixel coordinates, my points smear out to affect 1-4 pixels each. The points in my model are supposed to be very small, so each should affect only a single pixel.

Not sure if there is a real problem here; my eyes aren't that keen any more.

In principle, if the point size is set to 1.0, with no multisampling, no anti-aliasing, no use of gl_PointSize in the shader, and a perfectly fresh monitor, a point on screen should be a single lit pixel. Most monitors have a square pixel matrix (as seen through a loupe), so a single pixel looking like a tiny square is not necessarily an error.
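Roughly, the baseline state those conditions correspond to (just a sketch, assuming a GLFW-created context):

glfwWindowHint(GLFW_SAMPLES, 0);   // request a non-multisampled default framebuffer
// ... create window and context ...
glDisable(GL_MULTISAMPLE);         // belt and braces: no MSAA resolve
glDisable(GL_POINT_SMOOTH);        // compatibility profile only; in core this just sets an error and points are aliased anyway
glPointSize(1.0f);                 // exactly one fragment per point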

If you draw a single vertex with GL_POINTS, without any changes to the OpenGL state, nothing enabled or disabled, no multisampled draw buffer, does the effect still show? Could it even be a fried monitor ...

*shrug*

Edit, btw: setting the point size to 0 should generate an OpenGL error (GL_INVALID_VALUE) ...

In general this happens if you accidentally calculate the window size wrongly and the result ends up slightly scaled to the display surface. Make sure you use the window's client rectangle for the calculation, not the entire window.
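With GLFW that check could look roughly like this (a sketch; the point is to use the framebuffer size, which is in pixels, rather than the window size, which is in screen coordinates):

int fbWidth, fbHeight, winWidth, winHeight;
glfwGetFramebufferSize(window, &fbWidth, &fbHeight);  // size in pixels, what GL actually renders into
glfwGetWindowSize(window, &winWidth, &winHeight);     // size in screen coordinates
glViewport(0, 0, fbWidth, fbHeight);                  // the viewport must match the framebuffer, not the window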

The second most common error I know of is people using multisampled render targets; the resolve will then 'smear' the points out (for lack of a better term, and out of laziness to not explain multisampling here).

It is important to keep glPointSize at 1.0f, as different point sizes can give different results; see: https://www.khronos.org/registry/OpenGL-Refpages/gl2.1/xhtml/glPointSize.xml

My eyes are not what they used to be either :-)  Not blaming the monitor for smearing here - I take a screenshot and magnify it to verify the results. The points of the model are eventually supposed to be stars that make up a galaxy, so there can be many points that fall on the same on-screen pixel, and they should then be added up. I suppose that can be handled with the proper glBlendFunc, but first the rasterizer has to be coerced into plotting single pixels.
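For what it's worth, the additive accumulation I have in mind would be something like this (just a sketch of the glBlendFunc idea, not tested yet):

glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);   // additive: overlapping stars sum their brightness
glDrawArrays(GL_POINTS, 0, NDRAW);
glDisable(GL_BLEND);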

Following Green_Baron's hints, I'm using

glfwWindowHint(GLFW_SAMPLES, 1);
glDisable(GL_POINT_SMOOTH);
glPointSize(1);

As seen in the attached screenshot, points spread out to 2 pixels horizontally, vertically, or diagonally.

Here's a cleaned-up version of the source code:


// Include standard headers
#include <stdio.h>
#include <stdlib.h>

// Include GLEW
#include <GL/glew.h>

// Include GLFW
#include <glfw3.h>
GLFWwindow* window;

// Include GLM
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
using namespace glm;

#include <common/shader.hpp>

const float USER_DISTANCE_MM = 440;  // eye to Screen
const float NEAR_MM = 100;
const float FAR_MM = 1000;
const float SIZE_PERCENT = 70;  // 100=fullScreen


struct Size {
    float width, height;   // pixels
    int width_mm, height_mm;
    void init(Size ref, float percent) {
        float p = percent/100;
        width = int(ref.width * p);
        height = int(ref.height * p);
        width_mm = ref.width_mm * p;
        height_mm = ref.height_mm * p;
    }
} Screen, Window;


int main( void )
{
    // Initialise GLFW
    if( !glfwInit() )
    {
        fprintf( stderr, "Failed to initialize GLFW\n" );
        getchar();
        return -1;
    }

    glfwWindowHint(GLFW_SAMPLES, 1);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);

    GLFWmonitor *monitor = glfwGetPrimaryMonitor();
    const GLFWvidmode *return_struct = glfwGetVideoMode(monitor);
    Screen.width = return_struct->width;   Screen.height = return_struct->height;
    if (Screen.width < 800 || Screen.height > 8192) {
        fprintf(stderr, "Got weird display resolution %f x %f", Screen.width, Screen.height);
        Screen.width = 1024;  Screen.height = 576;
        fprintf(stderr, "Defaulting to %f x %f", Screen.width, Screen.height);
    }

    glfwGetMonitorPhysicalSize(monitor, &Screen.width_mm, &Screen.height_mm);
    if (Screen.width_mm < 120 || Screen.width_mm > 1000)
    {
        fprintf(stderr, "Got weird display size %d x %d mm", Screen.width_mm, Screen.height_mm);
        Screen.width_mm = 345;  Screen.height_mm = 194;
        fprintf(stderr, "Defaulting to %d x %d mm", Screen.width_mm, Screen.height_mm);
    }
    Window.init(Screen, SIZE_PERCENT);

    // Open a window and create its OpenGL context
    window = glfwCreateWindow(Window.width, Window.height, "$DR.HEX$ Galaxies", SIZE_PERCENT == 100 ? monitor : NULL, NULL);
    if( window == NULL ){
        fprintf( stderr, "Failed to open GLFW window. If you have an Intel GPU, they are not 3.3 compatible. Try the 2.1 version of the tutorials.\n" );
        getchar();
        glfwTerminate();
        return -1;
    }
    glfwMakeContextCurrent(window);

    // Initialize GLEW
    glewExperimental = true; // Needed for core profile
    if (glewInit() != GLEW_OK) {
        fprintf(stderr, "Failed to initialize GLEW\n");
        getchar();
        glfwTerminate();
        return -1;
    }

    // Ensure we can capture the escape key being pressed below
    glfwSetInputMode(window, GLFW_STICKY_KEYS, GL_TRUE);

    // Black background
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);


    GLuint VertexArrayID;
    glGenVertexArrays(1, &VertexArrayID);
    glBindVertexArray(VertexArrayID);

    // Create and compile our GLSL program from the shaders
    GLuint programID = LoadShaders( "SimpleVertexShader.vertexshader", "PowFade.fragmentshader" );

    const int NSTORE = 16384;
    const int XSTART = 0*NSTORE;
    const int YSTART = 1*NSTORE;
    const int ZSTART = 2*NSTORE;

    GLfloat g_vertexes[3*NSTORE];


    GLuint vertexbuffer;
    glGenBuffers(1, &vertexbuffer);
    glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
    glBufferData(GL_ARRAY_BUFFER, sizeof(g_vertexes), NULL, GL_DYNAMIC_DRAW);  // allocate memory on GPU

    GLuint MatrixID = glGetUniformLocation(programID, "MVP");
    GLuint BrightnessID = glGetUniformLocation(programID, "intrinsic_brightness");

    // Use our shader
    glUseProgram(programID);

    glUniform1f(BrightnessID, NEAR_MM*NEAR_MM);  // Brightness/NEAR_MM/NEAR_MM == 1.0 = maximum brightness
    glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);

    glDisable(GL_POINT_SMOOTH);
    glPointSize(1);  // smaller than 1 makes no difference

    float rot = 0;
    int i;
    int fcnt=0;
    const int DOFRAMES=600;

    do {

        // Clear the Screen
        glClear( GL_COLOR_BUFFER_BIT );

        // Model matrix : rotate slowly around Z axis
        glm::mat4 Model      = glm::rotate(glm::mat4(1.0f), rot/2, glm::vec3(0,0,1));

        // Camera matrix
        glm::mat4 View       = glm::lookAt(glm::vec3(0,0,USER_DISTANCE_MM), // Position of Camera in World Space
                                           glm::vec3(0, 0, 0), // What the camera is looking at
                                           glm::vec3(0,1,0)  // Head is up
                                          );

        // Projection matrix : Vertical Field of View, square pixel ratio, display range : 100 - 1000 mm
        glm::mat4 Projection = glm::perspective(asin(Window.height_mm/2/USER_DISTANCE_MM)*2, Window.width/Window.height, NEAR_MM, FAR_MM);

        // Our ModelViewProjection : multiplication of our 3 matrices
        glm::mat4 MVP        = Projection * View * Model; // Order matters

        glUniformMatrix4fv(MatrixID, 1, GL_FALSE, &MVP[0][0]);


        rot += 0.01f;


        // Create the model: a regular 2D array of points spaced 3 mm apart
        int NDRAW =0;
        for (int y=-60; y<=60; y+=3 ) {
            for (int x=-60; x<=60; x+=3) {
               g_vertexes[XSTART+NDRAW] = x;
               g_vertexes[YSTART+NDRAW] = y;
               g_vertexes[ZSTART+NDRAW] = 0;
               NDRAW++;
            }
        }


        // memcpy vertex coordinates to GPU
        for (i=0; i<3; i++) {
            glBufferSubData(GL_ARRAY_BUFFER, i*NSTORE*sizeof(float), NDRAW*sizeof(float), &g_vertexes[i*NSTORE]);
            glEnableVertexAttribArray(i);
            glVertexAttribPointer(
                i,                  // attribute index i (0, 1 or 2); must match the layout(location) in the shader
                1,                  // size
                GL_FLOAT,           // type
                GL_FALSE,           // normalized?
                0,                  // stride
                (void*)(i*NSTORE*sizeof(float))            // array buffer offset
            );
        }


        // Draw the dots !
        glDrawArrays(GL_POINTS, 0, NDRAW);


        for (i=0; i<3; i++)
            glDisableVertexAttribArray(i);


        // Swap buffers
        glfwSwapBuffers(window);


        glfwPollEvents();

        fcnt++;
        if (fcnt==DOFRAMES)  break;

    } // Check if the ESC key was pressed or the window was closed
    while( glfwGetKey(window, GLFW_KEY_ESCAPE ) != GLFW_PRESS
#if SIZE_PERCENT != 100
         &&  glfwWindowShouldClose(window) == 0
#endif
           );


    // Cleanup VBO    
    glDeleteBuffers(1, &vertexbuffer);
    glDeleteVertexArrays(1, &VertexArrayID);
    glDeleteProgram(programID);

    // Close OpenGL window and terminate GLFW
    glfwTerminate();

    return 0;
}

And the vertex + fragment shaders look like this


#version 330 core

// Input vertex data, different for all executions of this shader.
layout(location = 0) in float myx;
layout(location = 1) in float myy;
layout(location = 2) in float myz;

uniform mat4 MVP;


void main()
{
    gl_Position = MVP * vec4(myx,myy,myz,1.0);
}

#version 330 core

uniform float intrinsic_brightness;

// Implicitly declared
// in vec4 gl_FragCoord;    w component is 1/w_clip after the projection matrix, i.e. the reciprocal of the Z distance from the camera
out vec4 outcolor;

void main()
{
    // Divide by square of distance and compensate for monitor's SRGB gamma
    float brightness = pow(intrinsic_brightness * gl_FragCoord.w * gl_FragCoord.w, 1/2.2);
    outcolor = vec4(brightness, brightness, brightness, 1.0);
}

screenshot.png

OK, got it working with perfect points by changing some stuff that I just put out here for analysis. In principle, as has been said, it is because of different resolutions of the frame buffer and the window size.

Here's the code:



// Include GLEW
#include <GL/glew.h>
// Include GLFW
#include <GLFW/glfw3.h>
GLFWwindow* window;
// Include GLM
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
using namespace glm;
//#include <common/shader.hpp>
#include <iostream>
#include <vector>
#include <memory>
#include "Module.h"
#include "Program.h"

const float USER_DISTANCE_MM = 440.0f;  // eye to Screen
const float NEAR_MM = 100.0f;
const float FAR_MM = 1000.0f;
const float SIZE_PERCENT = 70.0f;  // 100=fullScreen

using namespace orf_n;

struct Size {
    float width, height;   // pixels
    int width_mm, height_mm;
    void init(Size ref, float percent) {
        float p = percent/100.0f;
        width = int(ref.width * p);
        height = int(ref.height * p);
        width_mm = ref.width_mm * p;
        height_mm = ref.height_mm * p;
    }
} Screen, Window;


int main( void )
{
    // Initialise GLFW
    if( !glfwInit() )
    {
        fprintf( stderr, "Failed to initialize GLFW\n" );
        getchar();
        return -1;
    }

    glfwWindowHint(GLFW_SAMPLES, 0);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);

    GLFWmonitor *monitor = glfwGetPrimaryMonitor();
    const GLFWvidmode *return_struct = glfwGetVideoMode(monitor);
    Screen.width = return_struct->width;   Screen.height = return_struct->height;
    if (Screen.width < 800 || Screen.height > 8192) {
        fprintf(stderr, "Got weird display resolution %f x %f", Screen.width, Screen.height);
        Screen.width = 1024;  Screen.height = 576;
        fprintf(stderr, "Defaulting to %f x %f", Screen.width, Screen.height);
    }

    /*glfwGetMonitorPhysicalSize(monitor, &Screen.width_mm, &Screen.height_mm);
    if (Screen.width_mm < 120 || Screen.width_mm > 1000)
    {
        fprintf(stderr, "Got weird display size %d x %d mm", Screen.width_mm, Screen.height_mm);
        Screen.width_mm = 345;  Screen.height_mm = 194;
        fprintf(stderr, "Defaulting to %d x %d mm", Screen.width_mm, Screen.height_mm);
    }
    Window.init(Screen, SIZE_PERCENT);*/

    // Open a window and create its OpenGL context
	window = glfwCreateWindow( 800, 600, "Hi !", nullptr, nullptr );
    if( window == NULL ){
        std::cout << "Window jammed ..." << std::endl;
        glfwTerminate();
        return EXIT_FAILURE;
    }
    glfwMakeContextCurrent(window);

    // Initialize GLEW
    glewExperimental = true; // Needed for core profile
    if (glewInit() != GLEW_OK) {
        fprintf(stderr, "Failed to initialize GLEW\n");
        getchar();
        glfwTerminate();
        return -1;
    }

    // Ensure we can capture the escape key being pressed below
    glfwSetInputMode(window, GLFW_STICKY_KEYS, GL_TRUE);
	glfwSwapInterval( 1 );

    // Black background
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);


    GLuint VertexArrayID;
    glGenVertexArrays(1, &VertexArrayID);
    glBindVertexArray(VertexArrayID);

    // Create and compile our GLSL program from the shaders
    std::vector<std::shared_ptr<Module>> modules;
    modules.push_back( std::make_shared<Module>( GL_VERTEX_SHADER, "src/vert.glsl" ) );
    modules.push_back( std::make_shared<Module>( GL_FRAGMENT_SHADER, "src/frag.glsl" ) );
    Program *shaderProg = new Program( modules );
    GLuint programID{ shaderProg->getProgram() };

    const int NSTORE = 16384;
    const int XSTART = 0*NSTORE;
    const int YSTART = 1*NSTORE;
    const int ZSTART = 2*NSTORE;

    GLfloat g_vertexes[3*NSTORE];


    GLuint vertexbuffer;
    glGenBuffers(1, &vertexbuffer);
    glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
    //glBufferData(GL_ARRAY_BUFFER, sizeof(g_vertexes), NULL, GL_DYNAMIC_DRAW);  // allocate memory on GPU
    glBufferData(GL_ARRAY_BUFFER, sizeof(g_vertexes), NULL, GL_STATIC_DRAW);

    GLuint MatrixID = glGetUniformLocation(programID, "MVP");
    GLuint BrightnessID = glGetUniformLocation(programID, "intrinsic_brightness");

    // Use our shader
    glUseProgram(programID);

    glUniform1f(BrightnessID, NEAR_MM*NEAR_MM);  // Brightness/NEAR_MM/NEAR_MM == 1.0 = maximum brightness
    glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);

    //glDisable(GL_POINT_SMOOTH);
    //glPointSize(1);  // smaller than 1 makes no difference

    float rot = 0;
    int i;
    int fcnt=0;
    const int DOFRAMES=600;

    do {

        // Clear the Screen
        glClear( GL_COLOR_BUFFER_BIT );

        // Model matrix : rotate slowly around Z axis
        glm::mat4 Model      = glm::rotate(glm::mat4(1.0f), rot/2, glm::vec3(0.0f,0.0f,1.0f));

        // Camera matrix
        glm::mat4 View       = glm::lookAt(glm::vec3(0.0f,0.0f,USER_DISTANCE_MM), // Position of Camera in World Space
                                           glm::vec3(0.0f, 0.0f, 0.0f), // What the camera is looking at
                                           glm::vec3(0.0f,1.0f,0.0f)  // Head is up
                                          );

        // Projection matrix : Vertical Field of View, square pixel ratio, display range : 100 - 1000 mm
        // glm::mat4 Projection = glm::perspective(asin(Window.height_mm/2/USER_DISTANCE_MM)*2, Window.width/Window.height, NEAR_MM, FAR_MM);
        glm::mat4 Projection = glm::perspective( glm::radians( 15.0f ), 800.0f/600.0f, NEAR_MM, FAR_MM);

        // Our ModelViewProjection : multiplication of our 3 matrices
        glm::mat4 MVP        = Projection * View * Model; // Order matters

        glUniformMatrix4fv(MatrixID, 1, GL_FALSE, &MVP[0][0]);


        rot += 0.01f;


        // Create the model: a regular 2D array of points spaced 3 mm apart
        int NDRAW =0;
        for (int y=-60; y<=60; y+=3 ) {
            for (int x=-60; x<=60; x+=3) {
               g_vertexes[XSTART+NDRAW] = x;
               g_vertexes[YSTART+NDRAW] = y;
               g_vertexes[ZSTART+NDRAW] = 0;
               NDRAW++;
            }
        }


        // memcpy vertex coordinates to GPU
        for (i=0; i<3; i++) {
            glBufferSubData(GL_ARRAY_BUFFER, i*NSTORE*sizeof(float), NDRAW*sizeof(float), &g_vertexes[i*NSTORE]);
            glEnableVertexAttribArray(i);
            glVertexAttribPointer(
                i,                  // attribute index i (0, 1 or 2); must match the layout(location) in the shader
                1,                  // size
                GL_FLOAT,           // type
                GL_FALSE,           // normalized?
                0,                  // stride
                (void*)(i*NSTORE*sizeof(float))            // array buffer offset
            );
        }


        // Draw the dots !
        glDrawArrays(GL_POINTS, 0, NDRAW);


        for (i=0; i<3; i++)
            glDisableVertexAttribArray(i);


        // Swap buffers
        glfwSwapBuffers(window);


        glfwPollEvents();

        fcnt++;
        if (fcnt==DOFRAMES)  break;

    } // Check if the ESC key was pressed or the window was closed
    while( glfwGetKey(window, GLFW_KEY_ESCAPE ) != GLFW_PRESS
#if SIZE_PERCENT != 100
         &&  glfwWindowShouldClose(window) == 0
#endif
           );


    // Cleanup VBO
    glDeleteBuffers(1, &vertexbuffer);
    glDeleteVertexArrays(1, &VertexArrayID);
    glDeleteProgram(programID);

    // Close OpenGL window and terminate GLFW
    glfwTerminate();

    return 0;
}

 

Sorry for the C++/C chaos ... but I think you get the point (hahaha) of it: if using windowed mode, set the window size manually. Use GLFW's callbacks to resize (not implemented here) and feed the window sizes to the lookAt/projection matrices.
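The callback part could look roughly like this (just a sketch; framebufferSizeCallback and g_aspect are made-up names for illustration):

static float g_aspect = 800.0f / 600.0f;

static void framebufferSizeCallback( GLFWwindow * /*win*/, int width, int height ) {
    glViewport( 0, 0, width, height );             // match the new framebuffer size
    g_aspect = float( width ) / float( height );   // feed this into glm::perspective next frame
}

// after glfwMakeContextCurrent( window ):
glfwSetFramebufferSizeCallback( window, framebufferSizeCallback );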

N.b.: please consider doing epsilon comparisons instead of floating-point == comparisons!
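For example, instead of comparing SIZE_PERCENT == 100 directly, something like (illustration only):

#include <cmath>
bool fullscreen = std::fabs( SIZE_PERCENT - 100.0f ) < 0.01f;  // tolerance instead of exact ==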

If something's unclear, just ask.

Edit: don't be confused by the shader compiling stuff, I just copied in my own routines to get it running quickly with error messages etc.

And please consider computing the color in the shader based on something other than gl_FragCoord.w; gl_FragCoord.w is related to the clip-space position, I think ... (?)

fragment shader code:

// float brightness = pow(intrinsic_brightness * gl_FragCoord.w * gl_FragCoord.w, 1/2.2);
    float brightness = 1.0f; // do awesome colours here.

This may be a superfluous remark, but in general one would not prepare the vertex data and buffers inside the render loop if the data does not change between frames ...
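Against the posted code, that would mean roughly this (a sketch, not a drop-in patch): fill g_vertexes, upload it and set the attribute pointers once before the do/while loop, and keep only the uniform update and the draw call inside it.

// one-time setup, before the render loop
int NDRAW = 0;
for ( int y = -60; y <= 60; y += 3 )
    for ( int x = -60; x <= 60; x += 3 ) {
        g_vertexes[XSTART + NDRAW] = x;
        g_vertexes[YSTART + NDRAW] = y;
        g_vertexes[ZSTART + NDRAW] = 0;
        NDRAW++;
    }
for ( int i = 0; i < 3; i++ ) {
    glBufferSubData( GL_ARRAY_BUFFER, i*NSTORE*sizeof(float), NDRAW*sizeof(float), &g_vertexes[i*NSTORE] );
    glEnableVertexAttribArray( i );
    glVertexAttribPointer( i, 1, GL_FLOAT, GL_FALSE, 0, (void*)( i*NSTORE*sizeof(float) ) );
}

// per frame: just update the MVP uniform and call glDrawArrays( GL_POINTS, 0, NDRAW );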

Ah, thank you Green_Baron, now the points look better!  However, after some experimentation I find that it was not the manually set window size that did it: what made it work was setting GLFW_SAMPLES to 0 rather than 1.

As per https://stackoverflow.com/questions/10389040/what-does-the-1-w-coordinate-stand-for-in-gl-fragcoord

gl_FragCoord.w is 1/Wc, where Wc is the gl_Position.w output by the vertex shader. The projection matrix is supposed to move the camera-space z over to w. Thus gl_FragCoord.w should be the (inverse of the) distance from the camera to the vertex along the Z axis in camera space, so I don't think it is used incorrectly in my fragment shader.
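In other words (a small GLSL sketch of that reasoning, equivalent to the inverse-square part of the posted fragment shader, before the gamma correction):

float distToCamera = 1.0 / gl_FragCoord.w;                                   // camera-space Z distance (mm here)
float brightness   = intrinsic_brightness / ( distToCamera * distToCamera ); // inverse-square falloff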

On my monitor, the image is stretched and the points are commas, regardless of the sample setting, and things are not redrawn correctly on resize. Colour is usually set via lighting calculations and material properties, not via clip-space/NDC-dependent transformations, because those will be different on other setups.

Leave the window handling to GLFW: take out the two blocks that determine monitor capabilities, open the window with a single call, and use the window resize callback to feed the new values to your matrices.

If you want to do awesome colours, use Phong, Blinn, shadow/emissive/normal maps, PBR, raytracing, etc.

Edit: and take care with if( float == 100 ) ... it may be true more rarely than one thinks.

Quote

as has been said, it is because of different resolutions of the frame buffer and the window size.

And not even a +1, sad panda.

Edited: Thank you

This topic is closed to new replies.
