drhex

Member
  • Content Count: 3
  • Joined
  • Last visited

Community Reputation: 0 Neutral

About drhex

  • Rank: Newbie

Personal Information

  • Interests: Programming


  1. Ah, thank you Green_Baron, now the points look better! However, after some experimentation I find that it was not the manually set window size that did it: what made it work was setting GLFW_SAMPLES to 0 rather than 1. As per https://stackoverflow.com/questions/10389040/what-does-the-1-w-coordinate-stand-for-in-gl-fragcoord , gl_FragCoord.w is 1/Wc, where Wc is the gl_Position.w output by the vertex shader. The projection matrix is supposed to move the camera-space z over to w. Thus gl_FragCoord.w should be the (inverse of the) distance from the camera to the vertex along the Z axis in camera space, so I don't think it is used incorrectly in my fragment shader (a small check of this is sketched below).
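     For reference, here is a small standalone check of that relationship, written against GLM with its default right-handed setup (the numbers are purely illustrative, not taken from the thread): a glm::perspective matrix copies the camera-space distance along -Z into clip-space w, so gl_FragCoord.w = 1/w_clip is the reciprocal of that distance.

         // Sketch: verify that clip.w from glm::perspective equals the camera-space
         // distance along -Z, so gl_FragCoord.w (= 1/w_clip) is 1/distance.
         #include <cstdio>
         #include <glm/glm.hpp>
         #include <glm/gtc/matrix_transform.hpp>

         int main() {
             glm::mat4 Projection = glm::perspective(glm::radians(25.0f), 16.0f / 9.0f, 100.0f, 1000.0f);
             glm::vec4 p_view(30.0f, -20.0f, -440.0f, 1.0f);               // 440 mm in front of the camera
             glm::vec4 p_clip = Projection * p_view;
             printf("clip.w = %.1f, -view.z = %.1f\n", p_clip.w, -p_view.z); // both 440.0
             printf("gl_FragCoord.w would be %.6f\n", 1.0f / p_clip.w);      // 1/440
             return 0;
         }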
  2. My eyes are not what they used to be either :-) I'm not blaming the monitor for the smearing here - I take a screenshot and magnify it to verify the results. The points of the model are eventually supposed to be stars that make up a galaxy, so many points can fall on the same on-screen pixel and should then be added up. I suppose that can be handled with the proper glBlendFunc (a sketch of that follows after the shaders below), but first the rasterizer has to be coerced into plotting single pixels. Following Green_Baron's hints, I'm using

         glfwWindowHint(GLFW_SAMPLES, 1);
         glDisable(GL_POINT_SMOOTH);
         glPointSize(1);

     As seen in the attached screenshot, points still spread out to 2 pixels horizontally, vertically or diagonally. Here's a cleaned-up version of the source code:

         // Include standard headers
         #include <stdio.h>
         #include <stdlib.h>

         // Include GLEW
         #include <GL/glew.h>

         // Include GLFW
         #include <glfw3.h>
         GLFWwindow* window;

         // Include GLM
         #include <glm/glm.hpp>
         #include <glm/gtc/matrix_transform.hpp>
         using namespace glm;

         #include <common/shader.hpp>

         const float USER_DISTANCE_MM = 440;   // eye to screen
         const float NEAR_MM = 100;
         const float FAR_MM = 1000;
         const float SIZE_PERCENT = 70;        // 100 = fullscreen

         struct Size {
             float width, height;              // pixels
             int width_mm, height_mm;
             void init(Size ref, float percent) {
                 float p = percent / 100;
                 width = int(ref.width * p);
                 height = int(ref.height * p);
                 width_mm = ref.width_mm * p;
                 height_mm = ref.height_mm * p;
             }
         } Screen, Window;

         int main(void)
         {
             // Initialise GLFW
             if (!glfwInit()) {
                 fprintf(stderr, "Failed to initialize GLFW\n");
                 getchar();
                 return -1;
             }

             glfwWindowHint(GLFW_SAMPLES, 1);
             glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
             glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
             glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);

             GLFWmonitor *monitor = glfwGetPrimaryMonitor();
             const GLFWvidmode *return_struct = glfwGetVideoMode(monitor);
             Screen.width = return_struct->width;
             Screen.height = return_struct->height;
             if (Screen.width < 800 || Screen.height > 8192) {
                 fprintf(stderr, "Got weird display resolution %f x %f", Screen.width, Screen.height);
                 Screen.width = 1024;
                 Screen.height = 576;
                 fprintf(stderr, "Defaulting to %f x %f", Screen.width, Screen.height);
             }
             glfwGetMonitorPhysicalSize(monitor, &Screen.width_mm, &Screen.height_mm);
             if (Screen.width_mm < 120 || Screen.width_mm > 1000) {
                 fprintf(stderr, "Got weird display size %d x %d mm", Screen.width_mm, Screen.height_mm);
                 Screen.width_mm = 345;
                 Screen.height_mm = 194;
                 fprintf(stderr, "Defaulting to %d x %d mm", Screen.width_mm, Screen.height_mm);
             }
             Window.init(Screen, SIZE_PERCENT);

             // Open a window and create its OpenGL context
             window = glfwCreateWindow(Window.width, Window.height, "$DR.HEX$ Galaxies",
                                       SIZE_PERCENT == 100 ? monitor : NULL, NULL);
             if (window == NULL) {
                 fprintf(stderr, "Failed to open GLFW window. If you have an Intel GPU, they are not 3.3 compatible. Try the 2.1 version of the tutorials.\n");
                 getchar();
                 glfwTerminate();
                 return -1;
             }
             glfwMakeContextCurrent(window);

             // Initialize GLEW
             glewExperimental = true; // Needed for core profile
             if (glewInit() != GLEW_OK) {
                 fprintf(stderr, "Failed to initialize GLEW\n");
                 getchar();
                 glfwTerminate();
                 return -1;
             }

             // Ensure we can capture the escape key being pressed below
             glfwSetInputMode(window, GLFW_STICKY_KEYS, GL_TRUE);

             // Black background
             glClearColor(0.0f, 0.0f, 0.0f, 0.0f);

             GLuint VertexArrayID;
             glGenVertexArrays(1, &VertexArrayID);
             glBindVertexArray(VertexArrayID);

             // Create and compile our GLSL program from the shaders
             GLuint programID = LoadShaders("SimpleVertexShader.vertexshader", "PowFade.fragmentshader");

             const int NSTORE = 16384;
             const int XSTART = 0 * NSTORE;
             const int YSTART = 1 * NSTORE;
             const int ZSTART = 2 * NSTORE;
             GLfloat g_vertexes[3 * NSTORE];

             GLuint vertexbuffer;
             glGenBuffers(1, &vertexbuffer);
             glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
             glBufferData(GL_ARRAY_BUFFER, sizeof(g_vertexes), NULL, GL_DYNAMIC_DRAW); // allocate memory on GPU

             GLuint MatrixID = glGetUniformLocation(programID, "MVP");
             GLuint BrightnessID = glGetUniformLocation(programID, "intrinsic_brightness");

             // Use our shader
             glUseProgram(programID);
             glUniform1f(BrightnessID, NEAR_MM * NEAR_MM); // Brightness/NEAR_MM/NEAR_MM == 1.0 = maximum brightness
             glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);

             glDisable(GL_POINT_SMOOTH);
             glPointSize(1); // smaller than 1 makes no difference

             float rot = 0;
             int i;
             int fcnt = 0;
             const int DOFRAMES = 600;

             do {
                 // Clear the screen
                 glClear(GL_COLOR_BUFFER_BIT);

                 // Model matrix : rotate slowly around Z axis
                 glm::mat4 Model = glm::rotate(glm::mat4(1.0f), rot / 2, glm::vec3(0, 0, 1));
                 // Camera matrix
                 glm::mat4 View = glm::lookAt(glm::vec3(0, 0, USER_DISTANCE_MM), // Position of camera in world space
                                              glm::vec3(0, 0, 0),                // What the camera is looking at
                                              glm::vec3(0, 1, 0)                 // Head is up
                                             );
                 // Projection matrix : vertical field of view, square pixel ratio, display range : 100 - 1000 mm
                 glm::mat4 Projection = glm::perspective(asin(Window.height_mm / 2 / USER_DISTANCE_MM) * 2,
                                                         Window.width / Window.height, NEAR_MM, FAR_MM);
                 // Our ModelViewProjection : multiplication of our 3 matrices
                 glm::mat4 MVP = Projection * View * Model; // Order matters
                 glUniformMatrix4fv(MatrixID, 1, GL_FALSE, &MVP[0][0]);
                 rot += 0.01f;

                 // Create the model: a regular 2D array of points spaced 3 mm apart
                 int NDRAW = 0;
                 for (int y = -60; y <= 60; y += 3) {
                     for (int x = -60; x <= 60; x += 3) {
                         g_vertexes[XSTART + NDRAW] = x;
                         g_vertexes[YSTART + NDRAW] = y;
                         g_vertexes[ZSTART + NDRAW] = 0;
                         NDRAW++;
                     }
                 }

                 // memcpy vertex coordinates to GPU
                 for (i = 0; i < 3; i++) {
                     glBufferSubData(GL_ARRAY_BUFFER, i * NSTORE * sizeof(float), NDRAW * sizeof(float), &g_vertexes[i * NSTORE]);
                     glEnableVertexAttribArray(i);
                     glVertexAttribPointer(
                         i,                  // attribute index; must match the layout in the shader
                         1,                  // size
                         GL_FLOAT,           // type
                         GL_FALSE,           // normalized?
                         0,                  // stride
                         (void*)(i * NSTORE * sizeof(float)) // array buffer offset
                     );
                 }

                 // Draw the dots !
                 glDrawArrays(GL_POINTS, 0, NDRAW);

                 for (i = 0; i < 3; i++)
                     glDisableVertexAttribArray(i);

                 // Swap buffers
                 glfwSwapBuffers(window);
                 glfwPollEvents();

                 fcnt++;
                 if (fcnt == DOFRAMES)
                     break;
             } // Check if the ESC key was pressed or the window was closed
             while (glfwGetKey(window, GLFW_KEY_ESCAPE) != GLFW_PRESS
         #if SIZE_PERCENT != 100
                    && glfwWindowShouldClose(window) == 0
         #endif
                   );

             // Cleanup VBO
             glDeleteBuffers(1, &vertexbuffer);
             glDeleteVertexArrays(1, &VertexArrayID);
             glDeleteProgram(programID);

             // Close OpenGL window and terminate GLFW
             glfwTerminate();

             return 0;
         }

     And the vertex + fragment shaders look like this:

         #version 330 core
         // Input vertex data, different for all executions of this shader.
         layout(location = 0) in float myx;
         layout(location = 1) in float myy;
         layout(location = 2) in float myz;
         uniform mat4 MVP;
         void main() {
             gl_Position = MVP * vec4(myx, myy, myz, 1.0);
         }

         #version 330 core
         uniform float intrinsic_brightness;
         // Implicitly declared:
         // in vec4 gl_FragCoord; the w component is 1/w_clip after the projection matrix,
         // i.e. the reciprocal of the Z distance from the camera
         out vec4 outcolor;
         void main() {
             // Divide by square of distance and compensate for the monitor's sRGB gamma
             float brightness = pow(intrinsic_brightness * gl_FragCoord.w * gl_FragCoord.w, 1 / 2.2);
             outcolor = vec4(brightness, brightness, brightness, 1.0);
         }
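     As a quick sanity check of that fragment shader (my own standalone sketch, using the same NEAR_MM = 100 as in the program; the distances are illustrative): with intrinsic_brightness = NEAR_MM*NEAR_MM and gl_FragCoord.w = 1/distance, the output is pow((NEAR_MM/distance)^2, 1/2.2), i.e. 1.0 at the near plane and an inverse-square falloff before the gamma compensation.

         // Standalone check of the brightness formula used in the fragment shader.
         #include <cstdio>
         #include <cmath>

         int main() {
             const float NEAR_MM = 100.0f;
             const float intrinsic_brightness = NEAR_MM * NEAR_MM;
             for (float dist : {100.0f, 200.0f, 440.0f, 1000.0f}) {
                 float w = 1.0f / dist; // what gl_FragCoord.w would hold at this distance
                 float brightness = std::pow(intrinsic_brightness * w * w, 1.0f / 2.2f);
                 printf("distance %6.1f mm -> brightness %.3f\n", dist, brightness);
             }
             return 0;
         }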
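     And for the "add up stars that land on the same pixel" idea mentioned at the top of this post, a hedged sketch of what the additive-blending state could look like (a fragment, not a full program; it assumes the GL context and draw loop from the code above, and the placement is illustrative):

         glEnable(GL_BLEND);
         glBlendEquation(GL_FUNC_ADD);   // the default, shown for clarity
         glBlendFunc(GL_ONE, GL_ONE);    // dst = src + dst, so overlapping points accumulate
         // ... then draw as before:
         // glDrawArrays(GL_POINTS, 0, NDRAW);

     One caveat: summing the gamma-compensated values from the shader above is not the same as summing linear light; for a physically sensible accumulation the addition would ideally happen in linear space (for example into a floating-point framebuffer) with the gamma applied afterwards.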
  3. Hi, OpenGL beginner here. I'm trying to visualize a cloud of points whose X, Y, Z coordinates are sent to a VBO on the GPU and drawn with glDrawArrays(GL_POINTS, ...). A vertex shader applies the combined model-view-projection matrix that lets me rotate, place and scale the cloud, position and aim a camera, and choose my field of view. After the mandatory perspective divide, the vertices/points end up where they should on the screen, and the fragment shader provides the correct color.

     But: OpenGL draws pixel-sized squares rather than points. Depending on how the fractional on-screen coordinates fall on or between the integer pixel coordinates, my points smear out to affect 1-4 pixels each (the coverage arithmetic behind this is sketched below). The points in my model are supposed to be very small and so should only affect a single pixel each.

     What I have tried to solve the problem:

     1) Adjusting the coordinates before they are sent to the GPU. At first, when I was using matrices so simple that I was effectively doing 2D graphics, it was trivial to make minor adjustments to X and Y so that the points end up at the center of pixels, but I want to make full use of the possibilities of 3D and the model-view-projection trio.

     2) Adjusting the coordinates in the fragment shader. There is a gl_FragCoord.xyzw variable containing the on-screen coordinates, but it is read-only.

     3) Looking for a programmable "rasterization shader" that runs between the vertex and fragment shaders, but this step in the graphics pipeline seems to be hardwired.

     4) Selecting a smaller point size with glPointSize(). Using an argument of 10 gives me bigger points, so I know the call has an effect. glGetFloatv with GL_POINT_SIZE_MIN gives 0 and GL_SMOOTH_POINT_SIZE_GRANULARITY gives 0.125, but setting the point size to 0 or 0.125 gives the same result as setting it to 1. glEnable(GL_POINT_SMOOTH) did not help either.

     Graphics card: GeForce GT 560M from 2011, capable of OpenGL versions up to 4.6. NVIDIA driver 390.77 on Linux.
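     Here is a rough standalone sketch (my own illustration, not from the post) of the geometry behind that 1-4 pixel smear: when point smoothing or multisampling is in play, a size-1 point behaves roughly like a 1x1 square centred on its window-space position, and its area is split over whichever pixels it overlaps; only a point sitting exactly on a pixel centre stays inside a single pixel, which is consistent with the fix reported later in the thread of turning multisampling off.

         // Which pixels does a 1x1 point square centred at (px, py) overlap, and by how much?
         #include <cstdio>
         #include <algorithm>

         // Overlap of the interval [lo, hi] with the unit-wide pixel cell [cell, cell + 1].
         static float overlap1D(float lo, float hi, int cell) {
             return std::max(0.0f, std::min(hi, (float)cell + 1.0f) - std::max(lo, (float)cell));
         }

         static void report(float px, float py) {
             printf("point centre (%.2f, %.2f):\n", px, py);
             for (int j = (int)(py - 1.0f); j <= (int)(py + 1.0f); ++j)
                 for (int i = (int)(px - 1.0f); i <= (int)(px + 1.0f); ++i) {
                     float c = overlap1D(px - 0.5f, px + 0.5f, i) * overlap1D(py - 0.5f, py + 0.5f, j);
                     if (c > 0.0f)
                         printf("  pixel (%d,%d) coverage %.2f\n", i, j, c);
                 }
         }

         int main() {
             report(100.5f, 100.5f); // exactly on a pixel centre: one pixel, full coverage
             report(100.2f, 100.5f); // between centres horizontally: two pixels share it
             report(100.0f, 100.0f); // on a pixel corner: four pixels at 25% each
             return 0;
         }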