Shawn619

OpenGL GLSL not working at long distance


It looks like my shader behaves differently when my camera is at short range (camera ~5 units) versus long range (camera >5 units).

In this picture, my blue cube light's toon shader hits the plane quad at about 40 degrees from the plane and produces a light pink, which is the correct color. At this point you can see my camera is ~5 units from the plane.

[screenshot: 2i0xc3t.jpg]

In this picture, the plane shows its base color, as if the light isn't even there, even though all I did was move my camera out 20 units. It SHOULD be light pink, just as when the camera was close to the plane.

[screenshot: 1440fo4.jpg]

This has never been a problem with OpenGL's fixed-function lighting.

.vert

varying vec3 normal;
varying vec4 c;

void main()
{
    c = gl_Color;
    normal = gl_NormalMatrix * gl_Normal;
    gl_Position = ftransform();
}


.frag

varying vec3 normal;
varying vec4 c;

void main()
{
    float intensity;
    vec4 color;
    vec3 n = normalize(normal);
    vec3 lightvector = normalize(vec3(gl_LightSource[0].position));
    intensity = dot(lightvector, n);

    if (intensity > 0.90)
        color = c + vec4(1.0, 0.5, 0.5, 1.0);
    else if (intensity > 0.5)
        color = c + vec4(0.6, 0.3, 0.3, 1.0);
    else if (intensity > 0.2)
        color = c + vec4(0.4, 0.2, 0.2, 1.0);
    else
        color = c + vec4(0.2, 0.1, 0.1, 1.0);

    gl_FragColor = color;
}

I see you still have not implemented Ignifex's fix from your practically identical thread? I still believe, as well, that the light vector should be something along the lines of:
lightvector = normalize(lightPosition - vertexPosition);

Trust me, I'll attempt anything that has a possibility of working. I confirmed it didn't work in my program or in my shader designer; it just produced weird results.

[screenshot: 10xuaag.jpg]

His shader suggestion:

.vert

varying vec3 normal;
varying vec4 c;
varying vec4 vertPos;

void main()
{
    c = gl_Color;
    vertPos = gl_Vertex;
    normal = gl_NormalMatrix * gl_Normal;
    gl_Position = ftransform();
}


.frag

varying vec3 normal;
varying vec4 c;
varying vec4 vertPos;

void main()
{
    float intensity;
    vec4 color;
    vec3 n = normalize(normal);
    vec3 lightvector = normalize(vec3(gl_LightSource[0].position.xyz - vertPos.xyz));
    intensity = dot(lightvector, n);

    if (intensity > 0.90)
        color = c + vec4(1.0, 0.5, 0.5, 1.0);
    else if (intensity > 0.5)
        color = c + vec4(0.6, 0.3, 0.3, 1.0);
    else if (intensity > 0.2)
        color = c + vec4(0.4, 0.2, 0.2, 1.0);
    else
        color = c + vec4(0.2, 0.1, 0.1, 1.0);

    gl_FragColor = color;
}

Despite the fact that the results are not yet as desired, Ignifex's solution is definitely more correct. One thing that comes to mind immediately is that you are using the raw input vertex position for lighting. Usually I would expect the vertex position to be transformed by the modelview matrix before it is used for lighting.
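
For example, a minimal sketch of that idea in the vertex shader, passing the eye-space position down to the fragment shader alongside the normal (the varying name ecPos is only illustrative, not something from your code):

varying vec3 normal;
varying vec4 c;
varying vec3 ecPos;   // vertex position in eye (view) space

void main()
{
    c = gl_Color;
    // Transform the vertex by the modelview matrix so lighting can be done in eye space.
    ecPos = vec3(gl_ModelViewMatrix * gl_Vertex);
    normal = gl_NormalMatrix * gl_Normal;
    gl_Position = ftransform();
}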

It's important to have some understanding of the lighting equation you are using.

The dot product you are computing is Lambert's cosine law, stating that the amount of light that hits a surface is proportional to the cosine of the angle at which it is hit. This dot product is computed between the light vector and the surface normal, both normalized so that the result is the cosine of the angle between them.

This light vector is not the light position "vector", which is hardly a vector at all, just a position. It is the vector pointing from the shaded point to the light source. The shaded point is easiest to find by interpolating your vertex positions. Since your normal and vertices are likely in world space, you will need your light source in world space as well, to get correct, view-independent results.
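
To make that concrete, here is a rough fragment-shader sketch of Lambert's law with a proper per-fragment light vector. It assumes the vertex shader passes an eye-space position (called ecPos here, an illustrative name) and that gl_LightSource[0].position is already in that same eye space:

varying vec3 normal;
varying vec4 c;
varying vec3 ecPos;   // interpolated eye-space position of the shaded point

void main()
{
    vec3 n = normalize(normal);
    // Vector from the shaded point towards the light, both in eye space.
    vec3 l = normalize(gl_LightSource[0].position.xyz - ecPos);
    // Lambert's cosine law; clamped so surfaces facing away receive no light.
    float intensity = max(dot(n, l), 0.0);
    gl_FragColor = vec4(c.rgb * intensity, c.a);
}

The toon banding from your original shader can then be applied to this intensity instead of the raw dot product.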

Is this what you're saying I need to do? (have the light in world space)

.vert

varying vec3 normal;
varying vec4 c;
varying vec4 vertPos;

void main()
{
    c = gl_Color;
    vertPos = gl_Vertex;
    normal = gl_NormalMatrix * gl_Normal;
    gl_Position = ftransform();
}


.frag

varying vec3 normal;
varying vec4 c;
varying vec4 vertPos;

void main()
{
    vec4 lightPos = vec4(gl_LightSource[0].position) * gl_ModelViewMatrix;
    float intensity;
    vec4 color;
    vec3 n = normalize(normal);
    vec3 lightvector = normalize(lightPos.xyz - vertPos.xyz);
    intensity = dot(lightvector, n);

    if (intensity > 0.90)
        color = c + vec4(1.0, 0.5, 0.5, 1.0);
    else if (intensity > 0.5)
        color = c + vec4(0.6, 0.3, 0.3, 1.0);
    else if (intensity > 0.2)
        color = c + vec4(0.4, 0.2, 0.2, 1.0);
    else
        color = c + vec4(0.2, 0.1, 0.1, 1.0);

    gl_FragColor = color;
}

No. Your light and vertex need to be in the same space. gl_Vertex is in model/object space; multiply it by the model matrix to put it into the world (i.e. each model you can see has its own matrix). From there you can multiply it into any camera's view space. Same with the sun: the sun's vector starts in world space, and multiplying it by any camera's view matrix makes it relative to that camera. The light should be in camera space (the sun does not know anything about objects in the world, so there is no model matrix involved, just the view). Then take the vertex/normal in view space, and both vectors are in view space.
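
As a rough illustration of that chain (the uniforms uModelMatrix, uViewMatrix and uLightPosWorld are hypothetical names, not anything from your code), a vertex shader that brings both the vertex and the light into the same view space might look like this:

uniform mat4 uModelMatrix;    // object/model space -> world space (one per model)
uniform mat4 uViewMatrix;     // world space -> the camera's view space
uniform vec3 uLightPosWorld;  // light position defined in world space

varying vec3 viewPos;         // vertex position in view space
varying vec3 viewNormal;      // normal in view space
varying vec3 lightPosView;    // light position in the same (view) space

void main()
{
    vec4 worldPos = uModelMatrix * gl_Vertex;             // model -> world
    viewPos       = vec3(uViewMatrix * worldPos);         // world -> view
    // w = 0.0 drops the translation; assumes no non-uniform scaling of the model.
    viewNormal    = vec3(uViewMatrix * uModelMatrix * vec4(gl_Normal, 0.0));
    // The light has no model matrix of its own; it goes straight from world to view.
    lightPosView  = vec3(uViewMatrix * vec4(uLightPosWorld, 1.0));
    gl_Position   = gl_ProjectionMatrix * vec4(viewPos, 1.0);
}

In the fragment shader, normalize(lightPosView - viewPos) and the normal are then guaranteed to be in the same space. (In practice you would usually transform the light position once on the CPU and upload it as a uniform rather than per vertex.)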

You can also do all your computations in view space of course, which works perfectly fine. It is often even a little simpler because of the combined model and view matrices.
A brief overview of the spaces you are using:
Object Space -> World Space -> View Space -> Clip Space
You convert between these like so:
Object Space -[ Modelview Matrix ]-> View Space -[ Perspective Matrix ]-> Clip Space
At the moment your lighting computation seems to have all parts properly in view space.

One thing to look out for though is gl_LightSource[0].position. If you set this using something like glLightfv(GL_LIGHT0, GL_POSITION, mylightposition), the mylightposition you provide is multiplied by the current modelview matrix before being sent to the GPU. That gives you two options.
First, you can set it after setting up your camera's modelview matrix, in which case you don't have to transform it in your shader, but you will need to update it whenever your modelview matrix changes.
Second, you can use an identity modelview matrix when setting up your light position, which means you will be transforming it into view space in your shader, but you will only need to update it when the light source moves.
Note that multiplying in your shader for each shaded pixel is rather expensive, so I would go for the first choice.
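
A rough C-side sketch of the first option (the camera and light values are placeholders; only the ordering relative to glLightfv matters):

#include <GL/gl.h>
#include <GL/glu.h>

/* Option 1: set the light position AFTER the camera's modelview matrix is in
 * place, so the fixed-function pipeline stores it in view space for the shader. */
void setupCameraAndLight(void)
{
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(0.0, 5.0, 25.0,   /* eye (placeholder values) */
              0.0, 0.0, 0.0,    /* look-at point */
              0.0, 1.0, 0.0);   /* up vector */

    /* w = 1.0 makes this a positional light. The position is multiplied by the
     * current modelview matrix before it reaches gl_LightSource[0].position. */
    GLfloat lightPos[4] = { 10.0f, 10.0f, 10.0f, 1.0f };
    glLightfv(GL_LIGHT0, GL_POSITION, lightPos);
}

Remember to call this again whenever the camera moves, since the stored position is baked into view space.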

If any of this does not make sense to you, please let us know.
