Inverse fog, Ugly interpolation

3 comments, last by Aliii 10 years, 5 months ago

I tried to do something like inverse fog, where the closer the points are to you, the more fog they get. But it's not good: I can see the triangles of the surface.

I also tried it with a surface that looks like a spider web, where you stand in the middle, but there I could see the horizontal lines. In the picture I've changed the contrast, so in the game it's not as visible, but it's still ugly enough.

I've tried it with a higher surface resolution, but it didn't really help. Is there a workaround for this, or is this just the way interpolation works?

[Attached image: surface.png]


It looks like the fog values are being calculated per vertex. If you have control of the vertex/pixel shaders, you could try changing it so that the distance from the eye position is calculated in the vertex shader (as this will interpolate linearly quite well), and then the fog equation is applied on a per-pixel basis.
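
For example, a minimal sketch of that split (the uniform names and the exponential fog curve here are just illustrative, not your setup):

Vertex shader:


#version 330 core

layout(location = 0) in vec3 vertex_pos;

uniform mat4    MV;     //model-view matrix (illustrative name)
uniform mat4    P;

out float   eye_dist;   //distance from the eye; this interpolates almost linearly

void main(){

    vec4    pos_eye = MV * vec4( vertex_pos, 1.0);
    eye_dist =        length( pos_eye.xyz);        //the eye is at the origin in eye space

    gl_Position = P * pos_eye;

}

Fragment shader:


#version 330 core

in float    eye_dist;

uniform vec4    base_color;     //illustrative
uniform vec4    fog_color;      //illustrative
uniform float   fog_density;    //illustrative

out vec4    frag_color;

void main(){

    //apply the fog equation per pixel, using the interpolated distance;
    //closer fragments get more fog ("inverse" fog)
    float   fog =   exp( -fog_density * eye_dist);

    frag_color =    mix( base_color, fog_color, fog);

}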

If you're using some technology that doesn't let you control the shaders, it might be trickier: you'll probably end up using more triangles, or a fog that blends in more gently so the artifacts are not as visible.

Thanks! Actually, I'm trying to create a water surface where the fog level (originally the alpha component) depends on the angle at which you see the surface (the vertices).

This is a simplified shader, but it produces the same thing:


#version 330 core

layout(location = 0) in vec3 vertex_pos;
layout(location = 1) in vec3 vertex_color;

uniform mat4    V_rot;
uniform mat4    V_tran;
uniform mat4    P;

out vec4   color_VOUT;

void main(){

    vec4    pos_TRAN_ONLY =     V_tran * vec4( vertex_pos, 1);    
    float   pos_dist =          length( pos_TRAN_ONLY.xyz);

    float   alpha_level = 1.0 - abs( pos_TRAN_ONLY.z) / pos_dist;   //1 - sin( alpha)

    color_VOUT =    vec4( vertex_color, 1);    
    color_VOUT.r =  alpha_level;                                    //set the RED component, so it's more visible.

    gl_Position = P * (V_rot * pos_TRAN_ONLY);

}

Sorry, I don't get why passing the distance instead of the color would be interpolated any better.

Now that I think about it, it could be that the color (which depends on the distance) is interpolated linearly along a given line of a triangle, but the distance from the camera is not linear along that line. Or something like that.

Edit:

The fragment shader doesn't do anything now; it just sets the output color.

When you pass data from your vertex shader to your pixel shader, the GPU will interpolate values linearly. In many cases (e.g. interpolating UV coordinates), this linear interpolation is correct. However, for other functions, the linear interpolation will introduce some errors. Calculations done on the vertex shader are much cheaper and part of writing shaders well is balancing the performance versus accuracy of doing calculations per vertex.

I suggested passing distance from camera to the pixel shader and doing the remaining calculations on the pixel shader. This is because the distance from camera is a big part of the fog calculation and it's pretty linear (although not perfectly so, imagine the case where you're standing on a very large triangle). If you really needed complete accuracy you could pass the vertex positions through to the pixel shader and do the entire calculation per pixel.
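
Applied to the shader you posted, that change could look roughly like this (just a sketch; pos_rel_cam and frag_color are illustrative names):

Vertex shader:


#version 330 core

layout(location = 0) in vec3 vertex_pos;
layout(location = 1) in vec3 vertex_color;

uniform mat4    V_rot;
uniform mat4    V_tran;
uniform mat4    P;

out vec3    pos_rel_cam;    //camera-relative position, interpolated per fragment
out vec3    color_VOUT;

void main(){

    vec4    pos_TRAN_ONLY = V_tran * vec4( vertex_pos, 1.0);

    pos_rel_cam =   pos_TRAN_ONLY.xyz;      //pass the position through, not the final value
    color_VOUT =    vertex_color;

    gl_Position = P * (V_rot * pos_TRAN_ONLY);

}

Fragment shader:


#version 330 core

in vec3     pos_rel_cam;
in vec3     color_VOUT;

out vec4    frag_color;

void main(){

    //same formula as before, but evaluated per pixel
    float   pos_dist =      length( pos_rel_cam);
    float   alpha_level =   1.0 - abs( pos_rel_cam.z) / pos_dist;  //1 - sin( alpha)

    frag_color = vec4( alpha_level, color_VOUT.gb, 1.0);           //RED shows the value, as in your version

}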


Thanks! It worked out perfectly.

This topic is closed to new replies.
