Screen-Space Reflections - OpenGL/GLSL - Any tips/help?


Hi,

I am trying to add an SSR post-process to my engine, and although I've found several samples on the net, one reason or another stops me from getting them working within my engine. Here is the basic shader I use, along with some questions. I'm fairly sure the shader itself would work; I'm just not sure I'm passing the correct data, as the tutorials/samples never included the C++ code, just the shader code.



#version 330 core

out vec3 color;

uniform mat4 invView;
uniform mat4 proj;
uniform mat4 invProj;
uniform mat4 view;



uniform sampler2D tDepth;
uniform sampler2D tNorm;
uniform sampler2D tFrame;


noperspective in vec2 UV;

const float rayStep = 0.1;          // renamed from "step", which shadows the GLSL built-in
const float minRayStep = 0.1;
const int maxSteps = 30;            // used as a loop bound, so int
const int numBinarySearchSteps = 5;
const float reflectionSpecularFalloffExponent = 3.0;

float Metallic;

#define Scale vec3(.8, .8, .8)
#define K 19.19


vec3 CalcViewPosition(in vec2 TexCoord)
{
    // Combine UV & depth into XY & Z (NDC)
    vec3 rawPosition                = vec3(TexCoord, texture(tDepth, TexCoord).r);

    // Convert from (0, 1) range to (-1, 1)
    vec4 ScreenSpacePosition        = vec4( rawPosition * 2 - 1, 1);

    // Undo Perspective transformation to bring into view space
    vec4 ViewPosition               = invProj * ScreenSpacePosition;

    // Perform perspective divide and return
    return                          ViewPosition.xyz / ViewPosition.w;
}




vec3 PositionFromDepth(float depth) {
    float z = depth * 2.0 - 1.0;

    vec4 clipSpacePosition = vec4(UV * 2.0 - 1.0, z, 1.0);
    vec4 viewSpacePosition = invProj * clipSpacePosition;

    // Perspective division
    viewSpacePosition /= viewSpacePosition.w;

    return viewSpacePosition.xyz;
}

vec3 BinarySearch(inout vec3 dir, inout vec3 hitCoord, inout float dDepth)
{
    float depth;

    vec4 projectedCoord;
 
    for(int i = 0; i < numBinarySearchSteps; i++)
    {

        projectedCoord = proj * vec4(hitCoord, 1.0);
        projectedCoord.xy /= projectedCoord.w;
        projectedCoord.xy = projectedCoord.xy * 0.5 + 0.5;
 
        // tDepth stores raw 0..1 depth, so reconstruct the view-space Z
        // before comparing against hitCoord.z (same units)
        depth = CalcViewPosition(projectedCoord.xy).z;

 
        dDepth = hitCoord.z - depth;

        dir *= 0.5;
        if(dDepth > 0.0)
            hitCoord += dir;
        else
            hitCoord -= dir;    
    }

    projectedCoord = proj * vec4(hitCoord, 1.0);
    projectedCoord.xy /= projectedCoord.w;
    projectedCoord.xy = projectedCoord.xy * 0.5 + 0.5;
 
    return vec3(projectedCoord.xy, depth);
}

vec4 RayMarch(vec3 dir, inout vec3 hitCoord, out float dDepth)
{

    dir *= rayStep;
 
 
    float depth;
    int steps = 0;
    vec4 projectedCoord;

 
    for(int i = 0; i < maxSteps; i++)
    {
        hitCoord += dir;
 
        projectedCoord = proj * vec4(hitCoord, 1.0);
        projectedCoord.xy /= projectedCoord.w;
        projectedCoord.xy = projectedCoord.xy * 0.5 + 0.5;
 
        // Same unit fix as in BinarySearch: compare view-space Z values
        depth = CalcViewPosition(projectedCoord.xy).z;
        if(depth <= -1000.0)        // skip far-plane / sky samples
            continue;
 
        dDepth = hitCoord.z - depth;

        if((dir.z - dDepth) < 1.2)
        {
            if(dDepth <= 0.0)
            {   
                vec4 Result;
                Result = vec4(BinarySearch(dir, hitCoord, dDepth), 1.0);

                return Result;
            }
        }
        
        steps++;
    }
 
    
    return vec4(projectedCoord.xy, depth, 0.0);
}

vec3 fresnelSchlick(float cosTheta, vec3 F0)
{
    return F0 + (1.0 - F0) * pow(1.0 - cosTheta, 5.0);
}


vec3 hash(vec3 a)
{
    a = fract(a * Scale);
    a += dot(a, a.yxz + K);
    return fract((a.xxy + a.yxx)*a.zyx);
}

void main(){

    // Assumes tNorm stores view-space normals (see the replies below about
    // writing them into a GL_RGB16F buffer). If it stores world-space
    // normals instead, transform them here with mat3(view).
    vec3 viewNormal = normalize(texture(tNorm, UV).xyz);
    vec3 viewPos = CalcViewPosition(UV);
    vec3 albedo = texture(tFrame,UV).rgb;

    Metallic = 0.4f;



    float spec = 0.2f;


    vec3 F0 = vec3(0.04);
    F0 = mix(F0, albedo, Metallic);
    vec3 Fresnel = fresnelSchlick(max(dot(normalize(viewNormal), normalize(viewPos)), 0.0), F0);

    vec3 reflected = normalize(reflect(normalize(viewPos), normalize(viewNormal)));

    vec3 hitPos = viewPos;
    float dDepth;

    // View space -> world space: matrix on the left (mat * vec, not vec * mat)
    vec3 wp = vec3(invView * vec4(viewPos, 1.0));
    vec3 jitt = mix(vec3(0.0), vec3(hash(wp)), spec);
    vec4 coords = RayMarch(jitt + reflected * max(minRayStep, -viewPos.z), hitPos, dDepth);
 
    


    vec2 dCoords = smoothstep(0.2, 0.6, abs(vec2(0.5, 0.5) - coords.xy));

    float screenEdgefactor = clamp(1.0 - (dCoords.x + dCoords.y), 0.0, 1.0);

    float ReflectionMultiplier = pow(Metallic, reflectionSpecularFalloffExponent) *
                screenEdgefactor *
                -reflected.z;
 
    // Get color
    vec3 SSR = texture(tFrame, coords.xy).rgb;// * clamp(ReflectionMultiplier, 0.0, 0.9) * Fresnel;  



    color = SSR;


}

So my questions are:

 

1) The matrices passed.

 
uniform mat4 invView; - This is the view matrix, meaning the world matrix of the camera? Before or after inversion?
uniform mat4 proj; - The projection matrix, i.e. the perspective matrix?
uniform mat4 invProj; - The inverse of the above?
uniform mat4 view; - The view matrix, not inverted?

 

2) The textures passed.

uniform sampler2D tDepth; - The depth of the scene, rendered from the camera position? Range 0..1, where 0 = minZ and 1.0 = maxZ?
uniform sampler2D tNorm; - The world-space normals as seen from the camera, encoded as 0.5 + normal * 0.5?
uniform sampler2D tFrame; - The final colour buffer to read the reflected pixel from?

 

I've tried several shaders, and tried re-implementing bits here and there based on other shaders, but the final result is always very off; I mean, just not working at all.

 

If you're not sure what's wrong with the above shader, could you link me to a working shader, preferably with the C++ code, so I can understand it fully, not just the final shader?

 

Cheers.

 

Thanks, Antony.


I also had problems with my SSR implementation, so you can look at what I got; it seems to give a good result: https://gitlab.com/congard/algine/blob/master/src/resources/shaders/fragment_screenspace_shader.glsl


Algine v1.4.1 beta-candidate

 

Cheers, looks good. Can you link me to the code that sets up the data for the inputs? I.e. the positions/normals. I understand it's in view space, but I'm not exactly sure how to set that up (i.e. how to define the matrices, and how to render those maps).

Thanks, Antony.

To create the matrices I used the glm library.

Short algorithm:

1. In the main rendering pass (that is, when we draw the meshes) we use MRT to draw into several textures at once in a single pass:


layout(location = 0) out vec4 fragColor;
layout(location = 1) out float dofBuffer;
layout(location = 2) out vec3 normalBuffer;
layout(location = 3) out vec2 ssrValuesBuffer;
layout(location = 4) out vec3 positionBuffer;
...
in vec4 v_Position; // Position for this fragment in world space
in vec3 v_Normal; // Interpolated normal for this fragment
...
in mat4 vmMatrix; // view model matrix
...
    fragPos = vec3(vmMatrix * v_Position); // Transform the vertex into eye space
    norm = vec3(normalize(vmMatrix * vec4(v_Normal, 0)));
    ...
    normalBuffer = norm;
    positionBuffer = fragPos;
    ssrValuesBuffer.r = texture(material.reflectionStrength, v_Texture).r;
    ssrValuesBuffer.g = texture(material.jitter, v_Texture).r;

Source: https://gitlab.com/congard/algine/blob/master/src/resources/shaders/fragment_shader.glsl

2. Next, simply bind the rendered textures to the SSR shader (sketched below).

You may also not need ssrValuesBuffer at all, in which case you can simply omit that part.
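On the C++ side the binding might look roughly like this (just a sketch, not actual Algine code; ssrProgram and the texture handles are placeholder names, and the sampler names match the shader you posted):

// Sketch only: bind the G-buffer textures to the SSR shader's samplers
glUseProgram(ssrProgram);
glUniform1i(glGetUniformLocation(ssrProgram, "tFrame"), 0);
glUniform1i(glGetUniformLocation(ssrProgram, "tDepth"), 1);
glUniform1i(glGetUniformLocation(ssrProgram, "tNorm"), 2);

glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, colorTex);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, depthTex);
glActiveTexture(GL_TEXTURE2);
glBindTexture(GL_TEXTURE_2D, normalTex);

// ...then draw a fullscreen quad with the SSR shader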

Here is how I create the view and projection matrices:


glm::vec3 cameraPos(-9, 8, 7);
glm::vec3 cameraLookAt(0, 4, 0);
glm::vec3 cameraUp(0, 1, 0);

glm::mat4 projectionMatrix = glm::perspective(glm::radians(90.0f), (float) WIN_W / (float) WIN_H, 1.0f, 32.0f);
glm::mat4 viewMatrix = glm::lookAt(cameraPos, cameraLookAt, cameraUp);
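If the shader also expects the inverses (your invView / invProj), you can compute them on the CPU with glm::inverse and upload everything with glUniformMatrix4fv. A minimal sketch, assuming shaderProgram is your linked SSR program and <glm/gtc/type_ptr.hpp> is included for glm::value_ptr:

// Sketch: upload the matrices (and their inverses) as uniforms;
// the uniform names match the shader posted above
glUseProgram(shaderProgram);
glUniformMatrix4fv(glGetUniformLocation(shaderProgram, "view"), 1, GL_FALSE, glm::value_ptr(viewMatrix));
glUniformMatrix4fv(glGetUniformLocation(shaderProgram, "proj"), 1, GL_FALSE, glm::value_ptr(projectionMatrix));
glUniformMatrix4fv(glGetUniformLocation(shaderProgram, "invView"), 1, GL_FALSE, glm::value_ptr(glm::inverse(viewMatrix)));
glUniformMatrix4fv(glGetUniformLocation(shaderProgram, "invProj"), 1, GL_FALSE, glm::value_ptr(glm::inverse(projectionMatrix)));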

I hope this will be useful for you.

OK, thanks, that helps. Just to be clear though, could you answer these questions? It's what's causing me problems.

 

1. When rendering the position buffer, how do I derive the values that become the RGB of each pixel?

i.e. out = vec3(worldX, worldY, worldZ) - I ask because the output can only be 0..1, so I'm not sure how to convert the world position into that range. I've heard it can be done in view space, but I'm not sure how to convert a world coordinate into view space.

 

2. A similar question: how do I convert the normal of each fragment into something that can be packed into an RGB value and then used in the SSR shader?

 

If all of the above is done within your code, I'll just go through that.

 

Cheers.

Thanks, Antony.

It is not at all necessary to store the values within 0..1, since we do not display them on the screen but use them in further stages of rendering. For normalBuffer and positionBuffer, you must also use GL_RGB16F.
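For example, such an attachment can be allocated like this (just a sketch; w and h are your framebuffer dimensions, and the attachment index assumes glDrawBuffers maps the attachments 1:1 to the layout locations above):

// Sketch: allocate a 16-bit float RGB texture so written values are not clamped to 0..1
GLuint positionTex;
glGenTextures(1, &positionTex);
glBindTexture(GL_TEXTURE_2D, positionTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16F, w, h, 0, GL_RGB, GL_FLOAT, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
// attach to the G-buffer FBO at the slot that positionBuffer (location = 4) writes to
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT4, GL_TEXTURE_2D, positionTex, 0);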

Oh OK, I never thought that was possible. So I could, for example, set col.r = 200, and when used later as data it would still be 200? OK, that should be enough for me to try. Thanks for your help.

Thanks, Antony.

One more question for anyone - based on congard's code, the positions and normals need to be in view space. While I understand the concept, how do I actually convert them from world space, either in the shader or in C++?

Thanks, Antony.

You get from world space to view space by multiplying with the view matrix.
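A minimal sketch with glm on the C++ side (worldPos and worldNormal are placeholder variables; the same expressions work in GLSL with your view matrix uniform):

// World space -> view space: positions use w = 1, directions/normals use w = 0
glm::vec3 viewPos    = glm::vec3(viewMatrix * glm::vec4(worldPos, 1.0f));
glm::vec3 viewNormal = glm::normalize(glm::vec3(viewMatrix * glm::vec4(worldNormal, 0.0f)));
// (with non-uniform scaling you'd use the inverse-transpose for normals, but a
// camera view matrix is normally rotation + translation only, so this is fine)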

Maybe have a look at this for a reiteration of the coordinate systems:

https://learnopengl.com/Getting-started/Coordinate-Systems

sbusch@pixellight.org
http://www.pixellight.org

Ok that sounds easy enough, thanks for the link too.

Thanks, Antony.

This topic is closed to new replies.
