Deferred rendering shader spaces issue

Hello,
I have recently gotten back into shaders, and now I'd like some clarification.
From what I have been googling and reading, I have come to the following conclusions about spaces:
The origin is just something you define, like (0,1,0); it is a position relative to the model's center or the OpenGL coordinate center.
The model transform (M) is that model center's transform relative to the OpenGL coordinate center. (M*vec)
The view transform (V) positions the model relative to the camera, since the camera doesn't move and sits at (0,0,0). (V*M*vec)
The projection (P) is what projects the view onto the screen. (P*V*M*vec)
(Please correct me if I am wrong.)
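In code, my understanding of the chain is something like this (a simplified sketch, not my actual shader, which is further down):

#version 330
// Simplified sketch of the transform chain as I understand it.
layout (location = 0) in vec3 pos;   // model/object space

uniform mat4 M;   // model -> world
uniform mat4 V;   // world -> view (camera sits at the origin)
uniform mat4 P;   // view  -> clip/screen

void main()
{
    vec4 worldPos = M * vec4(pos, 1.0);   // world space
    vec4 viewPos  = V * worldPos;         // view/camera space
    gl_Position   = P * viewPos;          // projected position
}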
So now to the problem:
I am trying to implement a Deferred Rendering G-Buffer.
I have got an FBO with all the required textures: Diffuse, Depth, Normals, Position, UV (I am not using UVs in this particular sample).
The main issue starts here. I know that Diffuse and Depth are fine out of the box.
I need Normals and Positions in view space for my SSAO implementation. I get the normals by multiplying the normals calculated in my geometry shader by the transpose of the inverse of the View*Model matrix and packing them.
But how do I calculate the Positions texture?
Also, can someone clarify what exactly Model space, View space and Camera space are? Are they exactly as I described?
This is my G-Buffer Shader Set:
Vertex:

#version 330
// THE FOOD CHAIN GOES LIKE THIS
// Vert
// Geom
// Frag
 
layout (location=0) in vec3 pos;
layout (location=6) in vec4 col;
 
uniform mat4 M;
uniform mat4 V;
uniform mat4 P;
 
out vData
{
    vec4 color;
    vec4 pos;
} vertex;
 
void main()
{
    vec4 _col=vec4(col.x/255,col.y/255,col.z/255,col.w/255);
    mat4 MVP = P*V*M;
    vertex.color=_col;
    vertex.pos=vec4(pos,1.0);
    gl_Position = MVP * vec4(pos,1);
}
Geometry:

#version 330
// THE FOOD CHAIN GOES LIKE THIS
// Vert
// Geom
// Frag
 
layout (triangles) in;
layout (triangle_strip, max_vertices=3) out;
 
uniform mat3 normMatrix;
uniform mat4 M;
uniform mat4 V;
uniform mat4 P;
 
 
in vData
{
    vec4 color;
    vec4 pos;
} vertex[];
 
out vec3 normal;
out vec4 color;
out vec4 position;
 
void main()
{
    vec3 calcNorm=normalize(normMatrix*cross(vertex[1].pos.xyz - vertex[0].pos.xyz, vertex[2].pos.xyz - vertex[0].pos.xyz))*0.5+0.5;
 
    normal = calcNorm;
    color = vertex[0].color;
    position = M*vertex[0].pos;
    gl_Position = gl_in[0].gl_Position;
    EmitVertex();
 
    normal = calcNorm;
    color = vertex[1].color;
    position = M*vertex[1].pos;
    gl_Position = gl_in[1].gl_Position;
    EmitVertex();
 
    normal = calcNorm;
    color = vertex[2].color;
    position = M*vertex[2].pos;
    gl_Position = gl_in[2].gl_Position;
    EmitVertex();
 
    EndPrimitive();
}
Fragment:

#version 330
// THE FOOD CHAIN GOES LIKE THIS
// Vert
// Geom
// Frag
 
in vec4 position;
in vec3 normal;
in vec4 color;
 
layout (location = 0) out vec3 Gdiffuse;
layout (location = 1) out vec3 Gnormal;
layout (location = 2) out vec3 Gposition;
layout (location = 3) out vec3 Gtexcoord;
 
void main(){
    Gdiffuse = color.rgb;
    Gnormal = normal;
    Gposition = position.rgb;
    Gtexcoord = vec3(0);
}
This is how it looks:
[Image: G-buffer output. Upper left: diffuse; lower left: random normals for SSAO; right column: depth, normals, position, UV.]
(Yeah, it's a generic voxel engine, but it's really fun learning to optimize it and so on. Nothing is more satisfying than seeing it get faster and more memory efficient.)

Also, can someone clarify what exactly Model space, View space and Camera space are? Are they exactly as I described?

Model space, or object space, is the space in which your mesh is defined. When you load your mesh, each vertex is in model/object space.

Eye, view or camera space is the space around your camera, with the camera position (the center of projection) being the origin (0,0,0).

The common transformation chain is

object space ==model transform==> world space ==view transform==> view space ==projection transform==> screen space

When using deferred rendering, the g-buffer is typically in view/camera space.
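For example, a minimal sketch of a g-buffer pass that stores view-space data could look like this (untested; it assumes a per-vertex normal attribute for brevity, unlike the face normals you compute in your geometry shader):

#version 330
// Vertex shader sketch: output view-space position and normal for the g-buffer.
layout (location = 0) in vec3 pos;
layout (location = 1) in vec3 nrm;        // assumed per-vertex normal

uniform mat4 M;
uniform mat4 V;
uniform mat4 P;
uniform mat3 normMatrix;                  // transpose(inverse(mat3(V * M))), set on the CPU

out vec3 viewPos;
out vec3 viewNormal;

void main()
{
    vec4 vp     = V * M * vec4(pos, 1.0); // object -> world -> view
    viewPos     = vp.xyz;
    viewNormal  = normMatrix * nrm;
    gl_Position = P * vp;                 // view -> clip
}

#version 330
// Fragment shader sketch: write the view-space data into the g-buffer targets.
in vec3 viewPos;
in vec3 viewNormal;

layout (location = 1) out vec3 Gnormal;
layout (location = 2) out vec3 Gposition;

void main()
{
    Gnormal   = normalize(viewNormal) * 0.5 + 0.5; // pack [-1,1] into [0,1]
    Gposition = viewPos;
}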


But how do I calculate the Positions texture?

If you are in camera space, you only need to save the depth; that is all. You don't need the position buffer, because you can reconstruct the camera-space position from the screen-space pixel position (aka the fragment position), which is available in the fragment/pixel shader, and the depth (there are different approaches to do it; best to use the search function/Google for more detailed information).
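One common way to do that (just a sketch of the idea, untested) is to unproject the fragment's texture coordinate and depth through the inverse of the projection matrix. Here invP is an assumed uniform you would have to upload yourself; g_depth is the depth texture from your g-buffer:

// Helper for the SSAO fragment shader: reconstruct the view-space position
// from the depth buffer instead of storing a position texture.
uniform sampler2D g_depth;   // hardware depth, 0..1
uniform mat4 invP;           // inverse projection matrix (assumed uniform)

vec3 positionFromDepth(vec2 uv)
{
    float depth = texture(g_depth, uv).r;                  // 0..1 depth
    vec4 ndc    = vec4(vec3(uv, depth) * 2.0 - 1.0, 1.0);  // window -> NDC (-1..1)
    vec4 view   = invP * ndc;                              // unproject to view space
    return view.xyz / view.w;                              // perspective divide
}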

Wow, that was fast. :)

Thank you very much for getting me on track and clarifying!

(Also, I saw a very neat trick for alpha you've got there! I will definitely try it. :) )

Thanks! I'll see how it goes and whether I struggle any more.

vec4 _col=vec4(col.x/255,col.y/255,col.z/255,col.w/255);

You can just write col.xyzw / 255.0 here, or col / 255.0. 255 on its own is an integer, though, and might not compile on stricter implementations (I'm not actually sure).

vec3 calcNorm=normalize(normMatrix*cross(vertex[1].pos.xyz - vertex[0].pos.xyz, vertex[2].pos.xyz - vertex[0].pos.xyz)) * 0.5 + 0.5;

is invalid and will likely not compile on stricter implementations; it should be + vec3(0.5).
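Put together, those two lines with the suggested changes applied would look something like this (untested):

vec4 _col = col / 255.0; // component-wise divide by a float literal

vec3 calcNorm = normalize(normMatrix * cross(vertex[1].pos.xyz - vertex[0].pos.xyz,
                                             vertex[2].pos.xyz - vertex[0].pos.xyz)) * 0.5 + vec3(0.5);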

Thank you for your responses. I have fixed my normal and position calculations accordingly, and now, I presume, they are both in model-view space, which is what this SSAO approach requires. This is how my buffers and SSAO look now:

[Image: updated G-buffer and SSAO output.]

I had to convert the SSAO code from what I think is the DirectX shading language to GLSL, so I may have made some mistakes. What could I be doing wrong to get such distorted, four-way split results? This is my SSAO conversion:

Vertex Shader


#version 330
layout (location=0) in vec3 pos;
layout (location=1) in vec2 tex;

out vec4 _pos;
out vec2 _uv;

void main(void)
{
    _pos = vec4(pos,1);
    _uv = tex;
    gl_Position = vec4(pos,1);
}

Fragment Shader


#version 330

uniform sampler2D g_buffer_diff;
uniform sampler2D g_buffer_norm;
uniform sampler2D g_buffer_pos;
uniform sampler2D g_random;
uniform sampler2D g_depth;

const float g_screen_size=1280*768;
const float random_size=64*64;
const float g_sample_rad=0.006;
const float g_intensity=1.38;
const float g_scale=0.000002;
const float g_bias=0.07;

in vec4 _pos;
in vec2 _uv;

out vec4 fragColor;

vec3 getPosition(vec2 uv)
{
    return texture2D(g_buffer_pos,uv).xyz;
}

vec3 getNormal(vec2 uv)
{
    return normalize(texture2D(g_buffer_norm, uv).xyz * 2.0f - 1.0f);
}

vec2 getRandom(vec2 uv)
{
    return normalize(texture2D(g_random, g_screen_size * uv / random_size).xy * 2.0f - 1.0f);
}

float doAmbientOcclusion(vec2 tcoord, in vec2 uv, in vec3 p, in vec3 cnorm)
{
    vec3 diff = getPosition(tcoord + uv) - p;
    vec3 v = normalize(diff);
    float d = length(diff)*g_scale;
    return max(0.0, dot(cnorm,v)-g_bias)*(1.0/(1.0+d))*g_intensity;
}

void main()
{
    vec2 vecs[4];
    vecs[0]= vec2(1,0);
    vecs[1]= vec2(-1,0);
    vecs[2]= vec2(0,1);
    vecs[3]= vec2(0,-1);

    vec3 p = getPosition(_uv);
    vec3 n = getNormal(_uv);
    vec2 rand = getRandom(_uv);

    float ao = 0.0f;
    float rad = g_sample_rad/p.z; // sampling radius scaled by the sample's stored z

    //**SSAO Calculation**//
    int iterations = 4;
    for (int j = 0; j < iterations; ++j)
    {
        vec2 coord1 = reflect(vecs[j],rand)*rad;
        // coord2 is coord1 rotated by 45 degrees (cos 45 = sin 45 = ~0.707)
        vec2 coord2 = vec2(coord1.x*0.707 - coord1.y*0.707, coord1.x*0.707 + coord1.y*0.707);

        ao += doAmbientOcclusion(_uv,coord1*0.25, p, n);
        ao += doAmbientOcclusion(_uv,coord2*0.5, p, n);
        ao += doAmbientOcclusion(_uv,coord1*0.75, p, n);
        ao += doAmbientOcclusion(_uv,coord2, p, n);
    }
    ao/=iterations*4.0;
    //**END**//

    // Do stuff here with your occlusion value 'ao': modulate ambient lighting,
    // write it to a buffer for later use, etc.
    fragColor=vec4(1-ao);
}

I am lost now. :(

Still struggling. I'm not really sure where I am lost right now. The four-way split screen looks like a position issue, but I am not sure. Maybe my settings are bad or something. I'm trying to fix it, but to no avail.

I was away from it for a little while, but now that I am back I have played a bit with the shader variables and tried multiple things with the positions (I first thought it was some wrong spaces, but apparently not). I think it's the four squares in the position buffer that mess up the look of it, but I still have no idea why the final render is split into four parts. Could anybody put me on the right track?

This topic is closed to new replies.
