# OpenGL Deferred rendering shader spaces issue

## Recommended Posts

Hello,

I have recently gotten back into shaders, and now I would like some clarification.

From all my googling and reading, I have come to the following conclusions about spaces:

The origin is just something you define, like (0,1,0); it's a position relative to the model's center or to the OpenGL coordinate center.
The model transform (M) is the transform of the model's center relative to the OpenGL coordinate center. (M*vec)
The view transform (V) places the model relative to the camera, since the camera itself doesn't move and sits at (0,0,0). (V*M*vec)
The projection (P) is what projects the view onto the screen. (P*V*M*vec)
(Please correct me if I am wrong.)
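
In GLSL, if I have it right, the whole chain would look roughly like this (just a sketch, using the M, V, P uniform names from my shaders below):

// Sketch of the transform chain as I understand it:
vec4 worldPos = M * vec4(pos, 1.0);   // model space -> world space
vec4 viewPos  = V * worldPos;         // world space -> view/camera space
gl_Position   = P * viewPos;          // view space  -> clip space, then screen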

So now to the problem:

I am trying to implement a Deferred Rendering G-Buffer.
I have an FBO with all the required textures: Diffuse, Depth, Normals, Position, UV (I am not using UVs in this particular sample).
The main issue starts here. I know that Diffuse and Depth are okay out of the box.
I need Normals and Positions in view space for my SSAO implementation. For the normals, I get that by multiplying the face normals calculated in my geometry shader by the transpose of the inverse of the View*Model matrix (the normal matrix) and packing them, as sketched below.
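
In shader terms, the normal matrix would be built roughly like this (just a sketch; I pass it in as the normMatrix uniform in my shaders below; faceNormal stands for the unpacked normal):

// Sketch: view-space normal matrix from the V and M uniforms.
// inverse() for matrices exists in GLSL since 1.40, so this works in #version 330.
mat3 normMatrix = transpose(inverse(mat3(V * M)));
vec3 viewNormal = normalize(normMatrix * faceNormal);   // faceNormal: the unpacked normal
vec3 packedNormal = viewNormal * 0.5 + vec3(0.5);       // packed into [0,1] for the G-buffer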

But how do I need to calculate the Positions texture?
Also, can anyone clarify what exactly model space, view space, and camera space are? Are they exactly as I described?

This is my G-Buffer Shader Set:

Vertex:
#version 330
// THE FOOD CHAIN GOES LIKE THIS
// Vert
// Geom
// Frag

layout (location=0) in vec3 pos;
layout (location=6) in vec4 col;

uniform mat4 M;
uniform mat4 V;
uniform mat4 P;

out vData
{
    vec4 color;
    vec4 pos;
} vertex;

void main()
{
    vec4 _col = vec4(col.x/255, col.y/255, col.z/255, col.w/255);
    mat4 MVP = P*V*M;
    vertex.color = _col;
    vertex.pos = vec4(pos, 1.0);
    gl_Position = MVP * vec4(pos, 1);
}


Geometry:
#version 330
// THE FOOD CHAIN GOES LIKE THIS
// Vert
// Geom
// Frag

layout (triangles) in;
layout (triangle_strip, max_vertices=3) out;

uniform mat3 normMatrix;
uniform mat4 M;
uniform mat4 V;
uniform mat4 P;

in vData
{
    vec4 color;
    vec4 pos;
} vertex[];

out vec3 normal;
out vec4 color;
out vec4 position;

void main()
{
    // Face normal from the triangle edges, transformed and packed into [0,1].
    vec3 calcNorm = normalize(normMatrix * cross(vertex[1].pos.xyz - vertex[0].pos.xyz,
                                                 vertex[2].pos.xyz - vertex[0].pos.xyz)) * 0.5 + 0.5;

    normal = calcNorm;
    color = vertex[0].color;
    position = M * vertex[0].pos;
    gl_Position = gl_in[0].gl_Position;
    EmitVertex();

    normal = calcNorm;
    color = vertex[1].color;
    position = M * vertex[1].pos;
    gl_Position = gl_in[1].gl_Position;
    EmitVertex();

    normal = calcNorm;
    color = vertex[2].color;
    position = M * vertex[2].pos;
    gl_Position = gl_in[2].gl_Position;
    EmitVertex();

    EndPrimitive();
}


Fragment:
#version 330
// THE FOOD CHAIN GOES LIKE THIS
// Vert
// Geom
// Frag

in vec4 position;
in vec3 normal;
in vec4 color;

layout (location = 0) out vec3 Gdiffuse;
layout (location = 1) out vec3 Gnormal;
layout (location = 2) out vec3 Gposition;
layout (location = 3) out vec3 Gtexcoord;

void main()
{
    Gdiffuse = color.rgb;
    Gnormal = normal;
    Gposition = position.xyz;
    Gtexcoord = vec3(0);
}


This is how it looks:
(Upper left - diffuse, Lower left - random normals for SSAO, Right - Depth, Normal, Position, UV)

(Yeah, it's a generic voxel engine, but it's really fun learning to optimize it and all. Nothing is more satisfying than seeing it get faster and more memory efficient.)

##### Share on other sites

> Also, can anyone clarify what exactly model space, view space, and camera space are? Are they exactly as I described?

Model space, or object space, is the space in which your mesh is defined. When you load your mesh, each vertex is in model/object space.

Eye, view, or camera space is the space around your camera, with the camera position (center of projection) being the origin (0,0,0).

The common transformation chain is:

object space ==model transform==> world space ==view transform==> view space ==projection transform==> screen space

When using deferred rendering, the g-buffer is typically in view/camera space.
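
For example, in the geometry shader you posted, the view-space outputs would look roughly like this (just a sketch, reusing your uniform names; faceNormal stands for your unpacked face normal):

// Sketch: writing view-space data into the g-buffer.
position = V * M * vertex[0].pos;                      // view-space position instead of M * pos
vec3 viewNormal = normalize(normMatrix * faceNormal);  // with normMatrix built from V*M, not M alone
normal = viewNormal * 0.5 + vec3(0.5);                 // packed into [0,1]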

> But how do I need to calculate the Positions texture?

If you are in camera space, you only need to save the depth, that is all. You don't need the position buffer, because you can reconstruct the camera-space position from the screen-pixel position (aka the fragment position), which is available in the fragment/pixel shader, together with the depth (there are different approaches; it's best to use the search function/Google for more detailed information).
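
As one example (just a sketch, assuming a standard [0,1] hardware depth buffer and an invProjection uniform that your application would have to supply):

uniform sampler2D g_depth;    // depth texture from the g-buffer
uniform mat4 invProjection;   // assumed uniform: inverse of the projection matrix P

// Sketch: reconstruct the view-space position of a fragment from its depth.
vec3 reconstructViewPos(vec2 uv)
{
    float depth = texture(g_depth, uv).r;               // window-space depth in [0,1]
    vec4 ndc = vec4(vec3(uv, depth) * 2.0 - 1.0, 1.0);  // back to normalized device coords
    vec4 viewPos = invProjection * ndc;                 // undo the projection
    return viewPos.xyz / viewPos.w;                     // perspective divide
}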

Edited by Ashaman73

##### Share on other sites

Wow, that was fast. :)

Thank you very much for getting me on track and clarifying!

(Also, I saw a very neat alpha trick you've got there! I will definitely try it. :))

Thanks! We'll see how it goes and whether I keep struggling or not.

##### Share on other sites

> vec4 _col = vec4(col.x/255, col.y/255, col.z/255, col.w/255);

You can just write col.xyzw / 255.0 here, or simply col / 255.0. Note that a bare 255 is an integer, though, and might not compile on stricter implementations (I'm not actually sure).

> vec3 calcNorm = normalize(normMatrix * cross(vertex[1].pos.xyz - vertex[0].pos.xyz, vertex[2].pos.xyz - vertex[0].pos.xyz)) * 0.5 + 0.5;

may not be accepted by stricter implementations; to be safe, it should be + vec3(0.5);

Edited by Kaptein

##### Share on other sites

Thank you for your responses. I have fixed my normal and position calculations accordingly, and I presume both are now in view (ModelView) space, which this SSAO approach requires. This is how my buffers and SSAO look now:

I had to convert the SSAO code from what I think is the DirectX shading language (HLSL) to GLSL, so I might have made some mistakes. What could I be doing wrong to get such distorted, four-way split screen results? This is my SSAO conversion:

#version 330
layout (location=0) in vec3 pos;
layout (location=1) in vec2 tex;

out vec4 _pos;
out vec2 _uv;

void main(void)
{
    _pos = vec4(pos, 1);
    _uv = tex;
    gl_Position = vec4(pos, 1);
}



#version 330

uniform sampler2D g_buffer_diff;
uniform sampler2D g_buffer_norm;
uniform sampler2D g_buffer_pos;
uniform sampler2D g_random;
uniform sampler2D g_depth;

const float g_screen_size = 1280*768;  // screen area in pixels (1280x768)
const float random_size = 64*64;       // random texture area (64x64)
const float g_intensity = 1.38;
const float g_scale = 0.000002;
const float g_bias = 0.07;
// NOTE: my conversion dropped the sampling radius from the original sample;
// this value is an assumption and needs tuning.
const float g_sample_rad = 0.006;

in vec4 _pos;
in vec2 _uv;

out vec4 fragColor;

vec3 getPosition(vec2 uv)
{
    // texture2D is deprecated in core GLSL 330; texture() is the 330 equivalent
    return texture(g_buffer_pos, uv).xyz;
}

vec3 getNormal(vec2 uv)
{
    return normalize(texture(g_buffer_norm, uv).xyz * 2.0 - 1.0);
}

vec2 getRandom(vec2 uv)
{
    return normalize(texture(g_random, g_screen_size * uv / random_size).xy * 2.0 - 1.0);
}

float doAmbientOcclusion(in vec2 tcoord, in vec2 uv, in vec3 p, in vec3 cnorm)
{
    vec3 diff = getPosition(tcoord + uv) - p;
    vec3 v = normalize(diff);
    float d = length(diff) * g_scale;
    return max(0.0, dot(cnorm, v) - g_bias) * (1.0 / (1.0 + d)) * g_intensity;
}

void main()
{
    vec2 vecs[4];
    vecs[0] = vec2(1, 0);
    vecs[1] = vec2(-1, 0);
    vecs[2] = vec2(0, 1);
    vecs[3] = vec2(0, -1);

    vec3 p = getPosition(_uv);
    vec3 n = getNormal(_uv);
    vec2 rand = getRandom(_uv);

    float ao = 0.0;

    //**SSAO Calculation**//
    int iterations = 4;
    for (int j = 0; j < iterations; ++j)
    {
        // NOTE: my conversion was missing the line that defines coord1; this is an
        // assumed reconstruction from the original sample: reflect the sampling
        // direction around the random vector and scale by the sampling radius.
        vec2 coord1 = reflect(vecs[j], rand) * g_sample_rad;
        vec2 coord2 = vec2(coord1.x*0.707 - coord1.y*0.707, coord1.x*0.707 + coord1.y*0.707);

        ao += doAmbientOcclusion(_uv, coord1*0.25, p, n);
        ao += doAmbientOcclusion(_uv, coord2*0.5, p, n);
        ao += doAmbientOcclusion(_uv, coord1*0.75, p, n);
        ao += doAmbientOcclusion(_uv, coord2, p, n);
    }
    ao /= iterations * 4.0;
    //**END**//

    // Do stuff here with your occlusion value "ao": modulate ambient lighting,
    // write it to a buffer for later use, etc.
    fragColor = vec4(1.0 - ao);
}


I am lost now.

##### Share on other sites

Still struggling. I'm not really sure where I am lost right now. The four-way split screen looks like a position issue, but I am not sure. Maybe my settings are bad or something. I'm trying to fix it, but to no avail.

##### Share on other sites

I was away from this for a bit, but now that I am back I have played with the shader variables and tried multiple things with the positions (at first I thought the spaces were wrong; apparently not). I think it's the four squares in the position buffer that mess up the look of it. I still have no idea why the final render is split into four parts, though. Could anybody put me on the right track?
