
EnlightenedOne

Posted 20 April 2014 - 09:25 AM

My goodness it worked!

 

VERTEX

#version 330
 
in vec3 inPos;
in vec4 inCol;
//model being tested has no normals yet
 
uniform mat4 inverseProjectionMatrix;
 
uniform mat4 wvpMatrix;
uniform mat4 modelViewMatrix;
 
out vec3 pass_colour;
out vec4 worldPosition;
out float depth;
 
const float far_plane_distance = 15000.0f;
 
void main(void) {
    gl_Position = wvpMatrix * vec4(inPos, 1.0);

    // note: this is actually the clip-space position, despite the name
    worldPosition = gl_Position;

    vec4 viewSpacePos = modelViewMatrix * vec4(inPos, 1.0);

    // linear depth in [0, 1]: view space looks down -z, so negate
    depth = viewSpacePos.z / -far_plane_distance;

    pass_colour = inCol.xyz;
}
 
 
FRAGMENT
 
#version 330
 
in vec3 pass_colour;
in vec4 worldPosition;
in float depth;
 
layout (location = 0) out vec4 ColourOut;   
layout (location = 1) out vec4 NormalOut;
layout (location = 2) out vec4 PosOut;     
 
void main(void) {
    ColourOut = vec4(pass_colour, 1.0);
    NormalOut = vec4(0.5, 0.5, 0.5, 1.0); // dummy values, please ignore

    PosOut.xyz = worldPosition.xyz; // intending to use this to compare to the reconstructed positions as soon as I have either working :)
    PosOut.w = 1.0;

    gl_FragDepth = depth;
}

 

Then just sampling the depth:

    float d = texture(texture_depth, pass_texCoord).x;
    DepthOut = vec4(d, d, d, 1.0);
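For context, the whole debug pass is tiny; something along these lines, where texture_depth and pass_texCoord come from the snippet above and everything else is just my placeholder setup:

#version 330

in vec2 pass_texCoord;              // full-screen quad UVs (assumed)

uniform sampler2D texture_depth;    // the linear depth written in the G-buffer pass

layout (location = 0) out vec4 DepthOut;

void main(void) {
    // the stored depth is already linear, so it can be visualised as-is
    float d = texture(texture_depth, pass_texCoord).x;
    DepthOut = vec4(d, d, d, 1.0);
}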
 
My old step of linearising the non-linear depth buffer is redundant with this approach, because the stored depth texture is already linear. Following on from that approach, I expect I will get the view-space positions I need to get around.
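For reference, the linearisation step that becomes redundant here is the usual reconstruction from a hardware depth-buffer sample; something along these lines, assuming a standard perspective projection and the default [0, 1] depth-buffer range (not necessarily the exact code I had):

// Standard linearisation of a hardware (non-linear) depth-buffer sample.
// Assumes a typical perspective projection and the default [0, 1] depth range.
float linearise_depth(float depthSample, float near, float far) {
    float ndcZ = depthSample * 2.0 - 1.0;                           // back to NDC [-1, 1]
    return (2.0 * near * far) / (far + near - ndcZ * (far - near)); // positive view-space distance
}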
 
I dug out this:
 
It suggests what I am doing is a recipe for performance disaster, presumably because writing gl_FragDepth disables early depth testing; however, writing "gl_Position.z = viewSpacePos.z / -far_plane_distance;" instead simply causes depth testing to fail.
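As far as I can tell, that assignment fails because of the perspective divide: with a standard projection clip.w is -viewZ, so viewSpacePos.z / -far_plane_distance divided by w collapses every fragment to the same NDC depth of 1 / far_plane_distance. The usual form of the trick pre-multiplies by w so the divide cancels out; a sketch of what that vertex shader would look like, assuming the default [-1, 1] NDC depth range (I have not verified this variant here):

#version 330

in vec3 inPos;
in vec4 inCol;

uniform mat4 wvpMatrix;
uniform mat4 modelViewMatrix;

out vec3 pass_colour;

const float far_plane_distance = 15000.0;

void main(void) {
    gl_Position = wvpMatrix * vec4(inPos, 1.0);

    vec4 viewSpacePos = modelViewMatrix * vec4(inPos, 1.0);
    float linearDepth = viewSpacePos.z / -far_plane_distance; // 0 at the eye, 1 at the far plane

    // Pre-multiply by w so the perspective divide leaves a linear NDC depth behind,
    // letting the fixed-function depth test work without writing gl_FragDepth.
    gl_Position.z = (2.0 * linearDepth - 1.0) * gl_Position.w;

    pass_colour = inCol.xyz;
}

One caveat: hardware interpolates NDC z linearly in screen space, while linear view-space depth is not linear in screen space, so large triangles close to the camera can still show small depth errors with this approach.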
 
Interestingly, though perhaps unsurprisingly, I get less banding close to the camera from the standard non-linear depth calculation, since it concentrates precision near the near plane; reconstructing linear depth from it without storing a separate value must be the way forward.
 
I am about to find out whether, if I stick with reconstructing linear depth after the G-buffer pass, I can still reconstruct the surface position using Yours3!f's method; more challenging will be getting the viewDir from this data.
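If it helps anyone following along, the general shape of that reconstruction (not necessarily Yours3!f's exact code) is to interpolate a per-vertex ray to the far plane across the full-screen quad and scale it by the stored linear depth; the viewRay input and the other names below are my own placeholders:

#version 330

in vec2 pass_texCoord;
in vec3 viewRay;                 // assumed: view-space vector from the camera to the
                                 // far-plane corner for this quad vertex, interpolated

uniform sampler2D texture_depth; // linear depth, -viewZ / far_plane_distance

layout (location = 0) out vec4 FragColour;

void main(void) {
    float d = texture(texture_depth, pass_texCoord).x;

    // viewRay.z is -far_plane_distance, so scaling by d recovers the original view-space z
    vec3 viewSpacePos = viewRay * d;

    // viewDir falls out of the same data: the camera sits at the view-space origin,
    // so the direction from the surface back to the eye is just -viewSpacePos
    vec3 viewDir = normalize(-viewSpacePos);

    FragColour = vec4(viewSpacePos, 1.0); // visualise, or feed into lighting
}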
