
GeneralQuery

Member Since 25 Oct 2011
Offline Last Active Aug 18 2013 09:40 AM

Topics I've Started

Switch from desktop site to mobile site

18 July 2012 - 05:45 AM

On my phone I was initially on the mobile site, then switched to the desktop site. Now I cannot for the life of me find the button to switch from the desktop site back to the mobile site. Could someone put me out of my misery, please? Thanks :D

Matrix Math Implementation for Library Question

18 July 2012 - 04:23 AM

I'm rolling my own math library to reinforce my understanding of the math behind computer graphics. I think I have a good handle on vectors/matrices and their nuances (row & column vectors, matrix majorness, left & right handed rotations, left & right handed coordinate systems) and have made solid progress implementing and testing my library. However, one issue has come up on which I would like some suggestions and clarifications from the good folks at GameDev.

I have opted for column vector notation, so I use vector post-multiplication when transforming vertices. I have overloaded the * operator in my matrix class to accommodate this convention, with the matrix on the left and the vector on the right.

Now, here's what I'm fretting over: should I overload the * operator again to allow row vector-style multiplication, with the vector as the left-hand argument and the matrix as the right? If so, should I implicitly transpose the matrix so that the result is the "correct" (expected) one? Or should I overload the operator but perform a column-style post-multiplication, so that the user has to explicitly transpose the matrix when doing row vector-style multiplication?

Assuming that my understanding is correct and this question isn't the result of a misunderstanding on my part, my inclination is either to not allow row vector multiplication at all (after all, I clearly state in the documentation that column vectors are assumed) or to allow row vector multiplication but put the responsibility on the user to transpose the matrices (I feel this would make the intent of any user code clearer).
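To make the options concrete, here is a minimal sketch of the overloads I am weighing up (Mat4 and Vec4 are illustrative placeholders, not my actual classes):

struct Vec4 { float v[4]; };
struct Mat4 { float m[4][4]; };	// m[row][col]

// Column vector convention: v' = M * v (post-multiplication)
Vec4 operator*(const Mat4& m, const Vec4& v)
{
	Vec4 r = {};
	for (int i = 0; i < 4; ++i)
		for (int k = 0; k < 4; ++k)
			r.v[i] += m.m[i][k] * v.v[k];
	return r;
}

// Option A: also accept a row vector on the left. Mathematically
// v * M == transpose(M) * v, so this gives the "expected" row vector
// result without the user having to write the transpose themselves.
Vec4 operator*(const Vec4& v, const Mat4& m)
{
	Vec4 r = {};
	for (int j = 0; j < 4; ++j)
		for (int k = 0; k < 4; ++k)
			r.v[j] += v.v[k] * m.m[k][j];
	return r;
}

// Option B: omit the second overload entirely, so a user wanting row
// vector semantics must write transpose(m) * v and the intent is explicit.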

I hope I have made myself clear and look forward to your suggestions.
Thanks

SSAO Issue

08 November 2011 - 04:40 PM

Hi, I'm implementing this article using GLSL and a different G Buffer layout. Specifically, I get the view space position the way I normally do: sample the depth buffer, form the NDCs, unproject with the inverse projection matrix and divide by w. When I modified the code from the article to work with GLSL and my layout, I get dark artefacts wherever the view space normal points at (or nearly at) the camera, i.e. (0,0,1) or thereabouts, as seen in these pics:

[Images: Bunny Right, Bunny Left, Bunny Centre]
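For reference, the reconstruction described above amounts to the following, where $(u, v)$ are the screen space texture coordinates, $d \in [0, 1]$ is the sampled depth and $M_{proj}$ is the projection matrix:

$$
\mathbf{q} = M_{proj}^{-1} \begin{pmatrix} 2u - 1 \\ 2v - 1 \\ 2d - 1 \\ 1 \end{pmatrix}, \qquad
\mathbf{p}_{view} = \frac{\mathbf{q}}{q_w}
$$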

I'm really stumped as to the source of the problem, and I've got a feeling the answer is going to be really obvious. I hate to do a code dump and expect people to do the work for me, but I'll include the shader anyway in case anyone wants to spot any glaring errors:

#version 120

varying vec2 oTexCoords; 	// Texture coords of screen aligned quad
 
uniform mat4x4 mtxInvProj;   // Inverse of projection matrix

uniform sampler2D iTexDepth; 	// Depth buffer values in [0,1] range
uniform sampler2D iTexNormal;	// View space normals packed into [0,1]
uniform sampler2D iTexRandom;	// Random 2D vectors packed into [0,1]

vec2 g_screen_size = vec2(1280.0, 720.0);

float random_size	= 64.0;		// Random texture = 64x64
float g_sample_rad	= 0.5;		// Between 0.5 and 2.0
float g_intensity	= 3.0;
float g_scale		= 1.5;		// Between 1.0 and 2.0
float g_bias		= 0.05;		// Negative values should give "incorrect" results, but sometimes look good

float UnpackSigned(float Value)
{
	return (2.0 * Value - 1.0);
}

vec2 UnpackSigned(vec2 Value)
{
	return (2.0 * Value - 1.0);
}

vec3 UnpackSigned(vec3 Value)
{
	return (2.0 * Value - 1.0);
}

float getDepth(in vec2 uv)
{
	return UnpackSigned(texture2D(iTexDepth, uv).r);	// sample at the supplied uv
}

vec3 getPosition(in vec2 uv)	// Reconstruct view space position from depth
{
	float Depth = getDepth(uv);

	vec4 Position = vec4(uv.x * 2.0 - 1.0, uv.y * 2.0 - 1.0, Depth, 1.0);

	Position  = mtxInvProj * Position;
	Position /= Position.w;

	return Position.xyz;
}

vec3 getNormal(in vec2 uv)
{
	vec3 Normal = UnpackSigned(texture2D(iTexNormal, uv).xyz);
	
	return normalize(Normal);
}

vec2 getRandom(in vec2 uv)
{
	vec2 Random = UnpackSigned(texture2D(iTexRandom, g_screen_size * uv / random_size).xy);
	
	return normalize(Random);
}

float doAmbientOcclusion(in vec2 tcoord, in vec2 uv, in vec3 p, in vec3 cnorm)
{
	vec3 diff = getPosition(tcoord + uv) - p;
	vec3 v = normalize(diff);
	float d = length(diff) * g_scale;
	return max(0.0, dot(cnorm, v) - g_bias) * (1.0 / (1.0 + d)) * g_intensity;
}

void main()
{
	const vec2 offsets[4] = vec2[4](vec2(1.0, 0.0), vec2(-1.0, 0.0),
					vec2(0.0, 1.0), vec2(0.0, -1.0));

	vec3 p    = getPosition(oTexCoords);
	vec3 n    = getNormal(oTexCoords);
	vec2 rand = getRandom(oTexCoords);

	float ao  = 0.0;
	float rad = g_sample_rad / p.z;

	//**SSAO Calculation**//
	int iterations = 4;
	for (int j = 0; j < iterations; ++j)
	{
		vec2 coord1 = reflect(offsets[j], rand) * rad;
		vec2 coord2 = vec2(coord1.x * 0.707 - coord1.y * 0.707,
				   coord1.x * 0.707 + coord1.y * 0.707);	// coord1 rotated 45 degrees

		ao += doAmbientOcclusion(oTexCoords, coord1 * 0.25, p, n);
		ao += doAmbientOcclusion(oTexCoords, coord2 * 0.5,  p, n);
		ao += doAmbientOcclusion(oTexCoords, coord1 * 0.75, p, n);
		ao += doAmbientOcclusion(oTexCoords, coord2,        p, n);
	}
	ao /= float(iterations) * 4.0;

	gl_FragColor = vec4(ao);
}
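For completeness, this is roughly how the inputs are bound on the C++ side. The uniform names match the shader above, but the handles (ssaoProgram, texDepth, texNormal, texRandom, invProj) are just placeholders, so treat this as a sketch rather than my exact setup code:

	glUseProgram(ssaoProgram);

	// G buffer inputs (units 0-2 are an arbitrary choice)
	glActiveTexture(GL_TEXTURE0);
	glBindTexture(GL_TEXTURE_2D, texDepth);
	glUniform1i(glGetUniformLocation(ssaoProgram, "iTexDepth"), 0);

	glActiveTexture(GL_TEXTURE1);
	glBindTexture(GL_TEXTURE_2D, texNormal);
	glUniform1i(glGetUniformLocation(ssaoProgram, "iTexNormal"), 1);

	glActiveTexture(GL_TEXTURE2);
	glBindTexture(GL_TEXTURE_2D, texRandom);
	glUniform1i(glGetUniformLocation(ssaoProgram, "iTexRandom"), 2);

	// Inverse of the projection matrix used to render the G buffer
	glUniformMatrix4fv(glGetUniformLocation(ssaoProgram, "mtxInvProj"), 1, GL_FALSE, invProj);

	// ... then draw the screen aligned quad that supplies oTexCoords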


Deferred Shading & Early Z/Stencil

04 November 2011 - 09:09 AM

I've been reading a bit about optimisations for deferred shading that use stencil tests and take advantage of early z rejection. I understand these concepts in isolation but am not quite sure how they integrate with the G Buffers. How do the lighting passes access the z buffer used by the G Buffers? Do you bind another colour attachment to the FBO when rendering the lighting passes to get access to the depth buffer used by the FBO?
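To make the question concrete, here is the kind of hypothetical setup I am imagining, where one depth/stencil texture is shared between the G buffer FBO and a lighting FBO so that the light passes can depth/stencil test against the scene depth (all handle names are made up for illustration):

	// One depth/stencil texture shared by both FBOs (EXT_packed_depth_stencil)
	GLuint texDepthStencil;
	glGenTextures(1, &texDepthStencil);
	glBindTexture(GL_TEXTURE_2D, texDepthStencil);
	glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH24_STENCIL8_EXT, w, h, 0, GL_DEPTH_STENCIL_EXT, GL_UNSIGNED_INT_24_8_EXT, NULL);

	// Attached to the G buffer FBO for the geometry pass...
	glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fboGBuffer);
	glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_TEXTURE_2D, texDepthStencil, 0);
	glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_STENCIL_ATTACHMENT_EXT, GL_TEXTURE_2D, texDepthStencil, 0);

	// ...and to the lighting FBO, so the light passes are tested against
	// the depth laid down by the geometry pass
	glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fboLighting);
	glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_TEXTURE_2D, texDepthStencil, 0);
	glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_STENCIL_ATTACHMENT_EXT, GL_TEXTURE_2D, texDepthStencil, 0);
	glDepthMask(GL_FALSE);	// test against scene depth without overwriting it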

Render depth to texture issue

02 November 2011 - 07:22 PM

I'm implementing a deferred shader and am running into issues obtaining a 3D view space position from the depth buffer and screen space position. If I attach a depth renderbuffer for depth testing and write the depth values to a colour attachment, everything works as expected, like so:


	glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depthBuffer);
	glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT, w, h);
	glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_RENDERBUFFER_EXT, depthBuffer);

	...

	glGenTextures(1, &texDepth);
	glBindTexture(GL_TEXTURE_2D, texDepth);
	glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE_FLOAT32_ATI, w, h, 0, GL_RGB, GL_FLOAT, NULL);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
	glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT2_EXT, GL_TEXTURE_2D, texDepth, 0);

	GLenum buffers[] = { GL_COLOR_ATTACHMENT0_EXT, GL_COLOR_ATTACHMENT1_EXT, GL_COLOR_ATTACHMENT2_EXT };
	glDrawBuffers(3, buffers);

	/////////////////////////////////////////////////////////////////////////////////////////////////////////

	// G Buffer fragment shader
	gl_FragData[0] = vec4(oDiffuse, 1.0);
	gl_FragData[1] = vec4(0.5 * (normalize(oNormal) + 1.0), 1.0);
	gl_FragData[2] = vec4(oDepth.x / oDepth.y, 1.0, 1.0, 1.0);	// NDC z = clip z / clip w

	/////////////////////////////////////////////////////////////////////////////////////////////////////////

	// Lighting fragment shader
	float Depth	= texture2D(iTexDepth, oTexCoords).r;
	vec4  Position = vec4(oTexCoords.x * 2.0 - 1.0, oTexCoords.y * 2.0 - 1.0, Depth, 1.0);

	Position  = mtxInvProj * Position;
	Position /= Position.w;

However, when I try to use GL_DEPTH_ATTACHMENT_EXT as the attachment point (see listing below), I get incorrect results: lighting changes with camera position, triangles facing away from the light source are lit, and so on. When I display just the depth buffer, data is being written, but it seems much more "bunched together" than when using GL_COLOR_ATTACHMENT2_EXT as the attachment point. For example, if I move the camera towards the mesh, the depth values "pop" into view much more gradually with the latter than with the former. So I figured that when I reconstruct the view space vector for a given fragment, the incorrect result is throwing off my point lighting. Any ideas?




	glGenTextures(1, &texDepth);
	glBindTexture(GL_TEXTURE_2D, texDepth);
	glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, w, h, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER);
	glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE, GL_LUMINANCE);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
	glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_TEXTURE_2D, texDepth, 0);

	GLenum buffers[] = { GL_COLOR_ATTACHMENT0_EXT, GL_COLOR_ATTACHMENT1_EXT };
	glDrawBuffers(2, buffers);

	/////////////////////////////////////////////////////////////////////////////////////////////////////////

	// G Buffer fragment shader
	gl_FragData[0] = vec4(oDiffuse, 1.0);
	gl_FragData[1] = vec4(0.5 * (normalize(oNormal) + 1.0), 1.0);

	/////////////////////////////////////////////////////////////////////////////////////////////////////////

	// Lighting fragment shader
	float Depth	= texture2D(iTexDepth, oTexCoords).r;	// Window space depth in [0,1]
	vec4  Position = vec4(oTexCoords.x * 2.0 - 1.0, oTexCoords.y * 2.0 - 1.0, Depth, 1.0);

	Position  = mtxInvProj * Position;
	Position /= Position.w;
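One thing I notice comparing the two paths: the colour attachment version stores oDepth.x / oDepth.y, which is NDC z in $[-1, 1]$, whereas sampling a depth attachment returns window space depth in $[0, 1]$. If that is relevant, the remap before the unproject would be

$$ z_{ndc} = 2\,d_{window} - 1 $$

but I may well be missing something else.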

