

cifa

Member Since 29 Feb 2012
Offline Last Active Jul 26 2015 10:08 AM

Topics I've Started

Portfolio feedback request

14 December 2014 - 05:41 PM

Hi all!

 

I am on the verge of sending out my first applications and I have recently put up a portfolio website. 

My goal is to get a job as a (junior) graphics/engine programmer.

 

 

Any kind of feedback would be greatly appreciated, here's the website: http://fcifariellociardi.com/

 

 

(Note: in the CV section all personal data, apart from my name, has been removed, as I am still unsure whether I want it to be public.)

 

Thank you! 


Perpendicular vectors on mesh starting from screen space

12 August 2014 - 02:08 PM

Hi there,

 

I was wondering whether it is somehow possible to find two orthogonal vectors lying on a mesh, starting from screen space.

I know that I can bring two points (e.g. currPixel and currPixel + (1,0)) back to object space if I also have depth info. That way I can find a vector that lies on the mesh in object space.

Now, in 3D there are infinitely many vectors perpendicular to a given one, so if I just take one of them I have no guarantee that it lies on the surface of the mesh. Taking perpendicular vectors in screen space is no help either, as they may well map to non-orthogonal vectors in object space.

 

Is it possible, starting from the data I have (perspective matrix, viewMatrix, modelMatrix, depth info and screen-space info), to obtain said vectors, or is it an impossible task? 
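
One idea I can think of (just a rough sketch with assumed helper names, not tested) is to unproject two neighbouring pixels using the depth buffer to get one on-surface direction, reconstruct an approximate normal from the depth differences, and take a cross product for the second direction:

// Rough sketch, not tested: names and helpers here are assumptions.
// invMVP is the inverse of perspMatrix * viewMatrix * modelMatrix,
// pixelSize is 1.0 / screen resolution, depthTex holds the depth buffer.
vec3 unprojectPixel(vec2 uv, sampler2D depthTex, mat4 invMVP) {
	float depth = texture(depthTex, uv).r;
	vec4 ndc = vec4(uv * 2.0 - 1.0, depth * 2.0 - 1.0, 1.0);
	vec4 p = invMVP * ndc;
	return p.xyz / p.w; // object-space position of the pixel
}

void onSurfaceBasis(vec2 uv, vec2 pixelSize, sampler2D depthTex, mat4 invMVP,
                    out vec3 t1, out vec3 t2) {
	vec3 p0 = unprojectPixel(uv, depthTex, invMVP);
	vec3 px = unprojectPixel(uv + vec2(pixelSize.x, 0.0), depthTex, invMVP);
	vec3 py = unprojectPixel(uv + vec2(0.0, pixelSize.y), depthTex, invMVP);

	vec3 n = normalize(cross(px - p0, py - p0)); // approximate surface normal
	t1 = normalize(px - p0);                     // first on-surface direction
	t2 = cross(n, t1);                           // second direction: orthogonal to t1, still tangent to the surface
}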

 

Thank you!


UE4 IBL glsl

14 July 2014 - 04:02 PM

************** UPDATE ********************** 

 

I had a silly issue due to the resolution of the texture; now the code below works as it should. 

 

***********************************************

 

 

Hi there!

 

I'm trying to implement the IBL technique described here: http://blog.selfshadow.com/publications/s2013-shading-course/karis/s2013_pbs_epic_notes_v2.pdf

 

 

Now I'm still trying to get the envBRDF LUT (roughness / NdotV) to render properly as shown in the paper: 

 

[image: the reference envBRDF LUT from the paper]

 

What I have now is: 

 

the usual Hammersley functions:


// http://holger.dammertz.org/stuff/notes_HammersleyOnHemisphere.html
float radicalInverse_VdC(uint bits) {
	bits = (bits << 16u) | (bits >> 16u);
	bits = ((bits & 0x55555555u) << 1u) | ((bits & 0xAAAAAAAAu) >> 1u);
	bits = ((bits & 0x33333333u) << 2u) | ((bits & 0xCCCCCCCCu) >> 2u);
	bits = ((bits & 0x0F0F0F0Fu) << 4u) | ((bits & 0xF0F0F0F0u) >> 4u);
	bits = ((bits & 0x00FF00FFu) << 8u) | ((bits & 0xFF00FF00u) >> 8u);
	return float(bits) * 2.3283064365386963e-10; // / 0x100000000
}

vec2 Hammersley(uint i, uint N) {
	return vec2(float(i) / float(N), radicalInverse_VdC(i));
}
 

The ImportanceSampleGGX function, practically as reported in the paper:

vec3 ImportanceSampleGGX(vec2 Xi, float m, vec3 N){
	// renamed to match the call site; "sample" is a reserved word in GLSL, hence Xi
	float phi = Xi.x * 2.0f * PI;
	// the denominator needs its own parentheses (note the paper defines a = Roughness*Roughness and uses a*a here)
	float cosTheta = sqrt( (1.0f - Xi.y) / (1.0f + (m*m - 1.0f) * Xi.y) );
	float sinTheta = sqrt(1.0f - cosTheta*cosTheta);

	vec3 vector = vec3(
		sinTheta * cos(phi),
		sinTheta * sin(phi),
		cosTheta
		);

	// build a tangent frame around N and project the sample into it
	// (do not return early here, otherwise the code below is dead)
	vec3 up = abs(N.z) < 0.999 ? vec3(0.0,0.0,1.0) : vec3(1.0,0.0,0.0);
	vec3 tangentX = normalize(cross(up,N));
	vec3 tangentY = normalize(cross(N,tangentX));
	// Project 
	return tangentX * vector.x + tangentY * vector.y + N * vector.z;
}

The IntegrateBRDF function:

vec2 IntegrateBRDF( float Roughness, float NoV ){
	vec3 V;
	V.x = sqrt( 1.0f - NoV * NoV ); // sin
	V.y = 0.0;
	V.z = NoV;                      // cos

	float A = 0.0;
	float B = 0.0;

	for( uint i = 0u; i < NUMBER_OF_SAMPLES; i++ )
	{
		vec2 Xi = Hammersley( i, NUMBER_OF_SAMPLES );
		vec3 H = ImportanceSampleGGX( Xi, Roughness, vec3(0.0,0.0,1.0) );
		vec3 L = 2.0 * dot( V, H ) * H - V;
		float NoL = max( L.z, 0.0 );
		float NoH = max( H.z, 0.0 );
		float VoH = max( dot( V, H ), 0.0 );

		if( NoL > 0.0 )
		{
			float G = G_Smith( Roughness, NoV, NoL );
			float G_Vis = G * VoH / (NoH * NoV);
			float Fc = pow( 1.0 - VoH, 5.0 );
			A += (1.0 - Fc) * G_Vis;
			B += Fc * G_Vis;
		}
	}
	return vec2( A / float(NUMBER_OF_SAMPLES), B / float(NUMBER_OF_SAMPLES) );
}
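
G_Smith isn't shown above; a rough sketch of the Schlick-Smith geometry term as it is commonly written for the IBL case (with k = Roughness^2 / 2, following the same Karis notes) would be something like:

// Sketch only, not the code from the original post: Schlick-GGX approximation
// of the Smith geometry term, with the k remapping commonly used for IBL
// (k = Roughness^2 / 2).
float G_SchlickGGX(float NoX, float k){
	return NoX / (NoX * (1.0 - k) + k);
}

float G_Smith(float Roughness, float NoV, float NoL){
	float a = Roughness * Roughness;
	float k = a / 2.0;
	return G_SchlickGGX(NoV, k) * G_SchlickGGX(NoL, k);
}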

 

 

 

As you can see they're almost a copy-paste from the reference, but what I got is completely different: 

 

[image: the LUT my code produces, which looks completely different]

 

 

Roughness and NoV come from the uv.x and uv.y of a quad, which are calculated in the VS as: 

	vTextureCoordinate = (aVertexPosition.xy+vec2(1,1))/2.0;

Am I missing something very stupid? 
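
For completeness, here is a minimal sketch of the fragment shader that drives IntegrateBRDF from those interpolated coordinates (the axis assignment is an assumption based on the paper's LUT layout, not the original code):

// Minimal sketch: NoV on the x axis, Roughness on the y axis (assumed).
in vec2 vTextureCoordinate;
out vec4 fragColor;

void main() {
	float NoV       = vTextureCoordinate.x;
	float Roughness = vTextureCoordinate.y;
	fragColor = vec4( IntegrateBRDF( Roughness, NoV ), 0.0, 1.0 );
}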


HDR ToneMapping: Exposure selection

04 July 2014 - 05:09 PM

Hi everyone!

 

I've been trying to add HDR support to my system, but I have a couple of questions. I believe what I'm doing before tone-mapping is fine: 

 

- Set up an RGBA16F render target

- Render to that RT

- Call the post-processing tone mapping on that RT and display the result on screen. 

 

 

Now, to test this I've cranked up my light intensity a bit: 

 

[image: the HDR-rendered scene with increased light intensity]

 

Now, if I apply either of the following tone mapping functions (the result is similar):

	col *= exposure;
	col = max(vec3(0.0), col - 0.004);
	col = (col * (6.2 * col + 0.5)) / (col * (6.2 * col + 1.7) + 0.06);
	return vec4(col, 1.0);

or 

	col *= exposure;
	col = col / (1.0 + col);
	return vec4(col, 1.0);

then if the exposure is more than 0.5, for the above image and either of the functions, the result is pretty flat:

 

[image: tone-mapped result with exposure above 0.5, looking flat]

 

Whereas for an exposure value of about 0.4 the result is OK: 

 

[image: tone-mapped result with exposure of about 0.4]

 

Although it's very darkish (unsurprisingly as the exposure is pretty low). 

 

The same issue (flatness of the tone-mapped image) appears for a "normal" light intensity, which looks fine even without HDR. To get a decent result I have to lower the exposure in that case too, which again results in a darkish image. 

 

So I have two questions: 

 

- Am I missing something important here? I feel that the result I get is somewhat wrong

- Is there a way to "automatically" select what's considered a good exposure value for a given scene? 
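
One common approach, sketched below as an assumption rather than a definitive answer, is to derive the exposure from the scene's log-average luminance, Reinhard-style; the names here are illustrative only:

// Sketch of log-average-luminance auto exposure; avgLogLuminance is assumed
// to be precomputed, e.g. by rendering log(luminance) of the HDR target into
// a texture and reading back its lowest mip level.
uniform sampler2D hdrBuffer;
uniform float avgLogLuminance;
in vec2 vTextureCoordinate;
out vec4 fragColor;

void main() {
	vec3 col = texture(hdrBuffer, vTextureCoordinate).rgb;

	float key      = 0.18;                       // target "middle grey"
	float avgLum   = exp(avgLogLuminance);       // geometric mean of scene luminance
	float exposure = key / max(avgLum, 0.0001);  // map the scene average to middle grey

	col *= exposure;
	col = col / (1.0 + col);                     // same simple operator as above
	fragColor = vec4(col, 1.0);
}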

 

 

 

Thank you very much


VSM severe acne

28 June 2014 - 04:51 AM

********** EDIT: I've found a bug on the CPU side that wasn't in my code. I'm investigating and will then update the topic. For now the following description is no longer valid; if the mods want to close the topic, that's fine with me. ****************

 

 

I'm implementing a VSM algorithm. My code is pretty much the same as everything I've found online. 

//coords is after the w divide and the 0.5 + 0.5 * correction
float VSM(vec3 coords, float dist){

	vec2 moments = texture(shadowTexture, coords.xy).rg; // (depth, depth^2)

	if(dist <= moments.x) return 1.0;

	float variance = moments.y - (moments.x * moments.x);
	variance = max(variance, 0.001);

	float d = dist - moments.x;
	float pMax = variance / (variance + d*d); // Chebyshev upper bound

	return pMax;
}

Here, if I use a small value to clamp the variance, as done in almost every source I found, the result is terrible acne:

 

[image: severe shadow acne with a small variance clamp]

 

 

If I start increasing the variance clamp value, the acne diminishes, although it is still evident. The proper shadow is also definitely lightened (too much):

 

[image: reduced acne but an overly light shadow with a larger clamp value]

 

Eventually, if I keep increasing that value, the proper shadow disappears. Moreover, the value used to produce the image above is 6.0! Way higher than anything I've seen around.

 

 

Similarly I've tried something like:

	float d = (dist + bias) - moments.x;

but I found no value that solves the problem, although I haven't tried that many. 

 

I think the depth map is fine, because with PCF I get a good result.
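
For reference, here is a minimal sketch of what a VSM depth pass typically writes (the two moments, depth and depth squared); this is an assumption about the setup, not the actual code from this project:

// Sketch of a typical VSM depth-pass fragment shader (assumed, not the
// project's actual code): store depth and depth^2 so the shadow lookup
// can reconstruct mean and variance after filtering.
in float vDepth;   // same depth metric that the VSM() lookup compares against
out vec4 fragColor;

void main() {
	float depth = vDepth;
	fragColor = vec4(depth, depth * depth, 0.0, 1.0);
}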

 

What can be the issue here?

 

Thanks!  

