Styves

Member Since 17 Dec 2009

#5170000 Reconstructing Position From Depth Buffer

Posted by Styves on 29 July 2014 - 04:32 AM

To go with the explanation above, here's some example HLSL code showing how to construct the frustum corners in the vertex shader. It's just quick pseudo-code, so it may have an issue or two. It's also better to compute this on the CPU and pass it to the shader as a constant instead of computing it per vertex (you can also reuse it for all quads in a frame).

float3 vFrustumCornersWS[8] =
{
	float3(-1.0,  1.0, 0.0),	// near top left
	float3( 1.0,  1.0, 0.0),	// near top right
	float3(-1.0, -1.0, 0.0),	// near bottom left
	float3( 1.0, -1.0, 0.0),	// near bottom right
	float3(-1.0,  1.0, 1.0),	// far top left
	float3( 1.0,  1.0, 1.0),	// far top right
	float3(-1.0, -1.0, 1.0),	// far bottom left
	float3( 1.0, -1.0, 1.0)		// far bottom right
};

// Unproject the NDC corners into world space.
for(int i = 0; i < 8; ++i)
{
	float4 vCornerWS = mul(mViewProjInv, float4(vFrustumCornersWS[i].xyz, 1.0));
	vFrustumCornersWS[i].xyz = vCornerWS.xyz / vCornerWS.w;
}

// Turn the far-plane corners into vectors pointing from the near plane to the far plane.
for(int i = 0; i < 4; ++i)
{
	vFrustumCornersWS[i + 4].xyz -= vFrustumCornersWS[i].xyz;
}

// Pass to the pixel shader - there are two ways to do this:
// 1. Pass the corner with each vertex of your quad as part of the input stage.
// 2. If your quad is created dynamically with SV_VertexID, as in this example, just pick one from a constant array based on the position.
float3 vCamVecLerpT = (OUT.vPosition.x>0) ? vFrustumCornersWS[5].xyz : vFrustumCornersWS[4].xyz;
float3 vCamVecLerpB = (OUT.vPosition.x>0) ? vFrustumCornersWS[7].xyz : vFrustumCornersWS[6].xyz;	
OUT.vCamVec.xyz = (OUT.vPosition.y<0) ? vCamVecLerpB.xyz : vCamVecLerpT.xyz;

To get position in the pixel shader:

float3 vWorldPos = fLinearDepth * IN.vCamVec.xyz + vCameraPos;



#5168706 Parallax corrected cube maps

Posted by Styves on 23 July 2014 - 12:52 PM

Your min and max need to be in world space, so add the cube's position to the min and max to ensure it still works when you move the object.
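For illustration, here's a minimal untested sketch of where those world-space extents end up being used. All names are placeholders: vBoxMin/vBoxMax are the proxy AABB extents relative to the cube map, vCubePos is the cube map's world-space position, vPosWS is the shaded point and vReflDirWS is the reflection direction.

// Move the proxy box into world space.
float3 vBoxMinWS = vBoxMin + vCubePos;
float3 vBoxMaxWS = vBoxMax + vCubePos;

// Intersect the reflection ray with the world-space box (slab method).
float3 vFirstPlane  = (vBoxMaxWS - vPosWS) / vReflDirWS;
float3 vSecondPlane = (vBoxMinWS - vPosWS) / vReflDirWS;
float3 vFurthest    = max(vFirstPlane, vSecondPlane);
float  fDist        = min(min(vFurthest.x, vFurthest.y), vFurthest.z);

// Corrected lookup direction: from the cube map center to the intersection point.
float3 vHitWS  = vPosWS + vReflDirWS * fDist;
float3 vLookup = vHitWS - vCubePos;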




#5153808 Screen Space Reflection Optimization

Posted by Styves on 15 May 2014 - 11:42 AM

A possible improvement is to swap the fixed-step march for a depth hierarchy that you traverse as you get further along your ray. You can use lower-resolution depths for longer rays/distant steps and get away with some pretty good speed increases. You can also use the hierarchy to detect tiles on the screen that clearly can't contain an intersection (quad-tree style), allowing you to skip portions of the screen that a ray will never hit. A rough sketch of the traversal idea is below.
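Here's a rough, untested HLSL sketch of that coarse-to-fine traversal, not code from an actual implementation. It assumes txHiZ is a depth pyramid where each mip stores the minimum depth of the texels beneath it (smaller depth = closer), the ray is expressed in UV space with z = depth, and iMaxMip/iMaxSteps are tuning constants.

Texture2D<float> txHiZ;
SamplerState smPointClamp;

float3 TraceHiZ(float3 vRayStart, float3 vRayDir, int iMaxMip, int iMaxSteps)
{
    float3 vPos = vRayStart;
    int iMip = iMaxMip;

    for (int i = 0; i < iMaxSteps; ++i)
    {
        // Coarser mips cover more screen area, so the step grows with the level.
        float3 vNext = vPos + vRayDir * exp2(iMip);
        float fSceneDepth = txHiZ.SampleLevel(smPointClamp, vNext.xy, iMip);

        if (vNext.z < fSceneDepth)
        {
            // The ray is still in front of everything under this footprint:
            // take the step and try an even coarser level next time.
            vPos = vNext;
            iMip = min(iMip + 1, iMaxMip);
        }
        else if (iMip > 0)
        {
            // Possible intersection somewhere in this footprint: refine instead of stepping.
            iMip--;
        }
        else
        {
            return vNext; // approximate hit at full resolution
        }
    }
    return float3(-1.0, -1.0, -1.0); // no hit found
}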




#5150923 Per-pixel displacement mapping GLSL

Posted by Styves on 02 May 2014 - 08:35 AM

Looks almost like your camera ray is inverted. Try using:

normalize( posObject.xyz - uCameraDirection );

Always seems to be where I screw up when implementing POM and occasionally lighting. Easy to get wrong and even easier to overlook. :)




#5149463 Screenshot of your biggest success/ tech demo

Posted by Styves on 25 April 2014 - 02:18 PM

I'm responsible for the skin shader you see in this gif, as well as the eyes and the water caustics (in the staircase scene). There's also the hair, but you really only see it on the eyebrows (or, in the 1080p version, the face stubble), since the guy is bald. :)

 

Edit: Here's a high-res screenshot from the last shot.

 

[Image: 10562499284_3a4f039dd5_o.gif]

 

Not coincidentally, I'm also responsible for the skin/eye tech for Crysis 3 (which was more last-gen friendly) - an earlier version of the same caustic tech was also present in the PC version.

 

[Image: crykurtz.jpg]

 

 

I'm afraid most of the other things I worked on haven't been shown publicly, and I don't want to risk breaking any NDAs. :)

 

I'd show you some personal projects, but they're too early to show. :)




#5147714 Problems with Normal Mapping

Posted by Styves on 17 April 2014 - 01:11 PM

That's definitely a bug. If you negate the Y component first and then apply the 0-1 to -1-1 scale, the result ends up in the range -3 to -1 instead of -1 to 1 (for example, a texture value of 1, which should flip to -1, becomes -3). This is, as you'd expect, incorrect. :)
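For reference, here's a minimal sketch of the correct order (texture and sampler names are just placeholders): unpack first, then flip.

// Unpack from 0-1 to -1..1 first, then flip the Y (green) channel.
float3 vNormalTS = texNormal.Sample(smLinear, vUV).xyz * 2.0 - 1.0;
vNormalTS.y = -vNormalTS.y;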




#5145615 Deferred shading position reconstruction change

Posted by Styves on 09 April 2014 - 03:16 AM

If you have access to the camera's front vector, then you can do:

// Compute ray from vertex positions 
vec3 eyeRay = normalize( vertexPosition - eyePosition );

// Ray gets shorter towards edges of the screen (due to camera being at center)
// So we need to correct for this.
// Note: this is scaled by the far plane, but the depth can also be scaled (since both get multiplied onto the eyeRay anyway).
float eyeCorrection = (1.0 / dot( eyeRay, -cameraFront )) * farPlaneDist;
eyeRay *= eyeCorrection;

// Final world position
vec3 wPosition = eyePosition + eyeRay * depthVal;



#5144974 Bloom not smooth

Posted by Styves on 07 April 2014 - 04:17 AM

You don't have enough samples for the blur size you're using. Either add more samples to fill in the gaps, or lower the step size.

 

Tip: typical bloom isn't computed in a single pass, since it's too expensive to get a wide radius with decent performance and quality. Instead, you perform a blur on a couple of downscaled images, starting with the highest-resolution one and using that result as the source texture for the next (lower-resolution) pass. Then you composite the results from every pass at the end and apply that to the image as bloom.

 

Something like this (a minimal composite sketch follows the list):

 

  • Perform bloom threshold if needed (which you might be doing already).
  • Blur to the first texture (1/2 or 1/4 resolution).
  • Blur to the second texture, using the previous result as input (1/8 resolution).
  • Continue for the # of passes you want (this depends on the radius you want to achieve). Each pass is half the resolution of the last.
  • Take all the textures and add them together (if you want more accurate results, energy conservation can be applied).
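Here's a rough, untested HLSL sketch of the composite step at the end, assuming four blurred results (txBloom0 at the highest resolution down to txBloom3 at the lowest); the texture names and the simple averaged weighting are just one possible choice.

Texture2D txBloom0, txBloom1, txBloom2, txBloom3;
SamplerState smLinearClamp;

float3 CompositeBloom(float2 vUV)
{
    // Bilinear upsampling happens implicitly since each texture is a different resolution.
    float3 vBloom = 0.0;
    vBloom += txBloom0.Sample(smLinearClamp, vUV).rgb;
    vBloom += txBloom1.Sample(smLinearClamp, vUV).rgb;
    vBloom += txBloom2.Sample(smLinearClamp, vUV).rgb;
    vBloom += txBloom3.Sample(smLinearClamp, vUV).rgb;

    // For something closer to energy conservation, scale by 1/N
    // (or use per-pass weights that sum to 1).
    return vBloom * 0.25;
}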



#5141964 GLSL normal mapping strange behaviour

Posted by Styves on 25 March 2014 - 05:05 AM

First things first: you're not even using the TBN! :D You need to take the TBN and multiply it with your normal-map sample, otherwise what's the point? :P

 

A side note: I don't trust your TBN calculations, since they don't actually use the triangle data, just some arbitrary directions. That isn't a trustworthy approach, and even if you use it I wouldn't expect proper results from it.

 

Instead, try this in your fragment shader (pulled from my engine, you're welcome :)):

mat3 ComputeCotangentFrame( vec3 vNormal, vec3 vPosition, vec2 vTexCoord )
{
	vec3 ddxPos = dFdx(vPosition);
	vec3 ddyPos = dFdy(vPosition);
	vec2 ddxUV = dFdx(vTexCoord);
	vec2 ddyUV = dFdy(vTexCoord);

	vec3 vCrossVec1 = cross( ddyPos, vNormal );
	vec3 vCrossVec2 = cross( vNormal, ddxPos );
	vec3 vTangent = vCrossVec1 * ddxUV.x + vCrossVec2 * ddyUV.x;
	vec3 vBinormal = vCrossVec1 * ddxUV.y + vCrossVec2 * ddyUV.y;

	float fDotT = dot( vTangent, vTangent );
	float fDotB = dot( vBinormal, vBinormal );
	float fInvMax = 1.0 / sqrt(max(fDotT, fDotB));

	return mat3( vTangent * fInvMax, vBinormal * fInvMax, vNormal );
}

Inputs are vertex normal, world position and texture coordinates.

 

 

Next, why do you apply a transform matrix to the vertex position in the fragment shader? This is most definitely better done in the vertex shader and passed in.

 

And finally, what's the point of this:

vec3 normal;
	if(fragNormalMapping[0] == 1.0f){
		normal = normalize(texture(normalMap, fragTexCoord).xyz * 2.0 - 1.0);
	}else{
		normal = normalize(texture(normalMap, fragTexCoord).xyz * 2.0 - 1.0);
	}

You're doing the same thing twice regardless of the branch, so why bother?

 

 

Once you get everything working, you can look into replacing the cotangent code I supplied with standard tangent/binormal calculations done on the CPU and passed in as vertex attributes. That is, unless you don't mind the performance overhead of computing it all on the fly per pixel (you'll gain some memory savings by not storing them ;D), or you're doing all shading/materials on the GPU in a procedural deferred pass like I am... which I highly doubt anyone is doing. :P




#5133141 [Cg] Can this perpixel phong lighting shader be optimised?

Posted by Styves on 20 February 2014 - 08:41 PM

Read this. It should help you squeeze out some of your redundant shader instructions.




#5125044 Hypothesizing a new lighting method.

Posted by Styves on 20 January 2014 - 07:44 AM

I'd also like to put it on record that shadow mapping with deferred shading is no harder than with forward shading; in fact, many games use a very simple deferred approach for shadows (Crysis 1, for example) rather than applying shadows in the forward pass.




#5124185 Pack two 8 bits into a 16 bits channel _ HLSL

Posted by Styves on 16 January 2014 - 12:56 PM

Coming up with this on the fly, but assuming your inputs are both in the 0-1 range, you should be able to pack the two values by storing one as the integer part and the other as the fractional part of your alpha:

float Pack( in float a, in float b )
{
  return floor(a * 255) + b; // stores A as integer, B as fractional
}

And to unpack, you can use:

void Unpack( in float c, inout float a, inout float b )
{
  a = floor(c) / 255; // removes the fractional value and scales back to 0-1
  b = frac(c); // removes the integer value, leaving B (0-1)
}

Alternatively, this might be faster for unpacking:

void Unpack( in float c, inout float a, inout float b )
{
  b = frac(c); // removes the integer value, leaving B (0-1)
  a = (c - b) / 255; // removes the fractional value (B) and scales back to 0-1
}

I haven't tested these at all, but I think they should work. :)

 

 

Edit: I just re-read the initial post and noticed you're using uint values from 0-255 instead of floats from 0-1. The concept is the same: you can either scale from 0-255 down to 0-1, or just invert the scaling I applied (instead of scaling A up, scale B down). A quick sketch of that variant is below.
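Here's a quick, equally untested sketch of that variant, keeping A as a 0-255 integer and scaling B down into the fractional part (names are just illustrative):

float PackBytes( in float a255, in float b255 )
{
  return floor(a255) + (b255 / 255.0); // A stays integer, B becomes the fraction
}

void UnpackBytes( in float c, out float a255, out float b255 )
{
  a255 = floor(c);         // integer part is A (0-255)
  b255 = frac(c) * 255.0;  // fractional part scaled back up is B (0-255)
}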




#5109906 Artist wants Blinn, Phong, etc but how do I code those?

Posted by Styves on 17 November 2013 - 06:02 AM

Fresnel's law in physics determines how much reflection vs. refraction occurs when light hits a surface.
Very simply speaking, we can say reflection = specular and refraction = diffuse.
In graphics, we usually use Schlick's approximation, which is:
F = f + (1 - f) * (1 - N•V)^5
(where f is your original specular intensity and F is basically the new specular intensity)
This is then used to weight the specular and diffuse terms - at glancing angles (where N•V is low), much more light is reflected than refracted.
N.B. physically speaking, diffuse should also be weighted by the inverse Fresnel term, but this is often skipped.
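As a small illustrative HLSL sketch of the above (fSpecIntensity plays the role of f, and the usage lines show just one possible way to weight the terms):

float FresnelSchlick( float3 vNormal, float3 vView, float fSpecIntensity )
{
    // Schlick's approximation: F = f + (1 - f) * (1 - N.V)^5
    float fNdotV = saturate( dot( vNormal, vView ) );
    return fSpecIntensity + (1.0 - fSpecIntensity) * pow( 1.0 - fNdotV, 5.0 );
}

// Usage: weight specular up and diffuse down at glancing angles.
// float F = FresnelSchlick( N, V, fSpec );
// float3 vColor = vDiffuse * (1.0 - F) + vSpecular * F;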

 

Note for theagentd: for physically based BRDFs, you don't want to use N•V in your Fresnel equation, as that's just a view-dependent Fresnel and isn't physically based. Instead, use L•H (or V•H), which depends on the grazing angle between the light/view direction and the half vector and is more accurate.




#5103096 Rain droplets technique

Posted by Styves on 21 October 2013 - 05:51 AM

A much better approach than a static scrolling texture, which is likely what's being used here, is a very simple particle system. Spawn N particles and make them slide down the screen over time with some variation (you can just render some quads to the screen and move them over time). Use some extinction so each particle fades away after a short period; once that period is over, spawn a new particle. This way you always have exactly N particles, no more, no less.

 

Render this to a buffer, using simple vignetting on each particle quad, to get a heightmap of the drops (you can use an ellipsoid shape for the vignette for a better drop-like look). Then, to apply it to the scene, compute a normal from the heightmap and use it to displace the screen coordinates when sampling the screen image (same pass). A rough sketch of that pass is below.
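Here's a rough, untested HLSL sketch of that apply pass; the texture names, sampler and fStrength are all just placeholders.

Texture2D txDropHeight, txScene;
SamplerState smLinearClamp;

float3 ApplyDrops(float2 vUV, float2 vTexelSize, float fStrength)
{
    // Derive a screen-space normal from the droplet heightmap with finite differences.
    float fL = txDropHeight.Sample(smLinearClamp, vUV - float2(vTexelSize.x, 0)).r;
    float fR = txDropHeight.Sample(smLinearClamp, vUV + float2(vTexelSize.x, 0)).r;
    float fT = txDropHeight.Sample(smLinearClamp, vUV - float2(0, vTexelSize.y)).r;
    float fB = txDropHeight.Sample(smLinearClamp, vUV + float2(0, vTexelSize.y)).r;
    float2 vNormalXY = float2(fL - fR, fT - fB);

    // Displace the screen lookup by the normal to fake refraction through the drop.
    return txScene.Sample(smLinearClamp, vUV + vNormalXY * fStrength).rgb;
}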

 

You can vary the amount of slide based on the camera's look direction, so that drops don't slide when you look up; you can also use it to hide drops when you're looking down. Another cool thing: you can use some pre-blurred textures for out-of-focus drops (or even merge them into mip maps for easy lookups).

 

 

This shouldn't take more than a fraction of a millisecond, since the geometry is simple, the fill rate is very small, and the texture you use for the drops can be quite small. If the draw-call count becomes a problem, you can also batch the particles in a vertex buffer and expand them on the GPU.




#5100769 tutorial deferred shading? [Implemented it, not working properly]

Posted by Styves on 12 October 2013 - 05:06 AM

Make sure your render targets are of sufficient precision. If you're storing position in ARGB8, you're going to have a bad time. ;)





