Styves

Member Since 17 Dec 2009
Offline Last Active Jun 29 2014 11:15 AM
-----

#5153808 Screen Space Reflection Optimization

Posted by Styves on 15 May 2014 - 11:42 AM

A possible improvement is to replace the fixed-step march with a depth hierarchy that you traverse as you get further along your ray. You can use lower-resolution depths for longer rays/distant steps and get some pretty good speed increases. You can also use the hierarchy to detect tiles on the screen that clearly can't contain an intersection (quad-tree style), allowing you to skip portions of the screen that a ray will never hit.
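To make the idea concrete, here's a much simplified GLSL-style sketch of the coarse-to-fine traversal (not a full hi-Z DDA with proper cell crossings). It assumes a min-depth pyramid bound as depthPyramid, a ray expressed in screen space, and depth increasing away from the camera; all names and the stepping scheme are illustrative, not from any particular engine.

// Simplified coarse-to-fine ray march against a depth mip chain.
// Returns the hit UV, or vec2(-1.0) if nothing was hit.
vec2 HiZTrace( vec2 rayOriginUV, vec2 rayDirUV, float rayDepth, float rayDepthStep,
               sampler2D depthPyramid, int maxMip, int maxSteps )
{
    int mip = 0;
    vec2 uv = rayOriginUV;
    float depth = rayDepth;

    for (int i = 0; i < maxSteps; ++i)
    {
        // Coarser mips cover more of the screen, so take bigger steps there.
        float stepScale = exp2(float(mip));
        vec2 nextUV = uv + rayDirUV * stepScale;
        float nextDepth = depth + rayDepthStep * stepScale;

        float sceneMinDepth = textureLod(depthPyramid, nextUV, float(mip)).r;

        if (nextDepth < sceneMinDepth)
        {
            // The ray stays in front of everything in this region:
            // accept the step and try an even coarser level next time.
            uv = nextUV;
            depth = nextDepth;
            mip = min(mip + 1, maxMip);
        }
        else
        {
            // Possible intersection inside this region: refine.
            if (mip == 0)
                return nextUV;
            mip--;
        }
    }

    return vec2(-1.0);
}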




#5150923 Per-pixel displacement mapping GLSL

Posted by Styves on 02 May 2014 - 08:35 AM

Looks almost like your camera ray is inverted. Try using:

normalize( posObject.xyz - uCameraDirection );

That's always where I screw up when implementing POM (and occasionally lighting). Easy to get wrong and even easier to overlook. :)




#5149463 Screenshot of your biggest success/ tech demo

Posted by Styves on 25 April 2014 - 02:18 PM

I'm responsible for the skin shader you see in this gif, as well as the eyes and the water caustics (in the staircase scene). There's also the hair shading, but you really only see it on the eyebrows (or, in the 1080p version, the face stubble), since the guy is bald. :)

 

Edit: Here's a high-res screenshot from the last shot.

 

[image: 10562499284_3a4f039dd5_o.gif]

 

Not coincidentally, I'm also responsible for the skin/eye tech for Crysis 3 (which was more last-gen friendly) - an earlier version of the same caustic tech was also present in the PC version.

 

[image: crykurtz.jpg]

 

 

I'm afraid most of the other things I worked on haven't been shown publicly and I don't want to risk breaking any NDAs. :)

 

I'd show you some personal projects, but they're at too early a stage to show. :)




#5147714 Problems with Normal Mapping

Posted by Styves on 17 April 2014 - 01:11 PM

That's definitely a bug. If you negate the Y component first and then apply the 0-1 to -1-1 scale, a Y that should end up as -1 (stored as 1 in the texture) comes out as 2*(-1) - 1 = -3 instead of -1. This is, as you'd expect, incorrect. :)
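For clarity, the correct order looks something like this (a GLSL-style sketch, variable names assumed):

// Decode to [-1, 1] first, then flip the Y component.
vec3 n = texture(normalMap, vTexCoord).xyz * 2.0 - 1.0;
n.y = -n.y;
n = normalize(n);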




#5145615 Deferred shading position reconstruction change

Posted by Styves on 09 April 2014 - 03:16 AM

If you have access to the camera's front vector, then you can do:

// Compute ray from vertex positions 
vec3 eyeRay = normalize( vertexPosition - eyePosition );

// Ray gets shorter towards edges of the screen (due to camera being at center)
// So we need to correct for this.
// Note: this is scaled by the far plane, but the depth can also be scaled (since both get multiplied onto the eyeRay anyway).
float eyeCorrection = ( 1.0 / dot( eyeRay, -cameraFront ) ) * farPlaneDist;
eyeRay *= eyeCorrection;

// Final world position
vec3 wPosition = eyePosition + eyeRay * depthVal;



#5144974 Bloom not smooth

Posted by Styves on 07 April 2014 - 04:17 AM

You don't have enough samples for the blur size you're using. Either add more samples to fill in the gaps, or lower the step size.

 

Tip: typical bloom isn't computed in a single pass, since getting a wide radius in one pass is too expensive for decent quality. Instead, perform a blur on a chain of downscaled images, starting with the highest-resolution one and using its result as the source texture for the next (lower-resolution) pass. Then composite the results from every pass at the end and apply that to the image as bloom.

 

Something like:

 

  • Perform bloom threshold if needed (which you might be doing already).
  • Blur to the first texture (1/2 or 1/4 resolution).
  • Blur to the second texture, using the previous result as input (1/8 resolution).
  • Continue for the # of passes you want (this depends on the radius you want to achieve). Each pass is half the resolution of the last.
  • Take all the textures and add them together (if you want more accurate results, energy conservation can be applied); a rough composite sketch follows below.
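As an illustration of that final composite step, something like this GLSL fragment shader would do (texture names and weights are assumptions, not from the original post; the weights sum to 1 for rough energy conservation):

uniform sampler2D uBloomHalf;     // result of the first (highest-res) blur pass
uniform sampler2D uBloomQuarter;  // result of the next blur pass
uniform sampler2D uBloomEighth;   // result of the last blur pass

in vec2 vTexCoord;
out vec4 outColor;

void main()
{
    // Each pass is lower resolution, so bilinear filtering widens it on screen.
    vec3 bloom = texture(uBloomHalf,    vTexCoord).rgb * 0.5
               + texture(uBloomQuarter, vTexCoord).rgb * 0.3
               + texture(uBloomEighth,  vTexCoord).rgb * 0.2;

    outColor = vec4(bloom, 1.0);
}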



#5141964 GLSL normal mapping strange behaviour

Posted by Styves on 25 March 2014 - 05:05 AM

First things first: you're not even using the TBN! :D You need to take the TBN and multiply it with your normal-map sample, otherwise what's the point? :P

 

A side note: I don't trust your TBN calculation, since it doesn't use the actual triangle data but some arbitrary directions. It isn't a trustworthy approach, and even if you use it I wouldn't expect proper results from it.

 

Instead, try this in your fragment shader (pulled from my engine, you're welcome :)):

mat3 ComputeCotangentFrame( vec3 vNormal, vec3 vPosition, vec2 vTexCoord )
{
	vec3 ddxPos = dFdx(vPosition);
	vec3 ddyPos = dFdy(vPosition);
	vec2 ddxUV = dFdx(vTexCoord);
	vec2 ddyUV = dFdy(vTexCoord);

	vec3 vCrossVec1 = cross( ddyPos, vNormal );
	vec3 vCrossVec2 = cross( vNormal, ddxPos );
	vec3 vTangent = vCrossVec1 * ddxUV.x + vCrossVec2 * ddyUV.x;
	vec3 vBinormal = vCrossVec1 * ddxUV.y + vCrossVec2 * ddyUV.y;

	float fDotT = dot( vTangent, vTangent );
	float fDotB = dot( vBinormal, vBinormal );
	float fInvMax = 1.0 / sqrt(max(fDotT, fDotB));

	return mat3( vTangent * fInvMax, vBinormal * fInvMax, vNormal );
}

Inputs are vertex normal, world position and texture coordinates.
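For reference, using it might look something like this (names like normalMap and vWorldPos are assumptions):

// Build the per-pixel TBN and transform the sampled tangent-space normal with it.
mat3 TBN = ComputeCotangentFrame( normalize(vNormal), vWorldPos, vTexCoord );
vec3 nTangent = texture( normalMap, vTexCoord ).xyz * 2.0 - 1.0;
vec3 nWorld = normalize( TBN * nTangent );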

 

 

Next, why do you apply a transform matrix to the vertex position in the fragment shader? This is most definitely better done in the vertex shader and passed in.

 

And finally, what's the point of this:

vec3 normal;
	if(fragNormalMapping[0] == 1.0f){
		normal = normalize(texture(normalMap, fragTexCoord).xyz * 2.0 - 1.0);
	}else{
		normal = normalize(texture(normalMap, fragTexCoord).xyz * 2.0 - 1.0);
	}

You're doing the same thing twice regardless of the branch, so why bother?

 

 

Once you get everything working, you can look into replacing the cotangent stuff I supplied with standard tangent/binormal calculations on the CPU, passed in as vertex attributes. That is, unless you don't mind the performance overhead of computing it all on the fly per-pixel (you'll gain some memory savings by not storing them ;D), or you're doing all shading/materials on the GPU in a procedural deferred pass like I am... which I highly doubt anyone is doing. :P




#5133141 [Cg] Can this perpixel phong lighting shader be optimised?

Posted by Styves on 20 February 2014 - 08:41 PM

Read this. It should help you squeeze out some of your redundant shader instructions.




#5125044 Hypothesizing a new lighting method.

Posted by Styves on 20 January 2014 - 07:44 AM

I'd also like to put it on record that shadow mapping with deferred shading is no harder than it is with forward shading; in fact, many games use a very simple deferred approach for shadows (Crysis 1, for example) rather than applying shadows in the forward pass.




#5124185 Pack two 8 bits into a 16 bits channel _ HLSL

Posted by Styves on 16 January 2014 - 12:56 PM

Coming up with this on the fly, but assuming your inputs are both in the range 0-1, you should be able to pack the 2 values by storing one value as integer, and the second as fractional, into your alpha:

float Pack( in float a, in float b )
{
  return floor(a * 255) + b; // stores A as integer, B as fractional
}

And to unpack, you can use:

void Unpack( in float c, inout float a, inout float b )
{
  a = floor(c) / 255; // removes the fractional value and scales back to 0-1
  b = frac(c); // removes the integer value, leaving B (0-1)
}

Alternatively, this might be faster for unpacking:

void Unpack( in float c, inout float a, inout float b )
{
  b = frac(c); // removes the integer value, leaving B (0-1)
  a = (c - b) / 255; // removes the fractional value (B) and scales back to 0-1
}

I haven't tested these at all, but I think they should work. :)

 

 

Edit: Just re-read the initial post and noticed you're using 0-255 uint values instead of 0-1 floats. The concept is the same: you can either scale from 0-255 down to 0-1, or just invert the scaling I applied (instead of scaling A up, scale B down).




#5109906 Artist wants Blinn, Phong, etc but how do I code those?

Posted by Styves on 17 November 2013 - 06:02 AM

Fresnel's law in physics determines how much reflection vs. refraction occurs when light hits a surface.
Very simply speaking, we can say reflection = specular and refraction = diffuse.
In graphics, we usually use Schlick's approximation, which is:
F = f + (1-f)*(1-N•V)^5
(where f is your original spec intensity and F is basically a new spec intensity)
This is then used to weight the specular and diffuse terms - at glancing angles (where N•V is low), much more light is reflected than refracted.
N.B. Physically speaking, diffuse should also be weighted by the inverse Fresnel term, but this is often skipped.

 

Note for theagentd: for physically based BRDFs, you don't want to use N•V in your Fresnel equation, as that's just a view-space Fresnel and isn't physically based. Instead, use L•H (or V•H), which measures the angle against the half-vector between the viewer and the light and is more accurate.
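In GLSL, Schlick's approximation is a one-liner; here's a sketch (f0 being the base specular intensity, with the L•H variant noted in the comment):

// Schlick's approximation of the Fresnel term.
float FresnelSchlick( float f0, float cosTheta )
{
    return f0 + (1.0 - f0) * pow(1.0 - cosTheta, 5.0);
}

// View-space version:          F = FresnelSchlick(f0, clamp(dot(N, V), 0.0, 1.0));
// Physically based BRDF usage: vec3 H = normalize(L + V);
//                              F = FresnelSchlick(f0, clamp(dot(L, H), 0.0, 1.0));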




#5103096 Rain droplets technique

Posted by Styves on 21 October 2013 - 05:51 AM

A much better approach than a static scrolling texture (which is likely what's being used here) is a very simple particle system. Spawn N particles and make them slide down the screen over time with some variation (you can just render some quads and move them with time). Use some extinction so each particle fades away after a short period; once that period is over, spawn a new one. This way you always have exactly N particles, no more, no less.

 

Render this to a buffer using simple vignetting on each particle quad, to get a heightmap of the drops (you can use an ellipsoid shape for the vignette, for a better drop-like look). Then to apply it to the scene, compute a normal from it and use it to displace the screen coordinates when sampling the screen image (same pass).
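A rough GLSL sketch of that apply pass, assuming the droplet heightmap was rendered into uDropHeight (all names and the uRefractionScale constant are assumptions):

uniform sampler2D uSceneColor;   // the rendered scene
uniform sampler2D uDropHeight;   // droplet heightmap from the particle pass
uniform float uRefractionScale;  // strength of the distortion

in vec2 vTexCoord;
out vec4 outColor;

void main()
{
    vec2 texel = 1.0 / vec2(textureSize(uDropHeight, 0));

    // Approximate the slope of the height field with forward differences.
    float h  = texture(uDropHeight, vTexCoord).r;
    float hx = texture(uDropHeight, vTexCoord + vec2(texel.x, 0.0)).r;
    float hy = texture(uDropHeight, vTexCoord + vec2(0.0, texel.y)).r;
    vec2 slope = vec2(hx - h, hy - h);

    // Use the slope as a fake refraction offset into the scene image.
    outColor = texture(uSceneColor, vTexCoord + slope * uRefractionScale);
}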

 

You can vary the amount of slide based on the camera's view direction, so the drops don't slide when you look up. You can also use it to hide drops when you're looking down. Another nice touch is using pre-blurred textures for out-of-focus drops (you can even merge them into mip maps for easy lookups).

 

 

This shouldn't take more than a fraction of a millisecond, since the geometry is simple, the fill rate is very low, and the texture you use for the drops can be quite small. If draw calls become a problem, you can also batch the particles into a vertex buffer and expand them on the GPU to lower the draw call count.




#5100769 tutorial deferred shading? [Implemented it, not working properly]

Posted by Styves on 12 October 2013 - 05:06 AM

Make sure your render targets are of sufficient precision. If you're storing position in ARGB8 then you're going to have a bad time. ;)




#5100463 Improving Graphics Scene

Posted by Styves on 11 October 2013 - 01:56 AM

It's easy to sum up good rendering in one word for me: subtlety. I see people complain about motion blur all the time in several games, but they never mention it about Portal 2. Why is that? Because it's subtle, they don't notice it's there but it adds to the feeling of movement. The same goes for everything else. People will say things like "bloom sucks!", when they're really saying "badly implemented bloom sucks!" without realizing it.

 

Matias had a good list, I'd definitely follow that. :)

 

 

Quote:
"There is a difference though in 'realistic look' and 'realistic behaviour'. Ray tracing for example I consider a realistic behaviour, while it does not provide the realistic visual, rather complicates it by consuming resources. Then as well, depth of field is a realistic behaviour of camera phenomena, not real eye experience phenomena. I do not see objects blurred when they are far away. Am I normal?"

 

Close one eye and focus on your finger. I guarantee you'll see a blurry background. ;)

 

Eyes do have this DOF effect, but because we have two of them and only the center of our vision (whatever we're focusing on) is in focus, we rarely notice it.

 

In fact, the same thing applies to bad vision (I wear glasses). Imagine setting your camera to focus at ~1-2 feet and then looking around with it: that's exactly the behavior I get with one eye. My eyes don't just magically blur stuff, they simply can't focus beyond a certain distance. :)

 

It really irks me when this argument comes up, especially when we're talking about realistic images. People are very used to seeing the world through a camera because of pictures and movies, so when they see a realistic render that has lens flares and vignetting, they don't yell "FAKE!".

 

IMO, it's a silly idea to try to replicate an eye. There's so much more going on, with diffraction from eyelashes and such, that it just doesn't make sense. Besides, you're already looking through eyes anyway. ;)




#5099363 HLSL Vignetting [SOLVED]

Posted by Styves on 07 October 2013 - 12:35 PM

Try using input.Tex * 2.0 - 1.0 instead of input.Tex - 0.5. ATM you're using a [-0.5, 0.5] range, but you should be using [-1, 1]. At least that's what I've always done for vignetting and it works, so maybe it'll help. :)





