
#5294057 (Physically based) Hair shading

Posted by on 29 May 2016 - 10:51 AM

Hey guys.

I'm playing around with hair rendering these days and have implemented the shading model used in AMD's TressFX sample, which I believe is the Kajiya-Kay model.

However, having done that, I'm not really satisfied with the results, especially after switching to a physically based GGX specular for my regular shading.

The UE4 team has recently been working on hair and presented something that can be seen in this video: https://youtu.be/toLJh5nnJA8?t=2598 and it looks really nice.

I also have no idea how to use my image-based lighting with this kind of BRDF, since sampling my GGX-convolved probes the normal way looks very wrong.


Since the Kajiya-Kay diffuse term uses something like

float cosTL = saturate(dot(hairTangent, LightDirection)); // dot(T, L) instead of dot(N, L)

I was thinking maybe I could use the tangent as the lookup vector for the irradiance map, and something similar for the glossy environment map, but that didn't seem to work out of the box either.
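
For reference, here's a minimal sketch of the full Kajiya-Kay diffuse and specular terms as I understand them (parameter names and the specular exponent are placeholders, not taken from the TressFX sample):

// Kajiya-Kay sketch: diffuse falls off with sin(T, L), specular uses the
// light and view directions projected onto the hair tangent.
float3 KajiyaKay(float3 T, float3 L, float3 V, float3 diffuseColor, float3 specColor, float specPower)
{
    float cosTL = dot(T, L);
    float sinTL = sqrt(saturate(1.0f - cosTL * cosTL)); // sin(T, L)
    float cosTV = dot(T, V);
    float sinTV = sqrt(saturate(1.0f - cosTV * cosTV)); // sin(T, V)

    float diffuse  = sinTL; // replaces the usual N dot L
    float specular = pow(saturate(cosTL * cosTV + sinTL * sinTV), specPower);

    return diffuseColor * diffuse + specColor * specular;
}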


Brian Karis told me on Twitter that he's using an approximation that Square Enix uses, but I couldn't find anything about it on the internet.

Does anyone know of a physically based hair model (Marschner?) that can be used for real-time shading?

And does anybody have an idea how to solve the IBL problem?


Update: After watching the video again I found the paper his shading model is based on (not sure I'll be able to translate that into code, but I'll try).


#5287611 What is the relationship between area lights and reflections (SSR/local/global)

Posted by on 19 April 2016 - 09:21 AM

Area lights and punctual light sources (spot, directional, point) basically try to approximate a real-time reflection of a single light source, so it's not wrong to think of them as the same thing: what they are trying to achieve is identical. However, since "everything reflects" in some way or another, you'd have to evaluate every single pixel in your scene as a sort of light source, which is essentially what you do in image-based lighting: capture the environment and evaluate the reflected light from/into all directions.

That, however, can only be done efficiently in an offline preprocess, so for real-time reflections that leaves us with either planar reflections or screen-space techniques.

#5276199 BRDFs in shaders

Posted by on 17 February 2016 - 03:39 PM

E is the irradiance measured at a surface location (in your shading, that would be the pixel of the geometry you are shading).

E_L is the irradiance measured not at the surface location but on a unit plane perpendicular to the light direction. The L subscript already tells you that it is the irradiance corresponding to the light source.


edit: What you wrote is correct, although it's kind of confusing to think about shading a surface location as measuring it on a plane. This plane is imaginary, but perpendicular to the normal vector at that location, which is where the N dot L term comes from: it scales the amount of light depending on the angle at which the light hits this plane.


Here are a few quotes from RealTimeRendering:


- "The emission of a directional light source can be quantified by measuring power through a unit area surface perpendicular to L. This quantity, called irradiance, is equivalent to the sum of energies of the photons passing through the surface in one second".

Note: He's talking about E_L here, the irradiance measured perpendicular to the light direction L.


- "Although measuring irradiance at a plane perpendicular to L tells us how bright the light is in general, to compute its illumination on a surface, we need to measure irradiance at a plane parallel to that surface..."

Note: He goes on to talk about how the N dot L factor is derived...


On page 103 you can see that the irradiance E is equal to the irradiance E_L times the cosine of the angle between the surface normal N and the light direction L:

E = E_L * cos_theta_i


Looking at the equation in your original post, it now makes sense, because the BRDF translates into:

f(l,v) = outgoing_radiance / irradiance

i.e. the ratio between the outgoing light into a small set of directions (in this case towards our sensor/eye, the vector V) and the incoming light at this surface location (or rather onto a plane perpendicular to the surface normal N).


So finally, to translate this into actual HLSL code, a very simple lighting equation could look like this:

float3 E_L = light_intensity * light_color;
float cos_theta_i = saturate(dot(N, -L)); // negate L so the vector points from the surface towards the light
float3 E = E_L * cos_theta_i;

// We actually want to output outgoing radiance here, but for this very simplified/approximated
// BRDF we can treat the two as equal, since we assume diffuse light is reflected equally in all directions
return E;

which is the lambertian shading / BRDF :)
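
And if you want the properly normalized Lambertian with an actual surface color, it's only a small step further (a sketch; albedo is an assumed material color, the other names are the same as in the snippet above):

static const float PI = 3.14159265f;

// Normalized Lambertian sketch: the BRDF is albedo / PI, so the outgoing
// radiance becomes f(l, v) * E = (albedo / PI) * E_L * cos_theta_i.
float3 E_L = light_intensity * light_color;
float cos_theta_i = saturate(dot(N, -L));
float3 L_o = (albedo / PI) * E_L * cos_theta_i;
return L_o;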

#5275405 BRDFs in shaders

Posted by on 12 February 2016 - 08:28 AM


I'm looking for better (easier) explanations about this topic, rather than the book yoshi_t mentioned. Any idea?


okay let's see....


Irradiance is the amount of light energy arriving at a surface location from all incoming directions (in the literature it is usually denoted by the letter E).

Now you may be confused and ask: "But if irradiance (E) is measured at a single location, what is E_L then, i.e. the irradiance measured on a surface perpendicular to L?"

The irradiance perpendicular to L (E_L) is the amount of energy passing through a unit-sized plane (don't get confused by this, it's just a convention that makes the measurement easier). You can think of it as the amount of energy the light source itself emits: think of a light bulb emitting light with some intensity in a direction. That is your E_L.

Radiance, on the other hand, is similar to irradiance (and remember that radiance can be incoming or outgoing energy!) but measured not over all directions, only over a limited, focused set of them (think of a camera lens focusing light into a small set of directions; that set is the solid angle).

The equation above shows outgoing radiance (L_o), which is the light reflected from your surface location into a certain set of directions.


I hope that is somewhat easier to understand... If it's still a little too hard to grasp, here's the short version:


1. Irradiance = light energy at a single location from all incoming directions

    Radiance = light energy at a single location from a small set of directions (a solid angle)

    Solid angle = a small set of directions in 3D space (think of a piece of cake)


2. Irradiance measured on a plane perpendicular to the light direction = light flowing through a unit-sized plane (for measurement's sake); it basically tells you how much energy the light is emitting/transmitting.


3. If you've done anything that involves light or texture color, you've almost certainly made use of these equations (even if you didn't know it).

Radiometry is just a way to explain / define those things mathematically and physically.



The problem with radiometry is often that the "basics" are confusing, since they are already based on simplifications or approximations of more advanced equations.

Maybe just keep going and see if it starts to make more sense further on...

For example, when the book later explains how irradiance is obtained by summing up incoming radiance over all directions, it made a lot more sense to me.
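
As a rough numerical picture of that "summing radiance over all directions" idea, here's a sketch (the sample directions and radiance values are made-up inputs, assumed to be uniformly distributed over the hemisphere around N):

static const int NUM_SAMPLES = 64;
static const float PI = 3.14159265f;

// Irradiance as a sum of incoming radiance over the hemisphere:
// E = sum over the hemisphere of L_i(w) * cos(theta) * dw
float3 ComputeIrradiance(float3 N, float3 sampleDirections[NUM_SAMPLES], float3 sampleRadiance[NUM_SAMPLES])
{
    float dw = 2.0f * PI / NUM_SAMPLES; // solid angle per sample (uniform distribution)
    float3 E = 0.0f;
    for (int i = 0; i < NUM_SAMPLES; ++i)
    {
        E += sampleRadiance[i] * saturate(dot(N, sampleDirections[i])) * dw;
    }
    return E;
}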

#5260087 Equation for light reflection (diffuse color)

Posted by on 02 November 2015 - 04:17 AM

Probably just repeating what Alundra said but if you think about it you have the following:

- A light that has a specific wavelength (LightColor) and intensity (LightIntensity)

- A material that has specific properties that absorb some amount of light and reflects the rest of it back (MaterialDiffuseColor)


The only thing left is to put it together and combine it with the cosine of the angle between the incident light ray and the surface normal of your material (let's say a flat, grass-textured surface), which describes how much light hits the surface depending on that angle.


So the equation becomes:

float cosTheta_i = saturate(dot(surfaceNormal, -LightDirection)); // negate LightDirection so it points from the surface towards the light
float3 FinalDiffuse = (LightIntensity * LightColor) * MaterialDiffuseColor * cosTheta_i;

#5248573 Post Processing Causing Terrible Swimming

Posted by on 24 August 2015 - 11:36 AM

Not quite sure, since you haven't shown any pictures or videos of the effect you're describing, so I can only guess that you're talking about downsampling/upsampling artifacts?

If so, I'd try to downsample or upsample using something more "sophisticated" than a plain bilinear filter. Try a Gaussian blur, for example.
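
For example, one pass of a simple separable Gaussian could look roughly like this (texture, sampler and weight values are just placeholders, not from any specific engine):

Texture2D SourceTexture;            // placeholder resource names
SamplerState LinearClampSampler;

// One horizontal pass of a 5-tap separable Gaussian blur; run a second
// pass with vertical offsets to complete the blur.
static const float weights[3] = { 0.38774f, 0.24477f, 0.06136f };

float4 BlurHorizontal(float2 uv, float2 texelSize)
{
    float4 color = SourceTexture.Sample(LinearClampSampler, uv) * weights[0];
    for (int i = 1; i < 3; ++i)
    {
        float2 offset = float2(texelSize.x * i, 0.0f);
        color += SourceTexture.Sample(LinearClampSampler, uv + offset) * weights[i];
        color += SourceTexture.Sample(LinearClampSampler, uv - offset) * weights[i];
    }
    return color;
}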

#5247366 PBR Specular reflections, next question(s)

Posted by on 18 August 2015 - 05:49 AM

I first saved them as DDS - DXT3, but unless I'm mistaking, that gives only 8 bits per pixel


You should make sure to keep your HDR data intact as much as possible. I capture my scene as R16G16B16A16 and store it using BC6H_UF16 (unsigned), which is a compressed, half-precision (16-bit float) HDR format.



You can use the DirectXTex library to do that for you: https://directxtex.codeplex.com/

#5246924 PBR Specular reflections, next question(s)

Posted by on 16 August 2015 - 10:49 AM

Can't say one method looked better than the other, it's just *different* -and therefore confusing!. But since I'm reading GGX is becoming more and more standard, I wonder why one would chose one method over the other...? It sounds a bit weird to me that reflected light becomes stronger, read multiplies with a value higher than 100%.

If you look at the NDF lobe you can see that GGX has a broader "tail", i.e. a longer falloff, which makes it look more realistic than regular Blinn-Phong.

Take a look at the disney brdf explorer: http://www.disneyanimation.com/technology/brdf.html

This is a comparison taken from a Disney paper (left: GGX, right: Beckmann):



Not sure what you mean by the second part...

If both are properly normalized they should behave the same in the following sense:

Let's say you have X amount of energy hitting the surface. On an optically flat surface, most of the light is focused on (or around) a small spot because it doesn't scatter. If the same amount of energy hits a very rough surface, the light is spread across the surface, but the total amount of reflected energy is still the same. So it makes sense that if you take all that spread-out energy and concentrate it, you get a stronger single highlight (but no difference in total energy!).



3. to do IBL correctly you need to sample every pixel in the probe (no mipmaps!), treat them all as a directional light source

Importance sampling is a technique that tries to reduce the number of samples needed by placing random samples in the area (or directions) most likely to contain the important contributions. In the case of IBL you sample according to the probability density function (pdf) of your NDF for a given roughness value, which biases the sample directions towards the peak of the lobe. But since you're using AMD CubeMapGen to generate your IBL probes, it should be fine.
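
For reference, the commonly published form for generating one GGX sample direction looks roughly like this (sketched from memory, so treat the details with care):

// GGX importance sampling sketch: turns a uniform random pair Xi into a
// half-vector biased towards the peak of the GGX lobe around N.
float3 ImportanceSampleGGX(float2 Xi, float roughness, float3 N)
{
    float a = roughness * roughness;

    float phi = 2.0f * 3.14159265f * Xi.x;
    float cosTheta = sqrt((1.0f - Xi.y) / (1.0f + (a * a - 1.0f) * Xi.y));
    float sinTheta = sqrt(1.0f - cosTheta * cosTheta);

    float3 H = float3(sinTheta * cos(phi), sinTheta * sin(phi), cosTheta);

    // Build a tangent frame around N and transform H into world space
    float3 up = abs(N.z) < 0.999f ? float3(0, 0, 1) : float3(1, 0, 0);
    float3 tangentX = normalize(cross(up, N));
    float3 tangentY = cross(N, tangentX);
    return tangentX * H.x + tangentY * H.y + N * H.z;
}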

Someone correct me if I said something wrong :)



4. yes, *pure/clean* metals have no diffuse.
All right. And for non-metals, it basically boils down to using 97% (1 - 0.03) of the diffuse (Lambert) light? It's little effort, but I'd say this reduction is barely noticable, right? Just asking to see if I'm thinking straight.

Well, it's a cheap approximation for keeping your diffuse and specular terms energy conserving. In most cases it won't be very noticeable, but you should still do it.

In my experience, dealing with overly strong diffuse light (especially on organic materials like skin) can be more difficult than dealing with specular, so you want to decrease the diffuse to make the specular a tad more prominent.
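
In code the energy conservation trick is as simple as something like this (F0 = 0.03 is the assumed non-metal reflectance from the quote above, albedo is an assumed material color):

// Cheap energy conservation sketch: whatever goes into specular is
// taken away from diffuse (F0 = 0.03 assumed for non-metals).
float F0 = 0.03f;
float3 diffuse = albedo * (1.0f - F0); // roughly 97% of the Lambert term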

#5233550 Difference between camera velocity & object velocity

Posted by on 08 June 2015 - 10:34 AM

Whenever I read up on motion blur in presentations or papers, it seems people always separate camera motion blur (velocity) from object motion blur (velocity). I'm wondering why that is?

I don't quite see how the velocity computed by sampling the Z/W depth, reconstructing the world-space position and then reprojecting it into the previous frame with the PrevViewProj matrix is any different from the velocity computed by simply transforming your vertices (in your geometry pass's VS) by both the WorldViewProj and PrevWorldViewProj matrices (other than that the latter includes the world transform, i.e. object translation, rotation and scaling).
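
In shader terms, this is a rough sketch of what I mean by the camera-only velocity (DepthTexture, PointClampSampler, InvViewProj and PrevViewProj are my own assumed names, and mul() is used with the row-vector convention):

float2 ComputeCameraVelocity(float2 uv)
{
    float depth = DepthTexture.Sample(PointClampSampler, uv).r;

    // Reconstruct the world-space position from the depth buffer
    float4 clipPos = float4(uv.x * 2.0f - 1.0f, (1.0f - uv.y) * 2.0f - 1.0f, depth, 1.0f);
    float4 worldPos = mul(clipPos, InvViewProj);
    worldPos /= worldPos.w;

    // Reproject into the previous frame and take the screen-space delta
    float4 prevClip = mul(worldPos, PrevViewProj);
    float2 prevUV = prevClip.xy / prevClip.w * float2(0.5f, -0.5f) + 0.5f;

    return uv - prevUV;
}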


Why do people separate the two? Is there any particular reason?

#5232120 Need Help fixing my timestep

Posted by on 01 June 2015 - 02:55 AM

@DaveSF yea I'm using a slerp for the orientation in the interpolation step


@Buckeye That's what I'm still trying to figure out: whether this is conditional or not. As far as I can tell it's not, but I haven't had much time to test these last few days... Basically I tried three test cases, two of them displaying the rendered velocity on screen:


1. Moving the camera left and right (no rotation) with a plane in front of me. The calculated velocity is constant (I double-checked it), so the result should be a constant color during the movement, but it is not: you can see it flicker from time to time. (And no, it's not an issue with the rendered velocity calculation, because the stuttering can be observed in the regular rendering, too.)


2. An object rotating around the Y axis, driven only by "elapsedTime", which is accumulated inside the update loop from my deltaTime.

The output here should also be constant, but it is not. Again the flickering can be observed.


3. Standing in front of a plane that has a UV animation, where a symbol is "scrolling down". This time of course there's no velocity to be calculated but I believe the stutter can be observed here too...


What I believe right now is that the stuttering originates from the inner update while loop, because the more intense the work inside it becomes, the worse the stuttering/flickering gets. E.g. if I output the acc variable to my GUI each update step (which creates lag), the flickering becomes very prominent...


UPDATE: I just tried removing the inner while loop entirely to see how it behaves. If I don't limit my physics, the motion is perfectly smooth, at least as long as the framerate is around the same as the update frequency.

As soon as I introduce a time difference variable (currTime - prevTime) as deltaTime, the stuttering reappears.


I don't think there's anything wrong with my method of getting the current timestamp, but here's the code anyway:

const double GameTime::GetTime()
{
	if (start == 0)
		return 0.0;

	__int64 counter = 0;
	QueryPerformanceCounter((LARGE_INTEGER*)&counter); // read the current tick count
	return (double)((counter - start) / double(frequency));
}

#5224144 Localizing image based reflections (issues / questions)

Posted by on 18 April 2015 - 04:28 AM

I'm currently trying to localize my IBL probes but have come across some issues that I couldn't find any answers to in any paper or otherwise.


1. How do I localize my diffuse IBL (irradiance map)? The same method as for specular doesn't really seem to work.

    The locality doesn't really come through for the light bleed.


As you can see here, the light bleed is spread over a large portion of the plane even though the rectangular object is very thin.

Also, the reddish tone doesn't get noticeably stronger the closer the sphere is moved to the red object.

If I move the sphere off to the side of the thin red object, the red reflection is still visible, so there's no real locality to it.



2. How do I handle objects that aren't rectangular, or objects that aren't exactly at the edge of the AABB that I intersect? (Or am I missing a line or two to do that?)




As you can see here, the reflection of the rectangular red object works perfectly (but then again only if it's exactly at the edge of the AABB).

If an object is something like a sphere, or is moved closer to the probe (so not at the edge), the reflection is still completely flat and projected onto the side of the AABB.

Here's the code snippet showing how I localize my probe reflection vector...



float3 LocalizeReflectionVector(in float3 R, in float3 PosWS,
                                in float3 AABB_Max, in float3 AABB_Min,
                                in float3 ProbePosition)
{
	// Find the ray intersections with the box planes
	float3 FirstPlaneIntersect = (AABB_Max - PosWS) / R;
	float3 SecondPlaneIntersect = (AABB_Min - PosWS) / R;

	// Get the furthest of these intersections along the ray
	float3 FurthestPlane = max(FirstPlaneIntersect, SecondPlaneIntersect);

	// Find the closest far intersection
	float Distance = min(min(FurthestPlane.x, FurthestPlane.y), FurthestPlane.z);

	// Get the intersection position
	float3 IntersectPositionWS = PosWS + Distance * R;

	return IntersectPositionWS - ProbePosition;
}
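
And this is roughly how I then use it when sampling the probe (the roughness-based mip selection and the resource names are my own, just for illustration):

// Usage sketch: localize the reflection vector, then sample the probe
// with a roughness-based mip level (my own approximation).
float3 R = reflect(-V, N);
float3 LocalR = LocalizeReflectionVector(R, PosWS, AABB_Max, AABB_Min, ProbePosition);
float Mip = Roughness * (NumEnvMapMips - 1);
float3 SpecularIBL = EnvironmentMap.SampleLevel(TrilinearSampler, LocalR, Mip).rgb;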

#5203884 Motion Blur flickering highlights unavoidable ?

Posted by on 13 January 2015 - 04:59 AM

With motion blur: https://www.youtube.com/watch?v=IV7HLkXjy2I&feature=youtu.be

Without motion blur: https://www.youtube.com/watch?v=h7Ej5KWKNHM


I was wondering if this artifact/flickering is unavoidable with these common types of motion blur techniques?

Because I've seen this happening in several other applications (e.g. CryEngine).

Reducing the velocity scale reduces the flickering somewhat, but never completely fixes it.

#5199658 Rendering blurred progress lines

Posted by on 23 December 2014 - 02:18 AM

Looks like a 2D animation to me.

You could just blur it manually in Photoshop.

Or maybe do something fancier with UI middleware like Flash (I think UE3 does its UI animations with Scaleform).

#5185341 Forward+ Rendering - best way to updating light buffer ?

Posted by on 06 October 2014 - 12:44 PM

Ah, so I can basically just do the same as with constant buffers! :)

Having some issues copying the data due to heap corruption but that might be related to something else.

Is this the correct way of doing it ?


Update: yep, that heap corruption was something else. It seems to be working now :)

if (!pointLights_center_and_radius.empty())
{
	this->numActiveLights = static_cast<unsigned int>(pointLights_center_and_radius.size());

	D3D11_MAPPED_SUBRESOURCE pointLightResource = {};
	if (SUCCEEDED(this->context->Map(this->pointLightBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &pointLightResource)))
	{
		DirectX::XMFLOAT4* pData = (DirectX::XMFLOAT4*)pointLightResource.pData;
		memcpy(pData, &pointLights_center_and_radius[0], sizeof(DirectX::XMFLOAT4) * this->numActiveLights);
		this->context->Unmap(this->pointLightBuffer, 0);
	}
}

#5172591 Alpha-Test and Forward+ Rendering (Z-prepass questions)

Posted by on 10 August 2014 - 07:17 AM


Alpha-tested geometry tends to mess up z-buffer compression and hierarchical z representations. So usually you want to render it after your "normal" opaques, so that the normal geometry can get the benefit of full-speed depth testing.


As for why they return a color from their pixel shader...I have no idea. In our engine we use a void return type for our alpha-tested depth-only pixel shader.


Could it be beneficial to skip z prepass for alpha tested geometry compleatly to avoid that z-buffer compression mess ups?


The issue here is that you can't, since you need the depth information for the light culling in the compute shader.

By the way, it seems they have alpha-to-coverage enabled in the AMD sample, so maybe the color return type has something to do with that?
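
For comparison, a depth-only alpha-tested pixel shader with a void return type could look roughly like this (a sketch with placeholder resource names, not the AMD sample's actual code):

Texture2D DiffuseTexture;           // placeholder resource names
SamplerState LinearWrapSampler;

// Depth-only pixel shader for alpha-tested geometry: clip() discards the
// fragment below the alpha threshold, and nothing is written to a render target.
void DepthOnlyAlphaTestPS(float4 position : SV_Position, float2 uv : TEXCOORD0)
{
    float alpha = DiffuseTexture.Sample(LinearWrapSampler, uv).a;
    clip(alpha - 0.5f);
}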