
lipsryme

Member Since 02 Mar 2010
Offline Last Active Jan 15 2016 10:07 AM

#5260087 Equation for light reflection (diffuse color)

Posted by lipsryme on 02 November 2015 - 04:17 AM

Probably just repeating what Alundra said but if you think about it you have the following:

- A light that has a specific wavelength (LightColor) and intensity (LightIntensity)

- A material that has specific properties that absorb some amount of the light and reflect the rest of it back (MaterialDiffuseColor)

 

The only thing left now is to put it all together and combine it with the cosine of the angle between the incident light ray and the surface normal of your material (let's just say a flat grass-textured surface), which describes how much light hits the surface at that angle.

 

So the equation becomes:

float cosTheta_i = saturate(dot(surfaceNormal, -LightDirection)); // -LightDirection points from the surface point towards the light
float3 FinalDiffuse = (LightIntensity * LightColor) * MaterialDiffuseColor * cosTheta_i;



#5248573 Post Processing Causing Terrible Swimming

Posted by lipsryme on 24 August 2015 - 11:36 AM

Not quite sure, as you haven't shown any pictures or videos of the effect you're describing, so I can only guess that you're talking about downsampling/upsampling artifacts?

If so, I'd try to downsample or upsample using something more "sophisticated" than a plain bilinear scale. Try a Gaussian blur, for example; a rough sketch of a separable blur pass follows below.
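For illustration, a minimal separable Gaussian blur pass might look like this (run it once horizontally and once vertically); the texture, sampler and constant-buffer names are placeholders, not from the original thread:

Texture2D SourceTex : register(t0);
SamplerState LinearClamp : register(s0);

cbuffer BlurParams : register(b0)
{
	float2 TexelSize;  // 1.0 / texture dimensions
	float2 Direction;  // (1, 0) for the horizontal pass, (0, 1) for the vertical pass
};

float4 GaussianBlurPS(float4 pos : SV_Position, float2 uv : TEXCOORD0) : SV_Target
{
	// 5-tap Gaussian weights (roughly sigma = 2), normalized to sum to 1
	static const float weights[5] = { 0.227027f, 0.194594f, 0.121621f, 0.054054f, 0.016216f };

	float3 result = SourceTex.Sample(LinearClamp, uv).rgb * weights[0];
	for (int i = 1; i < 5; ++i)
	{
		float2 offset = Direction * TexelSize * i;
		result += SourceTex.Sample(LinearClamp, uv + offset).rgb * weights[i];
		result += SourceTex.Sample(LinearClamp, uv - offset).rgb * weights[i];
	}
	return float4(result, 1.0f);
}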




#5247366 PBR Specular reflections, next question(s)

Posted by lipsryme on 18 August 2015 - 05:49 AM


I first saved them as DDS - DXT3, but unless I'm mistaken, that gives only 8 bits per pixel

 

You should make sure to keep your HDR data intact as much as possible. I capture my scene as R16G16B16A16 and store it using BC6H_UF16 (unsigned), which is a compressed half-precision (16-bit float) HDR format.

https://msdn.microsoft.com/en-us/library/windows/desktop/hh308952(v=vs.85).aspx

 

You can use the DirectXTex library to do that for you: https://directxtex.codeplex.com/




#5246924 PBR Specular reflections, next question(s)

Posted by lipsryme on 16 August 2015 - 10:49 AM


Can't say one method looked better than the other, it's just *different* - and therefore confusing! But since I'm reading that GGX is becoming more and more standard, I wonder why one would choose one method over the other...? It sounds a bit weird to me that reflected light becomes stronger, read: multiplied by a value higher than 100%.

If you look at the NDF lobe you can see that GGX has a broader "tail" (falloff), which makes it look more realistic than regular Blinn-Phong.

Take a look at the Disney BRDF Explorer: http://www.disneyanimation.com/technology/brdf.html

This is a comparison taken from a Disney paper (left: GGX, right: Beckmann):

[Image: GGX vs. Beckmann specular lobe comparison from the Disney paper]

 

Not sure what you mean by the second part...

If both are properly normalized they should behave the same in the following sense:

Let's say you have X amount of energy hitting the surface. If the surface is optically flat, most of the light is focused on (or around) a small spot because it didn't scatter. If you take the same amount of energy but let it hit a very rough surface, most of the light gets spread across the surface, but the amount of energy reflected is still the same. So in that sense it makes sense that if you take lots of energy that was spread out and concentrate it, you get a stronger single highlight (but no difference in total energy!).
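To make the normalization point concrete, these are the two distributions with their usual normalization factors (standard formulas, not code from this thread): as roughness goes down the lobe narrows and the peak value grows, but the integral over the hemisphere stays the same.

static const float PI = 3.14159265f;

// Normalized Blinn-Phong distribution
float D_BlinnPhong(float NdotH, float specPower)
{
	return ((specPower + 2.0f) / (2.0f * PI)) * pow(NdotH, specPower);
}

// GGX / Trowbridge-Reitz distribution (alpha = roughness^2 in the Disney/UE4 parameterization)
float D_GGX(float NdotH, float alpha)
{
	float a2 = alpha * alpha;
	float d  = NdotH * NdotH * (a2 - 1.0f) + 1.0f;
	return a2 / (PI * d * d);
}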

 

 


3. to do IBL correctly you need to sample every pixel in the probe (no mipmaps!), treating each one as a directional light source

Importance sampling is a technique that tries to reduce the number of samples needed by placing random samples in the area (or direction) that is most likely to contain the most important samples. So in the case of IBL you evaluate the probability density function (pdf) of your NDF at a certain roughness value, which gives you directions concentrated around the peak of the lobe. But since you're using AMD CubeMapGen to generate your IBL probes it should be fine.
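For reference, importance-sampling a GGX lobe looks roughly like this (adapted from the Karis course notes linked further down this page; Xi is a 2D quasi-random sample, e.g. from a Hammersley sequence):

static const float PI = 3.14159265f;

float3 ImportanceSampleGGX(float2 Xi, float Roughness, float3 N)
{
	float a = Roughness * Roughness;

	// Pick a half-vector direction whose probability follows the GGX lobe
	float Phi = 2.0f * PI * Xi.x;
	float CosTheta = sqrt((1.0f - Xi.y) / (1.0f + (a * a - 1.0f) * Xi.y));
	float SinTheta = sqrt(1.0f - CosTheta * CosTheta);

	float3 H = float3(SinTheta * cos(Phi), SinTheta * sin(Phi), CosTheta);

	// Transform from tangent space around the normal into world space
	float3 Up       = abs(N.z) < 0.999f ? float3(0, 0, 1) : float3(1, 0, 0);
	float3 TangentX = normalize(cross(Up, N));
	float3 TangentY = cross(N, TangentX);
	return TangentX * H.x + TangentY * H.y + N * H.z;
}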

Someone correct me if I said something wrong :)

 

 


4. yes, *pure/clean* metals have no diffuse.
All right. And for non-metals, it basically boils down to using 97% (1 - 0.03) of the diffuse (Lambert) light? It's little effort, but I'd say this reduction is barely noticeable, right? Just asking to see if I'm thinking straight.

Well, it's a cheap approximation to keep your diffuse and specular terms energy conserving. In most cases it won't be very noticeable, but you should still do it.

In my experience it can be more difficult to deal with an overly strong diffuse term (especially with organic materials like skin) than with specular, so you want to decrease it to make the specular a tad more prominent.
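As a minimal sketch (hypothetical variable names, assuming a constant dielectric reflectance of roughly 0.03 as discussed above):

static const float F0 = 0.03f;  // specular reflectance at normal incidence for a dielectric

// Scale the Lambert term down by the energy that went into the specular reflection
float3 diffuse = MaterialDiffuseColor * (1.0f - F0);  // keep ~97% of the diffuse term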




#5233550 Difference between camera velocity & object velocity

Posted by lipsryme on 08 June 2015 - 10:34 AM

Whenever I read up on motion blur in presentations or papers, it seems people always separate camera motion blur (or velocity) from object motion blur (velocity). I'm wondering why that is?

I don't quite see how the velocity computed by sampling the Z/W depth, reconstructing the world-space position, and reprojecting it to the previous frame using the PrevViewProj matrix is any different from the velocity computed by just transforming your vertices (in your geometry pass's VS) by the WorldViewProj and PrevWorldViewProj matrices (other than that the latter includes the world transform for object translation, rotation and scaling).
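To make the comparison concrete, the first (camera-only) path looks roughly like this in a full-screen pass; the resource and matrix names here are placeholders:

Texture2D DepthTex : register(t0);
SamplerState PointClamp : register(s0);

cbuffer ReprojectionCB : register(b0)
{
	float4x4 InvViewProj;   // current frame: clip space -> world space
	float4x4 PrevViewProj;  // previous frame: world space -> clip space
};

float2 CameraVelocityPS(float4 pos : SV_Position, float2 uv : TEXCOORD0) : SV_Target
{
	float depth = DepthTex.Sample(PointClamp, uv).r;

	// Reconstruct world-space position from the depth buffer
	float4 clipPos  = float4(uv.x * 2.0f - 1.0f, (1.0f - uv.y) * 2.0f - 1.0f, depth, 1.0f);
	float4 worldPos = mul(clipPos, InvViewProj);
	worldPos /= worldPos.w;

	// Reproject into the previous frame and convert back to UV space
	float4 prevClip = mul(worldPos, PrevViewProj);
	float2 prevUV   = float2(prevClip.x, -prevClip.y) / prevClip.w * 0.5f + 0.5f;

	return uv - prevUV;  // screen-space velocity
}

The object path does the same per vertex with WorldViewProj / PrevWorldViewProj, which only adds the (previous) world transform.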

 

Why do people separate these two ways ? Is there any particular reason ?




#5232120 Need Help fixing my timestep

Posted by lipsryme on 01 June 2015 - 02:55 AM

@DaveSF yea I'm using a slerp for the orientation in the interpolation step

 

@Buckeye That's what I'm still trying to figure out, whether this is conditional or not. As far as I can tell it's not, but I haven't had much time to test these last few days... Basically I tried 3 test cases, 2 of them displaying the rendered velocity on screen:

 

1. Move the camera left and right (no rotation) while having a plane in front of you. The velocity I calculate is constant (I double-checked it). So the result should be a constant color during the movement, but it is not. You can see it flickering from time to time. (And no, it's not an issue with the rendered velocity calculation, because the stuttering can be observed regularly, too.)

 

2. An object that I'm rotating around the Y axis using only "elapsedTime", which is accumulated inside the update loop using my deltaTime.

The output here should also be constant, but it is not. Again the flickering can be observed.

 

3. Standing in front of a plane that has a UV animation, where a symbol is "scrolling down". This time of course there's no velocity to be calculated but I believe the stutter can be observed here too...

 

What I believe right now is that the stuttering originates from the inner update while loop, because the more intense the work inside becomes, the bigger the stuttering/flickering gets. E.g. if I output the acc variable each update step to my GUI (which creates lag), the flickering becomes very prominent...

 

UPDATE: I just tried removing the inner while entirely just to see how it behaved. If I don't limit my physics the motion is perfectly smooth. At least as long as the framerate is around the same as the update frequency.

As soon as I introduce a time difference variable (currTime - prevTime) as deltaTime, the stuttering reappears.

 

Don't think there's anything wrong with my method of getting the current time stamp but anyway here's the code:

const double GameTime::GetTime()
{
	if (start == 0)
	{
		QueryPerformanceCounter((LARGE_INTEGER*)&start);
		QueryPerformanceFrequency((LARGE_INTEGER*)&frequency);
		return 0.0;
	}

	__int64 counter = 0;
	QueryPerformanceCounter((LARGE_INTEGER*)&counter);
	return double(counter - start) / double(frequency);
}



#5224144 Localizing image based reflections (issues / questions)

Posted by lipsryme on 18 April 2015 - 04:28 AM

I'm currently trying to localize my IBL probes but have come across some issues that I couldn't find any answers to in any paper or otherwise.

 

1. How do I localize my diffuse IBL (irradiance map) ? The same method as with specular doesn't really seem to work.

    The locality does not really seem to work for the light bleed.

[Image: light bleed from the thin red object spread across a large part of the plane]

As you can see here the light bleed is spread over a large portion of the entire plane even if the rectangular object is very thin.

Also the reddish tone doesn't really get stronger the closer the sphere is moved to the red object.

If I move the sphere further to the side of the thin red object the red reflection is still visible. So there's no real locality to it.

 

 

2. How do I solve the case for objects that aren't rectangular, or where there are objects not entirely at the edge of the AABB that I intersect? (Or am I missing a line or two to do that?)

 

Example picture:

[Image: reflection of the thin red rectangular object on the sphere and floor plane]

As you can see here, the rectangular red object's reflection works perfectly (but then again only if it's exactly at the edge of the AABB).

If an object is, say, a sphere, or is moved closer to the probe (so not at the edge), the reflection will still be completely flat and projected onto the side of the AABB.

Here's the code snippet showing how I localize my probe reflection vector...

 

Code sample:

float3 LocalizeReflectionVector(in float3 R, in float3 PosWS, 
                                in float3 AABB_Max, in float3 AABB_Min, 
                                in float3 ProbePosition)
{
	// Find the ray intersection with box plane
	float3 FirstPlaneIntersect = (AABB_Max - PosWS) / R;
	float3 SecondPlaneIntersect = (AABB_Min - PosWS) / R;

	// Get the furthest of these intersections along the ray
	float3 FurthestPlane = max(FirstPlaneIntersect, SecondPlaneIntersect);

	// Find the closest far intersection
	float distance = min(min(FurthestPlane.x, FurthestPlane.y), FurthestPlane.z);

	// Get the intersection position
	float3 IntersectPositionWS = PosWS + distance * R;

	return IntersectPositionWS - ProbePosition;
}
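For context, the returned vector would then be used to look up the cubemap, roughly like this (texture, sampler and mip-count names here are placeholders, not from the original code):

TextureCube EnvProbe : register(t0);
SamplerState TrilinearSampler : register(s0);

float3 R       = reflect(-V, N);  // world-space reflection vector (V points from the surface towards the camera)
float3 R_local = LocalizeReflectionVector(R, PosWS, AABB_Max, AABB_Min, ProbePosition);
float3 specularIBL = EnvProbe.SampleLevel(TrilinearSampler, R_local, Roughness * NumMipLevels).rgb;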



#5203884 Motion Blur flickering highlights unavoidable ?

Posted by lipsryme on 13 January 2015 - 04:59 AM

With motion blur: https://www.youtube.com/watch?v=IV7HLkXjy2I&feature=youtu.be

Without motion blur: https://www.youtube.com/watch?v=h7Ej5KWKNHM

 

I was wondering if this artifact/flickering is something unavoidable with these common types of motion blur techniques?

Because I've seen this happening in several other applications (e.g. CryEngine).

Reducing the velocity amount kind of reduces the amount of flickering but never completely fixes it.




#5199658 Rendering blurred progress lines

Posted by lipsryme on 23 December 2014 - 02:18 AM

Looks like a 2D animation to me.

You could just blur it manually in Photoshop.

Or maybe do something fancier with UI middleware like Flash (I think UE3 does its UI animations with Scaleform).




#5185341 Forward+ Rendering - best way to updating light buffer ?

Posted by lipsryme on 06 October 2014 - 12:44 PM

Ah, so I can basically just do the same as with constant buffers! :)

Having some issues copying the data due to heap corruption but that might be related to something else.

Is this the correct way of doing it ?

 

Update: yep, that heap corruption was something else. Seems to be working now :)

if (!pointLights_center_and_radius.empty())
{
	this->numActiveLights = static_cast<unsigned int>(pointLights_center_and_radius.size());
	D3D11_MAPPED_SUBRESOURCE pointLightResource = {};
	if (SUCCEEDED(this->context->Map(this->pointLightBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &pointLightResource)))
	{
		DirectX::XMFLOAT4 *pData = (DirectX::XMFLOAT4*)pointLightResource.pData;
		memcpy(pData, &pointLights_center_and_radius[0], sizeof(DirectX::XMFLOAT4) * numActiveLights);
		this->context->Unmap(this->pointLightBuffer, 0);
	}
}
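On the shader side, such a buffer would typically be read as a structured buffer, roughly like this (the name and register are placeholders for whatever your Forward+ shaders use):

// One float4 per light: xyz = center, w = radius
StructuredBuffer<float4> PointLightsCenterAndRadius : register(t0);

// Inside the light-culling compute shader / shading loop:
// float4 lightData = PointLightsCenterAndRadius[lightIndex];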



#5172591 Alpha-Test and Forward+ Rendering (Z-prepass questions)

Posted by lipsryme on 10 August 2014 - 07:17 AM

 

Alpha-tested geometry tends to mess up z-buffer compression and hierarchical z representations. So usually you want to render it after your "normal" opaques, so that the normal geometry can get the benefit of full-speed depth testing.

 

As for why they return a color from their pixel shader...I have no idea. In our engine we use a void return type for our alpha-tested depth-only pixel shader.

 

Could it be beneficial to skip the z-prepass for alpha-tested geometry completely, to avoid those z-buffer compression mess-ups?

 

The issue here is that you can't, since you need the depth information for light culling in the compute shader.

By the way it seems they have alpha-to-coverage enabled in the AMD sample so maybe the color return type has something to do with that ?




#5163235 UE4 IBL / shading confusion

Posted by lipsryme on 27 June 2014 - 07:55 AM

A few months back I had a thread here about trying to implement their approach to IBL using this so-called split-sum approximation, and I thought I understood what it was trying to do (same with other aspects of their paper)... I guess I was wrong and I'm even more confused now.

 

For anyone who doesn't know which paper I'm referring to: http://blog.selfshadow.com/publications/s2013-shading-course/karis/s2013_pbs_epic_notes_v2.pdf

 

1. Metallic shading parameter: I'm still unsure if I get what this parameter actually does. What I thought it does is take some value in [0, 1] and just attenuate the diffuse term, like so:

Diffuse *= 1.0f - metallic;

Because if we want the material to be more metallic, the diffuse term needs to decrease until it's fully gone for an actual metal, and just attenuating the Fresnel reflectance does not completely remove it. Looking more closely at the pictures in the paper, it looks as if they control the actual FresnelReflectance with it, something like:

FresnelReflectance = lerp(0.04, 1.0f, metallic);

Does anyone know what the parameter actually does ?
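For comparison, the common "metalness workflow" convention found in many engines (not necessarily UE4's exact code) combines both ideas:

// BaseColor drives the diffuse albedo for dielectrics and F0 for metals
float3 F0     = lerp(float3(0.04f, 0.04f, 0.04f), BaseColor, Metallic);  // reflectance at normal incidence
float3 Albedo = BaseColor * (1.0f - Metallic);                           // diffuse goes to zero for pure metals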

 

2. Cavity Map: They describe this as small-scale shadowing...so is this just an AO map ? But then they talk about this being a replacement for specular (do they mean specular intensity?)

 

3. Split-sum approximation: Now this is the most confusing thing to me. My understanding was that they additionally precompute a 2D texture that handles the geometry and Fresnel portion of the BRDF, and that this is a lookup texture used during the realtime lighting pass to attenuate the FresnelReflectance so that it looks more realistic(?), e.g. removes the glowing edges at high roughness. Am I wrong?

I've generated this 2D lookup texture and it looks almost precisely like the picture in his paper, except that it's rotated 45 degrees for some odd reason? I've tried rotating it in Photoshop so it looks the same as in the paper, but then looking up the values seems completely wrong (did he just rotate the texture for the paper??), while the original texture does produce reasonable(?) results, in that if applied to a sphere the edges become increasingly stronger the smoother it is.

Let me give you a few screenshots:

 

This is the texture generated by me:

[Image: the 2D EnvBRDF lookup texture I generated]

 

This is the texture shown in his paper:

 

[Image: the 2D EnvBRDF lookup texture shown in the paper]

And here's a quick video showing how the Fresnel reflectance looks using the formula described in the paper:

FresnelReflectance * EnvBRDF.x + EnvBRDF.y

https://www.youtube.com/watch?v=bdY1rvDPCB8&feature=youtu.be
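For reference, the full split-sum specular IBL term from the course notes looks roughly like this (texture and sampler names are placeholders); note that the lookup texture is indexed by NdotV and roughness:

Texture2D    IntegrationLUT     : register(t0);
TextureCube  EnvMap             : register(t1);
SamplerState LinearClampSampler : register(s0);
SamplerState TrilinearSampler   : register(s1);

float  NdotV = saturate(dot(N, V));

// Pre-integrated environment BRDF: x scales the reflectance, y is an additive bias
float2 EnvBRDF = IntegrationLUT.SampleLevel(LinearClampSampler, float2(NdotV, Roughness), 0).rg;

// Pre-filtered environment map, mip level selected by roughness
float3 PrefilteredColor = EnvMap.SampleLevel(TrilinearSampler, R, Roughness * NumMipLevels).rgb;

float3 SpecularIBL = PrefilteredColor * (FresnelReflectance * EnvBRDF.x + EnvBRDF.y);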

 

 

Also, what does he mean by "can only afford a single sample for each" (why would you have more?):

 

Even with importance sampling, many samples still need to be taken. The sample count can be
reduced significantly by using mip maps [3], but counts still need to be greater than 16 for sufficient
quality. Because we blend between many environment maps per pixel for local reflections, we can only
practically afford a single sample for each

 

In his sample code he uses a sample count of 1024, which in my tests produces lots of dots on the env map from the random distribution; it only gets better using at least 5k samples. I don't see how he does that. Is this just a case of making the precomputation faster because there are hundreds/thousands of probes?




#5154785 why the alphablend is a better choice than alphatest to implement transparent...

Posted by lipsryme on 20 May 2014 - 05:28 AM

discard is GLSL (the OpenGL-based shading language) / clip is HLSL (the DirectX-based shading language) -> both refer to the clipping operation that can be used for alpha testing.

 

The reason why alpha blend is "sometimes" the better choice is that, for example, PowerVR GPUs use so-called "deferred tile-based rendering". The GPU collects triangle data and at some point executes pixel processing. But before going there, PowerVR chips run an additional optimization stage (this is what the "deferred" part refers to) that determines which parts of the tile should actually be drawn, so we don't shade them multiple times for no reason, aka overdraw (overdraw mostly refers to the redundant multiple framebuffer writes, but shading is also part of this problem).

So when using clipping operations the GPU won't be able to do this optimization anymore. Note that on every GPU this results in a performance reduction because of early-Z, since you can't determine which pixels should be culled before going through the pixel pipeline. But on PowerVR chips it is even more of a problem due to the aforementioned "pixel overlap determination" stage.

 

Why exactly alpha blend is faster in this case I'm not quite sure... My guess is that the blend operation in a tile-based rendering environment is fairly fast, since you don't blend into the actual framebuffer but into the small on-chip memory that holds the tile, which it seems may still be faster than opaque rendering without the hidden-surface-removal stage.

 

I hope I got all of this right since I'm still in the process of learning, so if I made a mistake please correct me :)




#5151423 Specular Mapping on terrain

Posted by lipsryme on 04 May 2014 - 09:31 AM

Your shader code looks a little odd to me (but maybe I'm missing something).

The WoW example is just regular Blinn-Phong specular shading, which is done like this:

// L = vector from the surface towards the light, V = vector from the surface towards the camera

// This is the "half-vector", because it's half-way between the light direction and the view direction
float3 H = normalize(L + V);

float NdotH = saturate(dot(Normal, H));

// This is the blinn-phong distribution (D)
float D = pow(NdotH, glossiness);

// You should multiply it by the angle between the surface normal and the light direction
// to avoid light leaking at some angles.
float NdotL = saturate(dot(Normal, L));
float3 specular = D * LightColor * NdotL;

Notice that the distribution D is a single float value / scalar (this is basically just the shape and strength of your specular highlight), which then gets tinted by the light color.

You may also want to apply at least an additional Fresnel term to it so its intensity varies more realistically with the viewing angle (see the sketch below).
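A Schlick approximation is the usual cheap way to do that; an F0 of roughly 0.04 is a typical value for non-metals (names here are just illustrative):

// Schlick's approximation of the Fresnel term
float3 FresnelSchlick(float3 F0, float VdotH)
{
	return F0 + (1.0f - F0) * pow(1.0f - VdotH, 5.0f);
}

// e.g.:
// float3 F = FresnelSchlick(float3(0.04f, 0.04f, 0.04f), saturate(dot(V, H)));
// float3 specular = D * F * LightColor * NdotL;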

 

P.S. A reflection vector is calculated like so:

float3 R = 2 * (NdotL * N) - L

or just use the HLSL intrinsic:

float3 R = reflect(-L, N); // reflect() expects the incident vector to point towards the surface, hence -L



#5121213 [Software Rasterizer] Vertex Clipping & Guard Bands

Posted by lipsryme on 04 January 2014 - 04:06 PM

@Tim: I actually do something similar with the clipping to viewport bounds and such, but I was trying to implement actual vertex clipping for when a vertex is outside the guard bands.

 

Anyhow, I've managed to get it working (at least flawlessly with a single triangle :) ).

For those who might want to know and/or find this years later :P, this is how I do it:

 

I'm now converting my post-transform coordinates to [-1, 1] coordinates and then checking these against the guard band boundaries.

Then I count how many vertices are outside the guard band. If there are two vertices outside, it's an easy case: I just use the Cohen-Sutherland algorithm to find the two intersection points with the guard band boundary and replace the original positions with the intersection points.

If there's just one vertex outside, it gets a little trickier, because you now basically have to generate an additional triangle and "reconnect" the original vertices with the new ones from the intersections.





