
MJP

Member Since 29 Mar 2007

#5072884 DX11 - Multiple Render Targets

Posted by MJP on 25 June 2013 - 06:09 PM

Use PIX or the VS 2012 Graphics Debugger to inspect the device state at the time of the draw call; it will tell you which render targets are currently bound.
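
For reference, you can also query this from code via OMGetRenderTargets. A rough sketch, assuming you have the device context handy:

#include <d3d11.h>
#include <cstdio>

void PrintBoundRenderTargets(ID3D11DeviceContext* context)
{
    ID3D11RenderTargetView* rtvs[D3D11_SIMULTANEOUS_RENDER_TARGET_COUNT] = {};
    ID3D11DepthStencilView* dsv = nullptr;
    context->OMGetRenderTargets(D3D11_SIMULTANEOUS_RENDER_TARGET_COUNT, rtvs, &dsv);

    for (UINT i = 0; i < D3D11_SIMULTANEOUS_RENDER_TARGET_COUNT; ++i)
    {
        if (rtvs[i] != nullptr)
        {
            printf("Render target bound at slot %u\n", i);
            rtvs[i]->Release();   // OMGetRenderTargets AddRef's everything it returns
        }
    }

    if (dsv != nullptr)
        dsv->Release();
}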




#5072305 about Shader Arrays, Reflection and BindCount

Posted by MJP on 23 June 2013 - 02:14 PM

Arrays of textures in shaders are really just a syntactical convenience, the underlying shader assembly doesn't actually have any support for them so the compiler just turns the array into N separate resource bindings. So it doesn't surprise me that the reflection interface would report it as N separate resource bindings, since that's essentially how you have to treat it on the C++ side of things.

It does seem weird that the BIND_DESC structure has a BindCount field that suggests it would be used for cases like these, but I suppose it doesn't actually work that way. I wonder if that field actually gets used for anything.
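
If you want to see this for yourself, here's a quick sketch that dumps every resource binding through the reflection interface. For a declaration like "Texture2D MyTextures[4];" you should expect N separate entries rather than a single entry with BindCount == 4:

#include <d3dcompiler.h>
#include <d3d11shader.h>
#include <cstdio>
#pragma comment(lib, "d3dcompiler.lib")

void PrintResourceBindings(const void* bytecode, size_t bytecodeSize)
{
    ID3D11ShaderReflection* reflection = nullptr;
    if (FAILED(D3DReflect(bytecode, bytecodeSize, IID_ID3D11ShaderReflection,
                          reinterpret_cast<void**>(&reflection))))
        return;

    D3D11_SHADER_DESC shaderDesc = {};
    reflection->GetDesc(&shaderDesc);

    for (UINT i = 0; i < shaderDesc.BoundResources; ++i)
    {
        D3D11_SHADER_INPUT_BIND_DESC bindDesc = {};
        reflection->GetResourceBindingDesc(i, &bindDesc);
        printf("%s -> slot %u, BindCount %u\n",
               bindDesc.Name, bindDesc.BindPoint, bindDesc.BindCount);
    }

    reflection->Release();
}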




#5072302 Environment reflection & fresnel correct ?

Posted by MJP on 23 June 2013 - 02:08 PM

These are open problems, so I don't really have any silver-bullet solutions to share with you. For approximating the geometric term of the BRDF + incorrect Fresnel, you can apply a curve to approximate those factors, which is basically what Sébastien Lagarde described in his blog post. Macro-scale occlusion is trickier, since it depends on the actual geometry of your scene. One idea might be to pre-compute directional occlusion per-vertex or in a texture, and use that as an occlusion factor for your cubemap reflections. Another idea might be to attempt to determine occlusion in screen space using the depth buffer.




#5072109 Environment reflection & fresnel correct ?

Posted by MJP on 22 June 2013 - 07:08 PM

Okay, let's see if we can straighten all of this out.

 

First, let's start with the BRDF itself. A BRDF basically tells you how much light reflects off a surface towards the eye, given a lighting environment surrounding that surface. To apply it, you integrate BRDF * incidentLighting over the hemisphere surrounding the point's surface normal. A common way of approximating the result of these kinds of integrals is to use Monte Carlo sampling, where you basically evaluate the function being integrated at random points and sum the results (in reality it's more complex than this, but that's not important at the moment). So you can imagine that this is pretty simple to do in a ray tracer: you pick random rays surrounding the surface normal direction, trace the ray, evaluate the BRDF for that given ray direction and eye direction, multiply the BRDF with the ray result, and add that result to a running sum. It's also trivial to handle punctual light sources (point lights, directional lights, etc.) since these lights are infinitely small (they're basically a delta), so you can integrate them by just multiplying the BRDF with the lighting intensity.
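
To make that concrete, here's a self-contained sketch of the estimator. The lighting environment and BRDF are stubbed out with placeholders (a constant "sky" and a Lambertian term) just to show the shape of the loop:

#include <cstdlib>
#include <cmath>

struct Float3 { float x, y, z; };

static float Dot(Float3 a, Float3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Rejection-sample a uniformly distributed direction on the hemisphere around 'normal'.
static Float3 SampleHemisphere(Float3 normal)
{
    for (;;)
    {
        Float3 d = { rand() / (float)RAND_MAX * 2.0f - 1.0f,
                     rand() / (float)RAND_MAX * 2.0f - 1.0f,
                     rand() / (float)RAND_MAX * 2.0f - 1.0f };
        float len2 = Dot(d, d);
        if (len2 > 1.0f || len2 < 1e-6f)
            continue;                                   // stay inside the unit sphere
        float invLen = 1.0f / std::sqrt(len2);
        d = { d.x * invLen, d.y * invLen, d.z * invLen };
        if (Dot(d, normal) < 0.0f)
            d = { -d.x, -d.y, -d.z };                   // flip into the upper hemisphere
        return d;
    }
}

// Placeholder stand-ins for a real ray tracer's machinery.
static Float3 TraceRay(Float3, Float3)         { return { 1.0f, 1.0f, 1.0f }; }
static float  EvalBRDF(Float3, Float3, Float3) { return 0.18f / 3.14159265f; }

Float3 IntegrateLighting(Float3 pos, Float3 normal, Float3 viewDir, int numSamples)
{
    const float pdf = 1.0f / (2.0f * 3.14159265f);      // uniform hemisphere PDF
    Float3 sum = { 0.0f, 0.0f, 0.0f };
    for (int i = 0; i < numSamples; ++i)
    {
        Float3 dir = SampleHemisphere(normal);
        Float3 li = TraceRay(pos, dir);
        // Estimator: average of f(x) / pdf(x) over the random samples.
        float weight = EvalBRDF(dir, viewDir, normal) * Dot(dir, normal) / pdf;
        sum = { sum.x + li.x * weight, sum.y + li.y * weight, sum.z + li.z * weight };
    }
    float inv = 1.0f / numSamples;
    return { sum.x * inv, sum.y * inv, sum.z * inv };
}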

 

Now let's talk about microfacet BRDFs. The main idea behind a microfacet BRDF is that you treat a surface as if it's made up of millions of little microscopic surfaces, where each one of those tiny microfacets is perfectly flat. Being perfectly flat lets you treat each microfacet as a Fresnel reflector, which means that as light hits it at a shallower angle, more of the light is reflected instead of being refracted into the surface. It also means you can use basic geometry to determine what direction a ray of light will reflect off that microfacet. A microfacet BRDF then assumes that all of these little facets are oriented in random directions relative to the overall surface normal. For rougher surfaces, the facets will mostly point away from the normal. For less rough surfaces, the facets will mostly line up with the surface normal. This is modeled in a microfacet BRDF with a normal distribution function (NDF), which is essentially a probability density function that tells you the percentage of microfacets that will "line up" in such a way that light from a given direction will reflect towards the eye. For these facets the reflection intensity is assumed to respect Fresnel's laws, which is why you have the Fresnel term in a microfacet BRDF. Now there's one other important piece, which is the geometry term (also known as the shadowing term). This term accounts for light reflecting off a microfacet but then being blocked by other microfacets. In general this will balance out the Fresnel effect, particularly for rougher surfaces since they will have more shadowing.
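
To put some code to those three pieces, here's a sketch of a microfacet specular BRDF in the usual D * F * G arrangement. The specific term choices here (a GGX distribution, Schlick's Fresnel approximation, and a Smith-style geometry term) are just one reasonable combination for illustration:

#include <cmath>
#include <algorithm>

const float Pi = 3.14159265f;

// NDF: the fraction of microfacets oriented along the half vector.
float D_GGX(float nDotH, float roughness)
{
    float alpha = roughness * roughness;
    float a2 = alpha * alpha;
    float d = nDotH * nDotH * (a2 - 1.0f) + 1.0f;
    return a2 / (Pi * d * d);
}

// Fresnel: reflectance rises towards 1 at grazing angles (Schlick's approximation).
float F_Schlick(float f0, float lDotH)
{
    return f0 + (1.0f - f0) * std::pow(1.0f - lDotH, 5.0f);
}

// Geometry/shadowing: microfacets blocking each other, which balances out
// the Fresnel effect, particularly for rough surfaces.
float G_Smith(float nDotV, float nDotL, float roughness)
{
    float k = (roughness + 1.0f) * (roughness + 1.0f) / 8.0f;
    float gV = nDotV / (nDotV * (1.0f - k) + k);
    float gL = nDotL / (nDotL * (1.0f - k) + k);
    return gV * gL;
}

float MicrofacetSpecular(float nDotL, float nDotV, float nDotH, float lDotH,
                         float roughness, float f0)
{
    float d = D_GGX(nDotH, roughness);
    float f = F_Schlick(f0, lDotH);
    float g = G_Smith(nDotV, nDotL, roughness);
    return (d * f * g) / std::max(4.0f * nDotL * nDotV, 1e-4f);
}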

 

So let's say we want to apply a microfacet BRDF to environment lighting in a real-time application. Doing this with Monte Carlo sampling is prohibitively expensive, since you often need thousands of samples to converge on a result. So instead a common technique is to approximate the integral using a pre-integrated environment map. The basic idea is to pre-integrate a portion of your BRDF with an environment map, using a different roughness for each mip level (integrating with a BRDF is essentially a convolution, so it basically amounts to a blur pass). However there's a major issue, which is that the function you're trying to compute has too high a dimensionality. The reflected light off a surface depends on the viewing angle and the surface normal, which means we can't use a single cubemap to store the integrated result for all possible viewing directions and surface orientations. So instead, we make a major approximation by parameterizing only on the view direction reflected about the surface normal. To do this, we can only pre-integrate the distribution term of the BRDF with the environment map. This leaves the geometric and Fresnel terms to be handled at runtime. The common approach for Fresnel is to apply it for the reflected view direction, which basically just means that the term goes to 1 as the normal becomes perpendicular to the view direction. This produces incorrect results, since the Fresnel term should have been applied to all of the individual light directions instead of to one direction after convolving with the NDF. The same goes for the geometric term, which leaves you with simple approximations like what Sébastien suggests on his blog.
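
Here's roughly what the runtime side of that approximation looks like. The cubemap fetch is a placeholder, and the linear roughness-to-mip mapping is an assumption that depends on how the cubemap was pre-convolved:

#include <cmath>

struct Float3 { float x, y, z; };

static float Dot(Float3 a, Float3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Placeholder: in a real renderer this is a SampleLevel() on the pre-convolved cubemap.
static Float3 SampleCubemapLod(Float3 /*dir*/, float /*mip*/) { return { 1.0f, 1.0f, 1.0f }; }

Float3 EnvSpecularApprox(Float3 normal, Float3 viewDir, float roughness,
                         float f0, float numMips)
{
    // Reflect the view direction about the normal: R = 2(N.V)N - V.
    float nDotV = Dot(normal, viewDir);
    Float3 r = { 2.0f * nDotV * normal.x - viewDir.x,
                 2.0f * nDotV * normal.y - viewDir.y,
                 2.0f * nDotV * normal.z - viewDir.z };

    // Rougher surface -> blurrier (higher) mip of the pre-convolved cubemap.
    float mip = roughness * (numMips - 1.0f);
    Float3 env = SampleCubemapLod(r, mip);

    // Fresnel applied once, to the single reflected direction. This is the
    // approximation discussed above: it goes to 1 at grazing angles, which
    // over-brightens rough surfaces.
    float fresnel = f0 + (1.0f - f0) * std::pow(1.0f - nDotV, 5.0f);
    return { env.x * fresnel, env.y * fresnel, env.z * fresnel };
}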

 

Now let's look at some pictures that illustrate how some of this works. This first picture shows an object being lit by an environment, with the material having low roughness (0.01) and low specular intensity (0.05). It was rendered by Monte Carlo integration of a microfacet BRDF, so it serves as our ground truth:

 

[Image: Skull Specular LowRoughness MC.png]

 

As you can see there's a strong Fresnel effect along the top left portion of the skull.

 

Now we have an approximated version using a cubemap that was pre-convolved to match the NDF for the same roughness, and Fresnel applied to the reflected view direction:

 

[Image: Skull Specular LowRoughness EM.png]

 

This approximation is actually pretty good, which makes sense since our approximation works best for low roughnesses. This is because for low roughnesses most of the microfacets will be active, and so our assumption of sampling at the reflected view direction is a good one.

 

Now we have the same skull but with a higher roughness of 0.2, rendered with Monte Carlo sampling:

 

[Image: Skull Specular HiRoughness MC.png]

 

Now the Fresnel effect is much less pronounced due to the geometric term kicking in, and due to having more variance in the incoming light directions that reflect towards the eye.

 

Now we'll go back to our cubemap approximation:

 

[Image: Skull Specular HiRoughness EM.png]

 

In this case our Fresnel term is making the reflections much too bright at glancing angles, which means our approximation is no longer a good match.

 

Now we'll add a simple curve to the Fresnel term to decrease its intensity as roughness increases, in an attempt to balance out the over-brightening of our Fresnel approximation:

 

[Image: Skull Specular HiRoughness EM GApprox.png]

 

This is certainly better, but still wrong in a lot of ways. Ideally we would do a better job with regards to pre-computing the BRDF, and handling view dependence.
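
For illustration, one hypothetical example of such a curve (not necessarily the exact one used for the image above) is to clamp the grazing-angle value of Schlick's approximation based on roughness:

#include <cmath>
#include <algorithm>

float FresnelSchlickRoughness(float f0, float nDotV, float roughness)
{
    // Instead of letting the term reach 1.0 at grazing angles, cap it at
    // max(1 - roughness, f0): smooth surfaces keep the full Fresnel effect,
    // rough surfaces get progressively less.
    float grazing = std::max(1.0f - roughness, f0);
    return f0 + (grazing - f0) * std::pow(1.0f - nDotV, 5.0f);
}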

 

One other important thing I'll mention is that you'll also get poor results if you don't handle macro-scale shadowing. As Hodgman mentioned earlier, objects should occlude themselves, and if you don't account for this you will get light rays reflecting off surfaces that they should never reach. I don't actually handle this in my images, so you should keep that in mind when looking at them. I agree with Hodgman that this is probably the most offensive thing about the original rock image that was posted, since the lack of occlusion combined with incorrect Fresnel gives you that "X-ray" look.




#5072099 Using D3D9 Functions and, HLSL

Posted by MJP on 22 June 2013 - 05:32 PM

Yes, you can absolutely do that. However the D3DX9 mesh loading functions require a D3D9 device, so you will need to create one in addition to your D3D11 device.
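
A sketch of what that can look like. A NULLREF device is sufficient since you never render with it; after loading, you'd copy the vertex/index data into D3D11 buffers and release everything D3D9-related:

#include <d3d9.h>
#include <d3dx9.h>
#pragma comment(lib, "d3d9.lib")
#pragma comment(lib, "d3dx9.lib")

ID3DXMesh* LoadMeshWithD3DX9(const char* path)
{
    IDirect3D9* d3d9 = Direct3DCreate9(D3D_SDK_VERSION);
    if (d3d9 == nullptr)
        return nullptr;

    D3DPRESENT_PARAMETERS pp = {};
    pp.Windowed = TRUE;
    pp.SwapEffect = D3DSWAPEFFECT_DISCARD;
    pp.BackBufferFormat = D3DFMT_UNKNOWN;

    IDirect3DDevice9* device = nullptr;
    HRESULT hr = d3d9->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_NULLREF,
                                    GetDesktopWindow(),
                                    D3DCREATE_SOFTWARE_VERTEXPROCESSING,
                                    &pp, &device);

    ID3DXMesh* mesh = nullptr;
    if (SUCCEEDED(hr))
    {
        D3DXLoadMeshFromXA(path, D3DXMESH_SYSTEMMEM, device,
                           nullptr, nullptr, nullptr, nullptr, &mesh);
        device->Release();
    }

    d3d9->Release();
    return mesh;
}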




#5071611 Better to have separate shaders for each graphical option, or pass constants...

Posted by MJP on 20 June 2013 - 04:39 PM

Like anything else, the correct choice depends on a few things. Generating separate shaders will *always* result in more efficient assembly being generated when compared to branching on a value from a constant buffer. Statically disabling a feature allows the compiler to optimize away any calculations and texture fetches that would be needed for that feature, which results in a more efficient shader. Branching, on the other hand, will allow the GPU to skip executing all of the code in the branch, but there will still be performance penalties from having the branch itself. Also, the compiler won't be able to optimize away the code inside the branch, which can increase register usage.

However there are downsides to using separate shaders. For instance, you have to compile and load more shaders. The number of shaders can explode once you add more than a few features that can all be turned on or off (a sketch of this combinatorial explosion follows below). Also you have to switch shaders more often, which can result in higher CPU overhead and can also impact GPU efficiency by causing pipeline flushes.

 

For your particular case, shadows are probably a good fit for a separate shader. This is because shadows tend to be heavy in terms of GPU performance due to multiple texture fetches, so the performance gain is probably worth it.
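
To illustrate the shader-count explosion mentioned above, here's a sketch that compiles one permutation per combination of feature defines. The feature names and file name are placeholders:

#include <d3dcompiler.h>
#include <vector>
#pragma comment(lib, "d3dcompiler.lib")

static const char* FeatureNames[] = { "ENABLE_SHADOWS", "ENABLE_FOG", "ENABLE_SPECULAR" };
static const int NumFeatures = 3;

std::vector<ID3DBlob*> CompileAllPermutations()
{
    std::vector<ID3DBlob*> shaders;
    for (int mask = 0; mask < (1 << NumFeatures); ++mask)       // 2^N permutations
    {
        D3D_SHADER_MACRO defines[NumFeatures + 1] = {};
        int numDefines = 0;
        for (int bit = 0; bit < NumFeatures; ++bit)
        {
            if (mask & (1 << bit))
                defines[numDefines++] = { FeatureNames[bit], "1" };
        }
        defines[numDefines] = { nullptr, nullptr };             // terminator

        ID3DBlob* bytecode = nullptr;
        ID3DBlob* errors = nullptr;
        D3DCompileFromFile(L"shader.hlsl", defines, D3D_COMPILE_STANDARD_FILE_INCLUDE,
                           "PSMain", "ps_5_0", 0, 0, &bytecode, &errors);
        if (errors)
            errors->Release();
        shaders.push_back(bytecode);    // index with the same feature mask at runtime
    }
    return shaders;
}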




#5071338 GPU particles

Posted by MJP on 19 June 2013 - 10:31 PM

Yeah, the point->quad expansion has special-case handling in GPUs because it's so common. If you really want to avoid the GS, you can also use instancing to accomplish the same thing.
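
A sketch of the instancing approach: no vertex buffer at all, four vertices per instance, and the vertex shader (not shown here) builds each quad corner from SV_VertexID/SV_InstanceID plus a per-particle buffer:

#include <d3d11.h>

void DrawParticles(ID3D11DeviceContext* context,
                   ID3D11ShaderResourceView* particleDataSRV, UINT particleCount)
{
    context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP);
    context->IASetInputLayout(nullptr);                 // VS fetches everything itself
    context->VSSetShaderResources(0, 1, &particleDataSRV);
    context->DrawInstanced(4, particleCount, 0, 0);     // 4 corners per particle
}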




#5071337 Optimized deferred lighting....algorithm question

Posted by MJP on 19 June 2013 - 10:29 PM

Why don't you just use additive blending to combine the results of subsequent lighting passes?
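
In case it helps, the blend state for that looks something like this (dest = src * 1 + dest * 1):

#include <d3d11.h>

ID3D11BlendState* CreateAdditiveBlendState(ID3D11Device* device)
{
    D3D11_BLEND_DESC desc = {};
    desc.RenderTarget[0].BlendEnable = TRUE;
    desc.RenderTarget[0].SrcBlend = D3D11_BLEND_ONE;
    desc.RenderTarget[0].DestBlend = D3D11_BLEND_ONE;
    desc.RenderTarget[0].BlendOp = D3D11_BLEND_OP_ADD;
    desc.RenderTarget[0].SrcBlendAlpha = D3D11_BLEND_ONE;
    desc.RenderTarget[0].DestBlendAlpha = D3D11_BLEND_ONE;
    desc.RenderTarget[0].BlendOpAlpha = D3D11_BLEND_OP_ADD;
    desc.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;

    ID3D11BlendState* state = nullptr;
    device->CreateBlendState(&desc, &state);
    return state;   // bind with OMSetBlendState before the second and later light passes
}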




#5070603 The Pixel Shader expects a Render Target View

Posted by MJP on 17 June 2013 - 04:24 PM

That warning means your pixel shader is trying to write out to SV_Target1, but you have a NULL render target view bound to the device context for slot 1. It won't actually cause a problem since the write to SV_Target1 will just be ignored, but you will be wasting a little bit of performance.
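
To make the warning go away, bind a valid RTV in every slot the pixel shader writes to. Something like:

#include <d3d11.h>

void BindBothTargets(ID3D11DeviceContext* context,
                     ID3D11RenderTargetView* rtv0, ID3D11RenderTargetView* rtv1,
                     ID3D11DepthStencilView* dsv)
{
    // Slot 0 feeds SV_Target0, slot 1 feeds SV_Target1.
    ID3D11RenderTargetView* rtvs[2] = { rtv0, rtv1 };
    context->OMSetRenderTargets(2, rtvs, dsv);
}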




#5070483 D3D9 64-bit debug runtime

Posted by MJP on 17 June 2013 - 11:54 AM

There was a Windows 7 platform update that updated the D3D components and broke a few things (like PIX). To get the debug runtimes to work you need to either install the Windows 8 SDK to get the new debug DLLs, or uninstall the platform update.




#5070041 low precision formats for vertex data?

Posted by MJP on 15 June 2013 - 03:09 PM

I can't say that I have ever observed such behavior on any hardware that I've worked on extensively, save for one console GPU that really liked fetching 32-byte vertex chunks. Any modern (DX10+) GPU doesn't even have dedicated vertex fetching hardware anymore, and will read the vertex data the same way it reads any other buffer.




#5070019 low precision formats for vertex data?

Posted by MJP on 15 June 2013 - 01:15 PM

Is there some reason that you care about 64-byte alignment?

 

The only thing you should need full 32-bit precision for is position, everything else you could compress. For texture coordinates 16-bit should be sufficient, either using an integer or half-precision float depending on whether you need values > 1 or < 0. Normals should be 16-bit integers with a sign bit, since they're always in the [-1, 1] range. Same for tangents. Typically you store bone weights as 4 8-bit integers, since they're in the [0, 1] range.
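
As a sketch, a compressed layout along those lines might look like this (the exact packing is just an illustration):

#include <d3d11.h>
#include <cstdint>

struct CompressedVertex
{
    float    position[3];      // 12 bytes, DXGI_FORMAT_R32G32B32_FLOAT
    uint16_t texCoord[2];      //  4 bytes, DXGI_FORMAT_R16G16_FLOAT (halfs)
    int16_t  normal[4];        //  8 bytes, DXGI_FORMAT_R16G16B16A16_SNORM
    int16_t  tangent[4];       //  8 bytes, DXGI_FORMAT_R16G16B16A16_SNORM
    uint8_t  boneWeights[4];   //  4 bytes, DXGI_FORMAT_R8G8B8A8_UNORM
    uint8_t  boneIndices[4];   //  4 bytes, DXGI_FORMAT_R8G8B8A8_UINT
};                             // 40 bytes, roughly half of an all-float32 layout

static const D3D11_INPUT_ELEMENT_DESC VertexLayout[] =
{
    { "POSITION",     0, DXGI_FORMAT_R32G32B32_FLOAT,    0,  0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "TEXCOORD",     0, DXGI_FORMAT_R16G16_FLOAT,       0, 12, D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "NORMAL",       0, DXGI_FORMAT_R16G16B16A16_SNORM, 0, 16, D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "TANGENT",      0, DXGI_FORMAT_R16G16B16A16_SNORM, 0, 24, D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "BLENDWEIGHT",  0, DXGI_FORMAT_R8G8B8A8_UNORM,     0, 32, D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "BLENDINDICES", 0, DXGI_FORMAT_R8G8B8A8_UINT,      0, 36, D3D11_INPUT_PER_VERTEX_DATA, 0 },
};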

EDIT: I forgot to mention you can possibly compress normals and tangents even further by taking advantage of the fact that they are direction vectors, if you're willing to introduce some unpacking code into your vertex shader. Most of the techniques listed here are applicable, or if your tangent frame is orthogonal then you can store the entire thing as a single quaternion.




#5069611 changing code on its roots

Posted by MJP on 13 June 2013 - 07:27 PM

Dynamic linking can definitely be used to implement this, although it's a little wacky to use and will often generate sub-optimal code. Personally I would just do this by pre-compiling several permutations of the shader, with the value of c defined in a preprocessor macro (similar to what Adam_42 suggests). Doing it this way allows the compiler to completely optimize away the if statement, and also any additional operations performed with the value of c. You can specify the macro definition when compiling the shader using the "pDefines" parameter of D3DCompile, and then just compile the shaders in a for loop.
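
A sketch of that approach; the HLSL source here is a made-up placeholder, but the D3DCompile usage is the relevant part:

#include <d3dcompiler.h>
#include <cstdio>
#include <cstring>
#pragma comment(lib, "d3dcompiler.lib")

static const char* ShaderSource =
    "float4 PSMain() : SV_Target                 \n"
    "{                                           \n"
    "#if C_VALUE == 0                            \n"
    "    return float4(0.25f, 0.25f, 0.25f, 1);  \n"
    "#else                                       \n"
    "    return float4(0.75f, 0.75f, 0.75f, 1);  \n"
    "#endif                                      \n"
    "}                                           \n";

void CompilePermutations(ID3DBlob* bytecode[], int numPermutations)
{
    for (int c = 0; c < numPermutations; ++c)
    {
        char cValue[16];
        sprintf_s(cValue, "%d", c);                 // the value of c for this permutation
        const D3D_SHADER_MACRO defines[] = { { "C_VALUE", cValue }, { nullptr, nullptr } };

        ID3DBlob* errors = nullptr;
        D3DCompile(ShaderSource, strlen(ShaderSource), nullptr, defines, nullptr,
                   "PSMain", "ps_5_0", 0, 0, &bytecode[c], &errors);
        if (errors)
            errors->Release();
    }
}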




#5068335 Tangent Binormal Normal

Posted by MJP on 08 June 2013 - 05:36 PM

A water plane aligned with the XZ plane isn't going to match the coordinate space of a tangent-space normal map unless you swap Y and Z. You will probably also need to negate one or both values depending on how your coordinate system is set up.
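
In code the swap looks something like this; the commented-out negations are the part you'd adjust for your conventions:

struct Float3 { float x, y, z; };

Float3 WaterNormalFromMap(Float3 n)   // n = normal map sample, remapped to [-1, 1]
{
    Float3 worldNormal = { n.x, n.z, n.y };   // map Z (tangent-space up) to world Y (up)
    // worldNormal.x = -worldNormal.x;        // possibly needed, depending on
    // worldNormal.z = -worldNormal.z;        // your coordinate conventions
    return worldNormal;
}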




#5067755 Constant Buffer By Name in HLSL

Posted by MJP on 05 June 2013 - 08:11 PM

You can definitely get a constant buffer location by name. Just obtain the ID3D11ShaderReflection interface for a shader, and then call GetResourceBindingDescByName to get the info for your constant buffer. You can also enumerate all of the resource bindings with GetResourceBindingDesc to see what resources are bound to which slots.
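
A sketch of the by-name lookup; "PerFrameConstants" is a placeholder name:

#include <d3dcompiler.h>
#include <d3d11shader.h>
#pragma comment(lib, "d3dcompiler.lib")

int FindConstantBufferSlot(const void* bytecode, size_t bytecodeSize, const char* name)
{
    ID3D11ShaderReflection* reflection = nullptr;
    if (FAILED(D3DReflect(bytecode, bytecodeSize, IID_ID3D11ShaderReflection,
                          reinterpret_cast<void**>(&reflection))))
        return -1;

    D3D11_SHADER_INPUT_BIND_DESC desc = {};
    HRESULT hr = reflection->GetResourceBindingDescByName(name, &desc);
    reflection->Release();
    return SUCCEEDED(hr) ? static_cast<int>(desc.BindPoint) : -1;
}

// Usage: int slot = FindConstantBufferSlot(blob, size, "PerFrameConstants");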





