

MJP

Member Since 29 Mar 2007

#5231120 Fov and proportional Depth

Posted by MJP on 26 May 2015 - 01:38 PM

Ahh sorry, I didn't see your tag.

As an alternative, you can use ddx/ddy to calculate the mip level manually. Something like this should work (not tested):

float MipLevel(in float2 uv, in float2 textureSize)
{
    // The hardware computes the LOD from gradients in texel space,
    // so scale the normalized UV by the texture dimensions first
    float2 coord = uv * textureSize;
    float2 dx_uv = ddx(coord);
    float2 dy_uv = ddy(coord);
    float maxSqr = max(dot(dx_uv, dx_uv), dot(dy_uv, dy_uv));

    return 0.5f * log2(maxSqr); // == log2(sqrt(maxSqr))
}



#5231116 Depth texture empty (shadowmapping)

Posted by MJP on 26 May 2015 - 01:30 PM

SamplerComparisonState lets you use the hardware's PCF, which is generally faster than doing it manually in the shader. Basically you get 2x2 PCF at the same cost as a normal bilinear texture fetch, which is pretty nice.

To use it, you want to create your sampler state with D3D11_FILTER_COMPARISON_MIN_MAG_MIP_LINEAR and D3D11_COMPARISON_LESS_EQUAL. Then in your shader, declare your sampler as a SamplerComparisonState, and sample your shadow map texture using SampleCmp or SampleCmpLevelZero. For the comparison value, pass the pixel's projected depth in shadow space. The hardware will then compare that depth against the depth stored in the shadow map texture, and return 1 where the pixel depth is less than or equal to the shadow map depth (with the 2x2 PCF results blended together).
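
To make that concrete, here's a minimal HLSL sketch of the shader side (the register assignments and the shadowPos input are illustrative, not code from the thread):

Texture2D<float> ShadowMap : register(t0);
SamplerComparisonState ShadowSampler : register(s0);

float SampleShadow(in float3 shadowPos)
{
    // shadowPos.xy is the shadow map UV, shadowPos.z is the pixel's
    // projected depth in shadow space. The hardware performs the
    // LESS_EQUAL comparison and blends the 2x2 PCF results for us.
    return ShadowMap.SampleCmpLevelZero(ShadowSampler, shadowPos.xy, shadowPos.z);
}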


#5230915 Fov and proportional Depth

Posted by MJP on 25 May 2015 - 04:16 PM

So it sounds like you just want to fade out a detail texture at the point where the details would be imperceptibly small. Unfortunately this doesn't depend only on depth (as you've already discovered), but also on the FOV and the resolution at which you're rasterizing. Really what you want is to do the same thing the hardware does when calculating which mip level to use, which is to analyze the gradients of your texture UV's. This can be done using the quad derivative functions available in HLSL or GLSL, or with HLSL it's also possible to get the texture LOD directly with Texture2D.CalculateLevelOfDetail (I would imagine that GLSL has an equivalent as well). With this, you could pick a mip level at which your grass texture should fade out, and then use the returned LOD value to compute your alpha:

float lod = DetailTexture.CalculateLevelOfDetail(DetailSampler, uv);
float alpha = 1.0f - smoothstep(StartFadeMipLevel, EndFadeMipLevel, lod);



#5230913 What's the deal with setting multiple viewports on the rasterizer?

Posted by MJP on 25 May 2015 - 04:01 PM

Apparently it's a semantic applied to the geometry shader output, but I would imagine you can apply it to the vertex shader output if you have no geometry shader (I may be wrong on this).


Unfortunately, that's not the case. You can only use it as an output from a geometry shader.
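
For reference, here's a minimal geometry shader sketch of how the semantic gets used (a plain pass-through; the names are illustrative):

struct GSOutput
{
    float4 Position : SV_Position;
    uint Viewport : SV_ViewportArrayIndex; // selects one of the viewports bound with RSSetViewports
};

[maxvertexcount(3)]
void GSMain(triangle float4 verts[3] : SV_Position, inout TriangleStream<GSOutput> stream)
{
    for (uint i = 0; i < 3; ++i)
    {
        GSOutput output;
        output.Position = verts[i];
        output.Viewport = 0; // route this triangle to viewport 0
        stream.Append(output);
    }
}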

Recent AMD hardware supports setting it from a vertex shader at the hardware level, but it's not exposed in D3D11. However they did expose it as an OpenGL extension.


#5230912 Depth texture empty (shadowmapping)

Posted by MJP on 25 May 2015 - 03:54 PM

Your code for creating the depth texture and corresponding DSV + SRV looks correct, and so does your vertex shader code. If I were you, I would take a frame capture using RenderDoc to see what's going on. First, I would check the depth texture after rendering shadow casters to see if it looks correct. Keep in mind that for a depth texture, if you used a perspective projection then it will appear mostly white by default. To get a better visualization, use the range slider to set the start range to about 0.9. If the depth texture looks okay, then I would check the draw call where you use the shadow map to make sure that your textures and samplers are bound correctly.

As for that sampler state that you've created, how exactly are you using it? Are you trying to use it with a SamplerComparisonState in your pixel shader? Or are you just using a regular SamplerState for sampling from your shadow map texture?

Either way, always make sure that you've created your device with the D3D11_CREATE_DEVICE_DEBUG flag when you're debugging problems like this. It will cause D3D to output warnings to your debugger output window whenever an error occurs due to API misuse.
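
As a sketch, enabling the debug layer at device creation looks something like this (the parameters besides the flag are just typical defaults):

UINT flags = 0;
#if defined(_DEBUG)
    flags |= D3D11_CREATE_DEVICE_DEBUG; // enables API-usage warnings in the debugger output
#endif

ID3D11Device* device = nullptr;
ID3D11DeviceContext* context = nullptr;
HRESULT hr = D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr,
                               flags, nullptr, 0, D3D11_SDK_VERSION,
                               &device, nullptr, &context);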


#5230600 Problem Alpha Blending in DirectX

Posted by MJP on 23 May 2015 - 01:56 PM

Do you disable depth buffer writes when rendering the transparent cube?
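
If not, a depth-stencil state along these lines (a sketch, assuming an existing device pointer) keeps the depth test active but stops the transparent cube from writing depth:

D3D11_DEPTH_STENCIL_DESC dsDesc = {};
dsDesc.DepthEnable = TRUE;
dsDesc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ZERO; // test against opaque depth, but don't write it
dsDesc.DepthFunc = D3D11_COMPARISON_LESS_EQUAL;

ID3D11DepthStencilState* transparentDepthState = nullptr;
device->CreateDepthStencilState(&dsDesc, &transparentDepthState);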


#5229740 Clamp light intensity

Posted by MJP on 18 May 2015 - 07:47 PM

Like Hodgman mentioned, mixing small mirror-like roughness values with infinitely-small point lights is bad news. Not only will you get those unreasonably-high values out of your BRDF, but that specular highlight is going to alias like crazy. So unless crazy sparkling specular is part of your game's look, I would avoid the lower roughness range for analytical light sources. It can work okay for area lights (or approximations to area lights, such as environment maps), but not for point lights.

Clamping can be good, especially if you get into higher light intensities. Note that if you want to use real-world photometric units, fp16 won't be enough and you'll need to introduce some kind of scale factor to avoid overflow in the specular highlights. You should also note that even if you clamp, you can still cause overflow after-the-fact by using alpha blending. During production of The Order we actually had this problem all over the place, mainly due to light bulbs stacking up on the same pixel. We ended up just detecting overflow early on in the PostFX chain and converting back to a reasonable value. It was heavy-handed, but it guaranteed that invalid values didn't slip through into DOF and bloom and create the dreaded "squares of death".
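
As a rough sketch of that idea (MaxIntensity is an assumed tuning constant, not a value from the actual game):

static const float MaxIntensity = 65000.0f; // just below the fp16 max of ~65504

float3 SanitizeColor(in float3 color)
{
    // Replace NaN/INF results with black so they can't poison DOF or bloom
    if (any(isnan(color)) || any(isinf(color)))
        color = 0.0f;

    // Clamp overflowed values back down to a representable range
    return min(color, MaxIntensity);
}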

We also used fp16 buffers everywhere, since R11G11B10 wasn't enough precision for our intensity range. You'll definitely want fp16 if you decide to use photometric units, since there's a very large range of values for real-world intensities.


#5229734 Fisheye by using pixel shader. Help.

Posted by MJP on 18 May 2015 - 07:21 PM

Have you tried paraboloid mapping?


Paraboloid mapping kinda works, but it has some major limitations. The main problem is that paraboloid mapping applies a non-linear projection to vertices, but the rasterizer can only interpolate linearly between the vertices. So you wind up with artifacts when rendering triangles that are too large in screen space (for instance, incorrect Z testing).
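
For context, the non-linear step looks roughly like this in a vertex shader (a sketch of standard paraboloid projection; NearClip and FarClip are assumed clip-distance parameters):

float4 ParaboloidProject(in float3 viewPos)
{
    float dist = length(viewPos);
    float3 dir = viewPos / dist;

    float4 result;
    // The divide by (z + 1) is the non-linear part that the rasterizer
    // can't account for when interpolating across large triangles
    result.xy = dir.xy / (dir.z + 1.0f);
    result.z = (dist - NearClip) / (FarClip - NearClip);
    result.w = 1.0f;
    return result;
}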


#5227798 Spherical Harmonics - analytical solution

Posted by MJP on 07 May 2015 - 10:20 AM

If you want to gain a better understanding of the cosine lobe convolution in SH, I would suggest reading this paper by Ravi Ramamoorthi and Pat Hanrahan. It covers the details of what Krzysztof and vlj touched on in their posts.

One thing to be aware of is that the code you just posted will compute the irradiance incident onto the surface, but not the diffuse reflectance. To compute Lambertian diffuse reflectance, you should multiply your irradiance by DiffuseAlbedo/Pi.
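
In code form that's just the following (where irradiance stands in for the result of the code you posted, and DiffuseAlbedo for your material's albedo):

static const float Pi = 3.14159265f;

// Lambertian diffuse reflectance from the SH-reconstructed irradiance
float3 diffuse = irradiance * (DiffuseAlbedo / Pi);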


#5227566 GI ground truth for comparison

Posted by MJP on 06 May 2015 - 01:08 PM

Arnold is very good, but also very not-free.

I've had some people recommend LuxRender, but I don't really have any experience with it myself.

Have you thought about writing a quick exporter tool to Mitsuba's file format? If you can do that, you can totally skip the pain of manually editing XML files. In fact I wasn't really advocating that you hand-write the XML in my blog post, I was just doing it as a way to help familiarize people with the format and Mitsuba's functionality.


#5227563 How 3D engines manage so many textures without running out of VRAM?

Posted by MJP on 06 May 2015 - 12:58 PM

The thing is, you would never actually destroy and create textures at runtime. What you would do instead is replace the contents of already-existing textures.


We actually did this for our internal (non-shipping) Windows build, which used D3D11. The Nvidia driver seemed to handle it well enough as long as we weren't creating/destroying too many textures, and we could create the resource on our streaming IO thread. However I don't know if it really scales well enough across all hardware/drivers to be used in a shipping PC game, so I would have to defer to other developers that have more experience with that.

If you don't want to re-create resources, then I believe the IHV-recommended way of updating texture contents with D3D11 is to manage a small pool of STAGING textures that you update, and then copy their contents into the "real" textures created with DEFAULT usage.
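
A rough sketch of that pattern (not actual shipping code; stagingTexture and defaultTexture are placeholders, and the STAGING texture needs D3D11_CPU_ACCESS_WRITE):

// Fill a pooled STAGING texture with the newly streamed-in texel data
D3D11_MAPPED_SUBRESOURCE mapped = {};
context->Map(stagingTexture, 0, D3D11_MAP_WRITE, 0, &mapped);
// ... write texel data into mapped.pData, one row at a time via mapped.RowPitch ...
context->Unmap(stagingTexture, 0);

// Then copy its contents into the long-lived DEFAULT-usage texture
context->CopyResource(defaultTexture, stagingTexture);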


#5227562 Question about definition of PBR workflows/parameters

Posted by MJP on 06 May 2015 - 12:52 PM

The "metalness" parameter comes from Brent Burley's presentation from SIGGRAPH 2012, where he discussed the material and shading setup used at Disney. It's actually pretty simple to implement: when metalness is 0, then your F0 specular intensity is 0.04 and your diffuse reflectance is equal to albedo/Pi. When metalness is 1, then your specular intensity is equal to albedo and your diffuse reflectance is 0. There should also be more info in Brian's course notes from SIGGRAPH 2013, which describes how they adopted Disney's material model into Unreal Engine 4.

For that Blinn-Phong distribution that Brian listed in his blog, I believe that he started with the formulation given in this paper. Brian just reformulated it to be in terms of roughness (alpha), instead of being in terms of a specular power (alpha_p). So basically if you set alpha_p = 2 / alpha^2 - 2 (which is a standard conversion from Beckmann roughness to Blinn-Phong power) and then substitute it into equation 30 in the paper that I linked, then I believe that you should arrive at Brian's equation for Blinn-Phong.
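
In other words, the conversion step is just:

// Standard Beckmann roughness (alpha) to Blinn-Phong specular power (alpha_p)
float specPower = 2.0f / (alpha * alpha) - 2.0f;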


#5227556 help~~how to recreate direct3d device in dx11 like dx9

Posted by MJP on 06 May 2015 - 12:29 PM

In my experience, the #1 cause of a device removed is the driver crashing or hanging. You can get a hang pretty easily by running a shader on the GPU that executes for too long, which causes a WDDM timeout to occur.
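
When a device removed does happen, you can at least query the reason before re-creating everything (a sketch, assuming existing swapChain and device pointers):

HRESULT hr = swapChain->Present(1, 0);
if (hr == DXGI_ERROR_DEVICE_REMOVED || hr == DXGI_ERROR_DEVICE_RESET)
{
    // e.g. DXGI_ERROR_DEVICE_HUNG for the WDDM timeout case described above
    HRESULT reason = device->GetDeviceRemovedReason();
    // ...tear down and re-create the device, swap chain, and GPU resources...
}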


#5227017 Mapping fullscreen texture to object UV in forward pass

Posted by MJP on 03 May 2015 - 04:20 PM

Assuming that "PosCS" is your clip space position, then performing the perspective divide will give you normalized device coordinates. In this space (-1, -1) is the bottom left corner of the viewport, and (1, 1) is the upper right. For render target UV's you typically use (0,0) as the upper left corner, and (1,1) as the bottom right. So you just need to apply an appropriate scale and offset to go from NDC's to UV's.

If your SSAO texture is the same size as the render target that you're rasterizing to, then calling "Sample" with UV's is overkill. You can just use "Load" or the array index operator to directly load a single texel without any filtering. You can do this very easily in a pixel shader by taking SV_Position as input, which will have the un-normalized pixel coordinates in X and Y.
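
For example (SSAOTexture is a placeholder for however your texture is bound):

Texture2D<float> SSAOTexture : register(t0);

float4 PSMain(in float4 screenPos : SV_Position) : SV_Target0
{
    // SV_Position.xy holds un-normalized pixel coordinates, so no sampler is needed
    float ssao = SSAOTexture[uint2(screenPos.xy)];
    return float4(ssao.xxx, 1.0f);
}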


#5226994 Custom mipmap generation (Read mip0, write to mip1,...)

Posted by MJP on 03 May 2015 - 01:02 PM

When you create your shader resource view, you need to fill out the "Texture2D" member of D3D11_SHADER_RESOURCE_VIEW_DESC with the appropriate mip level that you want to use. Set "MostDetailedMip" to the mip level that you want to sample, and then set "MipLevels" to 1. This will create an SRV that's restricted to 1 particular mip level, which will let you read from one mip level and write to another using an appropriately created render target view.
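
A sketch of the SRV setup (the format and mip index are placeholders for your own values):

D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc = {};
srvDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;   // match your texture's format
srvDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
srvDesc.Texture2D.MostDetailedMip = sourceMip; // the mip you want to read from
srvDesc.Texture2D.MipLevels = 1;               // restrict the view to that single mip

ID3D11ShaderResourceView* mipSRV = nullptr;
device->CreateShaderResourceView(texture, &srvDesc, &mipSRV);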



