
MJP

Member Since 29 Mar 2007

#5274941 Limiting light calculations

Posted by MJP on 09 February 2016 - 02:49 AM

I was always advised to avoid dynamic branching in pixel shaders.


You should follow this advice if you're working on a GPU from 2005. If you're working on one from the last 10 years...not so much. On a modern GPU I would say that there are two main things you should be aware of with dynamic branches and loops:

1. Shaders will follow the worst case within a warp or wavefront. For a pixel shader, this means groups of 32-64 pixels that are (usually) close together in screen space. What this means is that if you have an if statement where the condition evaluates to false for 31 pixels but true for one pixel in a 32-thread warp, then all of them have to execute what's inside the if statement. This can be especially bad if you have an else clause, since you can end up with your shader executing both the "if" as well as the "else" part of your branch! For loops it's similar: the shader will keep executing the loop until all threads in the warp have hit the termination condition. Note that if you're branching or looping on something from a constant buffer, then you don't need to worry about any of this. In that case every single pixel will take the same path, so there's no coherency issue.

2. Watch out for lots of nested flow control. Doing this can start to add overhead from the actual flow control instructions (comparisons, jumps, etc.), and can cause the compiler to use a lot of general purpose registers.

For the case you're talking about, a dynamic branch is totally appropriate and is likely to give you a performance increase. The branch should be fairly coherent in screen space, so you should get lots of warps/wavefronts that can skip what's inside of the branch. For an even more optimal approach, look into deferred or clustered techniques.
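To make that concrete, here's a minimal HLSL sketch of the sort of per-light branch being discussed. The light layout and names (LightCount, LightPositions, LightColors) are hypothetical and not from the original post; the point is just that pixels outside a light's radius skip the lighting math, and since the condition is coherent in screen space, whole warps/wavefronts can skip it entirely.

// Hypothetical light data layout -- not from the original post.
cbuffer LightConstants
{
    uint LightCount;
    float4 LightPositions[64];   // xyz = world-space position, w = radius
    float4 LightColors[64];
};

float3 ShadePixel(float3 worldPos, float3 normal, float3 albedo)
{
    float3 totalLight = 0.0f;

    for(uint i = 0; i < LightCount; ++i)
    {
        float3 toLight = LightPositions[i].xyz - worldPos;
        float radius = LightPositions[i].w;
        float distSq = dot(toLight, toLight);

        // Dynamic branch: pixels outside the light's radius skip the lighting
        // math. Because nearby pixels tend to agree on the condition, most
        // warps/wavefronts skip the body entirely instead of predicating it out.
        if(distSq < radius * radius)
        {
            float dist = sqrt(distSq);
            float3 l = toLight / dist;
            float attenuation = saturate(1.0f - dist / radius);
            totalLight += LightColors[i].rgb * saturate(dot(normal, l)) * attenuation;
        }
    }

    return albedo * totalLight;
}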


#5274702 Questions on Baked GI Spherical Harmonics

Posted by MJP on 06 February 2016 - 05:34 PM

For The Order we kept track of "dead" probes that were buried under geometry. These were detected by counting the percentage of rays that hit backfaces when baking the probes, and marking a probe as "dead" if that percentage was over a threshold. Early in the project the probe sampling was done on the CPU, and was done once per object. When doing this, we would detect dead probes during filtering (they were marked with a special value), and give them a filter weight of 0. Later on we moved to per-pixel sampling on the GPU, and we decided that manual filtering would be too expensive. This led us to preprocess the probes by using a flood-fill algorithm to assign dead probes a value from their closest neighbor. We also ended up allowing the lighting artists to author volumes, where any probes inside of the volume would be marked as "dead". This was useful for preventing leaking through walls or floors.


#5274158 A problem about implementing stochastic rasterization for rendering motion blur

Posted by MJP on 03 February 2016 - 08:33 PM

So they're using iFragCoordBase to look up a value in the random time texture. This will essentially "tile" the random texture over the screen, taking MSAA subsamples into account. So if there's no MSAA the random texture will be tiled over 128x128 squares on the screen, while for the 4xMSAA case it will be tiled over 64x64 squares. This ensures that each of the 4 subsamples gets a different random time value inside of the loop.
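As a rough illustration of the tiling idea (this is my own sketch, not the paper's actual code; RandomTimes, the 128x128 size, and the subsample layout are all assumptions), with 4xMSAA each pixel coordinate is scaled up and offset by the subsample index so that the 4 subsamples of a pixel read 4 different texels, and the texture repeats every 64x64 pixels:

// Hypothetical 128x128 texture of random time values in [0, 1].
Texture2D<float> RandomTimes;

float GetRandomTime(uint2 fragCoord, uint subSampleIdx)
{
    // Scale by 2 for 4xMSAA and offset by the subsample index, then wrap to
    // the 128x128 texture. Without MSAA you would skip the scale/offset and
    // the texture would tile over 128x128-pixel squares instead.
    uint2 coord = (fragCoord * 2 + uint2(subSampleIdx & 1, subSampleIdx >> 1)) % 128;
    return RandomTimes[coord];
}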


#5274149 Normalized Blinn Phong

Posted by MJP on 03 February 2016 - 07:50 PM

You should read through the section called "BRDF Characteristics" in chapter 7, specifically the part where they cover directional-hemispherical reflectance. This value is the "area under the function" that Hodgman is referring to, and it must be <= 1 in order for a BRDF to obey energy conservation. As Hodgman mentioned, a BRDF can still return a value > 1 for a particular view direction, as long as the result is still <= 1 after integrating over the hemisphere of possible view directions.
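Written out (this is the standard definition, not a quote from the book), directional-hemispherical reflectance integrates the BRDF against the cosine-weighted hemisphere of view directions, and energy conservation requires it to stay at or below 1 for every light direction:

R(\mathbf{l}) = \int_{\Omega} f(\mathbf{l}, \mathbf{v}) \, (\mathbf{n} \cdot \mathbf{v}) \, d\omega_{\mathbf{v}} \leq 1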


#5273767 Shadow Map gradients in Forward+ lighting loop

Posted by MJP on 01 February 2016 - 07:24 PM

In our engine I implemented it the way that you've described. It definitely works, but it consumes extra registers, which isn't great. I don't know of any cheaper alternatives that would work with anisotropic filtering.
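For reference, here's a minimal sketch of how I understand that approach, with placeholder names that aren't from our engine, and assuming a VSM/EVSM-style shadow map sampled with SampleGrad (plain comparison sampling wouldn't need gradients): compute the UV gradients in uniform control flow before the light loop, then pass them into the loop so that anisotropic filtering still gets valid derivatives inside the divergent code.

Texture2DArray<float2> ShadowMap;    // hypothetical variance-style shadow map (cascade array)
SamplerState AnisoSampler;

float ChebyshevUpperBound(float2 moments, float depth)
{
    // Standard VSM visibility estimate from the stored depth moments.
    float variance = max(moments.y - moments.x * moments.x, 0.0001f);
    float d = depth - moments.x;
    float pMax = variance / (variance + d * d);
    return (depth <= moments.x) ? 1.0f : pMax;
}

float SampleShadowCascade(float3 shadowUVZ, uint cascadeIdx, float2 uvGradX, float2 uvGradY)
{
    // The gradients were computed outside the divergent light loop, so it's
    // safe to use them here even though this call sits inside a dynamic branch.
    float2 moments = ShadowMap.SampleGrad(AnisoSampler, float3(shadowUVZ.xy, cascadeIdx),
                                          uvGradX, uvGradY);
    return ChebyshevUpperBound(moments, shadowUVZ.z);
}

// Before the light loop, in uniform control flow:
//   float3 shadowPos = mul(float4(worldPos, 1.0f), ShadowMatrix).xyz;
//   float2 uvGradX = ddx(shadowPos.xy);
//   float2 uvGradY = ddy(shadowPos.xy);
// Those gradients then stay live in registers for the whole loop, which is
// where the extra register pressure comes from.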


#5272923 directional shadow map problem

Posted by MJP on 27 January 2016 - 07:48 PM

You can use a bias value that depends on the angle between the surface normal and the direction to the light:

float bias = clamp(0.005 * tan(acos(NoL)), 0, 0.01);
where: NoL = dot(surfaceNormal, lightDirection);

tan(acos(x)) == sqrt(1 - x * x) / x

You really do not want to use inverse trig functions on a GPU. They aren't natively supported by the ALUs, and will cause the compiler to generate a big pile of expensive code.
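Using that identity, the same bias can be computed without any inverse trig (this is just a direct rewrite of the snippet above, with a max() added to guard the divide at grazing angles):

float NoL = saturate(dot(surfaceNormal, lightDirection));

// tan(acos(x)) == sqrt(1 - x * x) / x, so there's no need for acos or tan.
float tanTheta = sqrt(1.0f - NoL * NoL) / max(NoL, 0.0001f);
float bias = clamp(0.005f * tanTheta, 0.0f, 0.01f);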


#5272897 D3d12 : d24_x8 format to rgba8?

Posted by MJP on 27 January 2016 - 05:01 PM

Yes, they mentioned it on some Twitter account, but then does GCN store a 24-bit depth value as 32 bits if a 24-bit depth texture is requested?
Since there's no bandwidth advantage (the 24 bits need to be stored in a 32-bit location and 8 bits are wasted), the driver might as well promote D24X8 to D32 + R8?


No, they store it as 24-bit fixed point with 8 bits unused. It only uses 32 bits if you request a floating point depth buffer, and they can't promote from fixed point -> floating point since the distribution of precision is different.
 

[EDIT] Is it possible to copy the depth component to an RGBA8 (possibly typeless) texture, or do I have to use a shader to manually convert the float depth to an int, do some bit shift operations, and store the components separately?


You can only copy between textures that have the same format family.


#5272794 D3d12 : d24_x8 format to rgba8?

Posted by MJP on 26 January 2016 - 09:03 PM

D3D12 doesn't allow creating a shader resource view for a resource that was created with a different format. The only exception is if the resource was created with a "TYPELESS" format, in which case you can create an SRV using a format from that same "family". So for instance if you create a texture with R8G8B8A8_TYPELESS, you can create an SRV that reads it as R8G8B8A8_UNORM.

If you really wanted to, you could create two placed resources at the same memory offset within the same heap. However, this is very unlikely to give you usable results, since the hardware is free to store the texture data in a completely different layout or swizzle pattern for resources that use different formats. You also can't keep depth buffers and non-depth textures in the same heap if the hardware reports RESOURCE_HEAP_TIER_1, which applies to older Nvidia hardware.


#5272522 Shimmering / offset issues with shadow mapping

Posted by MJP on 24 January 2016 - 05:34 PM

The first is that I need to implement some culling of objects that don't need to be considered when rendering the shadow maps (I haven't really looked into this yet and it is probably simple enough; I'd imagine I can just perform my usual frustum culling routine using an ortho view-projection matrix that has its near plane pulled back to the light source (or rather by the length of the scene camera's z-range) but is otherwise the same as the light matrix of each cascade split?).


Yes, you can perform standard frustum/object intersection tests in order to cull objects for each cascade. Since the projection is orthographic, you can also treat the frustum as an OBB and test for intersection against that. Just be aware that if you use pancaking, then you have to treat the frustum as if it extended infinitely towards the light source. If you're going to cull by testing against the 6 planes of the frustum, then you can simply skip testing the near clip plane.
 

The second is how to get this shadow map interpolation working properly. I just whipped the following up for testing; it doesn't really create any visible difference from just leaving the interpolation part out altogether, but am I going about this in the right way, or would I be better off changing my approach?


Generally you want to determine if your pixel is at the "edge" of a cascade, using whichever method you use for partitioning the viewable area into your multiple cascades. You can have a look at my code for an example, if you'd like. In that sample app, the cascade is chosen using the view-space Z value (depth) of the pixel. It basically checks how far into the cascade the pixel is, and if it's in the last 10% of the depth range it starts to blend in the next cascade.
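In rough HLSL terms, the selection and blending look something like the sketch below. This isn't the actual sample code; the cascade constants and SampleCascade are placeholders for however you partition and sample your cascades.

// Hypothetical cascade setup -- not the actual sample code.
static const uint NumCascades = 4;

cbuffer CascadeConstants
{
    float4 CascadeSplits;    // far view-space depth of each of the 4 cascades
};

// Placeholder: fetches and filters the shadow map for the given cascade.
float SampleCascade(float3 worldPos, uint cascadeIdx);

float SampleShadowWithBlend(float viewSpaceZ, float3 worldPos)
{
    // Pick the cascade based on the pixel's view-space depth.
    uint cascadeIdx = 0;
    for(uint i = 0; i < NumCascades - 1; ++i)
    {
        if(viewSpaceZ > CascadeSplits[i])
            cascadeIdx = i + 1;
    }

    float shadow = SampleCascade(worldPos, cascadeIdx);

    // How far into this cascade's depth range is the pixel, as a [0, 1] fraction?
    float splitStart = 0.0f;
    if(cascadeIdx > 0)
        splitStart = CascadeSplits[cascadeIdx - 1];
    float splitEnd = CascadeSplits[cascadeIdx];
    float pctIntoCascade = saturate((viewSpaceZ - splitStart) / (splitEnd - splitStart));

    // In the last 10% of the range, start blending in the next cascade to hide the seam.
    if(pctIntoCascade > 0.9f && cascadeIdx < NumCascades - 1)
    {
        float nextShadow = SampleCascade(worldPos, cascadeIdx + 1);
        shadow = lerp(shadow, nextShadow, (pctIntoCascade - 0.9f) * 10.0f);
    }

    return shadow;
}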


#5271622 Shimmering / offset issues with shadow mapping

Posted by MJP on 17 January 2016 - 06:28 PM

By the way, regarding this part of your code:

// The near- and far value multiplications are arbitrary and meant to catch shadow casters outside of the frustum, 
// whose shadows may extend into it. These should probably be better tweaked later on, but lets see if it at all works first.
It's not necessary to pull back the shadow near clip plane in order to capture shadows from meshes that are outside the view frustum. You can handle this with a technique sometimes referred to as "pancaking", which flattens those meshes onto each cascade's near clip plane. See this thread for details. I recommend implementing it by disabling depth clipping in the rasterizer state, since that avoids artifacts for triangles that intersect the near clip plane.


#5271620 [D3D12] Freeing committed resources?

Posted by MJP on 17 January 2016 - 06:23 PM

ComPtr<T> calls Release on the underlying pointer when it's assigned a new value. So you can just do "vertexBuffer = ComPtr<ID3D12Resource>()", and Release will be called. Alternatively you can call "vertexBuffer.Reset()", which is equivalent. Or if you're going to pass the ComPtr to CreateCommittedResource, then it will call Release as part of its overloaded "&" operator. The resource will then be destroyed whenever the ref count hits 0, so if that's the only reference to the resource then it will be destroyed immediately.

Just be careful when destroying resources, since it's invalid to do so while the GPU is still using them. So if you've just submitted a command list that references a resource, you need to wait on a fence to ensure that the GPU is finished with it before you destroy it. If you mess this up, the debug layer will typically output an error message.


#5271617 Cubemap Depth Sample

Posted by MJP on 17 January 2016 - 06:17 PM

It looks like the direction that you use to sample the cubemap is backwards. You want to do "shadowPosH - l", assuming that "l" is the world space position of your point light. The code that vinterberg posted is actually incorrect in the same way: it uses the variable name "fromLightToFragment", but it's actually computing a vector from the fragment to the light (this is why it uses "-fromLightToFragment" when sampling the cube map).

Also...if you're going to use SampleCmp to sample a depth buffer, then you can't use the distance from your point light to the surface as the comparison value. Your depth buffer will contain [0, 1] values that correspond to z/w after applying your projection matrix, not the absolute world space distance from the light to the surface. This means you need to project your light->surface vector onto the axis that corresponds to the cubemap face you'll be sampling from:

float3 shadowPos = surfacePos - lightPos;
float shadowDistance = length(shadowPos);
float3 shadowDir = normalize(shadowPos);

// Doing the max of the components tells us 2 things: which cubemap face we're going to use,
// and also what the projected distance is onto the major axis for that face.
float projectedDistance = max(max(abs(shadowPos.x), abs(shadowPos.y)), abs(shadowPos.z));

// Compute the projected depth value that matches what would be stored in the depth buffer
// for the current cube map face. "ShadowProjection" is the projection matrix used when
// rendering to the shadow map.
float a = ShadowProjection._33;
float b = ShadowProjection._43;
float z = projectedDistance * a + b;
float dbDistance = z / projectedDistance;

return ShadowMap.SampleCmpLevelZero(PCFSampler, shadowDir, dbDistance - Bias);



#5271472 Cube shadow mapping issue

Posted by MJP on 16 January 2016 - 05:16 PM

I'm going to lock this thread since you already have another thread open about this issue. I'd also like to add that it's not really appropriate for this forum to just dump some code and then ask people to write a feature for you. If you're having trouble, then please feel free to ask questions, and the community here will do their best to answer them.


#5271471 Cubemap Depth Sample

Posted by MJP on 16 January 2016 - 05:10 PM

You don't need a render target to write depth, you just need a depth stencil view. For rendering to the 6 faces separately, you just need 6 depth stencil views that each target a particular face. It's almost exactly like the code that you have for creating the 6 render target views, except that you create depth stencil views:
 
for(uint32_t i = 0; i < 6; ++i)
{
    D3D11_DEPTH_STENCIL_VIEW_DESC dsvDesc = { };
    dsvDesc.Format = format;
    dsvDesc.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2DARRAY;
    dsvDesc.Texture2DArray.ArraySize = 1;
    dsvDesc.Texture2DArray.FirstArraySlice = i;
    dsvDesc.Texture2DArray.MipSlice = 0;
    dsvDesc.Flags = 0;
    DXCall(device->CreateDepthStencilView(textureResource, &dsvDesc, &arraySliceDSVs[i]));
}
The other way to do it is to have 1 depth stencil view that targets the entire array, and then use SV_RenderTargetArrayIndex from a geometry shader in order to specify which slice you want to render to.
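As an illustration of that second option, here's a rough sketch of a pass-through geometry shader that replicates each triangle to all 6 faces. The struct and cbuffer names (and the choice of passing world-space positions in and applying per-face view-projection matrices here) are my own assumptions; the relevant part is just writing SV_RenderTargetArrayIndex:

struct GSInput
{
    float4 positionWS : WORLDPOS;     // world-space position from the vertex shader
};

struct GSOutput
{
    float4 position : SV_Position;
    uint slice : SV_RenderTargetArrayIndex;   // selects the cubemap face / array slice
};

// Hypothetical per-face view-projection matrices provided by the application.
cbuffer CubeFaceConstants
{
    float4x4 FaceViewProj[6];
};

[maxvertexcount(18)]
void GSMain(triangle GSInput input[3], inout TriangleStream<GSOutput> output)
{
    // Emit the triangle once per cube face, tagged with the destination slice.
    for(uint face = 0; face < 6; ++face)
    {
        for(uint v = 0; v < 3; ++v)
        {
            GSOutput vert;
            vert.position = mul(input[v].positionWS, FaceViewProj[face]);
            vert.slice = face;
            output.Append(vert);
        }
        output.RestartStrip();
    }
}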


#5271208 Vertex to cube using geometry shader

Posted by MJP on 14 January 2016 - 08:56 PM

Relevant blog post: http://www.joshbarczak.com/blog/?p=667



