
MJP

Member Since 29 Mar 2007

#5292928 How to organize worlds/levels as seen in the new Doom

Posted by MJP on 22 May 2016 - 02:39 PM

The new Doom uses Umbra; it says so right when you start up the game.

 

A lot of games still use some form of PVS, but probably not based on BSP's like in the Quake days. The games that I have worked on used manual camera volumes where the set of visible meshes/lights/particles/whatever was computed at build-time based on generated or hand-placed sample points. Other games compute visibility completely at runtime by rasterizing a coarse depth buffer on the CPU, and then testing bounding volumes for visibility. Some newer games are moving towards doing all occlusion culling and scene submission on the GPU.




#5292920 Phong model BRDF

Posted by MJP on 22 May 2016 - 02:16 PM

 

Why is the outgoing radiance equal to 0 when the angle between n and l is <= 0?

Because the surface is back-facing from the light's point of view. The rendering equation itself -- which the BRDF plugs into -- multiplies the entire BRDF by N⋅L, so having this condition within the BRDF is actually superfluous. I guess it's just mentioned because most realtime renderers actually don't implement the rendering equation properly, so that condition is required to avoid getting specular highlights on the wrong side of an object.
 
As for the rest, this cos^m term seems weird. Phong is based around (L ⋅ reflect(V, N))^m, which is equivalent to cos^m(θ_r), where θ_r is the angle between the reflection direction and the light direction...

 


It's not cos^m, it's cos^m(α_r), where α_r is the angle between the light direction and the reflected view direction.
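
To put that in shader terms, the classic Phong specular term looks something like this (a minimal HLSL sketch with my own variable names, not the book's notation):

// Classic Phong specular: cos^m(alpha_r), where alpha_r is the angle between
// the light direction and the reflected view direction.
// n, l, and v are assumed to be normalized, with l and v pointing away from the surface.
float PhongSpecular(in float3 n, in float3 l, in float3 v, in float m)
{
    // Back-facing from the light's point of view: no contribution
    if (dot(n, l) <= 0.0f)
        return 0.0f;

    float3 r = reflect(-v, n);              // reflected view direction
    float cosAlpha = saturate(dot(r, l));   // cos(alpha_r)
    return pow(cosAlpha, m);
}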




#5292549 What's the advantage of Spherical Gaussians used in The Order vs. Envirom...

Posted by MJP on 19 May 2016 - 06:36 PM

Let's back up a bit here. Our ultimate goal for our baked lightmap/probe data was to store a representation of the incoming radiance on the sphere or hemisphere surrounding a point, so that we can integrate our diffuse and specular BRDF's against that radiance in order to determine the outgoing reflectance for the viewing direction. In simpler terms, we need to store the incoming lighting in a way that makes it convenient to compute the visible light coming out. Spherical Gaussians (SG's) are nice for this purpose, since a small number of them can potentially represent the radiance field as long as the radiance is somewhat low-frequency. They can do this because they have a "width", which means that the sum of them can potentially cover the entire surface area of a hemisphere or sphere. SG's are also convenient because it's possible to come up with a reasonable closed-form approximation for the integral of (BRDF * SG). This ultimately means that computing the outgoing lighting for a pixel is an O(N) operation, where N is the number of SG's per sample point.
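
For reference, a single SG lobe is usually parameterized like this (a minimal HLSL sketch; the struct layout and names here are just illustrative, not our exact code):

struct SG
{
    float3 Amplitude;   // lobe amplitude (can be an RGB radiance value)
    float3 Axis;        // unit-length lobe direction
    float Sharpness;    // lambda: higher values mean a narrower lobe
};

// Evaluate the lobe in direction 'dir' (assumed to be normalized):
// G(dir) = Amplitude * exp(Sharpness * (dot(dir, Axis) - 1))
float3 EvaluateSG(in SG sg, in float3 dir)
{
    return sg.Amplitude * exp(sg.Sharpness * (dot(dir, sg.Axis) - 1.0f));
}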

 

This is all somewhat separate from the issue of how the data is actually stored in textures so that it can be accessed at runtime. In our game, we stored the data in 2D textures for static meshes (just like standard lightmaps) and in 3D textures for our probe grids. In practice we actually keep an array of textures, where each texture contains just 1 SG for each texel. Storing as textures means that we can use texture filtering hardware to interpolate spatially between neighboring texels, either along the surface for static meshes or in 3D space for dynamic meshes. Textures also allow you to use block compression formats like BC6H, which can let you save memory.
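
As a rough sketch of what fetching from that kind of layout can look like (hypothetical resource names, a fixed set of lobe directions, and a shared sharpness; our actual shaders differ in the details):

#define NumSGs 9

Texture2DArray<float4> SGLightMap : register(t0);   // one SG amplitude per array slice
SamplerState LinearSampler : register(s0);

cbuffer SGConstants : register(b0)
{
    float4 SGDirections[NumSGs];    // xyz = lobe axis for each slice
    float SGSharpness;              // shared lobe sharpness
};

// Reconstruct the incoming radiance in direction 'dir' from the baked SG set.
// In practice you'd integrate the BRDF against each lobe analytically instead
// of point-sampling the radiance like this.
float3 ReconstructRadiance(in float2 lightMapUV, in float3 dir)
{
    float3 radiance = 0.0f;

    [unroll]
    for (uint i = 0; i < NumSGs; ++i)
    {
        // Hardware bilinear filtering interpolates each lobe's amplitude spatially
        float3 amplitude = SGLightMap.SampleLevel(LinearSampler, float3(lightMapUV, i), 0.0f).xyz;
        radiance += amplitude * exp(SGSharpness * (dot(dir, SGDirections[i].xyz) - 1.0f));
    }

    return radiance;
}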

 

So to finally get to your question: you're asking why you wouldn't store the baked data in lots of mini 3x3 or 4x4 environment maps. Let's look at this in terms of the two aspects I just mentioned: how the data is stored, and what you're actually storing. Having a set of environment maps allows you to interpolate in the angular domain, since each texel essentially represents a patch of surface area on a sphere or hemisphere. However if we're using SG's, then we don't actually need angular interpolation. Instead we consider each SG individually, which requires no interpolation. However we do want to interpolate spatially, since we'll want to shade pixels at a rate that's likely to be greater than the density of the baked texels. As I mentioned earlier, 2D or 3D textures work well for spatial interpolation since we can use hardware filtering, whereas if you used miniature environment maps you would have to manually interpolate between sample points.

 

Now to get to the other part: what you actually store in the texture. You seem to be suggesting an approach where the environment map contains a single radiance value in each texel, instead of a distribution of radiance like you have with an SG. The problem here is: how do you integrate against your BRDF(s)? If you consider each texel to contain the radiance for an infinitely small solid angle, then you can essentially treat each texel as a directional light and iterate over each one. However this is not great, since there will be "holes" in the hemisphere that aren't covered by your sparse radiance samples. So instead of doing that, you might try to pre-integrate your BRDF against a whole bunch of radiance samples, and store that result per-texel (I believe that this is what you're suggesting when you say "approximate a cone"). This is pretty much exactly the approach used by most games for their specular IBL probes.

The big catch with pre-filtering is that you can't actually pre-integrate a specular BRDF against a radiance field and store the result in a 2D texture, since the view dependence adds too many dimensions. So you're forced to pre-integrate the NDF assuming that V == N, sample the environment map based on the peak specular direction, and apply the Fresnel/visibility terms after the fact. Separating the NDF from Fresnel/visibility leads to error, and unfortunately this error gets worse as your roughness increases. The pre-integration also assumes a fixed roughness, so you need to store a mip chain with different roughness values that you interpolate between. This doesn't sound particularly appealing to me due to the error from pre-integration and mip interpolation, and you also lose the benefit of being able to spatially interpolate between your sample points. On top of all of this you still need to handle your diffuse BRDF, which requires an entirely different pre-integration.
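
For comparison, that pre-filtered probe path usually boils down to something like this at runtime (a sketch of the standard split-sum style lookup; the cubemap, lookup table, and names are assumptions rather than anything from our engine):

TextureCube<float4> PrefilteredEnvMap : register(t0);  // mip chain pre-filtered per roughness
Texture2D<float2> EnvBRDFLut : register(t1);           // pre-integrated Fresnel/visibility terms
SamplerState LinearSampler : register(s0);

float3 PrefilteredSpecularIBL(in float3 n, in float3 v, in float roughness,
                              in float3 specularColor, in float numMipLevels)
{
    // Sample along the peak specular direction, assuming V == N during pre-integration
    float3 r = reflect(-v, n);
    float mipLevel = roughness * (numMipLevels - 1.0f);
    float3 prefiltered = PrefilteredEnvMap.SampleLevel(LinearSampler, r, mipLevel).xyz;

    // Apply the separately pre-integrated Fresnel/visibility terms after the fact
    float nDotV = saturate(dot(n, v));
    float2 envBRDF = EnvBRDFLut.SampleLevel(LinearSampler, float2(nDotV, roughness), 0.0f).xy;
    return prefiltered * (specularColor * envBRDF.x + envBRDF.y);
}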

 

TL;DR: our approach allows us to use hardware filtering to interpolate samples, and we can use an analytical approximation for both diffuse and specular without having to rely on pre-integration.




#5290911 Some questions about cascaded variance shadow mapping

Posted by MJP on 09 May 2016 - 07:33 PM

The key insight here is that all of your cascades are parallel with each other. In other words, they all project along a common local Z axis. With that assumption, you can remove the need for having a completely separate matrix for each cascade. To do this you need to think of your shadow matrix as being 3 separate transforms composed together: a rotation, a scale, and a translation. The rotation is based on the orientation of your directional light: applying it will transform a coordinate so that it's in a local coordinate space relative to the light, where the Z axis is aligned with the light's direction. The translation will transform the position so that it's relative to the origin of the projection, which is typically the center of the projection's near clip plane. Finally, the scale will transform the position such that -1 is the left/bottom of the projection, and 1 is the right/top of the projection. The translation and scale are typically unique for each cascade, since the projections will be different in size (in order to allow cascades to cover increasing amounts of the viewable area) and will also be located at different locations in world space. However the orientation will be the same, since it's purely based on the light direction. This means we can set things up such that we have one shared matrix representing the cascade rotation, while having a unique scale + translation for each cascade.

 

Storing your shadow projections in this manner can be a good idea from a performance point of view, since the data is more compact than having a full matrix per cascade. However it also allows you to handle gradients in an elegant way. Let's say you were to take a single point, compute the shadow map UV coordinate for two different cascades, and then compute the gradients. The gradients for each cascade would obviously be different, otherwise you wouldn't have issues at cascade boundaries. However since the projections are orthographic, the two gradients will always be proportional to one another. In fact, the ratio between the two is equal to the ratio between the scale components of the respective cascade transforms. So if we store the cascade scales separately, we can use them to "adjust" a gradient based on the cascade that was selected, without having to apply the full transform to the original surface position.

 

The way that I implemented this in my shadow demo was to store a single 4x4 matrix that represented the transform for the first cascade. Then I would store a translation and scale for every cascade, where that translation and scale represented the values needed to transform a point from the first cascade to the Nth cascade. This means that the translation for the first entry would be 0, and the scale would be 1.0. However they would be different for the following cascades, since those cascades are larger and centered around different points. Computing these scale and translation values is fairly straightforward: transform a few points with the first cascade matrix, transform the same points with your Nth cascade matrix, and then compare the results. Applying the transform in the shader is also pretty simple: first transform by the matrix for the first cascade, then apply the scale/translation based on the cascade that was selected for that pixel. If you need gradients for VSM, you just need to compute the gradients after applying the transform from the first matrix, and then scale them by the scale value for the selected cascade.
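
Here's a minimal HLSL sketch of the shader side of that (the constant buffer layout and names are hypothetical, and the offset/scale convention has to match however you computed those values at build time):

#define NumCascades 4

cbuffer ShadowConstants : register(b0)
{
    float4x4 ShadowMatrix;                  // full transform for the first cascade
    float4 CascadeOffsets[NumCascades];     // per-cascade translation (zero for cascade 0)
    float4 CascadeScales[NumCascades];      // per-cascade scale (one for cascade 0)
};

// Returns the shadow-map position for the selected cascade, plus gradients that
// have been rescaled to match that cascade. Must be called from a pixel shader
// outside of divergent flow control, since it uses derivative instructions.
float3 GetShadowPosition(in float3 positionWS, in uint cascadeIdx,
                         out float2 shadowPosDX, out float2 shadowPosDY)
{
    // Project into the first cascade's space
    float3 shadowPos = mul(float4(positionWS, 1.0f), ShadowMatrix).xyz;

    // Gradients are computed from the first cascade's UV's...
    shadowPosDX = ddx_fine(shadowPos.xy);
    shadowPosDY = ddy_fine(shadowPos.xy);

    // ...and then rescaled for the selected cascade, since the orthographic
    // projections only differ by a scale and a translation
    shadowPosDX *= CascadeScales[cascadeIdx].xy;
    shadowPosDY *= CascadeScales[cascadeIdx].xy;

    // Apply the per-cascade offset and scale to the position itself
    shadowPos += CascadeOffsets[cascadeIdx].xyz;
    shadowPos *= CascadeScales[cascadeIdx].xyz;

    return shadowPos;
}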

 

Regarding your second question about the mip levels: it may be true that only the first cascade uses the highest-resolution mip level. However you wouldn't want to directly rasterize to a lower-resolution mip level, since you'll get a lot of aliasing from rasterizing at a lower sample rate. The idea with mipmaps is that you rasterize at a high sampling rate, then pre-filter down to lower-resolution mip levels so that you get a nice, stable result.




#5290200 Irradiance Volume vs. 4-Basis PRT in FarCry

Posted by MJP on 05 May 2016 - 12:40 AM

It seems that 2-band SH also has ringing. When I implemented my SH2 irradiance volume I used a Lanczos window to reduce it. Since the windowing only involves a few trivial multiplies of the SH coefficients, from a performance perspective it doesn't seem like a big deal. Maybe storage and filtering cost isn't really the point of the question.

 

FarCry's motivation really confused me for a while, until by chance I found that The Order: 1886 (SIGGRAPH 2015 course) also used a multi-basis SG baking solution. One of the most interesting things about the course is that they shared some experience with using an SH3 irradiance cube to represent HDR lighting: namely, HDR lighting can cause some SH coefficients to become very large negative numbers in order to cancel out the high positive coefficients, which is really bad for baking quality and compression.

 

So finally I found my own answer: don't use an SH irradiance cube in HDR lighting situations. A low-band SH irradiance-cube representation under HDR lighting can be far from accurate, and it's not suitable as baking output. Use a multi-basis PRT method instead.

 

Indeed, that was the conclusion we eventually came to while working on The Order. SH has some really great properties, but ultimately it doesn't do well for storing arbitrary lighting environments. It's not so bad if you're storing very low-frequency data from indirect lighting, but if you ever try to bake in direct lighting from an area light source, the result is unusable without filtering. But then once you filter, you completely lose the directionality, which also doesn't look right. SG's are much better in this regard, and also have the capability of storing higher-frequency signals.




#5289015 [D3D12] Binding multiple shader resources

Posted by MJP on 27 April 2016 - 06:44 PM

The CopyDescriptors approach is mostly for convenience and rapid iteration, since it doesn't require you to have descriptors in a contiguous table until you're ready to draw. For a real engine where you care about performance, you'll probably want to pursue something along the lines of what Jesse describes: put your descriptors in contiguous tables from the start, so that you're not constantly copying things around while you're building up your command buffers.

 

I also want to point out that the sample demonstrates another alternative to both approaches in its use of indexing into descriptor tables. In that sample it works by grabbing all of the textures needed to render the entire scene, putting them in one contiguous descriptor table, and then looking up the descriptor indices from a structured buffer using the material ID. Using indices can effectively give you an indirection, which means that your descriptors don't necessarily have to be contiguous inside the descriptor heap.
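
On the shader side, that kind of index-based indirection might look roughly like this (an SM 5.1-style sketch with dynamic indexing; the buffer layout and names are illustrative, not the sample's actual code):

struct MaterialTextureIndices
{
    uint Albedo;
    uint Normal;
    uint Roughness;
    uint Metallic;
};

// One big descriptor table containing every texture needed to render the scene
Texture2D SceneTextures[] : register(t0, space1);
StructuredBuffer<MaterialTextureIndices> MaterialIndices : register(t0, space0);
SamplerState AnisoSampler : register(s0);

float3 SampleAlbedo(in uint materialID, in float2 uv)
{
    // The indirection: look up which descriptor holds this material's albedo map
    MaterialTextureIndices indices = MaterialIndices[materialID];
    return SceneTextures[NonUniformResourceIndex(indices.Albedo)].Sample(AnisoSampler, uv).xyz;
}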




#5288662 How does material layering work ?

Posted by MJP on 25 April 2016 - 03:21 PM

This seems to be a really nice workflow for artists, as they have some kind of material library which they can customize and blend to obtain advanced materials on complicated objects. This also seems to be the best option with regard to performance.

 
Yes, I would say that it has worked out very well for us. It helps divide the responsibility appropriately among the content team: a lot of environment artists can just pull from common material libraries and composite them together in order to create unique level assets. At the same time our texture/shader artists can author the low-level material templates, and whenever they make changes, those changes are automatically propagated to the final runtime materials.
 

So you are using some kind of uber shader that accepts multiple albedos, normals, etc., each with its associated tiling and offsets, plus a masking texture for each layer?


Yup. We have an ubershader that has a for loop over all of the material layers, but we generate a unique shader for every material with certain constants and additional code compiled in. The number of layers ends up being a hard-coded constant at compile time, and so we unroll the loop that samples the textures for each layer and blends the resulting parameters.
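
Very roughly, the generated shader ends up shaped something like this (a simplified sketch rather than our actual code; NumLayers gets baked in per material when the shader is generated):

#define NumLayers 3   // compiled in per material, so the loop below fully unrolls

Texture2D LayerAlbedoMaps[NumLayers] : register(t0);
Texture2D LayerNormalMaps[NumLayers] : register(t3);
Texture2D LayerBlendMasks[NumLayers] : register(t6);
SamplerState AnisoSampler : register(s0);

cbuffer LayerConstants : register(b0)
{
    float4 LayerUVScaleOffset[NumLayers];   // xy = tiling, zw = offset
};

void BlendMaterialLayers(in float2 uv, out float3 albedo, out float3 normalTS)
{
    // Start with the base layer
    float2 baseUV = uv * LayerUVScaleOffset[0].xy + LayerUVScaleOffset[0].zw;
    albedo = LayerAlbedoMaps[0].Sample(AnisoSampler, baseUV).xyz;
    normalTS = LayerNormalMaps[0].Sample(AnisoSampler, baseUV).xyz * 2.0f - 1.0f;

    // Blend each additional layer on top using its mask
    [unroll]
    for (uint i = 1; i < NumLayers; ++i)
    {
        float2 layerUV = uv * LayerUVScaleOffset[i].xy + LayerUVScaleOffset[i].zw;
        float blend = LayerBlendMasks[i].Sample(AnisoSampler, uv).x;

        // Blend the material parameters, not the final shading result
        albedo = lerp(albedo, LayerAlbedoMaps[i].Sample(AnisoSampler, layerUV).xyz, blend);
        float3 layerNormal = LayerNormalMaps[i].Sample(AnisoSampler, layerUV).xyz * 2.0f - 1.0f;
        normalTS = normalize(lerp(normalTS, layerNormal, blend));
    }
}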
 

You might also end up with multiple draw calls for those layers, which isn't the case with the above techniques, right? This has some performance cost; can it be neglected?


I don't think you would ever want to have multiple draw calls for runtime layer blending. It would likely be quite a bit more expensive than doing it all in a loop in the pixel shader.




#5288474 How does material layering work ?

Posted by MJP on 24 April 2016 - 12:29 PM

I'm not really familiar with the Allegorithmic tools, but I can certainly explain how the material compositing works for The Order. Our compositing process is primarily offline: we have a custom asset processing system that produces runtime assets, and one of the processors is responsible for generating the final composite material textures. The tools expose a compositing stack that's similar to layers in Photoshop: the artists pick a material for each layer in the stack, and each layer is blended with the layer below it. Each layer specifies a material asset ID, a blend mask, and several other parameters that can be used to customize how exactly the layers are composited (for instance, using multiply blending for albedo maps). The compositing itself is done in a pixel shader, but again this is all an offline process. At runtime we just end up with a set of maps containing the result of blending together all of the materials in the stack, so it's ready to be sampled and used for shading. This is nice for runtime performance, since you already did all of the heavy lifting during the build process.

 

The downside of offline compositing is that you're ultimately limited by the final output resolution of your composite texture, so that has to be chosen carefully. To help mitigate that problem we also support up to 4 levels of runtime layer blending, which is mostly used by static geometry to add some variation to tiled textures. So for instance you might have a wall with a brick texture tiled over it 10 times horizontally, which would obviously look tiled if you only had that layer. With runtime blending you can add some moss or some exposed mortar to break up the pattern without having to offline composite a texture that's 10x the size. 

 

With UE4 all of the layers are composited at runtime. So the pixel shader iterates through all layers, determines the blend amount, and if necessary samples textures from that layer so that it can blend the parameters with the previous layer. If you do it this way you avoid needing complex build processes to generate your maps, and you also can decouple the texture resolution of your layers. But on the other hand, it may get expensive to blend lots of layers.




#5288056 [D3D12] About CommandList, CommandQueue and CommandAllocator

Posted by MJP on 21 April 2016 - 04:59 PM

GPU's can also pre-fetch command buffer memory in order to hide any latency. Pre-fetching is easy in this case because the front-end will typically just march forward, since jumps are not common (unless you're talking about the PS3 :D)




#5287886 How to blend World Space Normals

Posted by MJP on 20 April 2016 - 08:48 PM

You'll want to use the vertex normal vector, since this is what determines the Z basis in your tangent frame.




#5287884 [D3D12] About CommandList, CommandQueue and CommandAllocator

Posted by MJP on 20 April 2016 - 08:44 PM

There is no implied copy from CPU->GPU memory when you submit a command list. GPU's are perfectly capable of reading from CPU memory across PCI-e, and on some systems the CPU and GPU may even share the memory.




#5287707 How to blend World Space Normals

Posted by MJP on 19 April 2016 - 07:34 PM

The sample implementation of RNM on that blog post assumes that the "s" vector is the unit Z axis (0, 0, 1), which is the case for tangent-space normal maps. This is represented in equations 5/6/7. If you want to work in world space, then you need to implement equation 4 as a function that takes s as an additional parameter:

 

float3 ReorientNormal(in float3 u, in float3 t, in float3 s)
{
    // Build the shortest-arc quaternion
    float4 q = float4(cross(s, t), dot(s, t) + 1) / sqrt(2 * (dot(s, t) + 1));
 
    // Rotate the normal
    return u * (q.w * q.w - dot(q.xyz, q.xyz)) + 2 * q.xyz * dot(q.xyz, u) + 2 * q.w * cross(q.xyz, u);
}

 

If you pass float3(0, 0, 1) as the "s" parameter, then you will get the same result as the pre-optimized version. However the compiler may not be able to optimize it as well as the hand-optimized code provided in the blog.
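
For example, blending a world-space detail normal onto a world-space base normal might look like this (variable names are illustrative):

// vertexNormalWS defines the Z basis of the tangent frame, as mentioned above
float3 blendedNormalWS = ReorientNormal(detailNormalWS, baseNormalWS, vertexNormalWS);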




#5286063 D3D alternative for OpenGL gl_BaseInstanceARB

Posted by MJP on 09 April 2016 - 03:09 PM

ExecuteIndirect supports setting arbitrary 32-bit constants through the D3D12_INDIRECT_ARGUMENT_TYPE_CONSTANT argument type. You can use this to specify transform/materialID data per-draw without having to abuse the instance offset. You can also set a root CBV or SRV via a GPU virtual address, which means you can use that to directly specify a pointer to the draw's transform data or material data.




#5285926 PIXBeginEvent and PIXEndEvent member functions on CommandList object

Posted by MJP on 08 April 2016 - 04:55 PM

The documentation you linked to is the old pre-release documentation. The final documentation doesn't list those methods. Instead it has BeginEvent and EndEvent, which are called by the helper functions in pix.h.




#5285343 [D3D12] Synchronization on resources creation. Need a fence?

Posted by MJP on 05 April 2016 - 02:55 PM

Yeah, there's no need to wait for commands to finish executing, because creating the resources doesn't actually issue any commands. If you look at some of the other samples, they all have a wait at the end of LoadAssets. They do this so that they can ensure that any GPU copies finish before they destroy upload resources. So for instance if you look at the HelloTexture sample, it goes like this:

 

  • Create upload resource
  • Map upload resource, and fill it with data
  • Issue GPU copy commands on a direct command list
  • Submit the direct command list
  • Wait for the GPU to finish executing the command list
  • ComPtr destructor calls Release on the upload resource, destroying it




