MJP

Member Since 29 Mar 2007

#5149711 MipMaps as relevant as before?

Posted by MJP on 26 April 2014 - 03:17 PM

Of course. Without mipmaps, your textures will alias significantly when they're minified. Anisotropic filtering is not a replacement for mipmaps; it works alongside them.
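
To illustrate in HLSL (texture and sampler names are placeholders, and the sampler's filter mode would be set to anisotropic on the CPU side): forcing the top mip with SampleLevel reproduces exactly the minification aliasing you get without mipmaps, while a regular Sample through an anisotropic sampler still selects from the mip chain.

Texture2D ColorTex : register(t0);
SamplerState AnisoSampler : register(s0); // D3D11_FILTER_ANISOTROPIC set CPU-side

float4 PSMain(float4 pos : SV_Position, float2 uv : TEXCOORD0) : SV_Target
{
    // Anisotropic filtering still reads from the mip chain; it just takes
    // multiple samples along the axis of anisotropy.
    float4 filtered = ColorTex.Sample(AnisoSampler, uv);

    // Forcing mip 0 is equivalent to having no mipmaps at all:
    // expect shimmering/aliasing under minification.
    float4 aliased = ColorTex.SampleLevel(AnisoSampler, uv, 0.0f);

    return filtered;
}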




#5149521 HLSL postion only semantic, no color

Posted by MJP on 25 April 2014 - 06:18 PM

Back in D3D9 the COLOR semantic was special: it was treated as a low-precision value clamped to [0, 1]. That made it suitable for RGBA colors, but not for general floating-point data. Use TEXCOORD0 instead.
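
A quick sketch of what that looks like in practice (the struct and member names are made up):

// D3D9-era HLSL: COLOR was low precision and clamped to [0, 1],
// so route general float data through TEXCOORD semantics instead.
struct VSOutput
{
    float4 position  : POSITION;
    float3 customVal : TEXCOORD0; // full precision, not clamped
    float4 vertColor : COLOR0;    // fine for [0, 1] RGBA colors only
};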




#5149520 Understanding the “sampler array index must be a literal expression” error in...

Posted by MJP on 25 April 2014 - 06:11 PM

You can do it quite easily by setting an appropriate viewport. The D3D11_VIEWPORT structure specifies the width and height of the viewport, as well as the X and Y offset. So for instance, let's say you had a 256x256 render target and you wanted to render to the top-left corner. You would set TopLeftX = 0 and TopLeftY = 0, and then set Width and Height to 128. Then if you wanted to render to the top-right corner you would keep the same Width and Height, but set TopLeftX = 128. And so on, until you've rendered all 4 corners.




#5149251 IBL Problem with consistency using GGX / Anisotropy

Posted by MJP on 24 April 2014 - 06:46 PM

I just use a compute shader to do cubemap preconvolution. It's generally less of a hassle to set up compared to using a pixel shader, since you don't have to set up any rendering state.

 

You can certainly generate a diffuse irradiance map by directly convolving the cubemap, but it's a lot faster to project onto 3rd-order spherical harmonics. Projecting onto SH is essentially O(N), and you can then compute diffuse irradiance with an SH dot product. Cubemap convolution is essentially O(N^2). 
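
To illustrate the "SH dot product" part: once the environment is projected onto 3rd-order SH, evaluating diffuse irradiance for a normal is just a handful of multiply-adds. This sketch uses the standard constants from Ramamoorthi and Hanrahan's irradiance environment map paper; the sh[] coefficient ordering is an assumption.

// sh[0..8] = L00, L1-1, L10, L11, L2-2, L2-1, L20, L21, L22
float3 EvalSHIrradiance(float3 n, float3 sh[9])
{
    const float c1 = 0.429043f, c2 = 0.511664f, c3 = 0.743125f,
                c4 = 0.886227f, c5 = 0.247708f;

    return c1 * sh[8] * (n.x * n.x - n.y * n.y)
         + c3 * sh[6] * (n.z * n.z)
         + c4 * sh[0]
         - c5 * sh[6]
         + 2.0f * c1 * (sh[4] * n.x * n.y + sh[7] * n.x * n.z + sh[5] * n.y * n.z)
         + 2.0f * c2 * (sh[3] * n.x + sh[1] * n.y + sh[2] * n.z);
}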




#5149237 Screenshot of your biggest success/ tech demo

Posted by MJP on 24 April 2014 - 04:45 PM

This pic from our old E3 trailer is pretty cool, and so is this one from a more recent demo.

 

Most of my tech demos are pretty boring to look at, but a long time ago I was working on an XNA game in my spare time that I never got close to finishing.




#5149205 what's the precondition of hdr postprocess

Posted by MJP on 24 April 2014 - 01:28 PM

The obvious disadvantage is that if you need destination alpha these formats are no good to you.  It's also the case that packing and unpacking a format such as RGBE costs some extra ALU instructions which need to be weighed against the extra bandwidth required by a full 64-bit FP format (which you can now safely assume is supported by all hardware).

 

I'll also add that hardware filtering is generally incorrect for these kinds of "packed" formats, although it may not be too noticeable depending on the format and the content.
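
As a concrete example of the packing cost being weighed here, a typical shared-exponent RGBE encode/decode looks something like this (a sketch, not a canonical implementation; the exact bias and edge-case handling vary):

float4 PackRGBE(float3 color)
{
    float maxChannel = max(max(color.r, color.g), color.b);
    float exponent = ceil(log2(max(maxChannel, 1e-6f)));

    // Mantissas in RGB, biased exponent in A.
    return float4(color * exp2(-exponent), (exponent + 128.0f) / 255.0f);
}

float3 UnpackRGBE(float4 rgbe)
{
    float exponent = rgbe.a * 255.0f - 128.0f;
    return rgbe.rgb * exp2(exponent);
}

This also shows why hardware filtering misbehaves: the filter interpolates the mantissa and exponent channels independently, which is not the same as interpolating the decoded values.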




#5148652 Package File Format

Posted by MJP on 21 April 2014 - 11:04 PM

We call our packages "archives". The format is basically a table of contents containing a map of symbols (hashed string asset names) to a struct containing the offset + size of the actual asset data. All of our asset IDs are flat, like in Hodgman's setup. The whole archive is compressed using Oodle (compression middleware by RAD Game Tools), and when we load an archive we stream it in chunk by chunk asynchronously and pipeline the decompression in parallel. Once that's done we do a quick initialization step, where we mostly just fix up pointers in the data structures (on Windows we also create D3D resources in this step, because you have to do that at runtime). Once this is done, users of the assets can load them individually by asset ID, which basically just amounts to a binary search through the map and then returning a pointer once the asset is found.

 

As for loose files vs. packages, we support both for development builds. Building a level always triggers packaging an archive, but when we load an archive we check the current status of the individual assets and load them off disk if we determine that the version on disk is newer. That way you get fast loads by default, but you can still iterate on individual assets if you want to do that.




#5148405 BRDF gone wrong

Posted by MJP on 20 April 2014 - 02:50 PM

The most common cause of NaN in a shader is division by 0. In your case you will get division by 0 whenever NdotL or NdotV is 0, since your denominator has those terms in it. To make that work with your current setup, you would need to wrap your specular calculations in an if statement that checks that both NdotL and NdotV are greater than 0. However, in many cases it's possible to write your code in such a way that there's no chance of division by 0. For instance, take your "implicit G" function. This is meant to cancel out the NdotL * NdotV in the denominator by putting the same terms in the numerator. So in that case, it would be better to cancel it out in your code by removing the implicitG function and also removing the NdotL and NdotV from the denominator.
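
In code, the cancellation looks something like this (a sketch with made-up names; the D and F terms come from your distribution and Fresnel functions):

// Before: implicitG(NdotL, NdotV) returned NdotL * NdotV, which the
// denominator then divided back out -- 0/0 (NaN) whenever either dot is 0.
// float3 specular = D * implicitG(NdotL, NdotV) * F / (4.0f * NdotL * NdotV);

// After: cancel the terms algebraically so no division can produce NaN.
float3 SpecularImplicitG(float D, float3 F)
{
    // (D * G * F) / (4 * NdotL * NdotV) with G = NdotL * NdotV reduces to:
    return D * F / 4.0f;
}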

 

Also, I should point out another common mistake that you're making: you need to multiply your entire BRDF by NdotL. If you look up the definition of the BRDF, you'll find that it's the ratio of lighting scattered towards the eye (which is the value you're computing in your fragment shader) relative to the irradiance incident on the surface. When you're dealing with point lights/spot lights/directional lights/etc., the irradiance is equal to LightIntensity * LightAttenuation * Shadowing * NdotL. In your case you don't have shadows and you don't seem to be using an attenuation factor (which is fine), so you'll want to multiply your specular by (uLightColor * NdotL). A lot of people tend to associate the NdotL with diffuse, but it's really not part of the diffuse BRDF. A Lambertian diffuse BRDF is actually just a constant value; the NdotL is part of the irradiance calculations.
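
Putting that together (uLightColor is from your shader; diffuseAlbedo and specular are placeholders for your own terms):

static const float PI = 3.14159265f;

float3 ComputeLighting(float3 N, float3 L, float3 diffuseAlbedo,
                       float3 specular, float3 uLightColor)
{
    // The full BRDF (Lambertian diffuse is just albedo / Pi, plus specular)
    // times the irradiance from a punctual light: uLightColor * NdotL.
    // The NdotL belongs to the irradiance, not to the diffuse BRDF.
    float NdotL = saturate(dot(N, L));
    return (diffuseAlbedo / PI + specular) * uLightColor * NdotL;
}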




#5147950 Speed - Texture Lookups and Structured Buffers

Posted by MJP on 18 April 2014 - 11:48 AM

Texture reads are expensive (relatively speaking) because the GPU has to fetch the data from off-chip memory and then wait for that memory to become available. Buffer reads have the same problem, so you're not going to avoid it by switching to buffers. When you're bottlenecked by memory access, performance will depend heavily on your access patterns with regard to the cache. In this regard textures have an advantage, because GPUs usually store textures in a "swizzled" pattern that maps well to hardware caches when texels are fetched in a pixel shader. Buffers are typically stored linearly, which doesn't map as well to pixel-shader access patterns.
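
For reference, the two kinds of reads look like this in HLSL (names are placeholders). The performance difference comes entirely from the underlying memory layout, not from the syntax:

Texture2D<float4> DataTex : register(t0);
StructuredBuffer<float4> DataBuf : register(t1);

float4 ReadFromTexture(uint2 pos)
{
    // Swizzled/tiled layout: 2D-neighboring texels tend to share cache lines.
    return DataTex[pos];
}

float4 ReadFromBuffer(uint2 pos, uint width)
{
    // Linear layout: vertically adjacent pixels are 'width' elements apart,
    // which is much less friendly to a pixel shader's 2D access pattern.
    return DataBuf[pos.y * width + pos.x];
}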




#5147810 Preparing Textures to Reduce Shimmering

Posted by MJP on 17 April 2014 - 11:06 PM

Are you talking about shimmering from specular, or from the colors in the texture itself? If it's the former, you should read the link Promit provided and perhaps also check out this thread where we were discussing the same topic. If it's the latter, then you can probably address it by prefiltering the mips differently. Filtering out higher frequencies will reduce aliasing, but will also remove high-frequency details from the texture. Certain filter kernels can counteract this somewhat by adding a "sharpening" effect due to their negative lobes.

In theory you really shouldn't get any aliasing from a "standard" mip chain and trilinear filtering, but GPUs tend to play a little loose with their filtering quality in order to boost performance. You can actually adjust this manually on some GPUs in the driver control panel.




#5147467 When advance in Shadow Mapping

Posted by MJP on 16 April 2014 - 03:16 PM

I agree with Ashaman that voxelization could potentially allow for new ways of approaching shadows, at least if games have that data available to them. The recent trend of calculating occlusion from analytical occluders (like what they're doing in The Last of Us) is also an interesting development, and potentially opens the door to completely different ways of approaching shadows.




#5147047 where to start Physical based shading ?

Posted by MJP on 15 April 2014 - 12:17 AM

As you've just discovered, there has been some good recent research into handling specular aliasing from normal maps. For physically based shading you're pretty much always going to want roughness maps (since roughness is often the most defining aspect of a material), so if you can bake your specular AA into your roughness map you'll essentially get it for free. At times it will be a somewhat weak approximation of a ground truth result, but at the moment most people seem to think it's good enough.
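
As a concrete example of this kind of technique, here's a sketch of a Toksvig-style adjustment (from the "Mipmapping Normal Maps" paper), which uses the length of the averaged normal fetched from a mipped normal map to widen the specular lobe. It's one such method, not necessarily the one from the research above, and it's shown for Blinn-Phong specular power rather than roughness:

// The shorter the averaged (unnormalized) normal in a mip, the more the
// underlying normals disagreed -- so reduce the effective specular power.
float ToksvigSpecularPower(float specPower, float avgNormalLen)
{
    float ft = avgNormalLen / (avgNormalLen + specPower * (1.0f - avgNormalLen));
    return ft * specPower;
}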

Keep in mind that normal maps aren't the only source of aliasing. Fine geometric detail will also be prone to aliasing once triangles become too small relative to pixels. I've had some success using custom MSAA resolves to combat this issue, but for a lot of people MSAA isn't an option. Temporal antialiasing can also help with or without MSAA.




#5147045 is there any difference about the order of clear rendertargetview and setrend...

Posted by MJP on 15 April 2014 - 12:08 AM

In D3D10/D3D11, clearing works regardless of whether or not you bind the render target, so it doesn't matter in which order you do them.




#5146750 IBL Problem with consistency using GGX / Anisotropy

Posted by MJP on 13 April 2014 - 01:22 PM

The best you can do with a pre-convolved cubemap is to integrate the environment with an isotropic distribution, which gets you the reflection when V == N (the head-on angle). It will give incorrect results as you get to grazing angles, so you won't get that long, vertical "streaky" look that's characteristic of microfacet specular models. If you apply Fresnel to the cubemap result you can also get reflections with rather high intensity, so you have to pursue approximations like the ones proposed in those course notes in order to keep the Fresnel from blowing out. It's possible to approximate the streaky reflections with multiple samples from the cubemap if you're willing to take the hit, and you can also use multiple samples along the tangent direction to approximate anisotropic reflections.
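
For context, the usual way such a pre-convolved cubemap is sampled at runtime is something like this (a sketch; the names and the roughness-to-mip mapping are assumptions), which is exactly why it can only represent the V == N lobe:

TextureCube<float4> PreconvolvedEnvMap : register(t0);
SamplerState LinearSampler : register(s0);

float3 SampleEnvSpecular(float3 N, float3 V, float roughness, float numMips)
{
    // One isotropic lobe per roughness level, stored in the mip chain.
    // The lookup uses a single reflection direction, so the result is only
    // correct head-on (V == N); the stretched grazing-angle lobe is lost.
    float3 R = reflect(-V, N);
    float mip = roughness * (numMips - 1.0f);
    return PreconvolvedEnvMap.SampleLevel(LinearSampler, R, mip).rgb;
}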

 

For our cloth BRDF we have a duplicate set of cubemaps that are convolved with the inverted Gaussian distribution used in that BRDF. Just like the GGX cubemaps, it gets you the correct result when V == N, but incorrect results at grazing angles.




#5145734 convert depth from linear to projection (deffered particles)

Posted by MJP on 09 April 2014 - 12:06 PM

Just multiply your depth by your projection matrix:

 

// z is the linear view-space depth; ProjMatrix is the same projection
// matrix you use when rendering the scene.
float2 zw = mul(float4(0, 0, z, 1.0f), ProjMatrix).zw;
float projDepth = zw.x / zw.y; // perspective divide -> non-linear device depth




