MJP

Member Since 29 Mar 2007

#5003280 EVSM performance tip!

Posted by MJP on 22 November 2012 - 11:56 AM

I'd like to point out that this does work on my GTX 295 which doesn't support DirectX 10.1, so it's actually doable with OpenGL but not with DirectX it seems?


Yeah it was common knowledge that the GTX 200 series supported *most* of the 10.1 feature set, but they never bothered to get it fully compliant.

I would not recommend using non-linear depth. The precision distribution is not made to work well with floats, and the additional steps all reduce the effective precision. The much improved precision of using eye-space distance can be spent on a higher C value to reduce bleeding. However, you are correct that performance is better thanks to early z rejection before the shader is run. It might be worth using both a color attachment for the eye-space distance and a depth buffer for early z rejection if your scene has lots of overdraw.


Well, for orthographic projections (directional lights) the depth value is already linear, and for a perspective projection you can flip the near and far planes if you're using a floating-point depth buffer, which mostly balances out the precision issue. But of course, your mileage may vary.
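
In case it's useful, here's a minimal sketch of the flipped-plane (reversed-Z) setup in C++ with DirectXMath and D3D11. This is an illustration rather than code from the thread: the fov/aspect/clip values and the device pointer are assumed to exist in your app, and the depth buffer is assumed to use a float format such as DXGI_FORMAT_D32_FLOAT.

#include <d3d11.h>
#include <DirectXMath.h>
using namespace DirectX;

// Swap the near and far planes so depth is 1.0 at the near plane and 0.0 at
// the far plane, which pairs the float format's precision distribution much
// more evenly with the projection's distribution.
XMMATRIX MakeReversedZProjection(float fovY, float aspect, float nearClip, float farClip)
{
    // Note the swapped near/far arguments.
    return XMMatrixPerspectiveFovLH(fovY, aspect, farClip, nearClip);
}

// The depth test has to be flipped to match, since "closer" is now a larger value.
void CreateReversedZDepthState(ID3D11Device* device, ID3D11DepthStencilState** state)
{
    CD3D11_DEPTH_STENCIL_DESC dsDesc(D3D11_DEFAULT);
    dsDesc.DepthFunc = D3D11_COMPARISON_GREATER_EQUAL;
    device->CreateDepthStencilState(&dsDesc, state);
}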


#5003277 How do I optimize GUI rendering

Posted by MJP on 22 November 2012 - 11:50 AM

A 10-15 fps change doesn't tell you anything without the fps value it affects.

If you take 15 fps away from 1000 fps, I don't think you have a problem...


What this guy said. FPS is not a linear measure of performance. I would suggest always describing things in terms of milliseconds per frame, and always using that metric when profiling.
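
To make the non-linearity concrete, here's a quick worked example in C++ (the frame rates are arbitrary numbers chosen for illustration):

#include <cstdio>

// Frame time in milliseconds is 1000 / fps, so the same "15 fps" drop can
// represent wildly different costs depending on the starting frame rate.
int main()
{
    const float costAt1000fps = 1000.0f / 985.0f - 1000.0f / 1000.0f; // ~0.015 ms
    const float costAt60fps   = 1000.0f / 45.0f  - 1000.0f / 60.0f;   // ~5.6 ms
    std::printf("1000 -> 985 fps costs %.3f ms per frame\n", costAt1000fps);
    std::printf("  60 ->  45 fps costs %.3f ms per frame\n", costAt60fps);
    return 0;
}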


#5003143 EVSM performance tip!

Posted by MJP on 22 November 2012 - 12:40 AM

The first time I saw this trick mentioned was in the tech post-mortem for Little Big Planet (they rolled it into the blurring pass since they didn't use MSAA), so it's been around for a while! In fact, Andrew Lauritzen used it in his Sample Distribution Shadow Maps sample. Early on with VSMs it wasn't possible to do this, since DX9 hardware didn't support sampling MSAA textures and DX10 hardware didn't support sampling MSAA depth buffers (DX10.1 added this). But now on DX11-level hardware it's totally doable.

You'd probably get even better performance if you didn't use a fragment shader at all for rendering the shadow maps, since the hardware will run faster with the fragment shader disabled. Explicitly writing out depth through gl_FragDepth will also slow things down a bit, since the hardware won't be able to skip occluded fragments.
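
As a rough sketch of the depth-only idea in OpenGL (illustrative only: CompileShader is a hypothetical helper, and shadowVertexSource is assumed to be your shadow-pass vertex shader):

// Link a program with only a vertex shader attached. With no fragment shader,
// the hardware just writes the interpolated depth, and with no gl_FragDepth
// writes it can keep its early depth optimizations.
GLuint vs = CompileShader(GL_VERTEX_SHADER, shadowVertexSource); // hypothetical helper
GLuint depthOnlyProgram = glCreateProgram();
glAttachShader(depthOnlyProgram, vs);
glLinkProgram(depthOnlyProgram);

// During the shadow pass:
glUseProgram(depthOnlyProgram);
// ... draw shadow casters into the depth attachment ...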


#5003140 HLSL distance vs dot

Posted by MJP on 22 November 2012 - 12:31 AM

The hardware may or may not have a native dot product instruction; it depends on the GPU. The internal ISA actually changes pretty frequently between different chip generations, and like I mentioned before, the latest architectures from both Nvidia and AMD use only scalar instructions.

distance() does compute the square root of the sum; I just left that part out for brevity.
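
For reference, here's what the two expand to mathematically, written out in plain C++ with a hand-rolled float3 purely for illustration:

struct float3 { float x, y, z; };

// dot(a, b) = a.x*b.x + a.y*b.y + a.z*b.z
float dot(const float3& a, const float3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// distance(a, b) expands to sqrt(dot(a - b, a - b)) -- the dot product of the
// difference with itself, followed by a square root.
#include <cmath>
float distance(const float3& a, const float3& b)
{
    const float3 d = { a.x - b.x, a.y - b.y, a.z - b.z };
    return std::sqrt(dot(d, d));
}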


#5003088 Issue With D3DXLoadSurfaceFromFile() In Loading Pngs

Posted by MJP on 21 November 2012 - 07:24 PM

D3DXCreateTextureFromFile will scale your image to power-of-2 dimensions by default, in order to support older GPUs with poor non-power-of-2 support. If you don't care about that, you can use D3DXCreateTextureFromFileEx and tell it not to scale the image.
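
Something along these lines (a sketch only; "device" is your IDirect3DDevice9 pointer and the filename is a placeholder):

// D3DX_DEFAULT_NONPOW2 keeps the file's original width/height instead of
// rounding up to a power of 2, and D3DX_FILTER_NONE avoids any rescaling.
IDirect3DTexture9* texture = NULL;
HRESULT hr = D3DXCreateTextureFromFileEx(
    device,
    TEXT("image.png"),
    D3DX_DEFAULT_NONPOW2,   // width
    D3DX_DEFAULT_NONPOW2,   // height
    1,                      // mip levels
    0,                      // usage
    D3DFMT_UNKNOWN,         // take the format from the file
    D3DPOOL_MANAGED,
    D3DX_FILTER_NONE,       // image filter
    D3DX_FILTER_NONE,       // mip filter
    0,                      // no color key
    NULL,
    NULL,
    &texture);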


#5003043 Issue With D3DXLoadSurfaceFromFile() In Loading Pngs

Posted by MJP on 21 November 2012 - 04:09 PM

StretchRect is not a true "rendering" function. I would strongly recommend using ID3DXSprite, which actually renders geometry and supports alpha-blending and transformations (rotation, scaling, etc.).
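
A bare-bones sketch of the usual ID3DXSprite flow ("device" and "texture" are assumed to already exist in your code):

// Create the sprite interface once at startup.
ID3DXSprite* sprite = NULL;
D3DXCreateSprite(device, &sprite);

// Each frame, between BeginScene and EndScene:
sprite->Begin(D3DXSPRITE_ALPHABLEND);        // enables alpha blending for you
D3DXVECTOR3 position(100.0f, 50.0f, 0.0f);   // screen-space position in pixels
sprite->Draw(texture, NULL, NULL, &position, D3DCOLOR_XRGB(255, 255, 255));
sprite->End();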


#5003037 Shader Functions

Posted by MJP on 21 November 2012 - 04:04 PM

In your first example, you would compile two different shaders. When you compile a shader you specify the entry point function, so you would compile one with "Texture" as the entry point and then the other with "noTexture" as the entry point.

To pass values to a shader, you use a constant buffer. The syntax looks like this:

// Resource declarations to go with the snippet (these would normally live
// at the top of your shader file):
Texture2D tex;
SamplerState linearSampler;

cbuffer Constants
{
    bool bTex;
};

// PS_INPUT is whatever your vertex shader outputs (here it needs Tex and Color).
float4 Texture(PS_INPUT input) : SV_Target
{
    if (bTex)
        return tex.Sample(linearSampler, input.Tex);
    else
        return input.Color;
}

Then you also need to handle creating a constant buffer, filling it with data, and binding it in your C++ app code. If you're not familiar with constant buffers, I would simply recommend consulting some of the tutorials and samples that come with the SDK.
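
For what it's worth, here's a rough sketch of that C++ side in D3D11 (assumes existing device and context pointers; error handling omitted):

// The struct mirrors the cbuffer above. Constant buffer sizes must be a
// multiple of 16 bytes, hence the padding; HLSL bool occupies 4 bytes.
struct Constants
{
    BOOL bTex;
    float padding[3];
};

D3D11_BUFFER_DESC desc = {};
desc.ByteWidth = sizeof(Constants);
desc.Usage = D3D11_USAGE_DEFAULT;
desc.BindFlags = D3D11_BIND_CONSTANT_BUFFER;

Constants initialData = { TRUE };
D3D11_SUBRESOURCE_DATA srData = { &initialData };

ID3D11Buffer* constantBuffer = NULL;
device->CreateBuffer(&desc, &srData, &constantBuffer);

// Update the contents whenever bTex changes, and bind the buffer to the
// pixel shader stage so the shader can see it:
context->UpdateSubresource(constantBuffer, 0, NULL, &initialData, 0, 0);
context->PSSetConstantBuffers(0, 1, &constantBuffer);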


#5002424 What does the HLSL printf function do?

Posted by MJP on 19 November 2012 - 02:02 PM

I think I sorta fixed the link, but something is still messed up with the text.

Anyway AFAIK nobody actually implements printf in HLSL except for the REF device.


#5002202 directx 11 problem switching textures in simple font engine

Posted by MJP on 18 November 2012 - 08:09 PM

Could you post your pixel shader as well?


#5002132 How to represent a point using spherical harmonics?

Posted by MJP on 18 November 2012 - 04:35 PM

Hey ginkgo! Well, yes - but isn't that exactly the idea behind spherical harmonics - approximating a function using a polynomial of a finite degree (e.g. degree 2 already gives an error rate of less than 1%)?


Yes, you generally use spherical harmonics as a means of approximating some function defined about a sphere using a compact set of coefficients. The issue ginkgo was alluding to is that spherical harmonics are essentially a frequency-space representation of a function, where the lower coefficients correspond to lower-frequency components of the function and the higher coefficients correspond to higher-frequency components.

With your typical "punctual" light source (point light, directional light, etc.), the incoming radiance in terms of a sphere surrounding some point in space (such as the surface you're rendering) is essentially a Dirac delta function. A delta function would require an infinite number of coefficients to be represented in spherical harmonics, so an exact representation is basically impossible. You can get the best approximation for a given SH order by directly projecting the direction of the delta onto the basis functions (which is mentioned in Stupid SH Tricks), but if you were to display the results for 2nd-order SH you'd find that you basically end up with a big low-frequency blob oriented about the direction. This is why "area" lights that have some volume associated with them work better with SH, since they can be represented better with fewer coefficients. The same goes for any function defined about a sphere, for instance a BRDF or an NDF.
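
For concreteness, here's roughly what "projecting the direction of the delta onto the basis functions" looks like for the first three SH bands (9 coefficients), sketched in C++. The constants are the standard real SH basis constants; the function name and the idea of scaling the result by the light's intensity are just for illustration.

// Evaluates the 9 real SH basis functions (bands l = 0..2) for a unit
// direction (x, y, z). Projecting a punctual light amounts to evaluating
// this at the light direction and scaling each coefficient by the light's
// intensity.
void ProjectDirectionOntoSH9(float x, float y, float z, float sh[9])
{
    sh[0] = 0.282095f;                          // l=0
    sh[1] = 0.488603f * y;                      // l=1, m=-1
    sh[2] = 0.488603f * z;                      // l=1, m=0
    sh[3] = 0.488603f * x;                      // l=1, m=1
    sh[4] = 1.092548f * x * y;                  // l=2, m=-2
    sh[5] = 1.092548f * y * z;                  // l=2, m=-1
    sh[6] = 0.315392f * (3.0f * z * z - 1.0f);  // l=2, m=0
    sh[7] = 1.092548f * x * z;                  // l=2, m=1
    sh[8] = 0.546274f * (x * x - y * y);        // l=2, m=2
}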


#5002101 How to nicely handle lots of "define"-related commands

Posted by MJP on 18 November 2012 - 02:53 PM

I suppose the third option would be to pass the macro definitions to the compiler through the command line, which you could set up in your build system.
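
For example, with fxc the /D flag sets a preprocessor define, so a build script could compile one variant per permutation (the names here are placeholders):

fxc /T ps_5_0 /E PSMain /D USE_SHADOWS=1 /Fo Shader_Shadows.cso Shader.hlsl
fxc /T ps_5_0 /E PSMain /D USE_SHADOWS=0 /Fo Shader_NoShadows.cso Shader.hlsl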


#5001811 Z-fighting & the depthtest

Posted by MJP on 17 November 2012 - 01:10 PM

Normally for this sort of thing you would enable depth writes and depth tests for your opaque geometry, and then for your transparent geometry you render with depth tests enabled but depth writes disabled. To do that you will need two separate depth-stencil states. For the opaques, you'll want to set DepthEnable to TRUE and DepthWriteMask to D3D11_DEPTH_WRITE_MASK_ALL. Then for the transparents, you'll want to set DepthEnable to TRUE and DepthWriteMask to D3D11_DEPTH_WRITE_MASK_ZERO. This should solve your problems, as long as you draw the terrain first and the clouds second.

To answer your second question, you can enable depth writes without depth tests. Just set DepthEnable to FALSE, and DepthWriteMask to D3D11_DEPTH_WRITE_MASK_ALL. You can also leave DepthEnable set to TRUE, and set DepthFunc to D3D11_COMPARISON_ALWAYS.
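
A minimal sketch of those two states (assuming an existing ID3D11Device* named device; stencil settings are left at their defaults):

// Opaque geometry: depth test on, depth writes on (these are the defaults).
CD3D11_DEPTH_STENCIL_DESC opaqueDesc(D3D11_DEFAULT);
ID3D11DepthStencilState* opaqueState = NULL;
device->CreateDepthStencilState(&opaqueDesc, &opaqueState);

// Transparent geometry: depth test on, depth writes off.
CD3D11_DEPTH_STENCIL_DESC transparentDesc(D3D11_DEFAULT);
transparentDesc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ZERO;
ID3D11DepthStencilState* transparentState = NULL;
device->CreateDepthStencilState(&transparentDesc, &transparentState);

// Bind the appropriate state before each group of draws:
// context->OMSetDepthStencilState(opaqueState, 0);       // terrain
// context->OMSetDepthStencilState(transparentState, 0);  // clouds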


#5000981 Storing non-color data in texture from pixel shader

Posted by MJP on 14 November 2012 - 01:04 PM

What kind of conversions?


The kind of conversions specified in the documentation for the DXGI_FORMAT enumeration.

I just figured that since I was using the asfloat() function to store my UINT values, the texture would accept it as a float - how would the texture know that it's actually a binary representation of a UINT?


It won't know that it's a UINT, which can be a problem. The hardware will assume it's a 32-bit floating point value and will treat it as such. If any format conversions are applied, it will apply floating point operations to your output value, which will make it totally invalid. For instance it might clamp to [0, 1] and convert to an unsigned integer, which will result in a bunch of garbage when you sample it and attempt to cast the value back to an integer.
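
If you need the raw integer bits preserved, one option (sketched here under the assumption of a D3D11 device and placeholder width/height variables) is to give the render target an integer DXGI format, so no float conversions are ever applied, and have the pixel shader output a uint directly:

// An R32_UINT target stores whatever bits the shader outputs, with no
// clamping or normalization applied on write or read.
D3D11_TEXTURE2D_DESC texDesc = {};
texDesc.Width = width;
texDesc.Height = height;
texDesc.MipLevels = 1;
texDesc.ArraySize = 1;
texDesc.Format = DXGI_FORMAT_R32_UINT;
texDesc.SampleDesc.Count = 1;
texDesc.Usage = D3D11_USAGE_DEFAULT;
texDesc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;

ID3D11Texture2D* texture = NULL;
device->CreateTexture2D(&texDesc, NULL, &texture);
// The pixel shader would then declare its output as "uint" (and later reads
// would use a Texture2D<uint> with Load) instead of round-tripping through
// asfloat/asuint on a float target.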


#5000827 Create a LightMap with raytracing

Posted by MJP on 14 November 2012 - 02:25 AM


First of all you *need* to create texture coordinates for your lightmap (unless you already have them). For more information on how to unwrap your environment into a lightmap, read for example here - http://www.blackpawn...exts/lightmaps/

Now for each pixel in the lightmap, there exists a "microfacet" in your scene, and you compute the light for that microfacet using your ray tracer.


Assuming that your whole world is made out of quads...
Is there any method for packing a triangle world into a lightmap? Maybe I should just find every 2 triangles that have similar normals, and then assume those two triangles are one quad?


It's a fairly complex topic. You can read through this to get an idea of the headaches involved, as well as possible solutions.


#5000826 Create a LightMap with raytracing

Posted by MJP on 14 November 2012 - 02:24 AM

I need to read the texels of my "empty" lightmap, find the model position associated with it, and then use the raytracing at this position.
Do this for all my texels and that's it??!!


You actually need the position and normal, since you need to trace rays through the hemisphere surrounding the normal to calculate the total irradiance incident on the texel. Also, in practice you'll find that it's actually not so simple to determine the position and normal for any given texel. To do it you need to figure out which triangle the texel is mapped to, figure out the barycentric coordinates of the texel relative to that triangle, and then use those barycentric coordinates to interpolate the position and normal from the vertices. To make things even more complicated, you'll often have multiple triangles overlapping a single texel, which means you either have to consistently choose which triangle to use, or blend between the results from both triangles.

One common way to do this is to rasterize the geometry in UV space. This lets you perform the interpolation as a normal part of the rasterization process, and you can also use standard rasterization techniques to choose which triangle to use, or even analytically compute the triangle coverage. You can even rasterize on the GPU if you want, although you'll probably want to use more than one sample per pixel since GPUs don't use conservative rasterizers.
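
As a rough sketch of the interpolation step (plain C++; Vec2/Vec3 are stand-in types, and this assumes you've already determined which triangle the texel's UV center falls inside):

struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };

// Computes the barycentric coordinates of point p within the UV-space
// triangle (a, b, c), then uses them to interpolate a per-vertex attribute
// such as position or normal (renormalize afterwards for normals).
static Vec3 InterpolateAttribute(Vec2 p, Vec2 a, Vec2 b, Vec2 c,
                                 Vec3 attrA, Vec3 attrB, Vec3 attrC)
{
    const float denom = (b.y - c.y) * (a.x - c.x) + (c.x - b.x) * (a.y - c.y);
    const float w0 = ((b.y - c.y) * (p.x - c.x) + (c.x - b.x) * (p.y - c.y)) / denom;
    const float w1 = ((c.y - a.y) * (p.x - c.x) + (a.x - c.x) * (p.y - c.y)) / denom;
    const float w2 = 1.0f - w0 - w1;

    Vec3 result;
    result.x = w0 * attrA.x + w1 * attrB.x + w2 * attrC.x;
    result.y = w0 * attrA.y + w1 * attrB.y + w2 * attrC.y;
    result.z = w0 * attrA.z + w1 * attrB.z + w2 * attrC.z;
    return result;
}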



