Member Since 29 Mar 2007

#4978095 Just a quick shader question

Posted by MJP on 08 September 2012 - 02:34 PM

Back in the old days before HDR, it was common to store some "glow" value in either the alpha channel or a separate render target. This value could be derived from some specific glow texture, or you could use the results of your specular lighting, or something along those lines. You can also just use a threshold or lower exposure to get your bloom source with LDR rendering, but in general you'll usually end up with more things blooming than you'd like. With HDR things are more natural, since you can directly express really bright light sources and bright surfaces.
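To make the threshold approach concrete, here's a minimal CPU-side sketch of extracting a bloom source from an LDR color. The struct, function names, and the 0.8 threshold are purely illustrative, not from any particular engine:

```cpp
#include <algorithm>
#include <cmath>

struct Color { float r, g, b; };

// Anything above the threshold contributes to the bloom buffer,
// rescaled so the threshold itself maps to zero.
inline float BloomChannel(float c, float threshold)
{
    return std::max(c - threshold, 0.0f) / (1.0f - threshold);
}

inline Color BloomSource(const Color& c, float threshold = 0.8f)
{
    return { BloomChannel(c.r, threshold),
             BloomChannel(c.g, threshold),
             BloomChannel(c.b, threshold) };
}
```

In practice this would run in a pixel shader over the scene render target, but the math is the same.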

#4978093 General questions on hdr and tonemapping

Posted by MJP on 08 September 2012 - 02:30 PM

I usually just use a floating point texture format like R16_FLOAT or R32_FLOAT for storing log(luminance), and those formats have a sign bit. Which format are you currently using?
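The sign bit matters because log(luminance) is negative for any pixel darker than 1.0. A quick CPU-side sketch of the usual log-average (geometric mean) luminance computation, with an epsilon guard I'm adding to avoid log(0) on black pixels:

```cpp
#include <cmath>
#include <vector>

// Geometric-mean (log-average) luminance, as commonly used for auto-exposure.
// log() is negative for luminance < 1, which is why the storage format
// needs a sign bit (e.g. R16_FLOAT / R32_FLOAT).
float LogAverageLuminance(const std::vector<float>& luminances)
{
    const float eps = 1e-4f;            // avoid log(0) on black pixels
    double sum = 0.0;
    for (float lum : luminances)
        sum += std::log(eps + lum);
    return static_cast<float>(std::exp(sum / luminances.size()));
}
```

On the GPU the same sum is typically produced by repeatedly downsampling a log-luminance render target.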

#4978092 What do you use for Audio in DirectX applications?

Posted by MJP on 08 September 2012 - 02:25 PM

I'm only including this:

#include <xaudio2.h>
#include <x3daudio.h>
#include <xaudio2fx.h>

#pragma comment (lib, "xaudio2.lib")
#pragma comment (lib, "x3daudio.lib")

It worked fine before with just

#include <xaudio2.h>
#pragma comment (lib, "xaudio2.lib")

However, I don't understand why implementing 3D audio causes the problem. The same headers and libraries are included in the SDK 3D Audio sample and I can run it fine; however, when I use these functions in my project, the missing DLL error comes up. I've checked the project options carefully in both my project and the SDK sample, and all the settings are the same. I checked all their code, and there is no reference in the source to such a DLL, so perhaps it's called for in the libs? But how can that be, if those libs were made over two years ago while the DLL it wants is from Windows 8?

Those libraries you're linking to are import libraries. Basically they're just stubs that allow the linker to hook up your code's function calls to the functions exported by the DLL, instead of you having to manually ask for the address of each DLL function at runtime. When you link to a DLL's import lib your app gains a dependency on that DLL, which means it will attempt to load that DLL when the app starts up. If the loader can't find the right DLL, you get that error.

#4977391 Small question about Bloom RT formats

Posted by MJP on 06 September 2012 - 04:22 PM

A R10G10B10A2 backbuffer won't give you any extra dynamic range, it will just give you more precision for the standard displayable range. Either way there's no guarantee that the display won't just convert it to something else when it outputs.

#4977361 Specular Highlight - fx-File - Vertexshader only

Posted by MJP on 06 September 2012 - 02:42 PM

When calculating the half vector, what you want is the view direction. This is a normalized vector pointing from the surface towards the camera. What you're doing is taking the view-space position and negating it, which is equivalent to (0, 0, 0) - viewSpacePosition and therefore gives you the view direction in view space. However your normal (and, I'm assuming, your light direction) is in world space, so you'll want the world-space view direction. You should pass in the world-space camera position through a constant buffer, then use normalize(g_cameraPositionWorldSpace - worldPos.xyz) when calculating your half vector.
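Here's a minimal CPU-side sketch of that computation, assuming the camera position and a normalized world-space light direction are available (the little Vec3 type is just for illustration):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

inline Vec3 Sub(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
inline Vec3 Add(const Vec3& a, const Vec3& b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
inline Vec3 Normalize(const Vec3& v)
{
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// viewDir points from the surface toward the camera, in world space.
// The half vector is the normalized sum of the light and view directions.
inline Vec3 HalfVector(const Vec3& cameraPosWS, const Vec3& worldPos, const Vec3& lightDirWS)
{
    Vec3 viewDir = Normalize(Sub(cameraPosWS, worldPos));
    return Normalize(Add(lightDirWS, viewDir));
}
```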

#4977101 Cube Map Rendering only depth ! No color writes. - BUG

Posted by MJP on 06 September 2012 - 01:07 AM

Depth-only rendering is definitely good when you can do it. In some cases the advantage will be nullified because you'll become vertex or triangle setup bound, but I'd still recommend doing it.

You don't have to use SV_Depth to output 1 - z/w. You just need to tweak your projection matrix. In most math libraries you can just reverse the near and far clip planes and you'll get the desired result. I wouldn't recommend any kind of SV_Depth output unless you really have to. Even the conservative depth stuff will still have some performance impact.
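To illustrate the reversed-clip-plane trick, here's a sketch of just the depth terms of a D3D-style perspective projection (z mapped to [0, 1] after the divide by w). The struct and function names are mine; any real math library builds the full matrix, but only these two terms affect depth:

```cpp
// zProj = A * viewZ + B, then depth = zProj / viewZ (the divide-by-w step):
//   A = zf / (zf - zn),  B = -zn * zf / (zf - zn)
// Passing the clip planes in reversed order (far as "near", near as "far")
// yields depth = 1 at the near plane and 0 at the far plane, with no
// SV_Depth output needed.
#include <cmath>

struct DepthTerms { float A, B; };

inline DepthTerms MakeDepthTerms(float zn, float zf)
{
    return { zf / (zf - zn), -zn * zf / (zf - zn) };
}

inline float ProjectDepth(const DepthTerms& t, float viewZ)
{
    return (t.A * viewZ + t.B) / viewZ;
}
```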

#4976157 How to down/up sample a render target

Posted by MJP on 03 September 2012 - 01:42 PM

You can read up on image scaling if you want some background info. The basic idea is that you sample one or more texels from the source render target, apply some sort of filter to those texels, and output the result. You're probably already familiar with the "point" and "linear" filtering modes that are built in to the hardware, which you can use just by taking a single texture sample with the appropriate sampler settings. But you can also implement more complex filters manually in your shader, if you wish.

For downscaling depth, things are a bit more tricky since some of the conventional wisdom used for scaling color images won't necessarily apply. Most people just end up using point filtering to downscale depth, which preserves edges but increases aliasing. To implement that, it really is as easy as you think it is: just use a pixel shader to sample the full-size render target, and output the value to your half-size render target.
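A CPU-side sketch of that point-filtered 2x2 depth downsample (on the GPU this is just a pixel shader doing one point-filtered sample per output texel):

```cpp
#include <vector>

// Point-filtered 2x2 depth downsample: each output texel takes the
// top-left source texel of its 2x2 footprint. No averaging is done,
// since an averaged depth value doesn't correspond to any actual surface.
std::vector<float> DownsampleDepthPoint(const std::vector<float>& src, int w, int h)
{
    std::vector<float> dst((w / 2) * (h / 2));
    for (int y = 0; y < h / 2; ++y)
        for (int x = 0; x < w / 2; ++x)
            dst[y * (w / 2) + x] = src[(y * 2) * w + (x * 2)];
    return dst;
}
```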

#4976123 Move Object Based on Camera's Direction

Posted by MJP on 03 September 2012 - 11:44 AM

A view matrix is the inverse of your camera's transformation matrix. A transformation matrix takes you from an object's local space to world space, while a view matrix takes you from world space to the camera's local space. If you invert the view matrix to get the transformation matrix, the camera's forward direction will be the third row of the matrix (the Z basis). For a view matrix you can also just transpose instead of inverting and grab the third row, which is equivalent to grabbing the _13, _23, and _33 components from the view matrix. Once you have the forward direction, you can just do position += forwardDir * moveAmt.
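A small sketch of grabbing the forward direction and moving, assuming a row-vector matrix convention like D3DXMATRIX (so _13, _23, _33 are m[0][2], m[1][2], m[2][2]); the types here are just illustrative:

```cpp
struct Vec3 { float x, y, z; };
struct Mat4 { float m[4][4]; };   // m[row][col], row-vector convention

// For a pure rotation the transpose equals the inverse, so the camera's
// world-space forward (Z basis) is the third *column* of the view matrix:
// the _13, _23, _33 components.
inline Vec3 CameraForwardFromView(const Mat4& view)
{
    return { view.m[0][2], view.m[1][2], view.m[2][2] };
}

inline Vec3 MoveAlongForward(const Vec3& pos, const Vec3& fwd, float amt)
{
    return { pos.x + fwd.x * amt, pos.y + fwd.y * amt, pos.z + fwd.z * amt };
}
```

Note this transpose shortcut only works when the view matrix contains rotation and translation but no scale.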

#4975819 Error #342: device_shader_linkage_semanticname_not_found

Posted by MJP on 02 September 2012 - 02:37 PM

Everything you've posted looks okay. Are you sure you have the right input layout and vertex shader bound when that error message gets output? If you capture a frame in PIX, you can double-click on the error and it will take you to the exact call that caused it. Then you can check the state of your device context, and make sure it's what you expect it to be.

#4975505 Shadow mapping advice

Posted by MJP on 01 September 2012 - 12:35 PM

The "hardware support" for shadow maps amounts to "free" depth comparison and 2x2 PCF filtering when sampling the shadow map. Usually it's implemented by rendering your shadow map depth to a depth buffer, then sampling that depth buffer with a special instruction and/or sampler state in the shader. Unfortunately I'm not familiar with how it's done in GLSL so I couldn't tell you offhand.

#4975502 Shadow Map Depth Issue

Posted by MJP on 01 September 2012 - 12:23 PM

Orthographic projections work just fine with shadow maps. The resulting depth value will actually be linear [0, 1] (where 0 is the near clip and 1 is the far clip) so there's no need to convert or rescale if you want to visualize it. In fact with an orthographic projection W is always 1.0, so you don't even need to divide by W like you do with a perspective projection.
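The mapping is just a linear rescale of view-space Z between the clip planes. A one-liner sketch (D3D-style [0, 1] depth range assumed):

```cpp
// D3D-style orthographic depth: linear in view-space z, already in [0, 1],
// with w always 1 so no perspective divide is needed.
inline float OrthoDepth(float viewZ, float zn, float zf)
{
    return (viewZ - zn) / (zf - zn);
}
```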

People usually use orthographic projections for sunlight shadows, since the light direction is constant for a directional light. Perspective projections are often used for spot lights, since a perspective projection better fits the cone-shaped volume affected by a spot light.

#4975501 Question about BRDF

Posted by MJP on 01 September 2012 - 12:20 PM

Normalized Lambertian is actually DiffuseAlbedo / Pi. Without the 1 / Pi factor you won't conserve energy.

To calculate outgoing radiance, you need to integrate BRDF * differential incident irradiance over all directions on the hemisphere surrounding the surface normal. Note that this is not "averaging", it's integrating. From your code it looks like you're attempting to estimate the integral with Monte Carlo, which is a common way of doing things in graphics. This means you must sum all of your samples, dividing each by the probability density function (PDF) of your sampling scheme. If you're uniformly sampling the hemisphere the PDF is 1 / (2 * Pi), since 2 * Pi is the solid angle of a hemisphere (half of 4 * Pi, the surface area of a sphere with radius 1). Since this factor is constant you can pull it out of your loop if you wish, and apply it at the end. You can also do the same for a Lambertian BRDF, since it's constant for each sample. No matter how you do it you need to be careful about numerical precision, although it probably won't be much of a problem if you're using doubles.
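A small sanity-check sketch of that estimator: a Lambertian BRDF (albedo / Pi) under uniform incident radiance of 1, with uniform hemisphere sampling (PDF = 1 / (2 * Pi)). The analytic answer is simply the albedo, so it's easy to verify convergence. The function and sampling details here are my own illustration, not the original poster's code:

```cpp
#include <cmath>
#include <random>

double EstimateOutgoingRadiance(double albedo, int numSamples, unsigned seed = 1234)
{
    const double pi = 3.14159265358979323846;
    std::mt19937 rng(seed);
    std::uniform_real_distribution<double> uniform(0.0, 1.0);

    double sum = 0.0;
    for (int i = 0; i < numSamples; ++i)
    {
        double cosTheta = uniform(rng);      // uniform hemisphere: cos(theta) ~ U[0, 1]
        double brdf = albedo / pi;           // Lambertian
        double pdf = 1.0 / (2.0 * pi);       // uniform over the hemisphere
        sum += brdf * cosTheta / pdf;        // incident radiance assumed to be 1
    }
    return sum / numSamples;                 // converges to albedo
}
```

Since the BRDF and PDF are constant here, both factors could be pulled out of the loop, exactly as described above.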

#4975494 Shadow Map Depth Issue

Posted by MJP on 01 September 2012 - 12:06 PM

You have a few problems:
  • You need to do the perspective divide in the pixel shader, otherwise the value won't get interpolated correctly. Output both Z and W from your vertex shader, then divide before outputting the value in your pixel shader.
  • When using a perspective projection, z/w is highly non-linear: typically most of your depth range ends up in the 0.9-1.0 range. If you want to visualize a perspective depth buffer, you should either remap the 0.9-1.0 range to 0-1 (like this: depth = saturate((depth - 0.9) * 10);), or convert the value back to a linear Z value like I explained in another thread.
  • 8 bits is not enough precision for a shadow map. You would be a lot better off with D3DFMT_R32F.
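The remap in the second point, written out as a plain function so the behavior is easy to check:

```cpp
#include <algorithm>
#include <cmath>

inline float Saturate(float x) { return std::min(std::max(x, 0.0f), 1.0f); }

// Stretches the 0.9-1.0 slice of a perspective depth buffer across 0-1
// so the depth variation is actually visible; anything nearer than
// depth 0.9 clamps to black.
inline float VisualizeDepth(float depth)
{
    return Saturate((depth - 0.9f) * 10.0f);
}
```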

#4975203 Star filter post process effect

Posted by MJP on 31 August 2012 - 12:38 PM

If you work in frequency space, you can convolve with arbitrary filter kernels in O(N) time regardless of kernel size, since convolution in frequency space is just a pointwise multiplication. But of course this means you need to use an FFT to convert to and from frequency space, which is complicated to implement and has its own O(N log N) runtime cost.

#4973304 Creating texture with initial subresource data not working

Posted by MJP on 25 August 2012 - 12:52 PM

You'll only be able to save that DXGI format as a .DDS. If you just want to view it there's a viewer included in DirectXTex that can handle the new DDS format. You can also just capture with PIX and look at the texture in there.