
MJP

Member Since 29 Mar 2007

#5298011 Basic texturing

Posted by MJP on 25 June 2016 - 12:28 PM

When you declare a texture in your HLSL shader code and compile that code with the shader compiler, the compiler will assign the texture to a t# register. There are several register types, but the t# registers are always used for shader resource views. By default, the compiler will assign the registers sequentially based on the order in which you declared your textures. So if you have TextureA, TextureB, and TextureC all declared in a row, then they'll get assigned to t0, t1, and t2 respectively. You can also explicitly tell the compiler which register you'd like to use by using the "register" keyword, like this:

 

Texture2D ObjTexture : register(t0);

 

Now the reason that the registers are important is because they exactly correspond to the binding slots used for PSSetShaderResources. So if you call PSSetShaderResources with StartSlot set to 3 and NumViews set to 2, then you will bind shader resource views to registers t3 and t4. In your case, the texture will get assigned to t0, so you can just pass 0 for StartSlot and 1 for NumViews, and then pass along a single-element array containing your shader resource view pointer.
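As a minimal sketch (assuming hypothetical 'context' and 'textureSRV' variables for your device context and shader resource view), that looks like this:

ID3D11ShaderResourceView* srvs[1] = { textureSRV };
context->PSSetShaderResources(0, 1, srvs); // StartSlot = 0 maps to register t0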

 

Sampler states work exactly the same way, except that they use a different set of registers and binding slots. Samplers will use registers s0 through s15, and they will correspond to the binding slots of PSSetSamplers.
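The equivalent sketch for samplers (again assuming hypothetical 'context' and 'samplerState' variables) would be:

ID3D11SamplerState* samplers[1] = { samplerState };
context->PSSetSamplers(0, 1, samplers); // binds to register s0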

 

The way that the binding slots work is that they're persistent on your device context even if you change shaders. So if you bind shader A, set 3 textures, and then draw, those same 3 textures will still be bound if you bind shader B. If you want to un-bind those textures, you need to do it by passing an array of NULL pointers to PSSetShaderResources (or by calling ID3D11DeviceContext::ClearState, which will clear all bindings for all shader stages).
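For example, here's one way to clear the first 3 pixel shader SRV slots (assuming a hypothetical 'context' pointer):

ID3D11ShaderResourceView* nullSRVs[3] = { nullptr, nullptr, nullptr };
context->PSSetShaderResources(0, 3, nullSRVs);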

 

Finally, one thing to keep in mind for advanced scenarios is that it's possible to query a shader's reflection data to find out which textures exist and which registers they were assigned to. To do that, you need to use the ID3D11ShaderReflection interface and call GetResourceBindingDesc/GetResourceBindingDescByName.
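Here's a rough sketch of what that looks like, assuming 'byteCode' is the ID3DBlob holding your compiled shader (D3DReflect and ID3D11ShaderReflection come from d3dcompiler.h and d3d11shader.h):

ID3D11ShaderReflection* reflector = nullptr;
D3DReflect(byteCode->GetBufferPointer(), byteCode->GetBufferSize(),
           IID_ID3D11ShaderReflection, reinterpret_cast<void**>(&reflector));

D3D11_SHADER_INPUT_BIND_DESC bindDesc = { };
reflector->GetResourceBindingDescByName("ObjTexture", &bindDesc);
// bindDesc.BindPoint now holds the t# register index for this texture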




#5297634 Two constant buffers - cant get it to work

Posted by MJP on 22 June 2016 - 03:14 PM

Does that shader not emit any warnings on compile?

 

The older versions of the shader compiler (pre-Windows 10) didn't warn you at all about this: they would just silently ignore your register assignment and do it automatically. The latest version of d3dcompiler_47 will give you a proper error message.




#5297399 New Post about Gamma Correction

Posted by MJP on 20 June 2016 - 08:35 PM

I would recommend being careful when explaining what sRGB is. A lot of people are under the mistaken impression that it's just the transfer function (AKA the "gamma curve"), but being an RGB color space it also specifies the chromaticities of the primaries. So you can have the situation where perhaps you use the primaries but not the transfer function, which is what people are usually using when they refer to "linear" space. Or you can have other standards (like Rec. 709) that use the same primaries, but have a different transfer function. You generally don't have to worry about that until you need to work in another color space, and then things can get confusing if you don't understand what the color space is actually specifying.
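For reference, here's a minimal sketch of the sRGB transfer function (the non-linear encoding, applied per-channel to a linear value in [0, 1]):

// Requires <cmath> for std::pow
float LinearToSRGB(float linearValue)
{
    // Small values use a linear segment, the rest use the 1/2.4 power curve
    if (linearValue <= 0.0031308f)
        return 12.92f * linearValue;
    return 1.055f * std::pow(linearValue, 1.0f / 2.4f) - 0.055f;
}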




#5297145 How to get patch id in domain shader.

Posted by MJP on 18 June 2016 - 04:12 PM

SV_PrimitiveID always starts at 0 for every draw call, and then increases for every primitive processed as part of that draw call. So if you only ever draw 1 primitive per draw call, then it's always going to be 0. If you need some sort of global ID, then you'll need to provide an offset in a constant buffer.




#5297000 gui rendering issue in dx12

Posted by MJP on 17 June 2016 - 12:50 PM

If you'd like, you can tell the debug layer to automatically break into the debugger whenever an error or warning occurs. I always do this, since it ensures that I notice and fix every issue. You can do it like this during initialization:

 

ID3D12InfoQueue* infoQueue = nullptr;
if (SUCCEEDED(d3d12Device->QueryInterface(IID_PPV_ARGS(&infoQueue))))
{
    // Break into the debugger whenever the debug layer emits a warning or error
    infoQueue->SetBreakOnSeverity(D3D12_MESSAGE_SEVERITY_WARNING, TRUE);
    infoQueue->SetBreakOnSeverity(D3D12_MESSAGE_SEVERITY_ERROR, TRUE);
    infoQueue->Release();
}



#5296832 DirectXMath - storing transform matrices

Posted by MJP on 16 June 2016 - 11:59 AM

__vectorcall is not the default for x64 (the default is __fastcall). You need to either decorate your functions with it or set it as the default calling convention (the /Gv compiler flag) if you want that behavior. However you need to be careful if you enable it as the default calling convention, since it will also apply to functions declared in headers from third-party libraries. This can lead to a mismatch where your calling code expects the __vectorcall convention, but the pre-compiled third-party lib was compiled with __fastcall.
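As a quick sketch, decorating a function looks like this (XM_CALLCONV is the DirectXMath macro that expands to __vectorcall on compilers that support it):

#include <DirectXMath.h>
using namespace DirectX;

// FXMVECTOR/FXMMATRIX allow the leading arguments to be passed in SIMD registers
XMVECTOR XM_CALLCONV TransformPoint(FXMVECTOR point, FXMMATRIX transform)
{
    return XMVector3Transform(point, transform);
}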




#5296533 Deferred Context Usage

Posted by MJP on 14 June 2016 - 04:47 PM

Yeah I'm going to piggy-back on what phantom said, and advise you to steer clear of deferred contexts in D3D11. Unfortunately the way that the D3D11 API is set up just doesn't work for multithreading, and so deferred contexts were never able to live up to their promise. The biggest problem comes from the fact that the driver often needs the full pipeline state at draw or dispatch time in order to patch shaders and set low-level GPU state. With deferred contexts some of the state might be inherited from a previous context, and so the driver ends up having to serialize at submission time in order to figure out the full set of state bits for a draw. The lack of lower-level memory access is also an issue, since it makes the semantics of things like Map/Discard more complicated.




#5295575 Phong model BRDF

Posted by MJP on 08 June 2016 - 12:00 AM

1. Reflectance is the ratio of outgoing light to incoming light. In other words, for any beam of light striking the surface it tells you how much of that light will reflect off of it instead of getting absorbed. Since it's a ratio, only [0,1] values make sense if you're going to enforce energy conservation. You can compute it for a BRDF and a light direction by integrating the result of the BRDF over the entire hemisphere of viewing directions. So you can essentially think of that process as summing up all of the light that's reflected in all directions from a single ray of light.

 

2. Rspec is the reflectance of the specular BRDF.

 

3. By "glancing angle" they mean that the vector light source is close to parallel with the surface plane. This is consistent with the common usage of the term glancing angle in the field of optics, where it refers to an incoming ray striking a surface.

 

4. So as that paragraph says, they compute the directional-hemispherical reflectance of the specular BRDF with the light direction held constant at θi = 0. Since the reflectance is highest when θi = 0, you know that the reflectance value represents the maximum possible reflectance value for the BRDF. So by computing the maximum reflectance and then dividing the BRDF by that value, you can be sure that the reflectance of the BRDF never exceeds 1 (as long as Cspec is <= 1).

 

If you want to derive this result yourself, start with the specular BRDF. This is the right side of 7.46:

 

f(l, v) = (Cspec / π) cos^m(α)

 

As we established earlier, we can compute directional-hemispherical reflectance by integrating our BRDF over the hemisphere of possible viewing directions. We'll call this set of directions Ωv, and in spherical coordinates we'll refer to the two coordinates as ϕv and θv (not to be confused with θi, which refers to our incident lighting direction). The integral we want to evaluate looks like this (note the cosθv factor, which is part of the definition of reflectance):

 

Rspec(l) = ∫Ωv (Cspec / π) cos^m(α) cosθv dΩ = ∫(ϕv = 0 to 2π) ∫(θv = 0 to π/2) (Cspec / π) cos^m(α) cosθv sinθv dθv dϕv

 

The "sinθv" term is part of the differential element of a spherical surface, which is defined as dS = r2sinθdθdϕ. In our case we're working on a unit hemisphere, so r = 1.

 

Now we're going to evaluate this with θi held constant at zero. In this case α is equal to θv, and so we can make that substitution. We can also pull out Cspec / π, since that part is constant:

 

Rspec = (Cspec / π) ∫(ϕv = 0 to 2π) ∫(θv = 0 to π/2) cos^(m+1)(θv) sinθv dθv dϕv = (Cspec / π) · 2π · (1 / (m + 2)) = 2Cspec / (m + 2)

 

You can verify the result of this integral using Wolfram Alpha.

 





#5295168 SampleLevel not honouring integer texel offset

Posted by MJP on 05 June 2016 - 06:50 PM

It's possible to just use the new compiler and still use the old SDK, if for some reason you're really keen on not switching. If you're using fxc.exe it's easy: just use the new version. If you're linking to the D3DCompiler DLL it's a little trickier, since you will probably have trouble making sure that your app links to the correct import lib. One way to make sure that you use the right version is to not use an import lib at all, and instead manually call LoadLibrary/GetProcAddress to get a pointer to the function you want to use from d3dcompiler_47.dll.
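A sketch of the manual-loading approach (pD3DCompile is the function pointer typedef for D3DCompile, declared in d3dcompiler.h):

// Explicitly load the new compiler DLL instead of relying on an import lib
HMODULE compilerModule = LoadLibraryA("d3dcompiler_47.dll");
if (compilerModule != nullptr)
{
    pD3DCompile compileFunc =
        reinterpret_cast<pD3DCompile>(GetProcAddress(compilerModule, "D3DCompile"));
    // compileFunc can now be invoked with the same arguments as D3DCompile
}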




#5294517 [D3D12] Issue pow HLSL function

Posted by MJP on 01 June 2016 - 12:51 PM

If you pass a hard-coded 0.0f as the exponent parameter of pow, the compiler is going to optimize away the pow() completely and just replace the whole expression with 1.0f. However if the exponent is not hard-coded and instead comes from a constant buffer or the result of some other computation, then it will need to actually evaluate the pow(). One catch with pow() is that DX bytecode doesn't contain a pow assembly instruction, which is consistent with the native ISA of many GPUs. Instead the compiler will use the following approximation:

pow(x, y) = exp2(y * log2(x))

If you take a look at the generated assembly for your program, you should find a sequence that corresponds to this approximation. Here's a simple example program and the resulting bytecode:

cbuffer Constants : register(b0)
{
    float x;
    float y;
}

float PSMain() : SV_Target0
{
    return pow(x, y);
}

ps_5_0
dcl_globalFlags refactoringAllowed
dcl_constantbuffer CB0[1], immediateIndexed
dcl_output o0.x
dcl_temps 1
log r0.x, cb0[0].x
mul r0.x, r0.x, cb0[0].y
exp o0.x, r0.x
ret

Notice the log instruction (which is a base-2 logarithm) followed by the exp instruction (which is also base-2).

The one thing you need to watch out for with the log instruction is that it will return -INF if passed a value of 0, and NaN if passed a value that's < 0. This is why the compiler will often emit a warning if you don't use saturate() or abs() on the value that you pass as the first parameter to pow().

 

In light of all of this, I would take a look at the assembly being generated for your shader. It may reveal why you don't get the results you expect, or possibly an issue with how the compiler is generating the bytecode. You should also double-check that you're not passing a negative value as the first parameter of pow(), which you can avoid by passing saturate(RdotV).




#5294228 (Physically based) Hair shading

Posted by MJP on 30 May 2016 - 02:33 PM

 

 

In fact, I did not solve the IBL problem yet. To my knowledge, The Order uses tangent irradiance maps.

 

 

We didn't end up shipping with that, since we removed all usage of SH from the game towards the end of the project. Instead we applied specular from the 9 spherical gaussian lobes stored in our probe grid.




#5294083 Question about GI and Pipelines

Posted by MJP on 29 May 2016 - 05:15 PM

As Hodgman already explained, you can implement VCT as part of a deferred or forward renderer. However a deferred renderer will generally give you more flexibility in how you can fit it into your rendering pipeline. Back when UE4 was telling everyone that they were going to use VCT, their presentations mentioned that they couldn't afford to perform the cone traces at full resolution. Instead they were doing specular at half-resolution and diffuse even lower than that, and then upsampling. This is really only feasible with a deferred renderer, since a forward renderer typically rules out any kind of mixed-resolution shading.




#5293844 Still no DX12 Topic Prefix for the forums?

Posted by MJP on 27 May 2016 - 12:49 PM

I don't think I have permission to do this, so I'll need to get in touch with the admins.




#5293843 When will SlimDX be updated so as to contain also DirectX 12 ?

Posted by MJP on 27 May 2016 - 12:49 PM

Promit answered this question yesterday.




#5293675 does video memory cache effect efficiency

Posted by MJP on 26 May 2016 - 03:35 PM

I think you'll find more results if you search for "GPU cache" instead of "video memory cache". This is because the cache structure is really a part of a GPU and not its onboard memory, and also because the term "video memory" is pretty outdated.

 

Unlike CPUs, there's no generic cache structure that's used by all GPUs. Instead GPUs have a mixture of special-case and general-purpose caches where the exact number and details can vary significantly between hardware vendors, or even across different architectures from the same vendor. They also tend to be much smaller and much more transient compared to CPU caches. CPUs actually dedicate a relatively large portion of their die space to cache, while GPUs tend to dedicate more space to their SIMD ALU units and corresponding register files. Ultimately this all means that the cache behavior ends up being different from what you would expect from a CPU with large L1/L2 caches, and you can't always apply the same rules-of-thumb.





