stu_pidd_cow

DX11 [D3D11] Constant buffers and textures...

Recommended Posts

stu_pidd_cow    3136
I'm trying to get my head around textures, but I think I'm coming to the realisation that I don't understand constant buffers entirely, either. My biggest trouble is figuring out how to access a texture from a shader.

I have created a ID3D11ShaderResourceView via D3DX11CreateShaderResourceViewFromFile and everything has worked fine up to here. I've heard that constant buffers and textures are completely different, and that I have to link it to a shader using CSSetShaderResources. Is this right? How am I then supposed to access this particular texture from the shader?

On a similar note, how do constant buffers work in terms of accessing them from the shader? The way I understand it, the constant buffer is treated as an array (or memory block), and you access a particular part of it by using register( c# ), where # is the start of the memory you want to use. Is this correct?

Thanks.

Jason Z    6434
A constant buffer is bound to the pipeline directly instead of needing a resource view like the texture. Using both of these resource types from within HLSL requires the declaration of an appropriate resource object in the HLSL code. For example, here is a sample shader that has a constant buffer declared in it:


cbuffer Transforms
{
    matrix WorldViewProjMatrix;
    matrix WorldViewMatrix;
};

struct VS_INPUT
{
    float3 position : POSITION;
    float2 tex      : TEXCOORD0;
    float3 normal   : NORMAL;
};

struct VS_OUTPUT
{
    float4 position : SV_Position;
    float2 tex      : TEXCOORD0;
    float3 normal   : NORMAL;
};

VS_OUTPUT VSMAIN( in VS_INPUT input )
{
    VS_OUTPUT output;

    output.position = mul( float4( input.position, 1.0f ), WorldViewProjMatrix );
    output.tex = input.tex;

    // Transform the normal into view space and pack it into [0,1] for output.
    float3 ViewSpaceNormal = mul( float4( input.normal, 0.0f ), WorldViewMatrix ).xyz;
    output.normal = ViewSpaceNormal * 0.5f + 0.5f;

    return output;
}

And here is the matching pixel shader with a texture declared in it:

Texture2D    ColorTexture  : register( t0 );
SamplerState LinearSampler : register( s0 );

struct VS_OUTPUT
{
    float4 position : SV_Position;
    float2 tex      : TEXCOORD0;
    float3 normal   : NORMAL;
};

float4 PSMAIN( in VS_OUTPUT input ) : SV_Target
{
    float4 vValues = ColorTexture.Sample( LinearSampler, input.tex );
    return vValues;
}

You have the option of either sampling a texture (Sample) or directly loading a single texel (Load), both of which are performed with the methods of the resource object that is declared (as shown above).

The constant buffer contents are globally visible within the scope of the file that they are declared in. So in the first snippet above, the named parameters WorldViewProjMatrix and WorldViewMatrix are both visible as if they weren't declared inside the cbuffer. This also means that the names within all of your cbuffers need to be unique, so watch out for that.

MJP    19753
You bind a texture by setting its shader resource view onto the pipeline for whichever stage you want to access it. So PSSetShaderResources for the pixel shader, VSSetShaderResources for the vertex shader, etc. Constant buffers on the other hand are bound with PSSetConstantBuffers or the equivalent for the stage you want to use them in.
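Put together, the binding calls for the pixel shader stage look something like this (a sketch only, assuming the constant buffer, SRV, and sampler state were created earlier; the variable names are illustrative):

```cpp
// Slot numbers correspond to the register() declarations in the shaders:
// b0 for the cbuffer, t0 for the texture, s0 for the sampler.
ID3D11Buffer*             buffers[1]  = { constantBuffer };
ID3D11ShaderResourceView* srvs[1]     = { textureSRV };
ID3D11SamplerState*       samplers[1] = { samplerState };

context->VSSetConstantBuffers( 0, 1, buffers );   // register(b0) in the VS
context->PSSetShaderResources( 0, 1, srvs );      // register(t0) in the PS
context->PSSetSamplers( 0, 1, samplers );         // register(s0) in the PS
```

The first argument is the start slot, and it matches the register number in HLSL, so if you declare your texture at register(t3), you bind the SRV at slot 3.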

Constant buffers in shaders provide a very simple way to access a small number of individual constants in your shader. You access individual constants as if they were global variables. It looks like this:

cbuffer VSConstants : register( b0 )
{
    float4x4 World;
    float4x4 WorldViewProjection;
}

// Vertex shader
VSOutput VSMain( in VSInput input )
{
    float3 worldPos = mul( float4( input.Position, 1.0f ), World ).xyz;
    ...
}


Textures are accessed using a Texture object. There are a few types of texture object (Texture2D, Texture3D, TextureCube, Texture2DMS) depending on the type and MSAA settings of the texture. See the documentation for the methods you can call on a texture object. "Sample" is the most standard: you provide texture coordinates and a sampler state, which gives you the standard texture fetch with linear/aniso filtering + mipmapping. The other methods are useful for more specialized cases.

I would suggest that you have a look through the samples that come with the SDK. Almost all of them use textures and constant buffers.

