bjornp

Member Since 10 Mar 2013

Topics I've Started

[Solved] Depth not working

07 June 2014 - 11:44 AM

I have a problem where objects that should be hidden behind other objects are still rendered on top of them.

 

I have set up my depth buffer as follows (code from MSDN):

ID3D11Texture2D* pDepthStencil = NULL;
D3D11_TEXTURE2D_DESC descDepth;
descDepth.Width = 640;
descDepth.Height = 480;
descDepth.MipLevels = 1;
descDepth.ArraySize = 1;
descDepth.Format = DXGI_FORMAT_D32_FLOAT_S8X24_UINT;
descDepth.SampleDesc.Count = 1;
descDepth.SampleDesc.Quality = 0;
descDepth.Usage = D3D11_USAGE_DEFAULT;
descDepth.BindFlags = D3D11_BIND_DEPTH_STENCIL;
descDepth.CPUAccessFlags = 0;
descDepth.MiscFlags = 0;
D3DDevice->CreateTexture2D(&descDepth, NULL, &pDepthStencil);
 
D3D11_DEPTH_STENCIL_DESC dsDesc;
 
// Depth test parameters
dsDesc.DepthEnable = true;
dsDesc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL;
dsDesc.DepthFunc = D3D11_COMPARISON_LESS;
 
// Stencil test parameters
dsDesc.StencilEnable = true;
dsDesc.StencilReadMask = 0xFF;
dsDesc.StencilWriteMask = 0xFF;
 
// Stencil operations if pixel is front-facing
dsDesc.FrontFace.StencilFailOp = D3D11_STENCIL_OP_KEEP;
dsDesc.FrontFace.StencilDepthFailOp = D3D11_STENCIL_OP_INCR;
dsDesc.FrontFace.StencilPassOp = D3D11_STENCIL_OP_KEEP;
dsDesc.FrontFace.StencilFunc = D3D11_COMPARISON_ALWAYS;
 
// Stencil operations if pixel is back-facing
dsDesc.BackFace.StencilFailOp = D3D11_STENCIL_OP_KEEP;
dsDesc.BackFace.StencilDepthFailOp = D3D11_STENCIL_OP_DECR;
dsDesc.BackFace.StencilPassOp = D3D11_STENCIL_OP_KEEP;
dsDesc.BackFace.StencilFunc = D3D11_COMPARISON_ALWAYS;
 
// Create depth stencil state
ID3D11DepthStencilState * pDSState;
D3DDevice->CreateDepthStencilState(&dsDesc, &pDSState);
 
// Bind depth stencil state
D3DDeviceContext->OMSetDepthStencilState(pDSState, 1);
 
D3D11_DEPTH_STENCIL_VIEW_DESC descDSV;
descDSV.Format = DXGI_FORMAT_D32_FLOAT_S8X24_UINT;
descDSV.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2D;
descDSV.Texture2D.MipSlice = 0;
descDSV.Flags = 0;
 
// Create the depth stencil view
D3DDevice->CreateDepthStencilView(pDepthStencil, // Depth stencil texture
                                  &descDSV,      // Depth stencil desc
                                  &pDSV);        // [out] Depth stencil view
 
// Bind the depth stencil view
D3DDeviceContext->OMSetRenderTargets(1,                 // One render target view
                                     &RenderTargetView, // Render target view, created earlier
                                     pDSV);             // Depth stencil view for the render target
 
And I clear the buffer every frame:
 
D3DDeviceContext->ClearDepthStencilView(pDSV, D3D11_CLEAR_DEPTH | D3D11_CLEAR_STENCIL, 1.0f, 0);
 
I get no warnings in the debug output. Have I forgotten something?

Vector of derived types problem

01 May 2014 - 03:16 AM

Solved! The problem was that Effects.push_back(new Position3NormalColor()); should have been Effects.push_back(new EffectPosition3NormalColor());

 

 

Hi, I have a problem with the following code:

class EffectPosition3NormalColor : virtual public IEffect
{
       // code
};

vector<IEffect*> Effects;
Effects.push_back(new Position3NormalColor());
 
It gives me the following error:
 
1>error C2664: 'void std::vector<_Ty>::push_back(IEffect *&&)' : cannot convert parameter 1 from 'Position3NormalColor *' to 'IEffect *&&'
1>          with
1>          [
1>              _Ty=IEffect *
1>          ]
1>          Reason: cannot convert from 'Position3NormalColor *' to 'IEffect *'

Isn't this possible when Position3NormalColor inherits from IEffect?


[Solved] Need help with a warning when compiling: double to float, loss of data

27 April 2014 - 01:44 AM

Hello, I get a warning when compiling my project. The project works as intended, but it would be nice to resolve the warning.

 

It says:

warning C4244: 'argument' : conversion from 'double' to 'float', possible loss of data

for each of these lines:

DirectX::XMMATRIX rotationMatrix = DirectX::XMMatrixRotationY(0.5f*DirectX::XM_PI);
Eye = DirectX::XMVectorSet( x, 3.0f, z, 0.0f );
At = DirectX::XMVectorSet( x + sin(CameraRotationHorizontal), 3.0f,z + cos(CameraRotationHorizontal), 0.0f );

They are declared as the following types:

DirectX::XMVECTOR Eye;
DirectX::XMVECTOR At;
DirectX::XMMATRIX rotationMatrix;
 
How do I solve these? They're all predefined types from the DirectX API with their associated functions; I'm not sure what I can do here.

How to decouple data from rendering engine?

21 April 2014 - 07:56 AM

Hi, I'm fairly new to C++ and graphics programming, and I'm having some trouble figuring out how to decouple my data from the other parts of the engine.

 

I have a Model class which holds all the data for my meshes, shaders, textures, vertex buffers and such. I also have a Graphics class which holds the device and context, and the functions associated with them. Right now I pass the device and context to the Model class so that it can create the vertex buffer and shaders; however, I would prefer it if my Model class had no knowledge of, or dependency on, the device and context. I guess I could move that code to the Graphics class and pass it the model, but that would just move the problem.

 

I have a root class that creates all the other components (input, audio, rendering, etc.). Would it be a good idea to put code that requires information from several parts there?

 

/Björn


[SharpDX][.Net] Render targets don't display correctly when used as textures

19 March 2013 - 03:44 PM

Hello gamedev.
I'm having some trouble getting my render targets to display correctly when used as textures. I generate all the render targets in an earlier stage and pass them on to the drawing stage. According to GPU PerfStudio they look like this when passed to the drawing stage:

[Attached thumbnails: the three render targets as captured in GPU PerfStudio]

This is what I expect them to look like, so they seem correct. However, when trying to draw them as fullscreen textures, all I see is this:

[Attached thumbnail: the incorrect fullscreen output]

 

This is how I create my render targets and shader resource views:

Dim rtdesc As RenderTargetViewDescription
'rtdesc.Format = Format.R32G32B32A32_Float
rtdesc.Format = Format.R8G8B8A8_UNorm
rtdesc.Dimension = RenderTargetViewDimension.Texture2D
rtdesc.Texture2D.MipSlice = 0

Dim srvdesc As ShaderResourceViewDescription
srvdesc.Dimension = ShaderResourceViewDimension.Texture2D
'srvdesc.Format = Format.R32G32B32A32_Float
srvdesc.Format = Format.R8G8B8A8_UNorm
srvdesc.Texture2D.MostDetailedMip = 0
srvdesc.Texture2D.MipLevels = 1

Dim renderTargetColor As Texture2D = New Texture2D(device, rendertargetdesc)
Dim renderTargetNormal As Texture2D = New Texture2D(device, rendertargetdesc)
Dim renderTargetDepth As Texture2D = New Texture2D(device, rendertargetdesc)

Dim rttColorRTV = New RenderTargetView(device, renderTargetColor, rtdesc)
Dim rttColorSRV = New ShaderResourceView(device, renderTargetColor, srvdesc)
Dim rttNormalRTV = New RenderTargetView(device, renderTargetNormal, rtdesc)
Dim rttNormalSRV = New ShaderResourceView(device, renderTargetNormal, srvdesc)
Dim rttDepthRTV = New RenderTargetView(device, renderTargetDepth, rtdesc)
Dim rttDepthSRV = New ShaderResourceView(device, renderTargetDepth, srvdesc)

 

My drawing stage:

context.OutputMerger.ResetTargets()
context.OutputMerger.SetTargets(renderview)
context.VertexShader.Set(vertexShader2)
context.PixelShader.Set(pixelShader2)
context.InputAssembler.InputLayout = layout2d
context.PixelShader.SetShaderResource(0, rttColorSRV)
context.PixelShader.SetShaderResource(1, rttNormalSRV)
context.PixelShader.SetShaderResource(2, rttDepthSRV)
context.InputAssembler.SetVertexBuffers(0, New VertexBufferBinding(quad2dvb, Utilities.SizeOf(Of Vertex2DTextured)(), 0))
context.Draw(6, 0)
swapchain.Present(0, PresentFlags.None)

 

My shader:

SamplerState colorSampler;
Texture2D colorMap;
Texture2D NormalMap;
Texture2D DepthMap;

struct VertexShaderInput
{
    float3 Position : POSITION;
    float2 TexCoord : TEXCOORD0;
};

struct VertexShaderOutput
{
    float4 Position : SV_POSITION;
    float2 TexCoord : TEXCOORD0;
};

VertexShaderOutput VS(VertexShaderInput input)
{
    VertexShaderOutput output;
    output.Position.w = 1.0f;
    output.Position.xyz = input.Position;
    output.TexCoord = input.TexCoord;
    return output;
}

float4 PS(VertexShaderOutput input) : SV_TARGET
{
    return colorMap.Sample(colorSampler, input.TexCoord);
    //return NormalMap.Sample(colorSampler, input.TexCoord);
    //return DepthMap.Sample(colorSampler, input.TexCoord);
}

 

At first I thought the fullscreen quad I use was wrong, but I loaded a texture from a .bmp file and it renders correctly as a textured fullscreen quad. Is there something special I have to do with my ShaderResourceViews before using them? I release the RenderTargetViews before using the ShaderResourceViews. Switching which texture to sample in the shader gives the exact same result, except in a different color.

