
DX11 Drawing fullscreen triangle without vertex buffers


So I'm doing deferred shading and I need to draw a fullscreen quad/triangle without vertex buffers, generating it in the vertex shader. I found this old topic (http://www.gamedev.net/topic/609917-full-screen-quad-without-vertex-buffer/) and used the vertex shader posted there. For reference, here it is:

FullscreenTriangleVSOut main(uint VertexID: SV_VertexID)
{
    FullscreenTriangleVSOut output;

    output.mTexcoord = float2((VertexID << 1) & 2, VertexID & 2);
    output.mPosition = float4(output.mTexcoord * float2(2.0f, -2.0f) + float2(-1.0f, 1.0f), 0.0f, 1.0f);

    return output;
}


float4 main(FullscreenTriangleVSOut input) : SV_Target0
{
    return float4(1.0f, 0.0f, 0.0f, 0.0f);
}
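
The FullscreenTriangleVSOut struct is not shown above; presumably it is just the clip-space position plus a texcoord, along these lines (an assumed declaration, not taken from the original post):

struct FullscreenTriangleVSOut
{
    float4 mPosition : SV_POSITION;  // clip-space position consumed by the rasterizer
    float2 mTexcoord : TEXCOORD0;    // texcoord for sampling the G-buffer in the shading pass
};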

I expected the whole window to be red, but it's just black.

Here are the other calls I'm making to set up this simple operation:

void DX11RendererImpl::ShadingPass(const RenderQueue& renderQueue)
{
    mContext->OMSetRenderTargets(1, &mBackbuffer, mDepthStencilView);
    mContext->OMSetDepthStencilState(mDepthStencilState, 1);

    mContext->ClearRenderTargetView(mBackbuffer, gClearColor);
    mContext->ClearDepthStencilView(mDepthStencilView, D3D11_CLEAR_DEPTH | D3D11_CLEAR_STENCIL, 1.0f, 0);

    // unbind all the buffers and input layout
    mContext->IASetVertexBuffers(0, 0, NULL, 0, 0);
    mContext->IASetVertexBuffers(1, 0, NULL, 0, 0);
    mContext->IASetIndexBuffer(NULL, DXGI_FORMAT_R32_UINT, 0);
    mContext->IASetInputLayout(NULL);

    mContext->Draw(3, 0);

    DXCALL(mSwapchain->Present(0, 0));
}


Any ideas why this isn't drawing my whole screen in red?

1. Is black your clear color?
2. You return an alpha of 0.0f; depending on your blend settings (which we cannot see), your pixels might do nothing.
3. Set the correct primitive topology (TRIANGLESTRIP); see the sketch after this list.
4. You need to draw 4 vertices to make a quad; at the moment you have only 3.
5. It doesn't seem you need the depth buffer at all; don't bind it.
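
Regarding points 2 and 3, here is a minimal sketch of the two state changes, assuming the same mContext member as in the original ShadingPass (with only three vertices, a triangle strip and a triangle list produce the same single triangle):

// Point 3: the posted ShadingPass never sets a primitive topology, so set one explicitly.
mContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);

// Point 2: rule out blending by binding the default blend state (blending disabled).
mContext->OMSetBlendState(NULL, NULL, 0xFFFFFFFF);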

- A fullscreen triangle needs only 3 vertices.

Check also:

- viewport

- blending operations

- rasterizer state / culling state

Cheers!


I experimented a bit and it turns out I can draw it fine if I set D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST and draw 6 vertices. But surely 3 should suffice?


You only need 3. I use that vertex shader example all the time. However, I don't see you setting your viewport in that code. Second, what does your VS output look like? I suggest setting an InputLayout that mimics it, even though you only use the vertex ID. Third, you don't even need to bind an index buffer.

Finally, try setting the 4th value of your returned color in the pixel shader to 1.0f, as that is your alpha channel if you are using one in your render target. Otherwise the quad may be completely transparent if you are doing alpha blending.
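
For reference, that is just the pixel shader from the first post with an opaque alpha:

float4 main(FullscreenTriangleVSOut input) : SV_Target0
{
    // Alpha = 1.0f: if alpha blending is enabled, the red output stays fully opaque
    return float4(1.0f, 0.0f, 0.0f, 1.0f);
}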


Come to think of it, I am using frontface = CCW; could that be the cause of it?

D3D11_RASTERIZER_DESC rasterizerDesc;
ZeroMemory(&rasterizerDesc, sizeof(D3D11_RASTERIZER_DESC));
rasterizerDesc.FillMode = D3D11_FILL_SOLID;
rasterizerDesc.CullMode = D3D11_CULL_BACK;
rasterizerDesc.FrontCounterClockwise = true;
rasterizerDesc.DepthClipEnable = true;
rasterizerDesc.ScissorEnable = false;
rasterizerDesc.MultisampleEnable = false;
rasterizerDesc.AntialiasedLineEnable = false;
DXCALL(mDevice->CreateRasterizerState(&rasterizerDesc, &mRasterizerState));
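
One way to rule winding in or out, not mentioned in the thread but a common debugging step, is to temporarily bind a rasterizer state with culling disabled for the fullscreen pass. A sketch, assuming the same mDevice, mContext and DXCALL helpers as above (noCullDesc and noCullState are hypothetical locals for illustration):

// Debug-only: with culling disabled, the triangle draws regardless of its winding.
D3D11_RASTERIZER_DESC noCullDesc;
ZeroMemory(&noCullDesc, sizeof(D3D11_RASTERIZER_DESC));
noCullDesc.FillMode = D3D11_FILL_SOLID;
noCullDesc.CullMode = D3D11_CULL_NONE;
noCullDesc.FrontCounterClockwise = true;
noCullDesc.DepthClipEnable = true;
ID3D11RasterizerState* noCullState = NULL;
DXCALL(mDevice->CreateRasterizerState(&noCullDesc, &noCullState));
mContext->RSSetState(noCullState);

If the triangle appears with culling off, the winding/cull-mode combination is the culprit.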

EDIT: I altered the vertex shader as follows:

FullscreenTriangleVSOut main(uint VertexID: SV_VertexID)
{
    FullscreenTriangleVSOut output;

    output.mTexcoord = float2((VertexID << 1) & 2, VertexID == 0);
    output.mPosition = float4(output.mTexcoord * float2(2.0f, -2.0f) + float2(-1.0f, 1.0f), 0.0f, 1.0f);

    return output;
}


The positions should now be:

[-1, -3]

[3, -1]

[-1, 1]

However, not the whole screen is red... this is how it looks (screenshot attached):

Edited by KaiserJohan

As for the viewport, I don't think that's the problem. It looks like this:

D3D11_VIEWPORT viewport;
ZeroMemory(&viewport, sizeof(D3D11_VIEWPORT));
viewport.TopLeftX = 0;
viewport.TopLeftY = 0;
viewport.Width = static_cast<float>(swapChainDesc.BufferDesc.Width);
viewport.Height = static_cast<float>(swapChainDesc.BufferDesc.Height);
viewport.MinDepth = 0.0f;
viewport.MaxDepth = 1.0f;
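
The struct only takes effect once it is bound to the rasterizer stage; presumably the code also binds it like this (a sketch assuming the same mContext member as earlier):

// Without this call the viewport description above has no effect
mContext->RSSetViewports(1, &viewport);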



EDIT: I fixed it; the vertex shader math was slightly wrong. Here's the more condensed version for CCW front-face rendering:

float4 main(uint VertexID: SV_VertexID) : SV_POSITION
{
    return float4(float2(((VertexID << 1) & 2) * 2.0f, (VertexID == 0) * -4.0f) + float2(-1.0f, 1.0f), 0.0f, 1.0f);
}
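
This produces clip-space positions (-1, -3), (3, 1) and (-1, 1), which wind counter-clockwise. If the pass also needs texture coordinates (as the FullscreenTriangleVSOut version earlier in the thread does), one option is to derive them from the clip-space position. A sketch, assuming the struct layout suggested near the top of the thread:

FullscreenTriangleVSOut main(uint VertexID: SV_VertexID)
{
    FullscreenTriangleVSOut output;

    // Same positions as the condensed version above: (-1,-3), (3,1), (-1,1), CCW order
    output.mPosition = float4(float2(((VertexID << 1) & 2) * 2.0f, (VertexID == 0) * -4.0f) + float2(-1.0f, 1.0f), 0.0f, 1.0f);

    // Map clip space [-1,1] to texture space [0,1], flipping Y so (0,0) is the top-left
    output.mTexcoord = output.mPosition.xy * float2(0.5f, -0.5f) + 0.5f;

    return output;
}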

Edited by KaiserJohan

