DX11 Drawing fullscreen triangle without vertex buffers

KaiserJohan

So I'm doing deferred shading and I need to draw a fullscreen quad/triangle from my vertex shader. I found an old topic and used the vertex shader posted there. For reference, here it is:

FullscreenTriangleVSOut main(uint VertexID : SV_VertexID)
{
    FullscreenTriangleVSOut output;

    output.mTexcoord = float2((VertexID << 1) & 2, VertexID & 2);
    output.mPosition = float4(output.mTexcoord * float2(2.0f, -2.0f) + float2(-1.0f, 1.0f), 0.0f, 1.0f);

    return output;
}

I then added the following simple pixel shader:


float4 main(FullscreenTriangleVSOut input) : SV_Target0
{
    return float4(1.0f, 0.0f, 0.0f, 0.0f);
}

I expected the whole window to be red, but it's just black.


Here are the other calls I make to set up this simple operation:

    void DX11RendererImpl::ShadingPass(const RenderQueue& renderQueue)
    {
        mContext->OMSetRenderTargets(1, &mBackbuffer, mDepthStencilView);
        mContext->OMSetDepthStencilState(mDepthStencilState, 1);

        mContext->ClearRenderTargetView(mBackbuffer, gClearColor);
        mContext->ClearDepthStencilView(mDepthStencilView, D3D11_CLEAR_DEPTH | D3D11_CLEAR_STENCIL, 1.0f, 0);

        // unbind the vertex and index buffers; the triangle is generated from SV_VertexID
        mContext->IASetVertexBuffers(0, 0, NULL, NULL, NULL);
        mContext->IASetVertexBuffers(1, 0, NULL, NULL, NULL);
        mContext->IASetIndexBuffer(NULL, DXGI_FORMAT_R32_UINT, 0);

        // the fxaa vertex shader and the pixel shader posted above
        mContext->VSSetShader(mVertexShader, NULL, 0);
        mContext->PSSetShader(mPixelShader, NULL, 0);
        mContext->Draw(3, 0);

        DXCALL(mSwapchain->Present(0, 0));
    }
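For completeness, two pieces of input-assembler state the snippet never sets (they may of course be set elsewhere in the engine; the names below come from the snippet itself). A minimal sketch of what would need to run before the `Draw(3, 0)`:

```cpp
// Sketch only: a fullscreen triangle drawn from SV_VertexID needs no
// input layout at all, and Draw(3, 0) expects a triangle-list topology.
mContext->IASetInputLayout(NULL);
mContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
```

If the topology is left at its default (undefined), the draw call silently produces nothing, which matches a black screen.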

Any ideas why this isn't drawing my whole screen in red?

Zaoshi Kaba
  1. Is black your clear color?
  2. You return an alpha of 0.0f; depending on blend settings (which we cannot see), your pixels might do nothing.
  3. Set the correct primitive topology (TRIANGLESTRIP).
  4. You need to draw 4 vertices to make a quad; at the moment you only have 3.
  5. It doesn't seem you need the depth buffer at all; don't bind it.

kauna

- A fullscreen triangle needs only 3 vertices.

Check also:

- viewport
- blending operations
- rasterizer state / culling state

Naruto-kun

You only need 3. I use that vertex shader example all the time. However, I don't see you setting your viewport in that code. Second, what does your VS output look like? I suggest setting an input layout that mimics it, even though you only use the vertex ID. Third, you don't even need to bind an index buffer.


Finally, try setting the 4th component of the color returned from your pixel shader to 1.0f, as that is your alpha channel if your render target has one. Otherwise the quad may be completely transparent if you are doing alpha blending.

KaiserJohan

Come to think of it, I am using FrontCounterClockwise = true; could that be the cause?


        D3D11_RASTERIZER_DESC rasterizerDesc;
        ZeroMemory(&rasterizerDesc, sizeof(D3D11_RASTERIZER_DESC));
        rasterizerDesc.FillMode = D3D11_FILL_SOLID;
        rasterizerDesc.CullMode = D3D11_CULL_BACK;
        rasterizerDesc.FrontCounterClockwise = true;
        rasterizerDesc.DepthClipEnable = true;
        rasterizerDesc.ScissorEnable = false;
        rasterizerDesc.MultisampleEnable = false;
        rasterizerDesc.AntialiasedLineEnable = false;
        DXCALL(mDevice->CreateRasterizerState(&rasterizerDesc, &mRasterizerState));

EDIT: I altered the vertex shader as follows:

FullscreenTriangleVSOut main(uint VertexID : SV_VertexID)
{
    FullscreenTriangleVSOut output;

    output.mTexcoord = float2((VertexID << 1) & 2, VertexID == 0);
    output.mPosition = float4(output.mTexcoord * float2(2.0f, -2.0f) + float2(-1.0f, 1.0f), 0.0f, 1.0f);

    return output;
}

The positions should now be:


[-1, -3]
[3, -1]
[-1, 1]


However, not the whole screen is red... this is how it looks (screenshot not preserved):

KaiserJohan

As for the viewport, I don't think it's a problem. It's set like this:

D3D11_VIEWPORT viewport;
ZeroMemory(&viewport, sizeof(D3D11_VIEWPORT));
viewport.TopLeftX = 0;
viewport.TopLeftY = 0;
viewport.Width = static_cast<float>(swapChainDesc.BufferDesc.Width);
viewport.Height = static_cast<float>(swapChainDesc.BufferDesc.Height);
viewport.MinDepth = 0.0f;
viewport.MaxDepth = 1.0f;
mContext->RSSetViewports(1, &viewport);

EDIT: I fixed it; the vertex shader math was slightly wrong. Here's the more condensed version for CCW front-face rendering:

float4 main(uint VertexID : SV_VertexID) : SV_POSITION
{
    return float4(float2(((VertexID << 1) & 2) * 2.0f, (VertexID == 0) * -4.0f) + float2(-1.0f, 1.0f), 0.0f, 1.0f);
}

