# Transparency Blending in DirectX 11?

## Recommended Posts

I'm trying to set up a floor with transparency, but I'm not getting any transparency.

Here is my code so far.

float4 PS(VertexOut pin,
          uniform int gLightCount,
          uniform bool gUseTexure,
          uniform bool gAlphaClip,
          uniform bool gFogEnabled,
          uniform bool gReflectionEnabled) : SV_Target
{
    // Interpolating normal can unnormalize it, so normalize it.
    pin.NormalW = normalize(pin.NormalW);

    // The toEye vector is used in lighting.
    float3 toEye = gEyePosW - pin.PosW;

    // Cache the distance to the eye from this surface point.
    float distToEye = length(toEye);

    // Normalize.
    toEye /= distToEye;

    // Default to multiplicative identity.
    float4 texColor = float4(1, 1, 1, 1);
    if(gUseTexure)
    {
        // Sample texture.
        texColor = gDiffuseMap.Sample( samAnisotropic, pin.Tex );

        if(gAlphaClip)
        {
            // Discard pixel if texture alpha < 0.1.  Note that we do this
            // test as soon as possible so that we can potentially exit the shader
            // early, thereby skipping the rest of the shader code.
            clip(texColor.a - 0.1f);
        }
    }

    //
    // Lighting.
    //

    float4 litColor = texColor;
    if( gLightCount > 0 )
    {
        float4 ambient = float4(0.0f, 0.0f, 0.0f, 0.0f);
        float4 diffuse = float4(0.0f, 0.0f, 0.0f, 0.0f);
        float4 spec    = float4(0.0f, 0.0f, 0.0f, 0.0f);

        // Sum the light contribution from each light source.
        [unroll]
        for(int i = 0; i < gLightCount; ++i)
        {
            float4 A, D, S;
            ComputeDirectionalLight(gMaterial, gDirLights[i], pin.NormalW, toEye,
                A, D, S);

            ambient += A;
            diffuse += D;
            spec    += S;
        }

        litColor = texColor*(ambient + diffuse) + spec;

        if( gReflectionEnabled )
        {
            float3 incident = -toEye;
            float3 reflectionVector = refract(incident, pin.NormalW, 1.51);
            float4 reflectionColor  = gCubeMap.Sample(samAnisotropic, reflectionVector);

            litColor += gMaterial.Reflect*reflectionColor;
        }
    }

    //
    // Fogging
    //

    if( gFogEnabled )
    {
        float fogLerp = saturate( (distToEye - gFogStart) / gFogRange );

        // Blend the fog color and the lit color.
        litColor = lerp(litColor, gFogColor, fogLerp);
    }

    // Common to take alpha from diffuse material and texture.
    litColor.a = gMaterial.Diffuse.a * texColor.a;

    return litColor;
}

D3D11_BLEND_DESC transparentDesc = {0};
transparentDesc.AlphaToCoverageEnable = false;
transparentDesc.IndependentBlendEnable = false;

transparentDesc.RenderTarget[0].BlendEnable           = true;
transparentDesc.RenderTarget[0].SrcBlend              = D3D11_BLEND_SRC_ALPHA;
transparentDesc.RenderTarget[0].DestBlend             = D3D11_BLEND_INV_SRC_ALPHA;
transparentDesc.RenderTarget[0].BlendOp               = D3D11_BLEND_OP_ADD;
transparentDesc.RenderTarget[0].SrcBlendAlpha         = D3D11_BLEND_ONE;
transparentDesc.RenderTarget[0].DestBlendAlpha        = D3D11_BLEND_ZERO;
transparentDesc.RenderTarget[0].BlendOpAlpha          = D3D11_BLEND_OP_ADD;
transparentDesc.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;

HR(device->CreateBlendState(&transparentDesc, &TransparentBS));

D3DX11_TECHNIQUE_DESC techDesc;
activeTexTech->GetDesc( &techDesc );
for(UINT p = 0; p < techDesc.Passes; ++p)
{
    md3dImmediateContext->IASetVertexBuffers(0, 1, &mShapesVB, &stride, &offset);
    md3dImmediateContext->IASetIndexBuffer(mShapesIB, DXGI_FORMAT_R32_UINT, 0);

    // Draw the grid.
    worldInvTranspose = MathHelper::InverseTranspose(world);
    worldViewProj = world*view*proj;

    Effects::BasicFX->SetWorld(world);
    Effects::BasicFX->SetWorldInvTranspose(worldInvTranspose);
    Effects::BasicFX->SetWorldViewProj(worldViewProj);
    Effects::BasicFX->SetTexTransform(XMMatrixScaling(6.0f, 8.0f, 1.0f));
    Effects::BasicFX->SetMaterial(mGridMat);
    Effects::BasicFX->SetDiffuseMap(mFloorTexSRV);

    md3dImmediateContext->OMSetBlendState(RenderStates::TransparentBS, blendFactor, 0xffffff);

    activeTexTech->GetPassByIndex(p)->Apply(0, md3dImmediateContext);
    md3dImmediateContext->DrawIndexed(mGridIndexCount, mGridIndexOffset, mGridVertexOffset);
}



Does anybody see why my code is not working?


##### Share on other sites

Have you tried setting the BlendState after calling "Apply"?

I'm not familiar with FX11, but it sounds like the sort of thing an Effect might do.
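
Something like this is what I have in mind — just a sketch, reusing the names from your own draw loop, since I don't know for sure whether the FX11 pass actually resets the output-merger state:

    activeTexTech->GetPassByIndex(p)->Apply(0, md3dImmediateContext);

    // Set the transparent blend state *after* Apply, in case applying the
    // effect pass stomps on whatever blend state was bound before it.
    float blendFactor[4] = {0.0f, 0.0f, 0.0f, 0.0f};
    md3dImmediateContext->OMSetBlendState(RenderStates::TransparentBS, blendFactor, 0xffffffff);

    md3dImmediateContext->DrawIndexed(mGridIndexCount, mGridIndexOffset, mGridVertexOffset);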

##### Share on other sites

Putting OMSetBlendState after Apply didn't work. I also tried just setting up the blending in the shader:

BlendState transparentBlend
{
    BlendEnable[0] = TRUE;
    SrcBlend = SRC_ALPHA;
    DestBlend = INV_SRC_ALPHA;
    SrcBlendAlpha = ZERO;
    DestBlendAlpha = ZERO;
};


So putting OMSetBlendState after Apply gave me the original image (did not work), and setting the blending up in the shader gave me a grayish image.
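
I'm also assuming the BlendState block has to be referenced from the technique pass for it to do anything — something like this, though I'm guessing at the technique name, pass setup, and shader compile arguments:

technique11 TransparentTexTech
{
    pass P0
    {
        SetVertexShader( CompileShader( vs_5_0, VS() ) );
        SetPixelShader( CompileShader( ps_5_0, PS(3, true, false, false, false) ) );

        // Bind the blend state declared above: state object, blend factor, sample mask.
        SetBlendState( transparentBlend, float4(0.0f, 0.0f, 0.0f, 0.0f), 0xffffffff );
    }
}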


##### Share on other sites

Transparency is working now, but not against my CubeMap texture (the background in the distance). Here is my material code:

m.Ambient  = XMFLOAT4(0.2f, 0.2f, 0.2f, 0.3f);
m.Diffuse  = XMFLOAT4(0.2f, 0.2f, 0.2f, 0.2f);
m.Specular = XMFLOAT4(0.8f, 0.8f, 0.8f, 1.0f);
m.Reflect  = XMFLOAT4(0.5f, 0.5f, 0.5f, 0.3f);



Instead of being transparent where the land meets the cubemap background texture, it is white. Any ideas why?

Does the cubemap background have to have an alpha channel?


##### Share on other sites

Question to everyone:

If you were making a sphere refract some image, would you use blending and make the sphere transparent first?

Or would you just apply refraction, without transparent blending?


never mind

##### Share on other sites

I wouldn't use blending, but rather just look up in the cube map where the refraction points to.  That can of course be combined with partial reflection as well, but I wouldn't use alpha blending!
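
Roughly like this — only a sketch, reusing gCubeMap, samAnisotropic, and the values already in your pixel shader; the index-of-refraction ratio and the reflection amount are made-up numbers:

    // Refraction: look up the environment directly, no alpha blending needed.
    float3 incident      = -toEye;
    float3 refractionVec = refract(incident, pin.NormalW, 1.0f / 1.51f);   // air-to-glass ratio (assumed)
    float4 refractColor  = gCubeMap.Sample(samAnisotropic, refractionVec);

    // Optional partial reflection mixed in on top of the refraction.
    float3 reflectionVec = reflect(incident, pin.NormalW);
    float4 reflectColor  = gCubeMap.Sample(samAnisotropic, reflectionVec);

    float reflectAmount = 0.3f;   // assumed constant; a Fresnel term could go here instead
    litColor = lerp(refractColor, reflectColor, reflectAmount);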
