• Similar Content

• By stale
I'm continuing to learn more about terrain rendering, and so far I've managed to load in a heightmap and render it as a tessellated wireframe (following Frank Luna's DX11 book). However, I'm getting some really weird behavior where a large section of the wireframe is rendered with a yellow color, even though my pixel shader is hard-coded to output white.

The parts of the mesh that are discolored change as well, as pictured below (the mesh is being clipped by the far plane).

Here is my pixel shader. As mentioned, I simply hard-code it to output white:

float4 PS(DOUT pin) : SV_Target { return float4(1.0f, 1.0f, 1.0f, 1.0f); }

I'm completely lost on what could be causing this, so any help in the right direction would be greatly appreciated. If I can help by providing more information, please let me know.

• Hello,
I am trying to implement voxel cone tracing in my game engine.
As a first step I am trying to implement the simplest "poor man's" method:
a. my test scene "Sponza Atrium" is voxelized completely into a static 128^3 voxel grid (a structured buffer containing albedo)
b. I don't care about conservative rasterization and I don't use any sparse voxel access structure
c. every voxel has the same color on every side (top, bottom, front, ...)
d. one directional light injects light into the voxels (another structured buffer)
I will try to state what I think is correct (please correct me).
GI lighting of a given vertex, in an ideal method:
A. we would shoot many (e.g. 1000) rays into the hemisphere oriented along the normal of that vertex
B. we would take into account every occluder (which is a lot of work) and sample the color at the hit point
C. according to the angle between the ray and the vertex normal we would weight the color (cosine), sum up all samples, and divide by the number of rays
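A minimal sketch of that brute-force gather in HLSL; SampleHemisphere and TraceRay are hypothetical helpers standing in for the sampling and visibility steps:

// Reference cosine-weighted hemisphere gather (offline quality, not real-time).
// SampleHemisphere and TraceRay are assumed/hypothetical helpers.
float3 GatherGI(float3 posW, float3 normalW, int rayCount)
{
    float3 sum = 0;
    for (int k = 0; k < rayCount; ++k)
    {
        float3 dir = SampleHemisphere(normalW, k);     // k-th direction in the normal-oriented hemisphere
        float3 hitColor = TraceRay(posW, dir);         // color at the nearest occluder along the ray
        sum += hitColor * saturate(dot(normalW, dir)); // cosine weighting
    }
    return sum / rayCount;                             // average over all rays
}

Cone tracing replaces these thousands of rays with a handful of cones.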
Voxel GI lighting
In principle we want to do the same thing with our voxel structure.
Even if we knew where the correct hit points for the vertex are, we would still have to calculate a weighted sum over many voxels.
Saving the time for the weighted summing of each voxel's color
To save the time for the weighted summing of each voxel's color, we build bricks or clusters.
Every 8 neighbouring voxels make a "cluster voxel" of level 1 (this is done recursively for many levels).
The color of a side of a "cluster voxel" is the average of the colors of the four contained voxel sides with the same orientation.

After having done this we can sample the far-away parts just by sampling the corresponding "cluster voxel" at the corresponding level and get the summed-up color.
In practice this is done by mip-mapping a texture that contains the voxel colors, which also places the colors of neighbouring voxels close together in the texture.
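If the voxel colors live in a 3D texture like that, picking the cluster level amounts to choosing a mip level from the footprint you want to cover; a rough sketch, where gVoxelTex, samLinear and WorldToVoxelUVW are assumed names:

// Sample the voxel mip chain at the level whose texels match the given footprint.
// gVoxelTex, samLinear and WorldToVoxelUVW are assumed/hypothetical names.
float4 SampleVoxels(float3 posW, float diameterW, float voxelSizeW)
{
    float level = log2(max(diameterW, voxelSizeW) / voxelSizeW); // level 0 = one voxel
    return gVoxelTex.SampleLevel(samLinear, WorldToVoxelUVW(posW), level);
}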
Cone tracing, how to??
Here my understanding gets confused: how is the voxel structure traced efficiently?
I simply cannot understand how the occlusion problem is solved quickly, so that we know which single voxel or "cluster voxel" of which level we have to sample.
Suppose I am in a dark room that is filled with many boxes of different sizes, and I have a pocket lamp, e.g. with a pyramid-shaped light cone:
- I would see some single voxels, near or far
- I would also see many different boxes ("cluster voxels") of different sizes which are partly occluded
How do I make a weighted sum over this lit area? (See the sketch below.)
E.g. if I want to sample a "cluster voxel" at level 4, I have to take into account what percentage of the area of this "cluster voxel" is occluded.
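A common answer is that the occlusion is never resolved exactly: the cone is marched front to back, and each sample's opacity attenuates everything behind it (ordinary alpha compositing). A minimal sketch, assuming the mip chain stores color in rgb and average occupancy in a; gVoxelTex, samLinear and WorldToVoxelUVW are the same assumed names as above:

// March one cone front to back, accumulating color and occlusion.
// Coarser mip samples stand in for "cluster voxels"; their alpha is the
// average occupancy, so partial occlusion is handled statistically.
float4 TraceCone(float3 originW, float3 dirW, float halfAngleTan,
                 float voxelSizeW, float maxDistW)
{
    float4 acc = 0;       // rgb = gathered color, a = accumulated occlusion
    float t = voxelSizeW; // start one voxel out to avoid self-sampling
    while (acc.a < 1.0f && t < maxDistW)
    {
        float diameter = max(voxelSizeW, 2.0f * halfAngleTan * t); // cone footprint at t
        float level = log2(diameter / voxelSizeW);                 // matching cluster level
        float4 s = gVoxelTex.SampleLevel(samLinear, WorldToVoxelUVW(originW + t * dirW), level);
        acc.rgb += (1.0f - acc.a) * s.a * s.rgb; // near samples occlude far ones
        acc.a   += (1.0f - acc.a) * s.a;
        t += 0.5f * diameter;                    // step proportional to the footprint
    }
    return acc;
}

A few such cones (often 5 to 9) spread over the hemisphere then replace the thousand rays of the ideal method.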
Please be patient with me, I really try to understand, but maybe I need some more explanation than others.
best regards evelyn

• Hi guys, when I do picking followed by ray-plane intersection, the results are all wrong. I am pretty sure my ray-plane intersection is correct, so I'll just show the picking part. Please take a look:

// get projection_matrix
DirectX::XMFLOAT4X4 mat;
DirectX::XMStoreFloat4x4(&mat, projection_matrix);

float2 v;
v.x =  (((2.0f * (float)mouse_x) / (float)screen_width)  - 1.0f) / mat._11;
v.y = -(((2.0f * (float)mouse_y) / (float)screen_height) - 1.0f) / mat._22;

// get inverse of view_matrix
DirectX::XMMATRIX inv_view = DirectX::XMMatrixInverse(nullptr, view_matrix);
DirectX::XMStoreFloat4x4(&mat, inv_view);

// create ray origin (camera position)
float3 ray_origin;
ray_origin.x = mat._41;
ray_origin.y = mat._42;
ray_origin.z = mat._43;

// create ray direction
float3 ray_dir;
ray_dir.x = v.x * mat._11 + v.y * mat._21 + mat._31;
ray_dir.y = v.x * mat._12 + v.y * mat._22 + mat._32;
ray_dir.z = v.x * mat._13 + v.y * mat._23 + mat._33;
That should give me a ray origin and direction in world space, but when I do the ray-plane intersection the results are all wrong.
If I click on the bottom half of the screen, ray_dir.z becomes negative (more so the lower I click). I don't understand how that can be; shouldn't it always be pointing down the z-axis?
I had this working in the past, but I can't find my old code.

• Hi,
I finally managed to get the DX11-emulating Vulkan device working, but everything is flipped vertically now because Vulkan has a different clip space. What are the best practices out there to keep these implementations consistent? I tried using a vertically flipped viewport, and while it works on an Nvidia 1050, the Vulkan debug layer is throwing errors that this is not supported by the spec, so it might not work on others. There is also the possibility of flipping the clip-space position Y coordinate before writing it out from the vertex shader, but that requires changing and recompiling every shader. I could also bake it into the camera projection matrices, though I want to avoid that because then I would need to track down everywhere in the engine where I upload matrices... Any chance of an easy extension or something? If not, I will probably go with changing the vertex shaders.
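For reference, the vertex shader route is a one-line change at the end of each vertex shader; a minimal sketch, where the gFlipY constant is a hypothetical flag for sharing the shader between APIs:

// Minimal vertex shader illustrating the clip-space Y flip; names are assumptions.
cbuffer PerObject : register(b0)
{
    float4x4 gWorldViewProj;
    float    gFlipY; // hypothetical flag: 1.0 on D3D11, -1.0 on Vulkan
};

float4 VS(float3 posL : POSITION) : SV_Position
{
    float4 posH = mul(float4(posL, 1.0f), gWorldViewProj);
    posH.y *= gFlipY; // compensate for Vulkan's inverted clip-space Y
    return posH;
}

Also note that VK_KHR_maintenance1 (core in Vulkan 1.1) makes a negative viewport height legal, so where that extension is available the flipped-viewport approach is actually spec-conformant.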

• Hello,
in my game engine I want to implement my own bone weight painting tool, so to speak a virtual brush painting tool for a mesh.
I have already implemented my own "dual quaternion skinning" animation system with "morphs" (= blend shapes) and "bone driven" "corrective morphs" (= a morph that depends on a bending or twisting bone).
But now I have no idea which is the best method to implement a brush painting system.
Just some proposals:
a. I would build a kind of additional "vertex structure" that can help me find the surrounding (neighbouring) vertex indices for a given "central vertex" index
b. the structure should also give information about the distance of the neighbouring vertices from the given "central vertex"
c. calculate the strength of the color added to the "central vertex" and the neighbouring vertices by a formula with linear or quadratic distance falloff (see the sketch after this list)
d. the central vertex would be detected as the vertex that is hit by an orthogonal projection from my cursor (= brush) in world space onto the mesh
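A possible shape for the falloff from (c.), written as a small HLSL-style function; the names and the exact curve are just assumptions:

// Brush falloff: full strength at the center, fading to zero at the brush radius.
float BrushWeight(float dist, float radius, float strength)
{
    float x = saturate(1.0f - dist / radius); // 1 at the center, 0 at the rim
    return strength * x * x;                  // quadratic falloff; use x alone for linear
}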
But my problem is that several vertices could be hit simultaneously; e.g. if I want to paint the inward side of the left leg, the right leg will also be hit.
I think the given problem is quite typical and there are standard approaches that I don't know.
Any help or tutorials are welcome.
P.S. I am working with SharpDX, DirectX11

DX11 Transparency Blending in DirectX 11?


Recommended Posts

Trying to set up a floor with transparency but not getting transparency.

Here is my code so far.

float4 PS(VertexOut pin,
          uniform int gLightCount,
          uniform bool gUseTexure,
          uniform bool gAlphaClip,
          uniform bool gFogEnabled,
          uniform bool gReflectionEnabled) : SV_Target
{
    // Interpolating normal can unnormalize it, so normalize it.
    pin.NormalW = normalize(pin.NormalW);

    // The toEye vector is used in lighting.
    float3 toEye = gEyePosW - pin.PosW;

    // Cache the distance to the eye from this surface point.
    float distToEye = length(toEye);

    // Normalize.
    toEye /= distToEye;

    // Default to multiplicative identity.
    float4 texColor = float4(1, 1, 1, 1);
    if(gUseTexure)
    {
        // Sample texture.
        texColor = gDiffuseMap.Sample( samAnisotropic, pin.Tex );

        if(gAlphaClip)
        {
            // Discard pixel if texture alpha < 0.1.  Note that we do this
            // test as soon as possible so that we can potentially exit the shader
            // early, thereby skipping the rest of the shader code.
            clip(texColor.a - 0.1f);
        }
    }

    //
    // Lighting.
    //

    float4 litColor = texColor;
    if( gLightCount > 0 )
    {
        float4 ambient = float4(0.0f, 0.0f, 0.0f, 0.0f);
        float4 diffuse = float4(0.0f, 0.0f, 0.0f, 0.0f);
        float4 spec    = float4(0.0f, 0.0f, 0.0f, 0.0f);

        // Sum the light contribution from each light source.
        [unroll]
        for(int i = 0; i < gLightCount; ++i)
        {
            float4 A, D, S;
            ComputeDirectionalLight(gMaterial, gDirLights[i], pin.NormalW, toEye,
                                    A, D, S);

            ambient += A;
            diffuse += D;
            spec    += S;
        }

        litColor = texColor*(ambient + diffuse) + spec;

        if( gReflectionEnabled )
        {
            float3 incident = -toEye;
            float3 reflectionVector = refract(incident, pin.NormalW, 1.51);
            float4 reflectionColor  = gCubeMap.Sample(samAnisotropic, reflectionVector);

            litColor += gMaterial.Reflect*reflectionColor;
        }
    }

    //
    // Fogging
    //

    if( gFogEnabled )
    {
        float fogLerp = saturate( (distToEye - gFogStart) / gFogRange );

        // Blend the fog color and the lit color.
        litColor = lerp(litColor, gFogColor, fogLerp);
    }

    // Common to take alpha from diffuse material and texture.
    litColor.a = gMaterial.Diffuse.a * texColor.a;

    return litColor;
}

D3D11_BLEND_DESC transparentDesc = {0};
transparentDesc.AlphaToCoverageEnable = false;
transparentDesc.IndependentBlendEnable = false;

transparentDesc.RenderTarget[0].BlendEnable    = true;
transparentDesc.RenderTarget[0].SrcBlend       = D3D11_BLEND_SRC_ALPHA;
transparentDesc.RenderTarget[0].DestBlend      = D3D11_BLEND_INV_SRC_ALPHA;
transparentDesc.RenderTarget[0].SrcBlendAlpha  = D3D11_BLEND_ONE;
transparentDesc.RenderTarget[0].DestBlendAlpha = D3D11_BLEND_ZERO;
// These three fields must be filled in too: the {0} initializer leaves them
// at invalid values, and CreateBlendState will reject the description.
transparentDesc.RenderTarget[0].BlendOp        = D3D11_BLEND_OP_ADD;
transparentDesc.RenderTarget[0].BlendOpAlpha   = D3D11_BLEND_OP_ADD;
transparentDesc.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;

HR(device->CreateBlendState(&transparentDesc, &TransparentBS));

D3DX11_TECHNIQUE_DESC techDesc;
activeTexTech->GetDesc( &techDesc );
for(UINT p = 0; p < techDesc.Passes; ++p)
{
    md3dImmediateContext->IASetVertexBuffers(0, 1, &mShapesVB, &stride, &offset);
    md3dImmediateContext->IASetIndexBuffer(mShapesIB, DXGI_FORMAT_R32_UINT, 0);

    // Draw the grid.
    worldInvTranspose = MathHelper::InverseTranspose(world);
    worldViewProj = world*view*proj;

    Effects::BasicFX->SetWorld(world);
    Effects::BasicFX->SetWorldInvTranspose(worldInvTranspose);
    Effects::BasicFX->SetWorldViewProj(worldViewProj);
    Effects::BasicFX->SetTexTransform(XMMatrixScaling(6.0f, 8.0f, 1.0f));
    Effects::BasicFX->SetMaterial(mGridMat);
    Effects::BasicFX->SetDiffuseMap(mFloorTexSRV);
    md3dImmediateContext->OMSetBlendState(RenderStates::TransparentBS, blendFactor, 0xffffffff); // sample mask: enable all samples
    activeTexTech->GetPassByIndex(p)->Apply(0, md3dImmediateContext);
    md3dImmediateContext->DrawIndexed(mGridIndexCount, mGridIndexOffset, mGridVertexOffset);
}



Does anybody see why my code is not working?

Edited by terryeverlast


Have you tried setting the BlendState after calling "Apply"?

I'm not familiar with FX11, but it sounds like the sort of thing an Effect might do.


Putting OMSetBlendState after Apply failed. I tried that, and I also tried just setting the blend state in the shader:

BlendState transparentBlend
{
    BlendEnable[0] = TRUE;
    SrcBlend = SRC_ALPHA;
    DestBlend = INV_SRC_ALPHA;
    SrcBlendAlpha = ZERO;
    DestBlendAlpha = ZERO;
};
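For what it's worth, a state object declared in an .fx file only takes effect once it is bound in a technique pass; a minimal sketch of what that binding looks like (the technique and shader names are placeholders):

technique11 TransparentTech
{
    pass P0
    {
        SetVertexShader( CompileShader( vs_5_0, VS() ) );
        SetPixelShader( CompileShader( ps_5_0, PS() ) );
        // Bind the state object; the other args are the blend factor and sample mask.
        SetBlendState( transparentBlend, float4(0.0f, 0.0f, 0.0f, 0.0f), 0xffffffff );
    }
}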


So putting OMSetBlendState after Apply gave me the original image (did not work), and setting the blend state up in the shader gave me a grayish image.

Edited by terryeverlast

Share on other sites

Transparency is working now, but not on my cubemap texture (the background in the distance). Here is my material code:

m.Ambient  = XMFLOAT4(0.2f, 0.2f, 0.2f, 0.3f);
m.Diffuse  = XMFLOAT4(0.2f, 0.2f, 0.2f, 0.2f);
m.Specular = XMFLOAT4(0.8f, 0.8f, 0.8f, 1.0f);
m.Reflect  = XMFLOAT4(0.5f, 0.5f, 0.5f, 0.3f);



Instead of being transparent where the land meets the cubemap background texture, it is white. Any ideas why?

Does the cubemap background have to have an alpha channel?

Edited by terryeverlast

Share on other sites

Question to everyone:

If you were making a sphere refract some image, would you use blending and make the sphere transparent first?

Or just apply refraction, without transparent blending?

Edited by terryeverlast

never mind