Community Reputation

124 Neutral

About simco50

  1. Hi guys,

  I am trying to get alpha blending to work using SharpDX (C#). I am using DirectX 10 so that I can use the FX framework. I have searched all over Google for a mistake in my blend state, but I don't think the problem is there, because the exact same shader works as intended in my C++ engine. Currently everything gets rendered, but there is simply no blending going on. Below is my blend state, defined in the .fx file:

  ```hlsl
  BlendState AlphaBlending
  {
      BlendEnable[0] = TRUE;
      SrcBlend = SRC_ALPHA;
      DestBlend = INV_SRC_ALPHA;
      BlendOp = ADD;
      SrcBlendAlpha = ONE;
      DestBlendAlpha = ZERO;
      BlendOpAlpha = ADD;
      RenderTargetWriteMask[0] = 0x0f;
  };
  ```

  I have also tried creating and setting the blend state in code, but that doesn't work either:

  ```csharp
  BlendStateDescription desc = new BlendStateDescription();
  desc.IsBlendEnabled[0] = true;
  desc.BlendOperation = BlendOperation.Add;
  desc.SourceAlphaBlend = BlendOption.One;
  desc.DestinationAlphaBlend = BlendOption.Zero;
  desc.AlphaBlendOperation = BlendOperation.Add;
  desc.RenderTargetWriteMask[0] = ColorWriteMaskFlags.All;
  desc.SourceBlend = BlendOption.SourceAlpha;
  desc.DestinationBlend = BlendOption.InverseSourceAlpha;

  _context.Device.OutputMerger.SetBlendState(_blendState, new Color4(0, 0, 0, 0), -1);
  ```

  I was wondering whether it might have something to do with the texture itself (maybe the alpha channel is not loaded properly?), but I am unsure about that:

  ```csharp
  _textureResourceView = ShaderResourceView.FromFile(_context.Device, ParticleSystem.ImagePath);
  ```

  I don't know where else the problem could lie. Is there anyone with SharpDX experience who might see the problem?

  Thanks in advance!
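  For reference, the blend state above asks the output merger to compute out.rgb = src.a * src.rgb + (1 - src.a) * dst.rgb and out.a = src.a. A minimal sketch of that fixed-function equation in Python (the sample colors are made up for illustration):

  ```python
  def alpha_blend(src, dst):
      """Standard 'over' blend: SRC_ALPHA / INV_SRC_ALPHA / ADD for color,
      ONE / ZERO / ADD for alpha, matching the AlphaBlending state above."""
      sr, sg, sb, sa = src
      dr, dg, db, da = dst
      out_rgb = [sa * s + (1.0 - sa) * d
                 for s, d in zip((sr, sg, sb), (dr, dg, db))]
      out_a = 1.0 * sa + 0.0 * da  # SrcBlendAlpha = ONE, DestBlendAlpha = ZERO
      return (*out_rgb, out_a)

  # Half-transparent red drawn over opaque blue:
  print(alpha_blend((1.0, 0.0, 0.0, 0.5), (0.0, 0.0, 1.0, 1.0)))
  # → (0.5, 0.0, 0.5, 0.5)
  ```

  If the render output shows the source color unmodified instead of this mix, the state is not actually bound at draw time (or the source alpha is 1.0 everywhere), which narrows down where to look.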
  2. Hello guys,

  I have been looking at the GDC paper on screen-space fluid rendering and trying to imitate it, but by now I have been stuck for quite a while. I am expanding a point list into billboarded quads and shading them to look like spheres, so that I can write each sphere's depth to the render target. Somewhere in the depth calculation there is a mistake I can't find. The paper calculates an eye-space position, and I don't understand why; I don't see why I can't simply apply the view-projection transform in the pixel shader, just as I would in the vertex shader. Is there someone experienced with this? The full HLSL shader is below:

  ```hlsl
  float3 gLightDirection = float3(-0.577f, -0.577f, 0.577f);
  float gScale = 1.0f;
  float4x4 gViewProj : VIEWPROJ;
  float4x4 gViewInv : VIEWINV;

  struct VS_DATA
  {
      float3 pos : POSITION;
  };

  struct GS_DATA
  {
      float4 pos : SV_POSITION;
      float3 wPos : WORLDPOS;
      float2 texCoord : TEXCOORD0;
  };

  DepthStencilState DSS
  {
      DepthEnable = TRUE;
      DepthWriteMask = ALL;
  };

  SamplerState samLinear
  {
      Filter = MIN_MAG_MIP_LINEAR;
      AddressU = Wrap;
      AddressV = Wrap;
  };

  RasterizerState RS
  {
      CullMode = BACK;
  };

  VS_DATA Main_VS(VS_DATA input)
  {
      return input;
  }

  void CreateVertex(inout TriangleStream<GS_DATA> triStream, float3 pos, float3 offset, float2 texCoord)
  {
      GS_DATA data = (GS_DATA)0;
      offset = mul(offset, (float3x3)gViewInv);
      float3 cornerPos = pos + offset;
      data.wPos = pos;
      data.pos = mul(float4(cornerPos, 1.0f), gViewProj);
      data.texCoord = texCoord;
      triStream.Append(data);
  }

  [maxvertexcount(4)]
  void Main_GS(point VS_DATA vertex[1], inout TriangleStream<GS_DATA> triStream)
  {
      float3 topLeft = float3(-gScale / 2.0f, gScale / 2.0f, 0.0f);
      float3 topRight = float3(gScale / 2.0f, gScale / 2.0f, 0.0f);
      float3 bottomLeft = float3(-gScale / 2.0f, -gScale / 2.0f, 0.0f);
      float3 bottomRight = float3(gScale / 2.0f, -gScale / 2.0f, 0.0f);

      CreateVertex(triStream, vertex[0].pos, bottomLeft, float2(0, 1));
      CreateVertex(triStream, vertex[0].pos, topLeft, float2(0, 0));
      CreateVertex(triStream, vertex[0].pos, bottomRight, float2(1, 1));
      CreateVertex(triStream, vertex[0].pos, topRight, float2(1, 0));
  }

  float4 Main_PS(GS_DATA input) : SV_TARGET
  {
      // Calculate the normal
      float3 normal;
      normal.xy = input.texCoord * 2.0f - 1.0f;
      float r2 = dot(normal.xy, normal.xy);

      // Clip if the pixel falls outside the sphere
      clip(r2 > 1.0f ? -1 : 1);
      normal.z = sqrt(1.0f - r2);

      // Calculate the depth
      float4 worldPos = float4(input.wPos + normalize(normal) * gScale, 1.0f);
      float4 clipPos = mul(worldPos, gViewProj);
      float d = clipPos.z / clipPos.w;
      return float4(d, 0, 0, 1);
  }

  technique11 Default
  {
      pass P0
      {
          SetRasterizerState(RS);
          SetDepthStencilState(DSS, 0);
          SetVertexShader(CompileShader(vs_4_0, Main_VS()));
          SetGeometryShader(CompileShader(gs_4_0, Main_GS()));
          SetPixelShader(CompileShader(ps_4_0, Main_PS()));
      }
  }
  ```

  Thanks in advance!
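  On the depth value: after the perspective divide, clipPos.z / clipPos.w is a non-linear normalized depth in [0, 1], not a view-space distance, which is one reason fluid-rendering papers often work with eye-space positions instead. A small numeric sketch, assuming a standard left-handed D3D-style projection (the near/far values are made up for illustration):

  ```python
  def ndc_depth(z_view, near, far):
      """Normalized depth after the perspective divide, d = clip.z / clip.w,
      with clip.z = z*f/(f-n) - n*f/(f-n) and clip.w = z (left-handed D3D)."""
      a = far / (far - near)
      b = -near * far / (far - near)
      clip_z = z_view * a + b
      clip_w = z_view
      return clip_z / clip_w

  print(ndc_depth(1.0, 1.0, 100.0))    # ~0.0 at the near plane
  print(ndc_depth(50.5, 1.0, 100.0))   # already ~0.99 at the midpoint
  print(ndc_depth(100.0, 1.0, 100.0))  # ~1.0 at the far plane
  ```

  Because almost the whole [0, 1] range is spent near the camera, small errors in the sphere position (for example offsetting by gScale instead of the sphere radius gScale / 2) show up very differently at different distances.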
  3. Visualize points as spheres

    Thanks for the responses! The spheres do have shadows, so they are probably meshes. I've got it working by using instancing, which does the trick perfectly.
  4. Hello,

  I am working on a framework that works with particles, and I am wondering what the best way to visualize the particles is. I've seen the Nvidia Flex demo, which has a really nice result, although I don't understand how it is done (the shader is really unreadable). Do you have any ideas about the best approach?
  5. Hello everyone,

  I was wondering how to get the number of draw calls per frame at run time. Is there maybe a DirectX method for that?

  Thanks in advance
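  To my knowledge, Direct3D itself does not expose a per-frame draw-call counter (external tools such as PIX capture one), so a common approach is to route every draw through a thin wrapper that increments a counter and latches it at Present. A language-agnostic sketch in Python; the device and method names here are hypothetical stand-ins, not a real API:

  ```python
  class DrawCallCounter:
      """Wraps a device-like object and counts draw calls per frame."""
      def __init__(self, device):
          self._device = device
          self.draw_calls_this_frame = 0
          self.draw_calls_last_frame = 0

      def draw(self, *args):
          self.draw_calls_this_frame += 1
          self._device.draw(*args)

      def present(self):
          # Latch the count at end of frame, then reset for the next one.
          self.draw_calls_last_frame = self.draw_calls_this_frame
          self.draw_calls_this_frame = 0
          self._device.present()

  # Usage with a dummy device standing in for the real one:
  class DummyDevice:
      def draw(self, *args): pass
      def present(self): pass

  dev = DrawCallCounter(DummyDevice())
  for _ in range(3):
      dev.draw()
  dev.present()
  print(dev.draw_calls_last_frame)  # → 3
  ```

  In a C# renderer the same idea would mean incrementing a counter in whatever method wraps Draw/DrawIndexed and reading it after Present.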