About simco50


  1. I agree with turanskij. Anyway, I've used the FX library once before, and if I understand the question correctly, I suppose something like this is what you want:

```cpp
XMFLOAT4 lightDirs[4];
// The last argument is the number of floats, not elements: 4 x float4 = 16 floats.
m_pEffect->GetVariableByName("lightDirs")->AsScalar()->SetFloatArray(reinterpret_cast<float*>(&lightDirs[0]), 0, 16);
```

I'm also a bit confused by your question: you're talking about an array of matrices while you have an array of float4s.
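To illustrate why a count of 16 is correct there: the Effects call treats the array as raw contiguous floats. A minimal sketch, using a hypothetical `Float4` struct as a stand-in for `XMFLOAT4` (which has the same layout of four contiguous floats):

```cpp
#include <cassert>

// Hypothetical stand-in for DirectXMath's XMFLOAT4: four contiguous floats.
struct Float4 { float x, y, z, w; };

// Maps (element, component) of a Float4 array to the flat float index the
// Effects API sees when handed a float* and a count of 16.
inline int flat_index(int element, int component) {
    return element * 4 + component;
}

// Reads through the same reinterpret_cast the SetFloatArray call relies on.
inline float read_flat(const Float4* arr, int element, int component) {
    const float* raw = reinterpret_cast<const float*>(arr);
    return raw[flat_index(element, component)];
}
```

This only works because a `Float4[4]` array is contiguous with no padding between elements, i.e. exactly 16 floats.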
  2. Riot Games has a great and easy-to-understand article about how they do fog of war: https://engineering.riotgames.com/news/story-fog-and-war
  3. simco50

    Urho3D Graphics abstraction

    Yeah, that's what I saw. I actually had no idea DirectX resolves it by itself and that the warning doesn't actually break anything.
  4. simco50

    Urho3D Graphics abstraction

    That sounds like the right thing to do. What I do now is flush the staged states before a conflict occurs (i.e. submit the state changes to the context, with PrepareDraw() in my case). That works, but I wonder if there's a smarter solution.
  5. simco50

    Urho3D Graphics abstraction

    That sounds pretty straightforward, and I've thought of doing that as well. The downside is that the graphics system would be doing "magic" behind the user's back just to make things work, even when the conflict wasn't intended behavior. It wouldn't point out when the user made an actual mistake; it would just unbind the problematic resource without the user realizing it. I wonder if there's a cleaner solution.
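One middle ground between silent "magic" and doing nothing is to resolve the hazard automatically but surface it to the user. A minimal sketch, with a hypothetical `StateTracker` type and integer ids standing in for D3D resources, that unbinds the conflicting SRV and records a warning the user can inspect or log:

```cpp
#include <array>
#include <string>
#include <vector>

using ResourceId = int; // 0 = nothing bound (stand-in for a real view pointer)

struct StateTracker {
    std::array<ResourceId, 8> srvs{};      // currently bound SRV slots
    std::vector<std::string> warnings;     // hazards resolved on the user's behalf

    void SetTexture(int slot, ResourceId id) { srvs[slot] = id; }

    void SetRenderTarget(ResourceId id) {
        // A resource cannot be bound as both SRV input and render target:
        // unbind it, but record what happened instead of hiding it.
        for (int slot = 0; slot < static_cast<int>(srvs.size()); ++slot) {
            if (id != 0 && srvs[slot] == id) {
                warnings.push_back("resource " + std::to_string(id) +
                                   " unbound from SRV slot " + std::to_string(slot));
                srvs[slot] = 0;
            }
        }
    }
};
```

In a debug build the warnings could be asserted on, so mistakes are loud during development while release builds just keep working.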
  6. Hello, I've stumbled upon the Urho3D engine and found that it has a really nice, easy-to-read code structure. The graphics abstraction looks really interesting, and I like how it defers pipeline state changes until just before the draw call to eliminate redundant state changes. This is done by saving the state changes (blend state / SRV changes / RTV changes) in member variables and, just before the draw, applying the actual state changes on the graphics context. It looks something like this (pseudocode):

```cpp
void PrepareDraw()
{
    if (renderTargetsDirty)
    {
        pD3D11DeviceContext->OMSetRenderTargets(..., mCurrentRenderTargets, ...);
        renderTargetsDirty = false;
    }
    if (texturesDirty)
    {
        pD3D11DeviceContext->PSSetShaderResources(..., mCurrentSRVs);
        texturesDirty = false;
    }
    // ... more state changes
}
```

This all looked like a great design at first, but I've found one big issue with it, and I don't really understand how it is solved in their case or how I would tackle it. I'll explain by example: imagine I have two render targets, my backbuffer RT and an offscreen RT, and say I want to render the backbuffer to the offscreen RT and then back to the backbuffer (just for the sake of the example). You would do something like this:

```cpp
// Render to the offscreen RT
pGraphics->SetRenderTarget(pOffscreenRT->GetRTV());
pGraphics->SetTexture(diffuseSlot, pDefaultRT->GetSRV());
pGraphics->DrawQuad();
pGraphics->SetTexture(diffuseSlot, nullptr); // Remove the default RT from input

// Render to the default (screen) RT
pGraphics->SetRenderTarget(nullptr); // Default RT
pGraphics->SetTexture(diffuseSlot, pOffscreenRT->GetSRV());
pGraphics->DrawQuad();
```

The problem is that the second time the application loop comes around, the offscreen render target is still bound as an input ShaderResourceView when it gets set as a RenderTargetView, because in Urho3D the RenderTargetView state is always applied before the ShaderResourceViews (see the snippet above), even though I set the SRV to nullptr before using it as an RTV. That causes errors, because a resource can't be bound as both an input and a render target. What is usually the solution to this? Thanks!
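One way to tackle this is to resolve RTV/SRV conflicts inside PrepareDraw itself and to apply the SRV changes before the render targets, so that a staged nullptr unbind takes effect first. A sketch under simplified assumptions (integer ids instead of real views, one render target, no device context; a real engine would issue PSSetShaderResources / OMSetRenderTargets where the "applied" state is written):

```cpp
#include <array>

using ResourceId = int; // 0 = nothing bound

struct DeferredState {
    std::array<ResourceId, 8> stagedSrvs{};   // what the user asked for
    ResourceId stagedRtv = 0;
    std::array<ResourceId, 8> appliedSrvs{};  // what the "context" currently has
    ResourceId appliedRtv = 0;
    bool srvsDirty = false, rtvDirty = false;

    void SetTexture(int slot, ResourceId id) { stagedSrvs[slot] = id; srvsDirty = true; }
    void SetRenderTarget(ResourceId id)      { stagedRtv = id; rtvDirty = true; }

    void PrepareDraw() {
        // Resolve hazards first: a resource cannot be SRV and RTV at once.
        for (auto& srv : stagedSrvs) {
            if (srv != 0 && srv == stagedRtv) { srv = 0; srvsDirty = true; }
        }
        // Apply SRVs before RTVs so stale inputs are gone when the RTV binds.
        if (srvsDirty) { appliedSrvs = stagedSrvs; srvsDirty = false; }
        if (rtvDirty)  { appliedRtv  = stagedRtv;  rtvDirty  = false; }
    }
};
```

With this ordering, binding the offscreen RT as a render target automatically clears any SRV slot still referencing it from the previous pass, so the second trip around the application loop no longer trips the runtime.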
  7. Hi guys, I am trying to get alpha blending to work using SharpDX (C#). I am using DirectX 10 to be able to use the FX framework. I have been looking all over Google trying to find the mistake in my BlendState, but I don't think the problem is there, because the exact same shader works as intended in my C++ engine. Currently, everything gets rendered but there is simply no blending going on. Below is my blend state, written in the .fx file:

```hlsl
BlendState AlphaBlending
{
    BlendEnable[0] = TRUE;
    SrcBlend = SRC_ALPHA;
    DestBlend = INV_SRC_ALPHA;
    BlendOp = ADD;
    SrcBlendAlpha = ONE;
    DestBlendAlpha = ZERO;
    BlendOpAlpha = ADD;
    RenderTargetWriteMask[0] = 0x0f;
};
```

I have also tried creating and setting the blend state in code, but that doesn't work either:

```csharp
BlendStateDescription desc = new BlendStateDescription();
desc.IsBlendEnabled[0] = true;
desc.BlendOperation = BlendOperation.Add;
desc.SourceAlphaBlend = BlendOption.One;
desc.DestinationAlphaBlend = BlendOption.Zero;
desc.AlphaBlendOperation = BlendOperation.Add;
desc.RenderTargetWriteMask[0] = ColorWriteMaskFlags.All;
desc.SourceBlend = BlendOption.SourceAlpha;
desc.DestinationBlend = BlendOption.InverseSourceAlpha;

_context.Device.OutputMerger.SetBlendState(_blendState, new Color4(0, 0, 0, 0), -1);
```

I was wondering if it might have something to do with the texture itself (maybe the alpha channel is not loaded properly?), but I am unsure about that:

```csharp
_textureResourceView = ShaderResourceView.FromFile(_context.Device, ParticleSystem.ImagePath);
```

I wouldn't know where else the problem could lie. Is there anyone with SharpDX experience who might see the problem? Thanks in advance!
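For reference, the blend state above asks the output merger to compute, per color channel, `src * srcAlpha + dst * (1 - srcAlpha)`; if the result looks fully opaque, the source alpha reaching this equation is most likely 1.0 everywhere (which would point at the texture load rather than the blend state). A scalar sketch of that equation:

```cpp
// SRC_ALPHA / INV_SRC_ALPHA / ADD, applied to one color channel:
// the classic "over" blend used for transparency.
inline float blend_channel(float src, float dst, float srcAlpha) {
    return src * srcAlpha + dst * (1.0f - srcAlpha);
}
```

With srcAlpha = 1.0 this collapses to just `src`, i.e. no visible blending, which matches the symptom described.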
  8. Hello guys, I have been looking at the GDC paper about screen-space fluids and have been trying to imitate it. As of now, I have been stuck for quite a while. I am trying to create billboard quads from a point list and make them look like spheres, so I can write the depth of those spheres to the render target. However, somewhere in the depth calculation there is a mistake, and I can't seem to figure it out. In the paper, the eyePosition is calculated, but I don't understand why; I don't see why I can't just do the view-projection transformation in the pixel shader like you would in the vertex shader. Is there someone experienced with this? The full shader (HLSL) is below:

```hlsl
float3 gLightDirection = float3(-0.577f, -0.577f, 0.577f);
float gScale = 1.0f;
float4x4 gViewProj : VIEWPROJ;
float4x4 gViewInv : VIEWINV;

struct VS_DATA
{
    float3 pos : POSITION;
};

struct GS_DATA
{
    float4 pos : SV_POSITION;
    float3 wPos : WORLDPOS;
    float2 texCoord : TEXCOORD0;
};

DepthStencilState DSS
{
    DepthEnable = TRUE;
    DepthWriteMask = ALL;
};

SamplerState samLinear
{
    Filter = MIN_MAG_MIP_LINEAR;
    AddressU = Wrap;
    AddressV = Wrap;
};

RasterizerState RS
{
    CullMode = BACK;
};

VS_DATA Main_VS(VS_DATA input)
{
    return input;
}

void CreateVertex(inout TriangleStream<GS_DATA> triStream, float3 pos, float3 offset, float2 texCoord)
{
    GS_DATA data = (GS_DATA)0;
    offset = mul(offset, (float3x3)gViewInv);
    float3 cornerPos = pos + offset;
    data.wPos = pos;
    data.pos = mul(float4(cornerPos, 1.0f), gViewProj);
    data.texCoord = texCoord;
    triStream.Append(data);
}

[maxvertexcount(4)]
void Main_GS(point VS_DATA vertex[1], inout TriangleStream<GS_DATA> triStream)
{
    float3 topLeft = float3(-gScale / 2.0f, gScale / 2.0f, 0.0f);
    float3 topRight = float3(gScale / 2.0f, gScale / 2.0f, 0.0f);
    float3 bottomLeft = float3(-gScale / 2.0f, -gScale / 2.0f, 0.0f);
    float3 bottomRight = float3(gScale / 2.0f, -gScale / 2.0f, 0.0f);
    CreateVertex(triStream, vertex[0].pos, bottomLeft, float2(0, 1));
    CreateVertex(triStream, vertex[0].pos, topLeft, float2(0, 0));
    CreateVertex(triStream, vertex[0].pos, bottomRight, float2(1, 1));
    CreateVertex(triStream, vertex[0].pos, topRight, float2(1, 0));
}

float4 Main_PS(GS_DATA input) : SV_TARGET
{
    // Calculate the normal
    float3 normal;
    normal.xy = input.texCoord * 2.0f - 1.0f;
    float r2 = dot(normal.xy, normal.xy);
    // Clip if the pixel falls outside the sphere
    clip(r2 > 1.0f ? -1 : 1);
    normal.z = sqrt(1.0f - r2);

    // Calculate the depth
    float4 worldPos = float4(input.wPos + normalize(normal) * gScale, 1.0f);
    float4 clipPos = mul(worldPos, gViewProj);
    float d = clipPos.z / clipPos.w;
    return float4(d, 0, 0, 1);
}

technique11 Default
{
    pass P0
    {
        SetRasterizerState(RS);
        SetDepthStencilState(DSS, 0);
        SetVertexShader(CompileShader(vs_4_0, Main_VS()));
        SetGeometryShader(CompileShader(gs_4_0, Main_GS()));
        SetPixelShader(CompileShader(ps_4_0, Main_PS()));
    }
}
```

Thanks in advance!
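The pixel shader's sphere-impostor math can be checked in isolation. A scalar C++ sketch of the same computation (texcoord mapped to [-1, 1], pixels outside the unit disc clipped, z recovered from the unit-sphere equation); `impostor_normal` is a hypothetical helper, not part of the shader above:

```cpp
#include <cmath>

// Returns false where the HLSL would clip(); otherwise writes the
// unit-length impostor normal for the given quad texcoord (u, v).
inline bool impostor_normal(float u, float v, float out[3]) {
    float x = u * 2.0f - 1.0f;
    float y = v * 2.0f - 1.0f;
    float r2 = x * x + y * y;
    if (r2 > 1.0f) return false;       // outside the sphere silhouette: clip
    out[0] = x;
    out[1] = y;
    out[2] = std::sqrt(1.0f - r2);     // x^2 + y^2 + z^2 = 1, so z = sqrt(1 - r2)
    return true;
}
```

Note the result is already unit length by construction, so the shader's `normalize(normal)` is redundant. Also note the normal is built in a screen-aligned (view) frame while `wPos` is a world position, which is one place the world-space depth math above can go wrong.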
  9. simco50

    Visualize points as spheres

    Thanks for the responses! The spheres indeed have shadows, so they're probably meshes. I've got it working by using instancing; that does the trick perfectly.
  10. Hello, I am working on a framework that works with particles, and I am wondering what the best way to visualize the particles is. I've seen the Nvidia Flex demo, and it has a really nice result, although I don't understand how it is done (the shader is really unreadable). Do you guys have any ideas about the best approach?
  11. Hello everyone, I was wondering how to get the number of draw calls per frame at run time. Is there maybe a DirectX method for that? Thanks in advance!
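As far as I know there is no D3D call that returns a draw count; capture tools such as PIX or RenderDoc show it offline, and at run time the usual approach is to count draws yourself in whatever wrapper issues them. A minimal sketch, with a hypothetical `DrawCallCounter` helper:

```cpp
// Increment from every Draw*/DrawIndexed* wrapper, read and reset once per
// frame (e.g. right before Present).
struct DrawCallCounter {
    int callsThisFrame = 0;

    void OnDraw() { ++callsThisFrame; }

    int EndFrame() {               // returns the frame's total and resets
        int n = callsThisFrame;
        callsThisFrame = 0;
        return n;
    }
};
```

If draw calls go through a central graphics class (as in the Urho3D-style abstraction discussed above), this is a one-line addition to its draw method.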