
simco50

Member
  • Content Count

    16
Community Reputation

131 Neutral

About simco50

  • Rank
    Member

Personal Information

  • Role
    Programmer
  • Interests
    Programming


  1. Hello, I recently found out about the "#line" directive in HLSL. Because I handle #include's manually, the line numbers in shader compilation errors are incorrect, and dynamically adding these "#line" directives while loading the shader solves that problem. It saves me a lot of time, since I know precisely where to look when I make an error. However, I noticed that when I enable this, I can no longer debug shaders using the Visual Studio Graphics Debugger. If I want to debug e.g. the vertex shader, it asks me for the source file (while the right source file is there in the background! See the image). Is there some kind of bug in Visual Studio, or is this just an annoying side effect of using "#line"? Does anyone have experience with this? Cheers!
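     A minimal sketch of the kind of #line injection described above, assuming a simple recursive loader with a hypothetical ProcessIncludes helper; this is an illustration, not the poster's actual shader loader:

        #include <fstream>
        #include <sstream>
        #include <string>

        std::string ProcessIncludes(const std::string& filePath)
        {
            std::ifstream file(filePath);
            std::stringstream output;
            std::string line;
            int lineNumber = 1;

            //Tell the compiler which file/line the following code comes from
            output << "#line " << lineNumber << " \"" << filePath << "\"\n";

            while (std::getline(file, line))
            {
                if (line.rfind("#include", 0) == 0)
                {
                    //Extract the path between the quotes and inline the included file
                    size_t start = line.find('"') + 1;
                    size_t end = line.find('"', start);
                    output << ProcessIncludes(line.substr(start, end - start));
                    //Restore the original file/line for the code after the include
                    output << "#line " << lineNumber + 1 << " \"" << filePath << "\"\n";
                }
                else
                {
                    output << line << "\n";
                }
                ++lineNumber;
            }
            return output.str();
        }

     The injected directives make the compiler report errors against the original file name and line number instead of against the flattened, fully expanded source.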
  2. I've found the culprit. Turns out the source link and vinterberg were right! Besides the two arguments being in the wrong order, there wasn't an issue in the shader at all. The problem lay in the function that creates a dual quaternion from a position and a rotation. Below is the code before and after:

        //Before
        DualQuaternion(const Quaternion& rotation, const Vector3& position)
        {
            rotation.Normalize(Real);
            Dual = Quaternion(position, 0) * Real * 0.5f;
        }

        //After
        DualQuaternion(const Quaternion& rotation, const Vector3& position)
        {
            rotation.Normalize(Real);
            Dual = 0.5f * Real * Quaternion(position, 0);
        }

     So, problem solved. It's really easy to make mistakes like this... (Any way to mark this thread as "Solved"?)
  3. Update: I've isolated the problem a bit more. I created a function to convert the blended dual quaternion to a matrix and then transform the vertex positions/normals with that (instead of using QuaternionRotateVector and DualQuatTransformPoint above), and it produces exactly the same artifacts:

        float4 QuaternionMultiply(float4 q1, float4 q2)
        {
            return float4(
                (q2[3] * q1[0]) + (q2[0] * q1[3]) + (q2[1] * q1[2]) - (q2[2] * q1[1]),
                (q2[3] * q1[1]) - (q2[0] * q1[2]) + (q2[1] * q1[3]) + (q2[2] * q1[0]),
                (q2[3] * q1[2]) + (q2[0] * q1[1]) - (q2[1] * q1[0]) + (q2[2] * q1[3]),
                (q2[3] * q1[3]) - (q2[0] * q1[0]) - (q2[1] * q1[1]) - (q2[2] * q1[2]));
        }

        float4x4 DualQuaternionToMatrix(float4 quatReal, float4 quatDual)
        {
            float x = quatReal.x;
            float y = quatReal.y;
            float z = quatReal.z;
            float w = quatReal.w;

            //Rotation part from the real quaternion
            float4x4 m;
            m._11 = w * w + x * x - y * y - z * z;
            m._12 = 2 * x * y + 2 * w * z;
            m._13 = 2 * x * z - 2 * w * y;
            m._14 = 0;
            m._21 = 2 * x * y - 2 * w * z;
            m._22 = w * w + y * y - x * x - z * z;
            m._23 = 2 * y * z + 2 * w * x;
            m._24 = 0;
            m._31 = 2 * x * z + 2 * w * y;
            m._32 = 2 * y * z - 2 * w * x;
            m._33 = w * w + z * z - x * x - y * y;
            m._34 = 0;

            //Translation part, extracted from the dual part and the conjugate of the real part
            float4 conj = float4(-quatReal.x, -quatReal.y, -quatReal.z, quatReal.w);
            float4 t = QuaternionMultiply((quatDual * 2.0f), conj);
            m._41 = t.x;
            m._42 = t.y;
            m._43 = t.z;
            m._44 = 1;
            return m;
        }

     This makes me think there's either something wrong with the input quaternions or with the function that does the blending. However, I'm quite certain the input data is right, because when converting the dual quaternions back to matrices on the CPU, they match (I did this to test my ToMatrix function).
  4. Well spotted. I've looked at this before too, and following his method exactly produces an even worse result:
  5. Hello, I've had regular matrix-based skinning on the GPU working for quite a while now, and I stumbled upon an implementation of dual quaternion skinning. I've had a go at implementing it in a shader, and after spending a lot of time making small changes to the formulas, I sort of got it working, but there seems to be an issue when blending bones. I found this pretty old topic on GameDev.net ( ) which, I think, describes my problem pretty well, but I haven't been able to find the cause. Like in that post, if the blend weight of a vertex is 1, there is no problem. Once there is blending, I get artifacts. Just for the sake of focussing on the shader side of things first, I upload dual quaternions to the GPU that are converted from regular matrices (because I knew those work). Below is an image comparison between matrix skinning (left) and dual quaternion skinning (right): As you can see, especially on the shoulders, there are some serious issues. It might be a silly typo, but I'm surprised some parts of the mesh look perfectly fine. Below are some snippets:

        //Blend bones
        float2x4 BlendBoneTransformsToDualQuaternion(float4 boneIndices, float4 boneWeights)
        {
            float2x4 dual = (float2x4)0;
            float4 dq0 = cSkinDualQuaternions[boneIndices.x][0];
            for(int i = 0; i < MAX_BONES_PER_VERTEX; ++i)
            {
                if(boneIndices[i] == -1)
                {
                    break;
                }
                //Flip the weight if this bone's rotation is in the opposite hemisphere of the first bone
                if(dot(dq0, cSkinDualQuaternions[boneIndices[i]][0]) < 0)
                {
                    boneWeights[i] *= -1;
                }
                dual += boneWeights[i] * cSkinDualQuaternions[boneIndices[i]];
            }
            return dual / length(dual[0]);
        }

        //Used to transform the normal/tangent
        float3 QuaternionRotateVector(float3 v, float4 quatReal)
        {
            return v + 2.0f * cross(quatReal.xyz, quatReal.w * v + cross(quatReal.xyz, v));
        }

        //Used to transform the position
        float3 DualQuatTransformPoint(float3 p, float4 quatReal, float4 quatDual)
        {
            float3 t = 2 * (quatReal.w * quatDual.xyz - quatDual.w * quatReal.xyz + cross(quatDual.xyz, quatReal.xyz));
            return QuaternionRotateVector(p, quatReal) + t;
        }

     I've been staring at this for quite a while now, so the solution might be obvious, but I fail to see it. Help would be hugely appreciated. Cheers
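     Since the post above mentions uploading dual quaternions converted from regular bone matrices on the CPU, here is a minimal sketch of that conversion using DirectXMath. The struct and function names are made up for illustration, and (as the follow-up posts higher in this list show) the operand order of the dual part depends entirely on the quaternion multiplication convention of the math library in use:

        #include <DirectXMath.h>
        using namespace DirectX;

        struct DualQuat
        {
            XMFLOAT4 real; //rotation
            XMFLOAT4 dual; //0.5 * (t, 0) * real
        };

        //Assumes the bone matrix is a rigid transform (rotation + translation, no scale/shear)
        DualQuat DualQuatFromMatrix(const XMMATRIX& bone)
        {
            XMVECTOR scale, rotation, translation;
            XMMatrixDecompose(&scale, &rotation, &translation, bone);

            rotation = XMQuaternionNormalize(rotation);
            //Pure quaternion (t, 0) built from the translation
            XMVECTOR t = XMVectorSetW(translation, 0.0f);
            //XMQuaternionMultiply(Q1, Q2) returns Q2 * Q1, so this computes
            //0.5 * (t * rotation) with the translation quaternion on the left
            XMVECTOR dual = XMVectorScale(XMQuaternionMultiply(rotation, t), 0.5f);

            DualQuat dq;
            XMStoreFloat4(&dq.real, rotation);
            XMStoreFloat4(&dq.dual, dual);
            return dq;
        }

     Getting that multiplication order wrong produces exactly the kind of blending artifacts described above, which is what the earlier "culprit found" post in this list ended up fixing.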
  6. I agree with turanskij. Anyway, I've used the FX library once before, and if I understand the question correctly, I suppose something like this is what you want:

        XMFLOAT4 lightDirs[4];
        m_pEffect->GetVariableByName("lightDirs")->AsScalar()->SetFloatArray(reinterpret_cast<float*>(&lightDirs[0]), 0, 16);

     I'm also a bit confused about your question: you're talking about an array of matrices while you have an array of float4's.
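     For reference, a minimal sketch of how the two cases mentioned above could be set through the D3D10 effects interfaces, one for an array of float4's and one for an array of matrices; the variable names and the m_pEffect pointer are taken as assumptions from the snippet above:

        XMFLOAT4 lightDirs[4];
        XMFLOAT4X4 boneTransforms[64];

        //Array of float4: use the vector interface
        m_pEffect->GetVariableByName("lightDirs")->AsVector()
            ->SetFloatVectorArray(reinterpret_cast<float*>(lightDirs), 0, 4);

        //Array of matrices: use the matrix interface
        m_pEffect->GetVariableByName("boneTransforms")->AsMatrix()
            ->SetMatrixArray(reinterpret_cast<float*>(boneTransforms), 0, 64);

     Both SetFloatVectorArray and SetMatrixArray take the data pointer, an offset in elements, and an element count.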
  7. Riot Games has a great and easy-to-understand article about how they do fog of war: https://engineering.riotgames.com/news/story-fog-and-war
  8. simco50

    Urho3D Graphics abstraction

    Yeah, that's what I saw. I actually had no idea DirectX solves it by itself and that the warning doesn't really break anything.
  9. simco50

    Urho3D Graphics abstraction

    That sounds like the right thing to do. What I do now is flush the staged states before a conflict (i.e. submit the state changes to the context with PrepareDraw() in my case). That works, but I wonder if there's a smarter solution for this.
  10. simco50

    Urho3D Graphics abstraction

    That sounds pretty straightforward, and I've thought of doing that as well. The downside is that the graphics system would be doing "magic" behind the user's back just to make things work, even though it might not have been the intended behavior. It wouldn't point out when the user made an actual mistake; it would just solve it by unbinding the problematic resource, possibly without the user ever realizing it. I wonder if there's a cleaner solution for this.
  11. Hello, I've stumbled upon the Urho3D engine and found that it has a really nice and easy-to-read code structure. I think the graphics abstraction looks really interesting, and I like how it defers pipeline state changes until just before the draw call to eliminate redundant state changes. This is done by saving the state changes (blend enabled/SRV changes/RTV changes) in member variables and, just before the draw, applying the actual state changes on the graphics context. It looks something like this (pseudo):

        void PrepareDraw()
        {
            if(renderTargetsDirty)
            {
                pD3D11DeviceContext->OMSetRenderTarget(mCurrentRenderTargets);
                renderTargetsDirty = false;
            }
            if(texturesDirty)
            {
                pD3D11DeviceContext->PSSetShaderResourceView(..., mCurrentSRVs);
                texturesDirty = false;
            }
            //... some more state changes
        }

     This all looked like a great design at first, but I've found one big issue with it, and I don't really understand how it is solved in their case or how I would tackle it. I'll explain by example. Imagine I have two render targets: my backbuffer RT and an offscreen RT. Say I want to render my backbuffer to the offscreen RT and then back to the backbuffer (just for the sake of the example). You would do something like this:

        //Render to the offscreen RT
        pGraphics->SetRenderTarget(pOffscreenRT->GetRTV());
        pGraphics->SetTexture(diffuseSlot, pDefaultRT->GetSRV());
        pGraphics->DrawQuad();
        pGraphics->SetTexture(diffuseSlot, nullptr); //Remove the default RT from input

        //Render to the default (screen) RT
        pGraphics->SetRenderTarget(nullptr); //Default RT
        pGraphics->SetTexture(diffuseSlot, pOffscreenRT->GetSRV());
        pGraphics->DrawQuad();

     The problem is that the second time the application loop comes around, the offscreen render target is still bound as an input ShaderResourceView when it gets set as a RenderTargetView, because in Urho3D the RenderTargetView state is always applied before the ShaderResourceViews (see the top code snippet), even when I set the SRV to nullptr before using it as an RTV like above. This causes errors, because a resource can't be bound as both input and render target. What is usually the solution to this? Thanks!
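     For illustration, one possible way to handle the conflict inside PrepareDraw(), in the spirit of the approach discussed in the replies above: automatically clear any conflicting SRV slot and apply the SRV changes before the render target change. The member names follow the pseudocode above, MAX_SRV_SLOTS/MAX_RENDER_TARGETS are assumed constants, and SameResource() is a hypothetical helper, so this is a sketch rather than Urho3D's actual implementation:

        void PrepareDraw()
        {
            if (renderTargetsDirty)
            {
                //Clear any staged SRV slot that points at a texture which is
                //about to be bound as a render target
                for (int slot = 0; slot < MAX_SRV_SLOTS; ++slot)
                {
                    for (int rt = 0; rt < MAX_RENDER_TARGETS; ++rt)
                    {
                        if (mCurrentSRVs[slot] && mCurrentRenderTargets[rt] &&
                            SameResource(mCurrentSRVs[slot], mCurrentRenderTargets[rt]))
                        {
                            mCurrentSRVs[slot] = nullptr;
                            texturesDirty = true;
                        }
                    }
                }
                //Apply the SRV changes first so the old binding is gone before
                //the render target is set
                if (texturesDirty)
                {
                    pD3D11DeviceContext->PSSetShaderResources(0, MAX_SRV_SLOTS, mCurrentSRVs);
                    texturesDirty = false;
                }
                pD3D11DeviceContext->OMSetRenderTargets(MAX_RENDER_TARGETS, mCurrentRenderTargets, nullptr);
                renderTargetsDirty = false;
            }
            if (texturesDirty)
            {
                pD3D11DeviceContext->PSSetShaderResources(0, MAX_SRV_SLOTS, mCurrentSRVs);
                texturesDirty = false;
            }
            //... some more state changes
        }

     Applying the SRV bindings before the render target bindings is the key point; the per-slot comparison merely limits the flush to the slots that actually conflict.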
  12. Hi guys, I am trying to get alpha blending to work using SharpDX (C#). I am using DirectX 10 to be able to use the FX framework. I have been looking all over Google trying to find my mistake in my BlendState, but I don't think the problem is there, because the exact same shader works as intended in my C++ engine. Currently, everything gets rendered but there is simply no blending going on. Below is my blend state, written in the .fx file:

        BlendState AlphaBlending
        {
            BlendEnable[0] = TRUE;
            SrcBlend = SRC_ALPHA;
            DestBlend = INV_SRC_ALPHA;
            BlendOp = ADD;
            SrcBlendAlpha = ONE;
            DestBlendAlpha = ZERO;
            BlendOpAlpha = ADD;
            RenderTargetWriteMask[0] = 0x0f;
        };

     I have tried creating and setting the blend state in code, but that doesn't work either:

        BlendStateDescription desc = new BlendStateDescription();
        desc.IsBlendEnabled[0] = true;
        desc.BlendOperation = BlendOperation.Add;
        desc.SourceAlphaBlend = BlendOption.One;
        desc.DestinationAlphaBlend = BlendOption.Zero;
        desc.AlphaBlendOperation = BlendOperation.Add;
        desc.RenderTargetWriteMask[0] = ColorWriteMaskFlags.All;
        desc.SourceBlend = BlendOption.SourceAlpha;
        desc.DestinationBlend = BlendOption.InverseSourceAlpha;
        //_blendState is created from desc here (creation code not shown)
        _context.Device.OutputMerger.SetBlendState(_blendState, new Color4(0, 0, 0, 0), -1);

     I was wondering if it might have something to do with the texture itself (maybe the alpha channel is not loaded properly?), but I am unsure about that:

        _textureResourceView = ShaderResourceView.FromFile(_context.Device, ParticleSystem.ImagePath);

     I wouldn't know where else the problem could lie. Is there anyone with SharpDX experience who might see the problem? Thanks in advance!
  13. Hello guys, I have been looking at the GDC paper about screen space fluids and I've been trying to imitate it. As of now, I have been stuck for quite a while. I am trying to create billboarded quads from a point list and make them look like spheres, so I can write the depth of those spheres to the render target. However, somewhere in the calculation of the depth there is a mistake and I can't seem to figure it out. In the paper, the eye position is calculated, but I don't understand why; I don't understand why I can't just do the ViewProjection transformation in the pixel shader like you would in the vertex shader. Is there someone experienced with this? The full shader (HLSL) is below:

        float3 gLightDirection = float3(-0.577f, -0.577f, 0.577f);
        float gScale = 1.0f;
        float4x4 gViewProj : VIEWPROJ;
        float4x4 gViewInv : VIEWINV;

        struct VS_DATA
        {
            float3 pos : POSITION;
        };

        struct GS_DATA
        {
            float4 pos : SV_POSITION;
            float3 wPos : WORLDPOS;
            float2 texCoord : TEXCOORD0;
        };

        DepthStencilState DSS
        {
            DepthEnable = TRUE;
            DepthWriteMask = ALL;
        };

        SamplerState samLinear
        {
            Filter = MIN_MAG_MIP_LINEAR;
            AddressU = Wrap;
            AddressV = Wrap;
        };

        RasterizerState RS
        {
            CullMode = BACK;
        };

        VS_DATA Main_VS(VS_DATA input)
        {
            return input;
        }

        void CreateVertex(inout TriangleStream<GS_DATA> triStream, float3 pos, float3 offset, float2 texCoord)
        {
            GS_DATA data = (GS_DATA)0;
            offset = mul(offset, (float3x3)gViewInv);
            float3 cornerPos = pos + offset;
            data.wPos = pos;
            data.pos = mul(float4(cornerPos, 1.0f), gViewProj);
            data.texCoord = texCoord;
            triStream.Append(data);
        }

        [maxvertexcount(4)]
        void Main_GS(point VS_DATA vertex[1], inout TriangleStream<GS_DATA> triStream)
        {
            float3 topLeft, topRight, bottomLeft, bottomRight;
            topLeft = float3(-gScale / 2.0f, gScale / 2.0f, 0.0f);
            topRight = float3(gScale / 2.0f, gScale / 2.0f, 0.0f);
            bottomLeft = float3(-gScale / 2.0f, -gScale / 2.0f, 0.0f);
            bottomRight = float3(gScale / 2.0f, -gScale / 2.0f, 0.0f);
            CreateVertex(triStream, vertex[0].pos, bottomLeft, float2(0, 1));
            CreateVertex(triStream, vertex[0].pos, topLeft, float2(0, 0));
            CreateVertex(triStream, vertex[0].pos, bottomRight, float2(1, 1));
            CreateVertex(triStream, vertex[0].pos, topRight, float2(1, 0));
        }

        float4 Main_PS(GS_DATA input) : SV_TARGET
        {
            //Calculate the normal
            float3 normal;
            normal.xy = input.texCoord * 2.0f - 1.0f;
            float r2 = dot(normal.xy, normal.xy);
            //Clip if the pixel falls outside the sphere
            clip(r2 > 1.0f ? -1 : 1);
            normal.z = sqrt(1.0f - r2);
            //Calculate the depth
            float4 worldPos = float4(input.wPos + normalize(normal) * gScale, 1.0f);
            float4 clipPos = mul(worldPos, gViewProj);
            float d = clipPos.z / clipPos.w;
            return float4(d, 0, 0, 1);
        }

        technique11 Default
        {
            pass P0
            {
                SetRasterizerState(RS);
                SetDepthStencilState(DSS, 0);
                SetVertexShader(CompileShader(vs_4_0, Main_VS()));
                SetGeometryShader(CompileShader(gs_4_0, Main_GS()));
                SetPixelShader(CompileShader(ps_4_0, Main_PS()));
            }
        }

     Thanks in advance!
  14. simco50

    Visualize points as spheres

    Thanks for the responses! The spheres indeed have shadows, so they're probably meshes. I've got it working by using instancing; that does the trick perfectly.
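    For context, a minimal sketch of the instancing approach mentioned above in D3D11: the sphere mesh is drawn once per particle, with each particle's position fed in through a second, per-instance vertex buffer. The function and parameter names are made up for illustration:

        #include <d3d11.h>
        #include <DirectXMath.h>
        using namespace DirectX;

        void DrawParticlesAsSpheres(ID3D11DeviceContext* context,
                                    ID3D11Buffer* sphereVB, UINT sphereVertexStride,
                                    ID3D11Buffer* sphereIB, UINT sphereIndexCount,
                                    ID3D11Buffer* instanceVB, UINT particleCount)
        {
            //Slot 0: per-vertex sphere geometry, slot 1: per-instance particle positions
            ID3D11Buffer* buffers[2] = { sphereVB, instanceVB };
            UINT strides[2] = { sphereVertexStride, sizeof(XMFLOAT3) };
            UINT offsets[2] = { 0, 0 };

            context->IASetVertexBuffers(0, 2, buffers, strides, offsets);
            context->IASetIndexBuffer(sphereIB, DXGI_FORMAT_R32_UINT, 0);
            context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);

            //One sphere per particle
            context->DrawIndexedInstanced(sphereIndexCount, particleCount, 0, 0, 0);
        }

    The input layout marks the elements from slot 1 as D3D11_INPUT_PER_INSTANCE_DATA with an instance data step rate of 1, so the vertex shader receives one particle position per drawn sphere.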
  15. Hello, I am working on a framework that works with particles, and I am wondering what the best way to visualize the particles is. I've seen the Nvidia Flex demo and it has a really nice result, although I don't understand how it is done (the shader is really unreadable). Do you guys have any ideas about the best approach?