BlackJoker

Members
  • Content count: 388

Community Reputation

1329 Excellent

About BlackJoker

  • Rank: Member

Personal Information

  • Interests: Programming
  1. I am working on my UI framework and am considering the possibility that my window could be used as a render target for another game engine, for example. In that case I just want to switch off my own embedded rendering for my window, so as not to waste resources and create conflicts. This is the same thing WPF does when someone starts rendering into its window (I don't know exactly how it handles such cases; maybe there is a simpler way to do it using the Win API).
  2. Is it possible to detect when a DirectX/OpenGL/Vulkan application starts drawing to a Win32 window? I want to detect such a thing, but I cannot find any information on whether it is actually possible.
  3. After digging a lot I found that the root cause of my issue was not in the shaders, but in my initialization code: I had used inconsistent pixel formats for my render targets (normals and depth). After fixing that, most of the functionality (except point and spot lights) began working correctly. (A sketch of a consistent setup follows below.)
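For anyone hitting the same symptom, here is a minimal sketch of what "consistent formats" means in practice, assuming SharpDX-style Direct3D 11 bindings; the CreateGBufferTarget helper, the variable names and the concrete formats are illustrative, not my engine's actual code:

    using SharpDX.Direct3D11;
    using SharpDX.DXGI;

    static class GBufferTargets
    {
        // Each render target is created with one explicit DXGI format; the
        // render target view inherits it, so the texture, the view and the
        // shader output stay consistent.
        public static Texture2D CreateGBufferTarget(Device device, int width, int height, Format format)
        {
            return new Texture2D(device, new Texture2DDescription
            {
                Width = width,
                Height = height,
                MipLevels = 1,
                ArraySize = 1,
                Format = format, // the single source of truth for this target
                SampleDescription = new SampleDescription(1, 0),
                Usage = ResourceUsage.Default,
                BindFlags = BindFlags.RenderTarget | BindFlags.ShaderResource
            });
        }
    }

    // Hypothetical usage: a distinct, deliberate format per target.
    // var normals = GBufferTargets.CreateGBufferTarget(device, w, h, Format.R8G8B8A8_UNorm);
    // var depth   = GBufferTargets.CreateGBufferTarget(device, w, h, Format.R32_Float);
    // var normalsRtv = new RenderTargetView(device, normals); // inherits the texture's format
    // var depthRtv   = new RenderTargetView(device, depth);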
  4. Hi, I am trying to port J.Coluna's XNA LPP sample https://jcoluna.wordpress.com/page/2/ to my DX engine, and I have run into a problem. For now I am using an inversed depth buffer in my engine to increase depth precision, and I don't know in which places it will affect my renderer, but obviously it is not working now, because the normal and depth render targets contain nothing and my model renders without depth (just ambient). In the Clear GBuffer shader I left Depth as float4(1,1,1,1); because depth is in view space, I don't need to invert it, as I understand. (A sketch of the reversed-Z states this relies on follows below.)

    struct VertexShaderInput
    {
        float4 Position : SV_POSITION;
    };

    struct VertexShaderOutput
    {
        float4 Position : SV_POSITION;
    };

    VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
    {
        VertexShaderOutput output;
        input.Position.w = 1;
        output.Position = input.Position;
        return output;
    }

    struct PixelShaderOutput
    {
        float4 Normal : SV_TARGET0;
        float4 Depth : SV_TARGET1;
    };

    PixelShaderOutput PixelShaderFunction(VertexShaderOutput input)
    {
        PixelShaderOutput output;
        //this value depends on your normal encoding method.
        //on our example, it will generate a (0,0,-1) normal
        output.Normal = float4(0.5, 0.5, 0.5, 0);
        //max depth
        output.Depth = float4(1, 1, 1, 1);
        return output;
    }

    technique Clear
    {
        pass ClearPass
        {
            Profile = 11;
            VertexShader = VertexShaderFunction;
            PixelShader = PixelShaderFunction;
        }
    }

And here is my LPP main shader:

    VertexShaderOutput VertexShaderFunction(MeshVertexInput input)
    {
        VertexShaderOutput output;
        input.Position.w = 1;
        float3 viewSpacePos = mul(input.Position, WorldView);
        output.Position = mul(input.Position, WorldViewProjection);
        output.TexCoord = input.UV0; //pass the texture coordinates further

        //we output our normals/tangents/binormals in viewspace
        output.Normal = normalize(mul(input.Normal, (float3x3)WorldView));
        output.Tangent = normalize(mul(input.Tangent.xyz, (float3x3)WorldView));
        output.Binormal = normalize(cross(output.Normal, output.Tangent) * input.Tangent.w);

        output.Depth = viewSpacePos.z; //pass depth
        return output;
    }

    PixelShaderOutput PixelShaderFunction(VertexShaderOutput input)
    {
        PixelShaderOutput output = (PixelShaderOutput) 1;

        //read from our normal map
        half4 normalMap = NormalMap.Sample(NormalMapSampler, input.TexCoord);
        half3 normalViewSpace = NormalMapToSpaceNormal(normalMap.xyz, input.Normal, input.Binormal, input.Tangent);

        output.Normal.rg = EncodeNormal(normalize(normalViewSpace)); //our encoder output in RG channels
        output.Normal.b = normalMap.a; //our specular power goes into B channel
        output.Normal.a = 1; //not used

        output.Depth.r = -input.Depth / FarClip; //output Depth in linear space, [0..1]
        return output;
    }

But as a result I receive the image in the attachment. At the top of the window there should be 3 render targets (normals, depth and light). Light is working (more or less), but normals and depth do not display anything at all. As I understand it, I receive only color for my model. The first screenshot is from my forward renderer with a directional light; the second is from my LPP renderer with a directional light. As you can see, the 3 debug render targets at the top of the window produce no output. I cannot understand what I am doing wrong here. Maybe someone could point me in the right direction.
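A note on the reversed (inversed) depth buffer mentioned above: the states it needs are easy to miss when porting. A minimal sketch of the general technique, again assuming SharpDX-style bindings (device, context and depthStencilView come from elsewhere); this is not J.Coluna's sample code:

    using SharpDX.Direct3D11;

    // Reversed-Z maps the near plane to depth 1 and the far plane to depth 0,
    // so both the depth comparison and the clear value are inverted.
    static DepthStencilState CreateReversedZState(Device device)
    {
        return new DepthStencilState(device, new DepthStencilStateDescription
        {
            IsDepthEnabled = true,
            DepthWriteMask = DepthWriteMask.All,
            DepthComparison = Comparison.Greater // would be Less in a conventional setup
        });
    }

    // Per frame: bind the state and clear depth to 0.0f instead of the usual 1.0f.
    // context.OutputMerger.SetDepthStencilState(reversedZState, 0);
    // context.ClearDepthStencilView(depthStencilView, DepthStencilClearFlags.Depth, 0.0f, 0);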
  5. Can no one out of over 200 viewers help me with this issue?
  6. Hi, I am trying to implement the correct user experience for the rotation tool in my game engine. I want it to behave visually like the same tool in Maya or Unity: when I rotate an object, the rotation tool should also rotate, BUT its axes should always stay on the near camera plane and never go to the other side of the tool, as shown in the first 2 attached screenshots from Unity. You can see there that the X axis (red) goes to the upper part of the tool instead of the back, and the same holds for the Y and Z axes. Currently I have implemented something similar, but my code has a huge limitation: it gives me the correct quaternion for the rotation, BUT to get correct axis alignment I must overwrite my existing tool rotation, so I cannot accumulate rotation and I cannot implement the correct visual experience for the tool (see the next 2 screenshots). As you can see, there is no difference in the tool's visual representation even though I rotate the object itself. Here is the code I am using currently:

    /// <summary>
    /// Calculate a Quaternion which will define a rotation from one point to another, face to face
    /// </summary>
    /// <param name="objectPosition">objectPosition is your object's position</param>
    /// <param name="targetPosition">targetPosition is the position of the object to face</param>
    /// <param name="upVector">upVector is the nominal "up" vector (typically Vector3.Y)</param>
    /// <remarks>Note: this does not work when objectPosition is straight below or straight above targetPosition</remarks>
    public static QuaternionF RotateToFace(ref Vector3F objectPosition, ref Vector3F targetPosition, ref Vector3F upVector)
    {
        Vector3F D = (objectPosition - targetPosition);
        Vector3F right = Vector3F.Normalize(Vector3F.Cross(upVector, D));
        Vector3F backward = Vector3F.Normalize(Vector3F.Cross(right, upVector));
        Vector3F up = Vector3F.Cross(backward, right);
        Matrix4x4F rotationMatrix = new Matrix4x4F(
            right.X, right.Y, right.Z, 0,
            up.X, up.Y, up.Z, 0,
            backward.X, backward.Y, backward.Z, 0,
            0, 0, 0, 1);
        QuaternionF orientation;
        QuaternionF.RotationMatrix(ref rotationMatrix, out orientation);
        return orientation;
    }

And I am using a hack to rotate all the axes correctly and keep them at 90 degrees to each other:

    private void TransformRotationTool(Entity current, Camera camera)
    {
        var m = current.Transform.GetRotationMatrix();
        if (current.Name == "RightAxisManipulator")
        {
            var rot = QuaternionF.RotateToFace(current.GetRelativePosition(camera), Vector3F.Zero, m.Right);
            rot.Z = rot.Y = 0;
            rot.Normalize();
            current.Transform.SetRotation(rot);
        }
        if (current.Name == "UpAxisManipulator")
        {
            var rot = QuaternionF.RotateToFace(current.GetRelativePosition(camera), Vector3F.Zero, m.Up);
            rot.X = rot.Z = 0;
            rot.Normalize();
            current.Transform.SetRotation(rot);
        }
        if (current.Name == "ForwardAxisManipulator")
        {
            var rot = QuaternionF.RotateToFace(current.GetRelativePosition(camera), Vector3F.Zero, m.Forward);
            rot.X = rot.Y = 0;
            rot.Normalize();
            current.Transform.SetRotation(rot);
        }
        if (current.Name == "CurrentViewManipulator" || current.Name == "CurrentViewCircle")
        {
            var billboardMatrix = Matrix4x4F.BillboardLH(
                current.GetRelativePosition(camera), Vector3F.Zero, camera.Up, camera.Forward);
            var rot = MathHelper.GetRotationFromMatrix(billboardMatrix);
            current.Transform.SetRotation(rot);
        }
    }

As you can see, I am zeroing 2 of the 3 quaternion components and renormalizing to keep the axes perpendicular to each other. And when I try to accumulate rotation on top of this, I receive a completely incorrect result. On the last image you can see what happens with my tool when I try to combine the face rotation with some basic rotation. This issue is driving me crazy. Could anyone help me implement the correct behaviour? (One common approach is sketched below.)
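A common approach to the behaviour described above is to stop baking the face-the-camera correction into the tool's rotation at all. Keep accumulating the real rotation on the object, and each frame mirror, for display only, any basis axis that points away from the camera; the ring then jumps to the near side exactly like in Unity or Maya. A sketch in the same style as the code above (GetDisplayAxes is a hypothetical helper, and the sign convention of GetRelativePosition is an assumption):

    // Returns the axes to *draw*; the entity's accumulated rotation is untouched,
    // so rotations can keep accumulating on the object itself.
    private static void GetDisplayAxes(Entity gizmo, Camera camera,
        out Vector3F right, out Vector3F up, out Vector3F forward)
    {
        var m = gizmo.Transform.GetRotationMatrix(); // full accumulated rotation

        // Direction from the gizmo toward the camera (flip the sign if your
        // GetRelativePosition convention is camera-to-object).
        var toCamera = -Vector3F.Normalize(gizmo.GetRelativePosition(camera));

        right = m.Right;
        up = m.Up;
        forward = m.Forward;

        // Mirror any axis whose ring would sit on the far side of the tool.
        // This is purely visual, so no quaternion components are zeroed and
        // the axes stay exactly 90 degrees apart.
        if (Vector3F.Dot(right, toCamera) < 0) right = -right;
        if (Vector3F.Dot(up, toCamera) < 0) up = -up;
        if (Vector3F.Dot(forward, toCamera) < 0) forward = -forward;
    }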
  7. OK, it seems an ortho projection was enough and I had just passed incorrect znear/zfar values. I thought the maximum zfar could be 1, but obviously I was mistaken. (The corrected call is sketched below.)
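For reference, the corrected call might look like the following, assuming an OrthoOffCenterLH that mirrors the PerspectiveOffCenterLH signature shown in the next post; the method name, the ScreenSpaceProjection field and the argument order are all assumptions:

    // Screen-space ortho projection: pixel (0,0) at the top-left, +Y down.
    // With a reversed depth buffer, znear and zfar swap places, and zfar is
    // an ordinary positive distance - it is not capped at 1.
    ScreenSpaceProjection = Matrix4x4F.OrthoOffCenterLH(
        0, Width,       // left, right
        Height, 0,      // bottom, top
        100.0f, 0.1f);  // znear, zfar swapped for reversed-Z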
  8. Hi, I am trying to render my 3d object at exact screen-space coordinates with correct shading. I tried an orthoOffCenterLH matrix for this, but it produces strange render results (the front part of the object disappears and reappears during rotation). I think it is because of the nature of ortho projection (but maybe I am wrong). I want my 3d object to display correctly, as when I render with a perspective projection. To achieve this I tried to use a PerspectiveOffCenterLH matrix, but when I apply it, I can see nothing. Here are the parameters I pass to the method:

    PerspectiveOffScreenProjection = Matrix4x4F.PerspectiveOffCenterLH(0, Width, Height, 0, 100, 1);

ZFar and ZNear are flipped because I am using a reversed depth buffer. And here is how I build the matrix itself:

    public static void PerspectiveOffCenterLH(float left, float right, float bottom, float top, float znear, float zfar, out Matrix4x4F result)
    {
        float zRange = zfar / (zfar - znear);
        result = new Matrix4x4F();
        result.M11 = 2.0f * znear / (right - left);
        result.M22 = 2.0f * znear / (top - bottom);
        result.M31 = (left + right) / (left - right);
        result.M32 = (top + bottom) / (bottom - top);
        result.M33 = zRange;
        result.M34 = 1.0f;
        result.M43 = -znear * zRange;
    }

I also render without a view matrix, to fix the object's position and not let it move anywhere. If I flip ZNear and ZFar back to their usual places, it renders distorted like on the first screenshot, while it should render like on the second screenshot. Can anyone help me fix this issue? (A small sanity check for the swapped znear/zfar is sketched below.)
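One quick sanity check for the swapped znear/zfar: push view-space depths on the true near and far planes through the matrix built above and confirm the reversed mapping (near -> 1, far -> 0). A sketch using the method from this post (Width and Height are whatever the viewport uses):

    Matrix4x4F.PerspectiveOffCenterLH(0, Width, Height, 0, 100.0f, 1.0f, out var proj);

    // After the perspective divide (w = view-space z), NDC depth is M33 + M43 / z.
    float DepthAt(float z) => proj.M33 + proj.M43 / z;

    Console.WriteLine(DepthAt(1.0f));   // ~1.0 at the true near plane
    Console.WriteLine(DepthAt(100.0f)); // ~0.0 at the true far plane
    // Anything outside [0, 1] means that geometry is clipped away, which is
    // one way to end up seeing nothing at all.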
  9. Hi, I am trying to implement a rotation tool in my own engine with the same user experience as the rotation tool in Unity or Maya. When the user rotates an object with the tool, the tool also rotates, but its axes visually never move onto the back side of the tool: they always stay within +/-90 degrees of the camera view on the X/Y/Z axes. I used this code to achieve the rotate-to-face behaviour for each of the axes:

    // objectPosition is your object's position
    // objectToFacePosition is the position of the object to face
    // upVector is the nominal "up" vector (typically Vector3.Y)
    // Note: this does not work when objectPosition is straight below or straight above objectToFacePosition
    QuaternionF RotateToFace(Vector3F objectPosition, Vector3F objectToFacePosition, Vector3F upVector)
    {
        Vector3F D = (objectPosition - objectToFacePosition);
        Vector3F right = Vector3F.Normalize(Vector3F.Cross(upVector, D));
        Vector3F backward = Vector3F.Normalize(Vector3F.Cross(right, upVector));
        Vector3F up = Vector3F.Cross(backward, right);
        Matrix4x4F rotationMatrix = new Matrix4x4F(
            right.X, right.Y, right.Z, 0,
            up.X, up.Y, up.Z, 0,
            backward.X, backward.Y, backward.Z, 0,
            0, 0, 0, 1);
        QuaternionF orientation;
        QuaternionF.RotationMatrix(ref rotationMatrix, out orientation);
        return orientation;
    }

Then I just apply the resulting rotateToFaceQuaternion to each axis. If I apply rotateToFaceQuaternion alone, the axes always move towards the camera (which is correct), but when I try to rotate the whole tool by some quaternion, it does not rotate correctly: current_rotation * rotateToFaceQuaternion gives an incorrect result, because after applying this rotation to the axes of the tool, they rotate to the other side of the tool (which is not correct). My question is: how do I rotate the axes of the tool so that they always turn towards the camera, but can also be rotated +/-90 degrees, and so that when an axis reaches the edge of the tool on one side, it appears on the other side immediately (like in Unity or Maya), i.e. an axis is never allowed to appear on the far side of the tool?
  10. Dual contouring implementation on GPU

    Thanks for the link!
  11. Dual contouring implementation on GPU

    Anfaenger, it would be cool if QEF_SOLVER_SVD2 had at least a few comments explaining what is going on in the code. I assume there is no well-commented QEF code on the whole Internet.
  12. Dual contouring implementation on GPU

    Thanks for that. After googling a little I found this article from GPU Gems: http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter37.html I will study it in more detail.
  13. Hi. I've recently started to study the dual contouring / manifold dual contouring algorithms, and I see that they use an octree. But because HLSL/GLSL has no pointers, it is impossible to implement an octree in shaders, at least in the same way it exists on the CPU. So I wanted to ask: has someone already faced this issue and can say how to implement such algorithms completely on the GPU side? Maybe there is a way to replace the octree with a data structure more suitable for shaders, or something like that? (One such layout is sketched below.)
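Since HLSL/GLSL have no pointers, the usual replacement is a "flattened" octree: all nodes live in one contiguous array, child pointers become integer indices into that array, and the whole thing is uploaded as a StructuredBuffer that the shader walks iteratively with an explicit index stack instead of recursion. A minimal sketch of the CPU-side layout in C#; the field names are illustrative:

    using System.Runtime.InteropServices;

    // A pointer-free octree node: children are indices into the same flat
    // array, with -1 meaning "no child". The array is uploaded as a
    // StructuredBuffer<GpuOctreeNode> and traversed in the shader.
    [StructLayout(LayoutKind.Sequential)]
    struct GpuOctreeNode
    {
        public int Child0, Child1, Child2, Child3;
        public int Child4, Child5, Child6, Child7;
        public int FirstVertex;  // index into a separate vertex/QEF buffer
        public int VertexCount;
    }

    // HLSL side (sketched): recursion becomes a loop over an index stack.
    //   StructuredBuffer<GpuOctreeNode> Nodes;
    //   int stack[64];
    //   int top = 0;
    //   stack[top++] = 0;               // root node at index 0
    //   while (top > 0) {
    //       int n = stack[--top];
    //       // ...visit Nodes[n], push any child indices >= 0...
    //   }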
  14. Swapchain presentation freezing issue

    Hah, that's really interesting. I was already starting to think this was a bug :)
  15. Hi. I found that when I create a swapchain with swapEffect = Discard, then dispose it and create a new swapchain with swapEffect = FlipSequential, then dispose that one and create a swapchain with swapEffect = Discard again, it stops presenting anything on the screen, with no error: the image just stops updating. If I then create a new swapchain with swapEffect = FlipSequential, presenting starts working again, but from that moment on you cannot see anything if you create a swapchain with the Discard effect during the current lifecycle. This issue seems to occur only for the Discard -> FlipSequential -> Discard sequence. Could someone say whether this is by design? Has anyone faced a similar issue in DirectX 11? (A minimal flip-model creation sketch follows below.)

P.S. I forgot to mention that I did this with a desktop window on Windows 8.1/10.
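As far as I can tell this matches documented DXGI behaviour: once an HWND has been presented to with a flip-model swapchain, going back to a bitblt-model (Discard) swapchain on the same window is not supported, and the usual advice is to recreate the window or simply stay on flip model. For completeness, a minimal flip-model creation sketch, assuming SharpDX-style bindings (device, hwnd, width and height come from elsewhere):

    using SharpDX.DXGI;

    // Flip model requires at least 2 buffers and a non-multisampled swapchain.
    var desc = new SwapChainDescription
    {
        BufferCount = 2,
        ModeDescription = new ModeDescription(width, height,
            new Rational(60, 1), Format.B8G8R8A8_UNorm),
        Usage = Usage.RenderTargetOutput,
        OutputHandle = hwnd,
        SampleDescription = new SampleDescription(1, 0),
        IsWindowed = true,
        SwapEffect = SwapEffect.FlipSequential
    };

    using (var factory = new Factory1())
    using (var swapChain = new SwapChain(factory, device, desc))
    {
        // ...render into the back buffer, then:
        swapChain.Present(1, PresentFlags.None);
    }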