Koen

Member
  • Content Count

    210
  • Joined

  • Last visited

Community Reputation

707 Good

About Koen

  • Rank
    Member

Personal Information

  • Role
    DevOps
    Programmer
    UI/UX Designer
  • Interests
    Design
    DevOps
    Programming
    QA


  1. Koen

    Move view(camera) matrix

    You have to translate by the vector between pos and m_pos, so use m_pos - pos instead of just m_pos. (But in this case I'd simply call D3DXMatrixLookAtRH again with m_pos instead of pos.)
  2. In this case this works because the only alpha values are zero and one. With this change you're adding source alpha to destination alpha, so as long as at least one of them is one, the end result will be an alpha value of one.

     Take the pixels between the feet of the knight in front. The render target contains red, or in rgba (1,0,0,1). The sprite's transparent pixel is (0,0,0,0). With your first blend state you'd have

     result alpha = D3D11_BLEND_ONE * source alpha + D3D11_BLEND_ZERO * destination alpha

     or, with the values filled in,

     result alpha = 1 * 0 + 0 * 1 = 0

     This leads to an alpha value of zero. So even though the red background rgb is still there, it gets blended away when drawing the render target to the final backbuffer, because of the 0 alpha value. When you replace D3D11_BLEND_ZERO like you did, the resulting alpha value is 1, and the red rgb is fully visible.

     With alpha values different from zero or one this will not work. Try making the background of your sprites semi-transparent and see what happens.
  3. For the color channels blending you take source alpha into account, but for the alpha channel, you simply replace the destination by the source alpha. So when you draw a sprite to the render target, all covered pixels outside of the knight will end up with an alpha value of zero, so fully transparent. Try this instead:

     omDesc.RenderTarget[0].SrcBlend = D3D11_BLEND_SRC_ALPHA;
     omDesc.RenderTarget[0].DestBlend = D3D11_BLEND_INV_SRC_ALPHA;
     omDesc.RenderTarget[0].BlendOp = D3D11_BLEND_OP_ADD;
     omDesc.RenderTarget[0].SrcBlendAlpha = D3D11_BLEND_SRC_ALPHA;
     omDesc.RenderTarget[0].DestBlendAlpha = D3D11_BLEND_INV_SRC_ALPHA;
     omDesc.RenderTarget[0].BlendOpAlpha = D3D11_BLEND_OP_ADD;

     It can be easier to just look at what's happening by using a debugger like RenderDoc. It lets you see the history of a pixel.
  4. If I understand correctly, you draw both sprites to the red render target and then draw that one to the final window/backbuffer. The blend equation for the alpha value replaces whatever is in the render target's alpha channel by the sprite's alpha value. So the front sprite's surrounding pixels will all have an alpha value of zero, meaning full transparency. If you'd check their rgb values, they will probably be correct. But the alpha value of zero simply throws them all away. I guess using the same blend equation for color and alpha should fix this?
  5. Yes, I did. Sorry about that. Funny thing is I always wondered why my brother confuses left and right when he has to think quickly, while it seems I can't even get it right in a slowly typed forum post 🙂
  6. That's correct. In OpenGL, normalized device coordinates (more or less the output written to gl_Position in your shader) have x pointing right, y pointing up and z pointing into the screen, with the origin in the center of the screen. uv coordinates for texture sampling have u pointing right and v pointing up, with the origin in the bottom left corner of the texture. That way everything will be consistent. DirectX has uv upside down, in case that's of interest.
  7. Has anyone ever tried to draw with one of the D3D11_PRIMITIVE_TOPOLOGY_XX_CONTROL_POINT_PATCHLIST primitive topologies when only a vertex, geometry and pixel shader are active? Practical Rendering and Computation with Direct3D 11, Microsoft's documentation for the InputPatch hlsl type and this (old) blog post seem to suggest it should be possible. But when I try, an error occurs when drawing:

     D3D11 ERROR: ID3D11DeviceContext::Draw: Rasterization Unit is enabled (PixelShader is not NULL or Depth/Stencil test is enabled and RasterizedStream is not D3D11_SO_NO_RASTERIZED_STREAM) but the input topology is Patch Control Points. You need either a Hull Shader and Domain Shader, or a Geometry Shader. [ EXECUTION ERROR #349: DEVICE_DRAW_INPUTLAYOUT_NOT_SET]
     D3D11: **BREAK** enabled for the previous message, which was: [ ERROR EXECUTION #349: DEVICE_DRAW_INPUTLAYOUT_NOT_SET ]

     I'm sure I did bind a geometry shader (and RenderDoc agrees with me 😉). The OpenGL and Vulkan documentation seem to explicitly *not* allow this, though, so maybe it's simply not possible 🙂 If you've ever managed to do this, were there some special details to take into account to get it working? Thanks!
     PS: for completeness, here's my test shader code, although I don't think it does anything special:

     // vertex shader
     struct VertexShaderInputData  { float4 pos : position; };
     struct VertexShaderOutputData { float4 pos : position; };

     VertexShaderOutputData vs_main(VertexShaderInputData inputData)
     {
         VertexShaderOutputData outputData;
         outputData.pos = inputData.pos;
         return outputData;
     }

     // geometry shader
     struct GeometryShaderInputData  { float4 pos : position; };
     struct GeometryShaderOutputData { float4 pos : SV_Position; };

     [maxvertexcount(8)]
     void gs_main(in InputPatch<GeometryShaderInputData, 8> inputData,
                  uint input_patch_id : SV_PrimitiveID,
                  uint gs_instance_id : SV_GSInstanceID,
                  inout TriangleStream<GeometryShaderOutputData> output_stream)
     {
         GeometryShaderOutputData output_vertex;
         output_vertex.pos = inputData[0].pos;
         output_stream.Append(output_vertex);
         // ...and so on...
     }

     // pixel shader
     struct PixelShaderOutputData { float4 color : SV_Target0; };

     PixelShaderOutputData main()
     {
         PixelShaderOutputData outputData;
         outputData.color = float4(1.0, 1.0, 1.0, 1.0);
         return outputData;
     }
  8. Koen

    Post Processing on quads

    If you want both objects to occlude each other in the same 3d space, you'll have to reuse the depth buffer for both objects, and not clear it in between. Suppose for a given pixel the box is at depth value 0.3, and your haze effect should be invisible because it is at depth 0.6. Then the 0.3 has to be in the depth buffer while drawing the haze quad, so the 0.6 result from the haze shader can be compared to it, and ignored. If you clear the depth buffer, the depth occlusion information gets erased.

    edit: I think I misunderstood. I guess the goal of the cutout pass is to add distortion to some texture. But that happens for all pixels covered by the geometry you draw during this pass. And you are drawing the box in this pass, while during the 'normal scene' pass you draw the box with a different transformation. So it seems these two passes are basically unrelated.

    If I understand correctly, you should first draw the normal scene box. Then keep the depth buffer and view-projection matrix, and draw whatever haze geometry you want for the cutout pass. Then only the visible haze pixels will have received distortion, so only those pixels will be distorted during the composition pass.
  9. Nvidia recently released a standalone graphics debugger - Nsight Graphics. I haven't used it yet, just pointing it out.
  10. Yup, I already noticed. My api/gpu abstraction uses a PSO approach with almost all state grouped together, so I can easily detect a request for anti-aliased lines and triangle edges, and create a different rasterizer state for those cases. I still have to think about how to properly handle tessellation and geometry shaders, though. For now I'll just assume they always produce solid triangles.
  11. My hero! It sure did fix the issue. I don't see how you came to that conclusion, though. Or is it just: tried everything else, so it must be a pipeline state? Also, if anyone happens to have an explanation for this interesting phenomenon where you have to disable msaa to get proper msaa, I'd be happy to hear it. Thanks again for all the help!
  12. I downgraded to quadro driver version 385.90 (from November 2017). Didn't help. Oh well, it was worth a try.
  13. Sounds reasonable, but hard to check. Yes, 4x and 2x are equally bad. Increasing Quality (at 8 samples, 32 is max on my gpu) actually reduces - but doesn't fix - the problem. I have been sticking to Quality == 0 because that makes it easier to implement. If you want to use the maximum quality level, in theory you'd need to check the minimal maximum quality level for each group of render targets used at the same time, which is quite some bookkeeping. And if I understood correctly, a higher quality level does not necessarily mean better AA. So I thought: let's just stick to level 0 - which is always valid - and assume the manufacturers have put a decent default at that level. Me too. But thanks again for all the help and suggestions. I guess I'll just assume this is a gpu/driver issue. I hope once I switch to (vertex cache optimized) indexed rendering, the problem will mostly disappear anyway...
  14. No worries. I'm already grateful you're willing to take a look :-) Here it is double_patch.rdc (captured with v1.0).
  15. Sure! It doesn't fix the issue :-) I had already tried '1' before - also no improvement.