Koen

Member
  • Content count: 204
  • Joined
  • Last visited

Community Reputation: 703 Good

About Koen

  • Rank
    Member

Personal Information

  • Role
    DevOps
    Programmer
    UI/UX Designer
  • Interests
    Design
    DevOps
    Programming
    QA


  1. Has anyone ever tried to draw with one of the D3D11_PRIMITIVE_TOPOLOGY_XX_CONTROL_POINT_PATCHLIST primitive topologies when only a vertex, geometry and pixel shader are active? Practical Rendering and Computation with Direct3D 11, Microsoft's documentation for the InputPatch HLSL type, and this (old) blog post seem to suggest it should be possible. But when I try, an error occurs when drawing:

     D3D11 ERROR: ID3D11DeviceContext::Draw: Rasterization Unit is enabled (PixelShader is not NULL or Depth/Stencil test is enabled and RasterizedStream is not D3D11_SO_NO_RASTERIZED_STREAM) but the input topology is Patch Control Points. You need either a Hull Shader and Domain Shader, or a Geometry Shader. [ EXECUTION ERROR #349: DEVICE_DRAW_INPUTLAYOUT_NOT_SET]
     D3D11: **BREAK** enabled for the previous message, which was: [ ERROR EXECUTION #349: DEVICE_DRAW_INPUTLAYOUT_NOT_SET ]

     I'm sure I did bind a geometry shader (and RenderDoc agrees with me 😉). The OpenGL and Vulkan documentation seem to explicitly *not* allow this, though, so maybe it's simply not possible 🙂 If you've ever managed to do this, were there some special details to take into account to get it working? Thanks!

     PS: for completeness, here's my test shader code, although I don't think it does anything special:

     //
     // vertex shader
     //
     struct VertexShaderInputData
     {
         float4 pos : position;
     };

     struct VertexShaderOutputData
     {
         float4 pos : position;
     };

     VertexShaderOutputData vs_main(VertexShaderInputData inputData)
     {
         VertexShaderOutputData outputData;
         outputData.pos = inputData.pos;
         return outputData;
     }

     //
     // geometry shader
     //
     struct GeometryShaderInputData
     {
         float4 pos : position;
     };

     struct GeometryShaderOutputData
     {
         float4 pos : SV_Position;
     };

     [maxvertexcount(8)]
     void gs_main(in InputPatch<GeometryShaderInputData, 8> inputData,
                  uint input_patch_id : SV_PrimitiveID,
                  uint gs_instance_id : SV_GSInstanceID,
                  inout TriangleStream<GeometryShaderOutputData> output_stream)
     {
         GeometryShaderOutputData output_vertex;
         output_vertex.pos = inputData[0].pos;
         output_stream.Append(output_vertex);
         // ...and so on...
     }

     //
     // pixel shader
     //
     struct PixelShaderOutputData
     {
         float4 color : SV_Target0;
     };

     PixelShaderOutputData main()
     {
         PixelShaderOutputData outputData;
         outputData.color = float4(1.0, 1.0, 1.0, 1.0);
         return outputData;
     }
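     And here's roughly the C++ side of the draw as well; the variable names (ctx, vs, gs, ps, input_layout) are just placeholders for my actual objects:

     // bind VS + GS + PS only, with no hull or domain shader
     ctx->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_8_CONTROL_POINT_PATCHLIST);
     ctx->IASetInputLayout(input_layout);
     ctx->VSSetShader(vs, nullptr, 0);
     ctx->HSSetShader(nullptr, nullptr, 0); // no hull shader
     ctx->DSSetShader(nullptr, nullptr, 0); // no domain shader
     ctx->GSSetShader(gs, nullptr, 0);      // the geometry shader above, reading the InputPatch
     ctx->PSSetShader(ps, nullptr, 0);
     ctx->Draw(8, 0);                       // one patch of 8 control points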
  2. Koen

    Post Processing on quads

    If you want both objects to occlude each other in the same 3D space, you'll have to reuse the depth buffer for both objects, and not clear it in between. Suppose that for a given pixel the box is at depth 0.3 and your haze effect is at depth 0.6 (so the haze should be invisible there): the 0.3 has to be in the depth buffer while drawing the haze quad, so the 0.6 result from the haze shader can be compared against it and rejected. If you clear the depth buffer, that occlusion information gets erased.

    edit: I think I misunderstood. I guess the goal of the cutout pass is to add distortion to some texture. But that happens for all pixels covered by the geometry you draw during this pass, and you are drawing the box in this pass, while during the 'normal scene' pass you draw the box with a different transformation. So it seems these two passes are basically unrelated. If I understand correctly, you should first draw the normal scene box. Then keep the depth buffer and view-projection matrix, and draw whatever haze geometry you want for the cutout pass. Then only the visible haze pixels will have received distortion, so only those pixels will be distorted during the composition pass.
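    To make this concrete, here's a minimal D3D11 sketch (assuming that's your API; all names are placeholders). The key points are reusing the scene's depth-stencil view and disabling depth writes for the haze pass:

    // depth-stencil state for the haze pass: test against the scene's
    // depth, but don't write, so the haze never occludes anything itself
    D3D11_DEPTH_STENCIL_DESC haze_dss = {};
    haze_dss.DepthEnable    = TRUE;
    haze_dss.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ZERO;
    haze_dss.DepthFunc      = D3D11_COMPARISON_LESS_EQUAL;
    device->CreateDepthStencilState(&haze_dss, &haze_state);

    // ...draw the normal scene first (depth writes on), then, WITHOUT
    // calling ClearDepthStencilView in between:
    ctx->OMSetRenderTargets(1, &haze_rtv, scene_dsv); // reuse the scene's depth buffer
    ctx->OMSetDepthStencilState(haze_state, 0);
    // draw the haze geometry here; fragments behind the box (0.6 > 0.3) are rejected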
  3. Nvidia recently released a standalone graphics debugger, Nsight Graphics. I haven't used it yet, just pointing it out.
  4. Yup, I already noticed. My API/GPU abstraction uses a PSO approach with almost all state grouped together, so I can easily detect a request for anti-aliased lines and triangle edges, and create a different rasterizer state for those cases. I still have to think about how to properly handle tessellation and geometry shaders, though. For now I'll just assume they always produce solid triangles.
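     As a rough sketch, the detection looks something like this (pso and is_line_topology are placeholders for my abstraction's internals):

     D3D11_RASTERIZER_DESC rd = {};
     rd.FillMode = D3D11_FILL_SOLID;
     rd.CullMode = D3D11_CULL_BACK;
     if (pso.antialiased_lines && is_line_topology(pso.topology))
         rd.AntialiasedLineEnable = TRUE; // alpha-blended line smoothing
     else
         rd.MultisampleEnable = TRUE;     // quadrilateral line rasterization / MSAA path
     device->CreateRasterizerState(&rd, &rasterizer_state);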
  5. My hero! It sure did fix the issue. I don't see how you came to that conclusion, though. Or is it just: tried everything else, so it must be the pipeline state? Also, if anyone happens to have an explanation for this interesting phenomenon where you have to disable MSAA to get proper MSAA, I'd be happy to hear it. Thanks again for all the help!
  6. I downgraded to Quadro driver version 385.90 (from November 2017). It didn't help. Oh well, it was worth a try.
  7. Sounds reasonable, but hard to check. Yes, 4x and 2x are equally bad. Increasing Quality (at 8 samples; 32 is the max on my GPU) actually reduces, but doesn't fix, the problem. I have been sticking to Quality == 0 because that makes it easier to implement. If you want to use the maximum quality level, in theory you'd need to check the minimal maximum quality level for each group of render targets used at the same time, which is quite some bookkeeping (see the sketch at the end of this post). And if I understood correctly, a higher quality level does not necessarily mean better AA. So I thought: let's just stick to level 0, which is always valid, and assume the manufacturers have put a decent default at that level. Me too. But thanks again for all the help and suggestions. I guess I'll just assume this is a GPU/driver issue. I hope that once I switch to (vertex cache optimized) indexed rendering, the problem will mostly disappear anyway...
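     The bookkeeping sketch I mentioned (names are placeholders): for each group of render targets that gets bound together, the highest quality valid for all of them is the minimum of the per-format maxima.

     #include <algorithm>

     UINT common_max_quality = UINT_MAX;
     for (DXGI_FORMAT fmt : { DXGI_FORMAT_B8G8R8A8_UNORM, DXGI_FORMAT_D32_FLOAT })
     {
         UINT levels = 0;
         device->CheckMultisampleQualityLevels(fmt, 8, &levels); // 8 samples
         if (levels == 0)
             continue; // 8x MSAA not supported at all for this format
         // 'levels' quality levels exist, so the valid range is [0, levels - 1]
         common_max_quality = std::min(common_max_quality, levels - 1);
     }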
  8. No worries. I'm already grateful you're willing to take a look :-) Here it is: double_patch.rdc (captured with v1.0).
  9. Sure! It doesn't fix the issue :-) I had already tried '1' before - also no improvement.
  10. Thanks for all your suggestions. Count is always 8, quality is always 0.

      Coincidentally there was a new RenderDoc release yesterday, and it allows viewing all samples separately (maybe this was already in before, but I hadn't seen it). This is all the samples for one of the offending pixels (it's a wide image - I hope it doesn't mess up layout too much): So according to RenderDoc, the depth values are actually ok, but for some reason the latest primitive is always used, regardless of depth.

      Depth-stencil and blend states all seem fine: depth test less-or-equal, depth write enabled, blending and stencil test disabled.

      This doesn't seem to be the problem. It is disabled in the nvidia control panel, and I create my render targets with a sample count of 8 and a quality level of 0. It could explain the weird sample results seen in the picture above, though...

      I explicitly checked this on the CPU side, and in RenderDoc: the triangles use identical vertices, and there are no t-sections. Also, zooming in makes the problem go away instead of making it worse :-)

      Running through the code once more, I noticed my depth buffer has the D3D11_BIND_SHADER_RESOURCE flag set (because I 'manually' resolve the depth buffer to non-msaa for dual-depth-peeling transparent objects), which could maybe be a shady practice :-) but removing the flag didn't solve the issue. For the rest, all resources and views seem ok:

      Color render target (D3D11_TEXTURE2D_DESC):
          Width           500
          Height          417
          MipLevels       1
          ArraySize       1
          Format          DXGI_FORMAT_B8G8R8A8_UNORM
          SampleDesc      Count 8, Quality 0
          Usage           D3D11_USAGE_DEFAULT
          BindFlags       D3D11_BIND_RENDER_TARGET
          CPUAccessFlags  0
          MiscFlags       0

      Render target view (D3D11_RENDER_TARGET_VIEW_DESC):
          Format          DXGI_FORMAT_B8G8R8A8_UNORM
          ViewDimension   D3D11_RTV_DIMENSION_TEXTURE2DMS

      Depth buffer (D3D11_TEXTURE2D_DESC):
          Width           500
          Height          417
          MipLevels       1
          ArraySize       1
          Format          DXGI_FORMAT_R32_TYPELESS
          SampleDesc      Count 8, Quality 0
          Usage           D3D11_USAGE_DEFAULT
          BindFlags       D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_DEPTH_STENCIL
          CPUAccessFlags  0
          MiscFlags       0

      Depth stencil view (D3D11_DEPTH_STENCIL_VIEW_DESC):
          Format          DXGI_FORMAT_D32_FLOAT
          ViewDimension   D3D11_DSV_DIMENSION_TEXTURE2DMS
          Flags           0
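      For reference, this is roughly how I create them in code (device and the output pointers are placeholders):

      D3D11_TEXTURE2D_DESC color_desc = {};
      color_desc.Width      = 500;
      color_desc.Height     = 417;
      color_desc.MipLevels  = 1;
      color_desc.ArraySize  = 1;
      color_desc.Format     = DXGI_FORMAT_B8G8R8A8_UNORM;
      color_desc.SampleDesc = { 8, 0 }; // count 8, quality 0
      color_desc.Usage      = D3D11_USAGE_DEFAULT;
      color_desc.BindFlags  = D3D11_BIND_RENDER_TARGET;
      device->CreateTexture2D(&color_desc, nullptr, &color_texture);

      D3D11_TEXTURE2D_DESC depth_desc = color_desc;
      depth_desc.Format    = DXGI_FORMAT_R32_TYPELESS; // typeless, viewed as D32_FLOAT via the DSV
      depth_desc.BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_DEPTH_STENCIL;
      device->CreateTexture2D(&depth_desc, nullptr, &depth_texture);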
  11. You'll have to split up your model. For each material, you add all its triangles consecutively to your vertex buffer object. Then for each material set the correct uniforms, and draw the correct subrange of the VBO. Something like this:

      struct Batch
      {
          Material material;
          GLint    first;
          GLsizei  count;
      };

      std::vector<Batch> batches;
      for (const Material& material : materials_in_your_obj) // pseudocode: iterate your obj's materials
      {
          Batch batch;
          batch.material = material;
          batch.first    = current_vbo_vertex_count;         // vertices already in the vbo
          // ...append this material's triangle vertices to the vbo here...
          batch.count    = vertices_added_for_this_material; // number of vertices just added
          batches.push_back(batch);
      }

      // assuming everything still fits into one vbo,
      // you only need to bind a vao once
      glBindVertexArray(vao_id);
      for (const auto& batch : batches)
      {
          glsl->setUniform("Ka", batch.material.ambient);
          glsl->setUniform("Kd", batch.material.diffuse);
          glsl->setUniform("Ks", batch.material.specular);
          glsl->setUniform("mat_shine", batch.material.shininess);
          glDrawArrays(GL_TRIANGLES, batch.first, batch.count);
      }
      glBindVertexArray(0);
  12. I think that to do what I mean with point topology, you'd have to change this property of your mesh.
  13. Maybe you are still rendering triangles? The GS expects points, so you should render points, by calling IASetPrimitiveTopology( D3D11_PRIMITIVE_TOPOLOGY_POINTLIST ) on your device context.

      edit: just spotted the Unity tag. I'm not familiar with Unity, but you probably don't have access to the Direct3D device context. I guess instead you'll have to tell Unity to treat your vertex buffers/geometry/mesh as points, not triangles...
  14. I've run into a puzzling rendering issue where triangles in the back bleed through triangles in front of them. This screenshot shows the issue: the leftmost picture is drawn with no MSAA and shows the expected result. The middle picture is exactly the same, except with MSAA enabled. Notice the red pixels bleeding through at some triangle edges. The right picture shows a slightly rotated view, revealing the red surface in the back. In the rotated view, the artefact goes away, apparently because the triangles are no longer directly facing the camera.

      The issue only occurs on certain GPUs (as far as I am currently aware, Nvidia Quadro K600 and Intel integrated chips). When using a WARP device or the D3D11 reference rasterizer, the problem does not occur.

      The triangle mesh also affects the result. The two surfaces in the screenshot are pieces of a skull, triangulated from a 3D image scan using some form of marching cubes, so the triangles are in a very specific order. You can also see this in the following RenderDoc pixel history: RenderDoc shows the 'broken' pixel covered by two triangles -- I'd actually expect at least three: two adjacent to the edge, and at least one from the back surface. The two triangles affecting the final pixel color are consecutive primitives. The front one has a shader depth output of 0.42414, the back one has a depth of 0.73829, but still ends up affecting the final color. If I change the order of the triangles -- for instance by splitting the surfaces and rendering each surface with its own draw call -- the problem also goes away.

      I understand that MSAA changes rasterization, but shouldn't adjacent triangles still be completely without gaps? All sample positions within a pixel should be covered by the triangles on both sides of the edge, so no background should bleed through, right? For the record: I did check that there are no actual gaps/cracks in the mesh.

      Is this a driver/GPU bug? Am I misunderstanding the rasterization rules? Thanks!
  15. Note that it's still worth 'unifying' your code. Once you start using render targets as textures, the cancellation will no longer happen (since there is no loading of an image), and you'll need the texture coordinates to be correct for both APIs.
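      For example, a minimal sketch of what I mean by 'unifying', assuming you standardize on a top-left UV origin like Direct3D's (the helper name is made up):

      // flip the v coordinate once, centrally, for OpenGL's bottom-left origin,
      // instead of relying on image loading to cancel the difference out
      float to_api_v(float v, bool is_opengl)
      {
          return is_opengl ? 1.0f - v : v;
      }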