Community Reputation

103 Neutral

About IonThrust

  1. DX11 Clip points hidden by the model

    Great, thank you guys - that's a lot to work with already. I think I'll have to go with a mixture of the methods. Meanwhile, I do render the circles in 3D, so I have a z-coordinate for each circle, and I use billboard matrices to render them rotated towards the camera. I'm surprised that it is this difficult to get working correctly.

    On the input side, I figured the easiest way is to cast a ray from the 2D screen-space click point using the view matrix and then test all circles for a hit. That way I always get the right one, and with my ~50-200 circles performance should be no problem.

    On the render side I'm not sure yet. Right now I disable the depth buffer (previously I just set z to 0 in the vertex shader) when rendering the circles on top of the model, so they're fully visible and never partially inside the model. But then I also see all circles, even those behind the model. The best way, I guess, would be to render with the depth buffer enabled, but before rendering a whole circle I'd have to check whether its center point's z-value is less than the depth buffer's value at that pixel. Is it possible to skip rendering the whole circle if one specific point (not all of them) fails the depth test? I guess that would be the easiest thing to do. Otherwise I think I'll have to go with raytracing. Thanks so far, this is really helping me!
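    For reference, a minimal sketch of that ray-cast pick, assuming SharpDX math types; MarkerCircle, Pick and the viewport handling are illustrative names, not code from this thread:

```csharp
using SharpDX;

// Hypothetical marker: world-space center plus billboard radius.
public struct MarkerCircle
{
    public Vector3 Center;
    public float Radius;
}

public static class CirclePicking
{
    // Returns the index of the nearest circle hit by the click, or -1.
    public static int Pick(Vector2 click, ViewportF viewport,
                           Matrix view, Matrix projection, MarkerCircle[] circles)
    {
        Matrix viewProj = view * projection;

        // Unproject the click on the near and far plane to get a world-space ray.
        Vector3 near = Vector3.Unproject(new Vector3(click, 0f),
            viewport.X, viewport.Y, viewport.Width, viewport.Height,
            viewport.MinDepth, viewport.MaxDepth, viewProj);
        Vector3 far = Vector3.Unproject(new Vector3(click, 1f),
            viewport.X, viewport.Y, viewport.Width, viewport.Height,
            viewport.MinDepth, viewport.MaxDepth, viewProj);
        var ray = new Ray(near, Vector3.Normalize(far - near));

        int best = -1;
        float bestDistance = float.MaxValue;
        for (int i = 0; i < circles.Length; i++)
        {
            // A camera-facing circle is well approximated by a bounding
            // sphere of the same radius, which keeps the test cheap.
            var sphere = new BoundingSphere(circles[i].Center, circles[i].Radius);
            if (ray.Intersects(ref sphere, out float distance) && distance < bestDistance)
            {
                bestDistance = distance;
                best = i;
            }
        }
        return best;
    }
}
```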
  2. DX11 Clip points hidden by the model

    I am not developing a game, so that's totally fine. Well... okay, that would be a way to go. But considering a point on the opposite side - how can I be sure whether it's visible or not? By calculating the distance from the selected point to the selected triangle and using a threshold, maybe? And how can I calculate / get the triangle ID and have the pixel shader draw that one colored? I mean, the PS works in screen space with pixels... sorry, I haven't worked much with the more advanced stuff when it comes to shaders.
  3. Following issue: I have a model plus data which tells me where on the model's surface I have points of interest. When the user taps such a point, he sees some information. You can rotate around the model, and there are easily 100+ points per model, so I need to distinguish whether a point is actually visible or not. The points are visualized by circles.

    Is there any fast algorithm which can check, based on the camera position, whether a point is covered by the model? Or (which would likely be more performant) can I somehow realize this in a shader? Right now I'm using strip lists to render the circles, and I see no way to use the depth buffer to completely clip circles that sit on the far side of the model.

    Any ideas? I guess creating a ray and testing against all vertices (400k) of the model would be way too intense for the CPU. In the end I need some feedback on the CPU side so that at input detection I can distinguish whether the tapped point is visible. Otherwise I'd end up with ghost points which can be tapped (or may even overlay visible ones) but aren't actually visible...
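    One possible CPU-side building block, sketched under the assumption that the depth buffer can be sampled at the point's screen position (e.g. via a staging-texture copy, which is omitted here): project the point manually and compare its depth against the sampled value. IsPointVisible and sampledDepth are hypothetical names, assuming SharpDX types:

```csharp
using SharpDX;

public static class PointVisibility
{
    // 'sampledDepth' is the depth-buffer value at the point's pixel, in [0, 1].
    public static bool IsPointVisible(Vector3 worldPoint, Matrix view,
                                      Matrix projection, float sampledDepth,
                                      float bias = 1e-4f)
    {
        // Project into clip space, then divide by w to reach NDC.
        Vector4 clip = Vector4.Transform(new Vector4(worldPoint, 1f), view * projection);
        if (clip.W <= 0f)
            return false;               // behind the camera
        float ndcZ = clip.Z / clip.W;   // D3D NDC depth lies in [0, 1]

        // Visible if the point is not farther away than whatever the model
        // wrote into the depth buffer there (the bias avoids self-occlusion).
        return ndcZ <= sampledDepth + bias;
    }
}
```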
  4. Since it's a vector translation, it's okay that way. There are plenty of samples which show this exact approach, and it does work fine. Like I said, it was most likely a mapping issue - the w-coordinate picked up whatever value happened to be at that memory location.
  5. Well, I initially tried to render a quad, which resulted in the same behavior: no PS ran, but I could see the result in the IA and VS stages when debugging. Nope, I forgot about that initially but then randomly tried it with transposed matrices. I only recently read that HLSL expects a different matrix layout (column-major) by default - so now I do call SetMatrixTranspose() on the effect parameters. But it does not change anything.

    I still don't understand why the vertex shader now skips the transformations... I step through it with a debugger and it skips exactly the three transformations, ever since I changed the input layout from float4 to float3. My guess now is that this is some strange issue with Intel integrated graphics. I'm currently installing VS 2017 and NVIDIA Nsight on our CAD laptop, which has an NVIDIA Quadro in it; maybe I'll find out more with that. Moreover, I'm not sure what could possibly be the source of the issue, since the Visual Studio Graphics Debugger is so incredibly unstable - exceptions and crashes on and on. Alternatively it might be a problem with SharpDX and UWP. That would be unusual since the codebase should be the same, but it's my only guess so far... UWP support in SharpDX seems rather untested and might have some issues nonetheless. Probably my last resort is to contact xoofx directly...

    EDIT: Major breakthrough! The non-existing output was my own fault. As a test yesterday I had introduced local variables for each transformation step, but forgot to write the last position value into the output. And now I even see the PS running!

```hlsl
PixelShaderInput VS_Ambient(VertexShaderInput input)
{
    PixelShaderInput i;

    // Transform object space -> world -> view -> projection
    float4 world         = mul(input.Position, World);
    float4 worldView     = mul(world, View);
    float4 worldViewProj = mul(worldView, Projection);

    i.Position = worldViewProj;
    i.Texture  = input.Texture;
    i.Color    = input.Color;

    return i;
}
```

    Can it be that HLSL does not support writing into the input struct's fields? Since I explicitly set world, worldView and worldViewProj it works... The result is still some weird mix of random lines, but it's something.

    EDIT2: Nope, that works too. Okay, I'm highly confused... that's a first in my 13 years of programming: randomly fixing an issue that took me days and then trying to revert it to find out what the problem was...

    EDIT3: I think I know the issue. It was a mapping issue from the vertex struct to the actual buffer. It's obviously not enough to have a Vector3 field and explicitly set an offset. My guess is that the graphics card does not set the w-coordinate to 0 by default but uses whatever value is currently at that memory location. Thus the w-value was invalid and the whole thing broke. Now I set a Vector3 via the constructor and create a Vector4 out of it with w set to 1.
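    A minimal sketch of the fix from EDIT3 (the actual vertex struct isn't quoted in the thread, so the names here are illustrative): the position is stored as a full Vector4 whose w is always written explicitly, so the GPU never reads leftover memory.

```csharp
using System.Runtime.InteropServices;
using SharpDX;

[StructLayout(LayoutKind.Sequential)]
public struct VertexPositionColorTexture
{
    public Vector4 Position;   // w is written explicitly, never left to chance
    public Color4 Color;
    public Vector2 TextureUv;

    public VertexPositionColorTexture(Vector3 position, Color4 color, Vector2 uv)
    {
        Position = new Vector4(position, 1f); // w = 1 marks a point, not a direction
        Color = color;
        TextureUv = uv;
    }
}
```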
  6. I set the FieldOffset in my vertex struct explicitly, so it should map the Vector3 into a float4 without misaligning the following fields. Nonetheless the VS stage output does look correct, and meanwhile our mathematician and I checked the matrices; they seem to be correct. The weird thing, though, is that I can't move further away than ~1000f from the car - it just clamps to that. Yeah, I already figured so. But as long as the PS won't even run, it doesn't matter anyway...

    EDIT: I changed it to Vector3, but it doesn't change anything at all. Moreover, ever since I placed the camera behind the car model looking at it, I don't get any values anymore. EVERYTHING output by the VS is now 0 (except the passed-through values for color and texture). Debugging it, I can only see that it completely skips the matrix multiplications: it allocates the output structure and jumps right down to where I pass through the PS-required values, and that's it... Seriously, what's wrong with it all?! That's beyond weird behavior...
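    For completeness, a hypothetical input layout matching the Vector4-based struct sketched above (not the poster's actual layout): the key is that each element's format and byte offset agree exactly with the CPU-side struct, so the GPU never reads past the data actually written.

```csharp
using SharpDX.Direct3D11;
using SharpDX.DXGI;

var elements = new[]
{
    new InputElement("POSITION", 0, Format.R32G32B32A32_Float,  0, 0), // Vector4, 16 bytes
    new InputElement("COLOR",    0, Format.R32G32B32A32_Float, 16, 0), // Color4,  16 bytes
    new InputElement("TEXCOORD", 0, Format.R32G32_Float,       32, 0), // Vector2,  8 bytes
};
```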
  7. The model was exported from a CAD program with metric units. Would it help to normalize the model? I need to preserve the coordinate system, since I have to highlight certain points which I only have in the original scale.
  8. Just take a look at the shader (I linked it on pastebin near the start of the post). Not the simplest one, but it basically just outputs the input color. At least it would output something - but since it never even runs, that doesn't matter anyway. For the world matrix, all I do is use Matrix.RotationX(MathUtil.DegreesToRadians(-90f)); so it should be correct; anything else would be a bug on SharpDX's side, which I doubt. And as you can see in the debugger, it seems to be the right matrix - why else would the model render correctly positioned and rotated?

    EDIT: I just tried setting the X position of the view matrix to 0f: everything's black in the VS output. Set to 500f: a closer shot than in the screenshot above, but everything upside-down. Set to 1000f: a bit further away and not upside-down. Hell, I'm not good at maths, but what the heck is going on here? Why does the view flip when I change the X coordinate of the camera? It just doesn't make any sense to me...
  9. Ah sorry, I misunderstood that. I had forgotten it, and have disabled it now, but the PS stage still won't run. Meanwhile I do get the following result in the debugger (at least something). Based on the view I can confirm it is correct - it was intentional to render the front of the car. But somehow, no matter whether I set the X position to -1000f or -10000f, I get the same result, as if this were the maximum distance I can move away from the car... I expected it to be fully visible (even if small) at -10000f. Nonetheless the world view seems correct. The question remains: why is there still no PS stage running? I'm absolutely lost here...

    World:
      Row1: {X: 1  Y:  0             Z:  0             W: 0}
      Row2: {X: 0  Y: -4.371139E-08  Z: -1             W: 0}
      Row3: {X: 0  Y:  1             Z: -4.371139E-08  W: 0}
      Row4: {X: 0  Y:  0             Z:  0             W: 1}

    View:
      Row1: {X:    0  Y: 0  Z:     -1  W: 0}
      Row2: {X:    0  Y: 1  Z:      0  W: 0}
      Row3: {X:    1  Y: 0  Z:      0  W: 0}
      Row4: {X: -250  Y: 0  Z: -10000  W: 1}

    Projection:
      Row1: {X: 1.434897  Y: 0         Z:  0         W:  0}
      Row2: {X: 0         Y: 2.414213  Z:  0         W:  0}
      Row3: {X: 0         Y: 0         Z: -1.00002   W: -1}
      Row4: {X: 0         Y: 0         Z: -0.100002  W:  0}

    I'm not sure whether these are correct - I'm lacking the background knowledge to validate them, and it's been years since I actively worked with DirectX / game dev, so I'm a bit rusty. But I'm doing everything rather standard-ish, so it shouldn't be this hard...
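    One way to validate a dump like this is to rebuild the matrices with the same SharpDX calls and compare row by row; a sketch, where the fov and clip planes come from the render code quoted later in the thread and the viewport size is an assumption based on the "~1400x800" mentioned elsewhere. For what it's worth, the dumped world rows do match RotationX(-90°), and the projection's Z rows match near = 0.1f and far = 5000f:

```csharp
using System;
using SharpDX;

// Assumed from "about 1400x800" mentioned elsewhere in the thread; the real
// aspect ratio should come from the actual viewport.
float width = 1400f, height = 800f;

Matrix world = Matrix.RotationX(MathUtil.DegreesToRadians(-90f));
Matrix projection = Matrix.PerspectiveFovRH(
    (float)Math.PI / 4f,   // 45 degree vertical field of view
    width / height,        // aspect ratio
    0.1f, 5000f);          // near / far plane

Console.WriteLine("World:\n" + world);
Console.WriteLine("Projection:\n" + projection);
```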
  10. I already disabled the depth-stencil because I initially thought that was the problem, but it seems like the rasterizer still clips them... The mysterious thing is that I can't even find any source on the internet explaining why "Stage did not run. No output." shows up at the pixel shader stage; this is rather unusual. So far I've managed to move the camera so that I actually see the whole (correctly rendered) model in the preview of the vertex shader stage. So now I really am confused why the pixel shader won't run...
  11. That was one of the first things I did. I can ensure that the viewport is correct - it's about 1400x800. Yes, I checked this; moreover, it's set explicitly in the pass of the effect shader. I do not - I don't need them in this scenario, and I know that "trap", so I disabled them from the start. Not really - as you can see in the image I attached, all components of the SV_Position output are >1. But the vertex shader only does the basic matrix multiplication, so there can't be much wrong unless the matrices themselves are wrong. That's my number one suspect, but I don't see what could be wrong... Nope, everything clear. I just tried to manually create the view matrix with fixed values. Now I do get values clamped between 0 and 1 for z and w, but 1.14 and -0.5 for x and y... and still no pixel shader running.

```csharp
_viewMatrix.SetMatrix(Matrix.LookAtRH(new Vector3(500f, 500f, -1000f),
                                      new Vector3(500f, 500f, 0f), Vector3.Up));
```
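    One note on those numbers: SV_Position straight out of the vertex shader is in clip space, so components >1 are not automatically fatal; what matters is their relation to w. A small helper (hypothetical, assuming SharpDX's Vector4) to check values copied out of the debugger:

```csharp
using SharpDX;

public static class ClipCheck
{
    // A vertex lies inside the D3D clip volume iff -w <= x <= w,
    // -w <= y <= w and 0 <= z <= w, i.e. NDC x,y in [-1, 1] and
    // z in [0, 1] after the divide by w.
    public static bool SurvivesClipping(Vector4 svPosition)
    {
        float w = svPosition.W;
        return w > 0f
            && svPosition.X >= -w && svPosition.X <= w
            && svPosition.Y >= -w && svPosition.Y <= w
            && svPosition.Z >= 0f && svPosition.Z <= w;
    }
}
```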
  12. Hello, I'm currently working on a visualisation program which should render a low-poly car model in real time and highlight some elements based on data I get from a database. So far no difficulty here. I managed to set up my UWP project so that I have a working DirectX 11 device and real-time swapchain-based output. When I debug, the Input Assembler stage shows a valid model - the whole thing is correctly rendered in the VS Graphics Analyzer. The Vertex Shader output seems okay too - though it's likely that the camera view matrix is a bit wrong (too close; maybe the world matrix is rotated wrong as well) - the VS-GA does show some output there. The Pixel Shader does not run, though. My assumption is that the coordinates I get from the Vertex Shader are bad (high values which I guess should actually be between -1.0f and 1.0f), so the Rasterizer clips all vertices and nothing ends up rendered. I've been struggling with this for days now, and since I really need to get this working soon, I hoped someone here has the knowledge to help me fix it. Here's a screenshot of the debugger. I'm currently just using a simple ambient light shader (see here). And here's my code for rendering. The model class I'm using simply loads the vertices from an STL file and creates the vertex buffer for it. It sets the buffer and renders the model, indexed or not (right now I don't have an index buffer since I haven't figured out how to calculate the indices for the model... but that'll do for now).

```csharp
public override void Render()
{
    Device.Clear(Colors.CornflowerBlue, DepthStencilClearFlags.Depth, 1, 0);

    _camera.Update();

    ViewportF[] viewports = Device.Device.ImmediateContext.Rasterizer.GetViewports<ViewportF>();
    _projection = Matrix.PerspectiveFovRH((float)Math.PI / 4.0f,
        viewports[0].Width / viewports[0].Height, 0.1f, 5000f);

    _worldMatrix.SetMatrix(Matrix.RotationX(MathUtil.DegreesToRadians(-90f)));
    _viewMatrix.SetMatrix(_camera.View);
    _projectionMatrix.SetMatrix(_projection);

    Device.Context.InputAssembler.InputLayout = _inputLayout;
    Device.Context.InputAssembler.PrimitiveTopology = PrimitiveTopology.TriangleList;

    EffectTechnique tech = _shadingEffect.GetTechniqueByName("Ambient");
    for (int i = 0; i < tech.Description.PassCount; i++)
    {
        EffectPass pass = tech.GetPassByIndex(i);
        pass.Apply(Device.Context);
        _model.Render();
    }
}
```

    The world is rotated around the X-axis since the model's coordinate system is actually a right-handed CS where X+ is depth, Y the horizontal and Z the vertical coordinate. I couldn't figure out whether it is right that way, but theoretically it should be.

```csharp
public void Update()
{
    _rotation = Quaternion.RotationYawPitchRoll(Yaw, Pitch, Roll);
    Vector3.Transform(ref _target, ref _rotation, out _target);

    Vector3 up = _up;
    Vector3.Transform(ref up, ref _rotation, out up);

    _view = Matrix.LookAtRH(Position, _target, up);
}
```

    The whole camera setup (static, since no input has been implemented yet) is for now:

```csharp
_camera = new Camera(Vector3.UnitZ);
_camera.SetView(new Vector3(0, 0, 5000f), new Vector3(0, 0, 0), MathUtil.DegreesToRadians(-90f));
```

    So I tried to place the camera above the origin, looking down (hence UnitZ as the up-vector). Can anybody explain to me why the vertex shader output is so wrong and apparently all vertices get clipped? Thank you very much!
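    On the missing index buffer: an STL file stores three independent vertices per triangle, so one way to generate indices is to de-duplicate repeated positions with a dictionary. A sketch, assuming SharpDX's Vector3 (names are illustrative):

```csharp
using System.Collections.Generic;
using SharpDX;

public static class StlIndexing
{
    // 'rawVertices' is the flat triangle list as loaded from the STL file
    // (three entries per triangle). Produces a de-duplicated vertex list
    // plus indices referencing it.
    public static void BuildIndexedMesh(IReadOnlyList<Vector3> rawVertices,
                                        out List<Vector3> vertices,
                                        out List<int> indices)
    {
        vertices = new List<Vector3>();
        indices = new List<int>(rawVertices.Count);
        var lookup = new Dictionary<Vector3, int>();

        foreach (Vector3 v in rawVertices)
        {
            // STL files typically repeat shared corners bitwise-identically,
            // so exact equality works here; otherwise an epsilon-based
            // spatial hash would be needed.
            if (!lookup.TryGetValue(v, out int index))
            {
                index = vertices.Count;
                lookup.Add(v, index);
                vertices.Add(v);
            }
            indices.Add(index);
        }
    }
}
```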