Clip points hidden by the model

10 comments, last by JoeJ 6 years, 1 month ago

I have the following issue: I have a model, plus data telling me where on the model's surface there are points of interest. When the user taps such a point, he sees some information. The model can be rotated, and there are easily 100+ points per model, so I need to determine whether a point is actually visible or not. The points are visualized as circles.

Is there any fast algorithm which can check, based on the camera position, whether a point is covered by the model or not? Or (which would likely be more performant) can I somehow realize this in a shader? Right now I'm using StripLists to render the circles. I see no way to use the depth buffer to completely clip circles that are on the far side of the model.

Any ideas? I guess creating a ray and testing it against all vertices (400k) of the model would be way too intense for the CPU. In the end I need the result CPU-side so that at input detection I can tell whether the tapped point is visible or not. Otherwise I'd end up with ghost points which can be tapped (or maybe even overlay visible ones) but aren't visible ...


One way is to do a render pass where, instead of rendering textures, you render a colour encoding the ID of each triangle. If you then read back the colour under the cursor, you know which triangle was picked, and you can do a ray/triangle test, find the nearest corner, etc. This is useful for tools but may not be so good in a game.

I am not developing a game, so this is totally fine. Okay, that would be a way to go. But considering a point on the opposite side - how can I be sure whether it's visible or not? By calculating the distance from the selected point to the picked triangle and using a threshold, maybe? And how can I calculate the triangle ID and have the pixel shader draw it as a colour? I mean, the pixel shader works in screen space on pixels ... sorry, I haven't worked much with more advanced shader techniques.

There will be numerous articles on this if you google, but here's one:

http://www.opengl-tutorial.org/miscellaneous/clicking-on-objects/picking-with-an-opengl-hack/

With this technique the depth buffer automatically makes sure the triangle colour is the ID of the one on top.

I don't know what API you are using but the process is pretty agnostic. You can probably just reuse your frame buffer rather than needing a separate render target. Your triangle ID is the number of your triangle in your list of triangles making up your model.

You can encode the triangle ID with a simple shader (you can store a value 0 to 255 in each of R, G and B; for instance store ID % 256 in R, (ID / 256) % 256 in G and (ID / 65536) % 256 in B). You can decode the colour with the reverse calculation.
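A minimal sketch of that encode/decode round trip (plain Python for illustration; in a real shader you would output these as normalized channel values, and the function names here are made up):

```python
def encode_id(tri_id):
    # Pack a triangle index into three 8-bit channels (supports up to 2^24 IDs).
    return (tri_id % 256, (tri_id // 256) % 256, (tri_id // 65536) % 256)

def decode_id(r, g, b):
    # Reverse: reassemble the 24-bit index from the channel bytes.
    return r + g * 256 + b * 65536
```

After reading back the pixel under the cursor, decoding its RGB bytes yields the triangle index.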

Once you have rendered you need to read the pixels back from the API to your main memory and just decode the value at the mouse coords.

Mouse coords can be handled in a few ways, but essentially: you know the width and height of your render target, and you know the x and y offset in pixels from the top left of the window (from your mouse click position), so you know which pixel to read back. You could also calculate e.g. the OpenGL coords, as these run from -1 to +1 on each axis across the render target, although I don't think this is necessary.

Where this technique shines is you can use it to 'draw' on triangles with very high performance, which is useful in tools, which is what I used it for last.

You can probably do all kinds of optimizations too (like only rendering the area around the mouse click), depending on your use. The downside is that you are reading back data from the GPU, which may be a couple of frames behind and can introduce a stall. That is why it is more useful for tools than in-game, where you might instead do something like picking against the bones on the CPU.

An alternative would be to precompute directional visibility. As a preprocess, send out many rays from each point and see which ones escape to infinity and which are blocked by the model. From that you could compute a visibility cone, or something more detailed like a hemispherical depth map encoding visibility. At runtime you look up this data to decide whether the current camera position is visible from a given point. There will be some false positives, of course.
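The cone variant could be queried at runtime like this (a sketch under the assumption that each point stores a precomputed cone axis and half-angle; the names are hypothetical):

```python
import math

def camera_in_cone(point, cone_axis, half_angle, camera_pos):
    # Direction from the surface point towards the camera, normalized.
    d = [camera_pos[i] - point[i] for i in range(3)]
    length = math.sqrt(sum(c * c for c in d))
    d = [c / length for c in d]
    # The point is (probably) visible when that direction lies inside the
    # precomputed visibility cone, i.e. its angle to the cone axis is at
    # most the cone's half-angle.
    cos_angle = sum(d[i] * cone_axis[i] for i in range(3))
    return cos_angle >= math.cos(half_angle)
```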

Realtime raytracing should do as well. I made a quick raytracer including a BVH tree in 3 hours or so, and it's fast enough to trace a 256x256 image of a 5000-vertex model at 30 fps on a single thread. You would backface-cull / frustum-clip the points first, and for the remainder you could timeslice the work, e.g. trace at most 100 rays per frame; users would not notice if a point disappears / appears a few frames late.

I do my mouse clicking on objects entirely in my standard pixel shader. It's fast and is accurate to individual pixels.

Basically, if the pixel coordinate is the mouse coordinate, I send back information about the object to a UAV, which the CPU then picks up.

If I'm using [earlydepthstencil], it already copes with clicking on the closest thing and not the things behind it.

 

If I'm NOT using [earlydepthstencil], I use InterlockedMin() to make sure only the closest object at the clicked location is sent back to the CPU.

To do this, for every 16 bits of information I want to send back to the CPU, I construct a uint where the high 16 bits are the depth and the low 16 bits are the data I want to send back. With InterlockedMin(), the value with the lowest depth is what ends up being sent for that pixel.
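The packing scheme can be sketched as follows (Python for illustration, with min() standing in for the shader's InterlockedMin() across all fragments at the clicked pixel):

```python
def pack(depth16, data16):
    # High 16 bits: quantized depth; low 16 bits: payload for the CPU.
    return (depth16 << 16) | (data16 & 0xFFFF)

def unpack(packed):
    return packed >> 16, packed & 0xFFFF

# min() over all candidate fragments keeps the one with the smallest depth,
# which is what InterlockedMin() achieves on the UAV.
candidates = [pack(300, 7), pack(120, 42), pack(500, 3)]
winner = unpack(min(candidates))
```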

 

3 hours ago, IonThrust said:

The points are visualized by circles.

...

I guess creating a ray and testing it against all vertices (400k) of the model would be way too intense for the CPU...

If you give a depth to the circles, you could use occlusion queries: the GPU would tell you whether any pixel of a circle was rendered. It's quite simple to implement and very accurate (as it tests what is really visible, not a ray approximation or something).

 

But making one ray query against 400k triangles on a click event isn't that much of a drama either; I'd guess it would take about 0.1 s (on a desktop CPU) for a brute-force run if you use e.g. Möller-Trumbore.
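Möller-Trumbore itself is only a handful of lines; a plain-Python sketch for reference (unoptimized, purely for illustration):

```python
def moller_trumbore(orig, ray_dir, v0, v1, v2, eps=1e-8):
    # Returns the ray parameter t of the hit point, or None on a miss.
    sub = lambda a, b: [a[i] - b[i] for i in range(3)]
    dot = lambda a, b: sum(a[i] * b[i] for i in range(3))
    cross = lambda a, b: [a[1]*b[2] - a[2]*b[1],
                          a[2]*b[0] - a[0]*b[2],
                          a[0]*b[1] - a[1]*b[0]]
    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(ray_dir, e2)
    det = dot(e1, p)
    if abs(det) < eps:           # ray parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    t_vec = sub(orig, v0)
    u = dot(t_vec, p) * inv_det
    if u < 0.0 or u > 1.0:       # outside the triangle (barycentric u)
        return None
    q = cross(t_vec, e1)
    v = dot(ray_dir, q) * inv_det
    if v < 0.0 or u + v > 1.0:   # outside the triangle (barycentric v)
        return None
    t = dot(e2, q) * inv_det
    return t if t > eps else None
```

Running this over every triangle and keeping the smallest t gives the brute-force query described above.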

11 hours ago, Krypt0n said:

But making one ray query against 400k triangles on a click event isn't that much of a drama either; I'd guess it would take about 0.1 s (on a desktop CPU) for a brute-force run if you use e.g. Möller-Trumbore.

Even a simple implementation of a BVH will reduce that time by several orders of magnitude. The fastest ray-tracing algorithms can trace millions of rays per second on a single desktop CPU thread.

8 hours ago, Aressera said:

Even a simple implementation of a BVH will reduce that time by several orders of magnitude. The fastest ray-tracing algorithms can trace millions of rays per second on a single desktop CPU thread.

Sure, there are a lot of cheap improvements. Though my point was that the most vanilla implementation might already be good enough for his case, so he should not hesitate to go this route if runtime is his only worry (and he prefers it for its simplicity of integration and low implementation time).

If perf is a concern, I'd rather suggest using https://embree.github.io/ ; the integration time is most likely less than implementing (and bug-fixing) your own BVH if you're doing it for the first time. And you can be sure the resulting performance is as good as it gets (without investing 10 years into the topic, as the creators of Embree did).

Great, thank you guys. That's a lot already. I think I'll have to go with a mixture of some of these methods.

Meanwhile I render the circles in 3D, so I have a z-coordinate for each circle. I use billboard matrices to render them rotated towards the camera. I'm surprised that it is that difficult to get this working correctly.

On the input side, I figured the easiest way would be to cast a ray from the 2D screen-space click position using the view matrix, then check all circles for a hit. That way I always get the right one, and with my ~50-200 circles it should be no performance problem.
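Since the circles are billboards facing the camera, the per-circle hit test can be approximated by the point-to-ray distance; a sketch (hypothetical names, ray direction assumed normalized):

```python
def ray_hits_circle(ray_orig, ray_dir, center, radius):
    sub = lambda a, b: [a[i] - b[i] for i in range(3)]
    dot = lambda a, b: sum(a[i] * b[i] for i in range(3))
    # Parameter of the point on the ray closest to the circle center.
    t = dot(sub(center, ray_orig), ray_dir)
    if t < 0:
        return None  # circle lies behind the ray origin
    closest = [ray_orig[i] + t * ray_dir[i] for i in range(3)]
    offset = sub(center, closest)
    # Hit when the ray passes within `radius` of the center.
    return t if dot(offset, offset) <= radius * radius else None
```

Among all circles that return a t, the one with the smallest t is the tapped circle.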

On the render side I'm not sure yet. Right now I disable the depth buffer (previously I just set z to 0 in the vertex shader) when rendering the circles on top of the model, so they're fully visible (and not partially inside the model). But then again, I see all circles, even those behind the model.

The best way, I guess, would be to render with the depth buffer enabled, but before rendering a whole circle, check whether the center point's z-value is less than the depth buffer's value at that position. Is it possible to skip rendering the whole primitive if one specific point (not all of them) fails the depth test? I guess that would be the easiest thing to do. Otherwise I think I'll have to go with ray tracing.
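The center-point test itself is just one comparison once the depth buffer has been read back; a sketch, assuming the center has already been projected to its screen pixel and a depth in the same range as the buffer (names are hypothetical):

```python
def circle_center_visible(center_depth, depth_buffer_value, bias=1e-3):
    # Visible when the projected center is not farther away than the surface
    # already stored in the depth buffer at that pixel; the small bias keeps
    # a point sitting exactly on the surface from rejecting itself.
    return center_depth <= depth_buffer_value + bias
```

If this returns False, the whole circle can be skipped; no partial clipping is needed.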

Thanks so far, this is really helping me!

This topic is closed to new replies.
