Here's my issue: I have a model plus data that tells me where on the model's surface certain points lie. When the user taps one of those points, they see some information. The user can rotate around the model, and there are easily 100+ points per model, so I need to distinguish whether a point is actually visible or not. The points are visualized as circles.
Is there a fast algorithm that can check, based on the camera position, whether a point is occluded by the model? Or (which would likely be more performant) can I somehow do this in a shader? Right now I'm using StripLists to render the circles. I see no way to use the depth buffer to completely clip circles that are on the far side of the model.
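One direction I've been considering: render the model, read the depth buffer back to the CPU, project each point with the view-projection matrix, and compare its depth against what the model wrote at that pixel. A minimal pure-Python sketch of that comparison (the function name, the row-vector matrix convention, and the `bias` term are my own placeholders, not any particular API):

```python
def is_point_visible(point, view_proj, depth_buffer, bias=1e-3):
    """Occlusion test for one marker point against a depth buffer
    read back from the GPU.

    point        : (x, y, z) world-space position of the marker.
    view_proj    : 4x4 combined view-projection matrix as nested lists,
                   row-vector convention ([x y z 1] * M).
    depth_buffer : list of rows of depths in [0, 1].
    bias         : small offset so a point lying exactly on the surface
                   does not occlude itself ("depth acne").
    """
    x, y, z = point
    # Transform to clip space: [x y z 1] * view_proj.
    clip = [x * view_proj[0][i] + y * view_proj[1][i] +
            z * view_proj[2][i] + view_proj[3][i] for i in range(4)]
    if clip[3] <= 0.0:
        return False                       # behind the camera
    ndc = [c / clip[3] for c in clip[:3]]  # perspective divide
    if abs(ndc[0]) > 1.0 or abs(ndc[1]) > 1.0:
        return False                       # outside the viewport
    h, w = len(depth_buffer), len(depth_buffer[0])
    px = int((ndc[0] * 0.5 + 0.5) * (w - 1))          # NDC -> pixel
    py = int((1.0 - (ndc[1] * 0.5 + 0.5)) * (h - 1))  # y flipped
    # Visible if the point is no farther than the model's depth there.
    return ndc[2] <= depth_buffer[py][px] + bias
```

In a real renderer the readback would of course come from a staging copy of the depth target, and you'd batch all 100+ points against one readback per frame rather than one per point.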
Any ideas? I guess creating a ray and testing it against all 400k vertices of the model would be way too intense for the CPU. In the end I need the result CPU-side, so that at input detection I can tell whether the tapped point is visible or not. Otherwise I'd end up with ghost points that can be tapped (or that even overlay visible ones) but aren't actually visible...
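For completeness, the CPU ray test wouldn't run against vertices but against triangles, and with an acceleration structure (BVH, uniform grid) each ray would only touch a handful of them. A brute-force sketch of the core test, using the standard Möller–Trumbore ray/triangle intersection (function names are mine; no acceleration structure shown):

```python
def ray_hits_triangle(orig, dirn, v0, v1, v2, eps=1e-7):
    """Moller-Trumbore ray/triangle intersection.
    Returns the distance t along the ray, or None if there is no hit."""
    def sub(a, b):   return [a[i] - b[i] for i in range(3)]
    def dot(a, b):   return sum(a[i] * b[i] for i in range(3))
    def cross(a, b): return [a[1]*b[2] - a[2]*b[1],
                             a[2]*b[0] - a[0]*b[2],
                             a[0]*b[1] - a[1]*b[0]]
    e1, e2 = sub(v1, v0), sub(v2, v0)
    h = cross(dirn, e2)
    a = dot(e1, h)
    if abs(a) < eps:
        return None                 # ray parallel to triangle plane
    f = 1.0 / a
    s = sub(orig, v0)
    u = f * dot(s, h)
    if u < 0.0 or u > 1.0:
        return None
    q = cross(s, e1)
    v = f * dot(dirn, q)
    if v < 0.0 or u + v > 1.0:
        return None
    t = f * dot(e2, q)
    return t if t > eps else None   # hit in front of the ray origin

def point_occluded(camera, point, triangles, eps=1e-4):
    """True if any triangle blocks the segment from camera to point."""
    d = [point[i] - camera[i] for i in range(3)]
    dist = sum(c * c for c in d) ** 0.5
    dirn = [c / dist for c in d]
    for v0, v1, v2 in triangles:
        t = ray_hits_triangle(camera, dirn, v0, v1, v2)
        if t is not None and t < dist - eps:
            return True             # something sits between camera and point
    return False
```

Brute force over ~130k triangles per tap is already borderline feasible since a tap is a rare event, but a BVH would make it trivially cheap.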