You seem to sample the light's depth map with the same texture coords with which you sample the screen depth map. You need to transform these coords into light space first, much like you do when sampling a shadow map for shadows.
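To illustrate the idea, here is a minimal sketch (in plain Python, not shader code) of that light-space transform: project the world-space position with the light's view-projection matrix, do the perspective divide, and remap from NDC to [0, 1] texture coordinates. The identity matrix used at the end is just a placeholder; in a real shader you would pass your light's actual view-projection matrix as a uniform.

```python
# Hedged sketch: transforming a world-space position into light-space
# texture coordinates before sampling the light's depth map.

def mat_vec(m, v):
    """Multiply a 4x4 matrix (row-major nested lists) by a 4-vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def world_to_light_uv(world_pos, light_view_proj):
    """Project a world-space point into the light's clip space and
    remap from NDC [-1, 1] to texture coordinates [0, 1]."""
    x, y, z, w = mat_vec(light_view_proj, [*world_pos, 1.0])
    # Perspective divide to get normalized device coordinates.
    ndc = [x / w, y / w, z / w]
    # NDC -> [0, 1] range used for texture lookups (and depth compare).
    return [c * 0.5 + 0.5 for c in ndc]

# Identity "light matrix" just to demonstrate the remapping:
identity = [[1.0 if r == c else 0.0 for c in range(4)] for r in range(4)]
uv = world_to_light_uv([0.0, 0.0, 0.0], identity)
print(uv)  # the origin maps to the center of the map: [0.5, 0.5, 0.5]
```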
In general you should check out gaussian-bilateral filters: https://en.wikipedia.org/wiki/Bilateral_filter
... but in your case it seems that you have bound the shadow map (= the depth map rendered from the light source) to the sampler instead of the depth map of your camera.
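For reference, the bilateral filter linked above can be sketched in a few lines: each sample is weighted by a spatial Gaussian and additionally by a range Gaussian on the value difference, so the blur does not bleed across depth discontinuities. This is a 1D toy version with made-up sigma values, not a tuned implementation.

```python
import math

def gaussian(x, sigma):
    return math.exp(-(x * x) / (2.0 * sigma * sigma))

def bilateral_1d(values, radius=2, sigma_spatial=1.5, sigma_range=0.1):
    """1D bilateral filter: spatial weight * range weight per sample."""
    out = []
    n = len(values)
    for i in range(n):
        total, weight_sum = 0.0, 0.0
        for k in range(-radius, radius + 1):
            j = min(max(i + k, 0), n - 1)  # clamp at the borders
            # Range term: samples with a very different value get ~0 weight.
            w = gaussian(k, sigma_spatial) * gaussian(values[j] - values[i], sigma_range)
            total += w * values[j]
            weight_sum += w
        out.append(total / weight_sum)
    return out

# A hard edge (0.0 -> 1.0) survives the blur almost untouched:
print(bilateral_1d([0.0, 0.0, 0.0, 1.0, 1.0, 1.0]))
```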
Visual scripting is more for people uncomfortable with coding. Coding is more flexible in the long run and a good coder will produce better results in less time.
But many people interested in doing game design are not good enough with coding or aren't able to code at all. These people will love VS, because it will open up new opportunities. Think of an artist who wants to design a game, or a game designer who wants to prototype a new game mechanism (though a game designer should be able to code to some degree).
Well, what about testing the 3 triangle vertices? If all three are facing away, then the triangle is not visible. The normal I'm referring to is the one which points from the sphere center to the vertex.
To check it, just test whether dot(view_direction, vertex_normal) is positive, with view_direction = normalize(vertex_position - view_position).
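A minimal sketch of that test, assuming a sphere centered at `center` so the vertex normal is just the direction from the center to the vertex (all names here are illustrative):

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return [c / length for c in v]

def sub(a, b):
    return [x - y for x, y in zip(a, b)]

def faces_away(vertex_pos, center, view_pos):
    """True if the vertex normal points away from the viewer."""
    normal = normalize(sub(vertex_pos, center))       # sphere-center normal
    view_dir = normalize(sub(vertex_pos, view_pos))   # viewer -> vertex
    # Positive dot product: normal and view direction roughly agree,
    # i.e. the vertex faces away from the camera.
    return sum(n * d for n, d in zip(normal, view_dir)) > 0.0

def triangle_hidden(tri, center, view_pos):
    # The triangle can only be culled if ALL three vertices face away.
    return all(faces_away(v, center, view_pos) for v in tri)

# Unit sphere at the origin, camera on the +Z axis:
camera = [0.0, 0.0, 5.0]
front = [[0.0, 0.0, 1.0], [0.1, 0.0, 0.99], [0.0, 0.1, 0.99]]
back = [[0.0, 0.0, -1.0], [0.1, 0.0, -0.99], [0.0, 0.1, -0.99]]
print(triangle_hidden(front, [0.0, 0.0, 0.0], camera))  # False
print(triangle_hidden(back, [0.0, 0.0, 0.0], camera))   # True
```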
Though you should do this only for larger sections of the sphere and otherwise let the GPU handle the rest.